Sep 11 00:26:02.964887 kernel: Linux version 6.12.46-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 10 22:25:29 -00 2025
Sep 11 00:26:02.964915 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=24178014e7d1a618b6c727661dc98ca9324f7f5aeefcaa5f4996d4d839e6e63a
Sep 11 00:26:02.964926 kernel: BIOS-provided physical RAM map:
Sep 11 00:26:02.964934 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 11 00:26:02.964941 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Sep 11 00:26:02.964948 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable
Sep 11 00:26:02.964957 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved
Sep 11 00:26:02.964965 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable
Sep 11 00:26:02.964971 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved
Sep 11 00:26:02.964978 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Sep 11 00:26:02.964985 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Sep 11 00:26:02.964992 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Sep 11 00:26:02.964998 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Sep 11 00:26:02.965005 kernel: printk: legacy bootconsole [earlyser0] enabled
Sep 11 00:26:02.965016 kernel: NX (Execute Disable) protection: active
Sep 11 00:26:02.965023 kernel: APIC: Static calls initialized
Sep 11 00:26:02.965031 kernel: efi: EFI v2.7 by Microsoft
Sep 11 00:26:02.965038 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3ead5518 RNG=0x3ffd2018
Sep 11 00:26:02.965046 kernel: random: crng init done
Sep 11 00:26:02.965053 kernel: secureboot: Secure boot disabled
Sep 11 00:26:02.965060 kernel: SMBIOS 3.1.0 present.
Sep 11 00:26:02.965068 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/28/2025
Sep 11 00:26:02.965077 kernel: DMI: Memory slots populated: 2/2
Sep 11 00:26:02.965084 kernel: Hypervisor detected: Microsoft Hyper-V
Sep 11 00:26:02.965091 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2
Sep 11 00:26:02.965098 kernel: Hyper-V: Nested features: 0x3e0101
Sep 11 00:26:02.965105 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Sep 11 00:26:02.965112 kernel: Hyper-V: Using hypercall for remote TLB flush
Sep 11 00:26:02.965120 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Sep 11 00:26:02.965127 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Sep 11 00:26:02.965135 kernel: tsc: Detected 2300.000 MHz processor
Sep 11 00:26:02.965142 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 11 00:26:02.965150 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 11 00:26:02.965160 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000
Sep 11 00:26:02.965168 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 11 00:26:02.965176 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 11 00:26:02.965183 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved
Sep 11 00:26:02.965191 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000
Sep 11 00:26:02.965198 kernel: Using GB pages for direct mapping
Sep 11 00:26:02.965206 kernel: ACPI: Early table checksum verification disabled
Sep 11 00:26:02.965217 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Sep 11 00:26:02.965226 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 11 00:26:02.965247 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 11 00:26:02.965254 kernel: ACPI: DSDT 0x000000003FFD6000 01E27A (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Sep 11 00:26:02.965261 kernel: ACPI: FACS 0x000000003FFFE000 000040
Sep 11 00:26:02.965268 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 11 00:26:02.965275 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 11 00:26:02.965289 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 11 00:26:02.965301 kernel: ACPI: APIC 0x000000003FFD5000 000052 (v05 HVLITE HVLITETB 00000000 MSHV 00000000)
Sep 11 00:26:02.965313 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000)
Sep 11 00:26:02.965321 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 11 00:26:02.965329 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Sep 11 00:26:02.965337 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4279]
Sep 11 00:26:02.965344 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Sep 11 00:26:02.965352 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Sep 11 00:26:02.965361 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Sep 11 00:26:02.965369 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Sep 11 00:26:02.965377 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5051]
Sep 11 00:26:02.965384 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f]
Sep 11 00:26:02.965392 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Sep 11 00:26:02.965400 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Sep 11 00:26:02.965407 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff]
Sep 11 00:26:02.965415 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff]
Sep 11 00:26:02.965423 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff]
Sep 11 00:26:02.965433 kernel: Zone ranges:
Sep 11 00:26:02.965440 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 11 00:26:02.965448 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Sep 11 00:26:02.965456 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Sep 11 00:26:02.965463 kernel: Device empty
Sep 11 00:26:02.965471 kernel: Movable zone start for each node
Sep 11 00:26:02.965479 kernel: Early memory node ranges
Sep 11 00:26:02.965487 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 11 00:26:02.965495 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff]
Sep 11 00:26:02.965504 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff]
Sep 11 00:26:02.965514 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Sep 11 00:26:02.965522 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Sep 11 00:26:02.965530 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Sep 11 00:26:02.965538 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 11 00:26:02.965546 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 11 00:26:02.965554 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Sep 11 00:26:02.965562 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Sep 11 00:26:02.965570 kernel: ACPI: PM-Timer IO Port: 0x408
Sep 11 00:26:02.965578 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 11 00:26:02.965588 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 11 00:26:02.965596 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 11 00:26:02.965605 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Sep 11 00:26:02.965613 kernel: TSC deadline timer available
Sep 11 00:26:02.965621 kernel: CPU topo: Max. logical packages: 1
Sep 11 00:26:02.965629 kernel: CPU topo: Max. logical dies: 1
Sep 11 00:26:02.965637 kernel: CPU topo: Max. dies per package: 1
Sep 11 00:26:02.965645 kernel: CPU topo: Max. threads per core: 2
Sep 11 00:26:02.965653 kernel: CPU topo: Num. cores per package: 1
Sep 11 00:26:02.965663 kernel: CPU topo: Num. threads per package: 2
Sep 11 00:26:02.965671 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Sep 11 00:26:02.965679 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Sep 11 00:26:02.965687 kernel: Booting paravirtualized kernel on Hyper-V
Sep 11 00:26:02.965696 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 11 00:26:02.965704 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 11 00:26:02.965712 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Sep 11 00:26:02.965720 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Sep 11 00:26:02.965729 kernel: pcpu-alloc: [0] 0 1
Sep 11 00:26:02.965738 kernel: Hyper-V: PV spinlocks enabled
Sep 11 00:26:02.965746 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 11 00:26:02.965756 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=24178014e7d1a618b6c727661dc98ca9324f7f5aeefcaa5f4996d4d839e6e63a
Sep 11 00:26:02.965765 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 11 00:26:02.965774 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Sep 11 00:26:02.965781 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 11 00:26:02.965788 kernel: Fallback order for Node 0: 0
Sep 11 00:26:02.965795 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807
Sep 11 00:26:02.965804 kernel: Policy zone: Normal
Sep 11 00:26:02.965811 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 11 00:26:02.965818 kernel: software IO TLB: area num 2.
Sep 11 00:26:02.965825 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 11 00:26:02.965832 kernel: ftrace: allocating 40103 entries in 157 pages
Sep 11 00:26:02.965839 kernel: ftrace: allocated 157 pages with 5 groups
Sep 11 00:26:02.965845 kernel: Dynamic Preempt: voluntary
Sep 11 00:26:02.965853 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 11 00:26:02.965861 kernel: rcu: RCU event tracing is enabled.
Sep 11 00:26:02.965875 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 11 00:26:02.965883 kernel: Trampoline variant of Tasks RCU enabled.
Sep 11 00:26:02.965890 kernel: Rude variant of Tasks RCU enabled.
Sep 11 00:26:02.965900 kernel: Tracing variant of Tasks RCU enabled.
Sep 11 00:26:02.965907 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 11 00:26:02.965915 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 11 00:26:02.965922 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 11 00:26:02.965930 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 11 00:26:02.965938 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 11 00:26:02.965945 kernel: Using NULL legacy PIC Sep 11 00:26:02.965954 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Sep 11 00:26:02.965962 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 11 00:26:02.965969 kernel: Console: colour dummy device 80x25 Sep 11 00:26:02.965977 kernel: printk: legacy console [tty1] enabled Sep 11 00:26:02.965985 kernel: printk: legacy console [ttyS0] enabled Sep 11 00:26:02.965992 kernel: printk: legacy bootconsole [earlyser0] disabled Sep 11 00:26:02.966001 kernel: ACPI: Core revision 20240827 Sep 11 00:26:02.966008 kernel: Failed to register legacy timer interrupt Sep 11 00:26:02.966016 kernel: APIC: Switch to symmetric I/O mode setup Sep 11 00:26:02.966023 kernel: x2apic enabled Sep 11 00:26:02.966031 kernel: APIC: Switched APIC routing to: physical x2apic Sep 11 00:26:02.966038 kernel: Hyper-V: Host Build 10.0.26100.1293-1-0 Sep 11 00:26:02.966046 kernel: Hyper-V: enabling crash_kexec_post_notifiers Sep 11 00:26:02.966053 kernel: Hyper-V: Disabling IBT because of Hyper-V bug Sep 11 00:26:02.966061 kernel: Hyper-V: Using IPI hypercalls Sep 11 00:26:02.966070 kernel: APIC: send_IPI() replaced with hv_send_ipi() Sep 11 00:26:02.966078 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Sep 11 00:26:02.966085 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Sep 11 00:26:02.966093 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Sep 11 00:26:02.966100 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Sep 11 00:26:02.966108 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Sep 11 00:26:02.966116 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns Sep 11 00:26:02.966123 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
4600.00 BogoMIPS (lpj=2300000) Sep 11 00:26:02.966131 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 11 00:26:02.966140 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Sep 11 00:26:02.966147 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Sep 11 00:26:02.966155 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 11 00:26:02.966162 kernel: Spectre V2 : Mitigation: Retpolines Sep 11 00:26:02.966169 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 11 00:26:02.966177 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Sep 11 00:26:02.966184 kernel: RETBleed: Vulnerable Sep 11 00:26:02.966192 kernel: Speculative Store Bypass: Vulnerable Sep 11 00:26:02.966199 kernel: active return thunk: its_return_thunk Sep 11 00:26:02.966206 kernel: ITS: Mitigation: Aligned branch/return thunks Sep 11 00:26:02.966214 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 11 00:26:02.966223 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 11 00:26:02.966247 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 11 00:26:02.966256 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Sep 11 00:26:02.966263 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Sep 11 00:26:02.966271 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Sep 11 00:26:02.966278 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers' Sep 11 00:26:02.966286 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config' Sep 11 00:26:02.966293 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data' Sep 11 00:26:02.966301 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 11 00:26:02.966308 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Sep 11 00:26:02.966316 
kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Sep 11 00:26:02.966325 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Sep 11 00:26:02.966332 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16 Sep 11 00:26:02.966340 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64 Sep 11 00:26:02.966347 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192 Sep 11 00:26:02.966355 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format. Sep 11 00:26:02.966362 kernel: Freeing SMP alternatives memory: 32K Sep 11 00:26:02.966370 kernel: pid_max: default: 32768 minimum: 301 Sep 11 00:26:02.966377 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 11 00:26:02.966385 kernel: landlock: Up and running. Sep 11 00:26:02.966392 kernel: SELinux: Initializing. Sep 11 00:26:02.966400 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 11 00:26:02.966409 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 11 00:26:02.966416 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2) Sep 11 00:26:02.966424 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only. Sep 11 00:26:02.966431 kernel: signal: max sigframe size: 11952 Sep 11 00:26:02.966439 kernel: rcu: Hierarchical SRCU implementation. Sep 11 00:26:02.966447 kernel: rcu: Max phase no-delay instances is 400. Sep 11 00:26:02.966454 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 11 00:26:02.966462 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 11 00:26:02.966470 kernel: smp: Bringing up secondary CPUs ... Sep 11 00:26:02.966478 kernel: smpboot: x86: Booting SMP configuration: Sep 11 00:26:02.966487 kernel: .... 
node #0, CPUs: #1 Sep 11 00:26:02.966494 kernel: smp: Brought up 1 node, 2 CPUs Sep 11 00:26:02.966502 kernel: smpboot: Total of 2 processors activated (9200.00 BogoMIPS) Sep 11 00:26:02.966510 kernel: Memory: 8079080K/8383228K available (14336K kernel code, 2429K rwdata, 9960K rodata, 53832K init, 1088K bss, 297940K reserved, 0K cma-reserved) Sep 11 00:26:02.966518 kernel: devtmpfs: initialized Sep 11 00:26:02.966525 kernel: x86/mm: Memory block size: 128MB Sep 11 00:26:02.966533 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Sep 11 00:26:02.966541 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 11 00:26:02.966548 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 11 00:26:02.966557 kernel: pinctrl core: initialized pinctrl subsystem Sep 11 00:26:02.966565 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 11 00:26:02.966572 kernel: audit: initializing netlink subsys (disabled) Sep 11 00:26:02.966580 kernel: audit: type=2000 audit(1757550360.028:1): state=initialized audit_enabled=0 res=1 Sep 11 00:26:02.966588 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 11 00:26:02.966595 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 11 00:26:02.966603 kernel: cpuidle: using governor menu Sep 11 00:26:02.966610 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 11 00:26:02.966618 kernel: dca service started, version 1.12.1 Sep 11 00:26:02.966627 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff] Sep 11 00:26:02.966634 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff] Sep 11 00:26:02.966642 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 11 00:26:02.966649 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 11 00:26:02.966657 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 11 00:26:02.966665 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 11 00:26:02.966672 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 11 00:26:02.966680 kernel: ACPI: Added _OSI(Module Device)
Sep 11 00:26:02.966689 kernel: ACPI: Added _OSI(Processor Device)
Sep 11 00:26:02.966697 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 11 00:26:02.966704 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 11 00:26:02.966712 kernel: ACPI: Interpreter enabled
Sep 11 00:26:02.966720 kernel: ACPI: PM: (supports S0 S5)
Sep 11 00:26:02.966727 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 11 00:26:02.966735 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 11 00:26:02.966742 kernel: PCI: Ignoring E820 reservations for host bridge windows
Sep 11 00:26:02.966750 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Sep 11 00:26:02.966758 kernel: iommu: Default domain type: Translated
Sep 11 00:26:02.966767 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 11 00:26:02.966775 kernel: efivars: Registered efivars operations
Sep 11 00:26:02.966782 kernel: PCI: Using ACPI for IRQ routing
Sep 11 00:26:02.966789 kernel: PCI: System does not support PCI
Sep 11 00:26:02.966797 kernel: vgaarb: loaded
Sep 11 00:26:02.966804 kernel: clocksource: Switched to clocksource tsc-early
Sep 11 00:26:02.966812 kernel: VFS: Disk quotas dquot_6.6.0
Sep 11 00:26:02.966820 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 11 00:26:02.966827 kernel: pnp: PnP ACPI init
Sep 11 00:26:02.966836 kernel: pnp: PnP ACPI: found 3 devices
Sep 11 00:26:02.966844 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 11 00:26:02.966852 kernel: NET: Registered PF_INET protocol family
Sep 11 00:26:02.966859 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 11 00:26:02.966867 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Sep 11 00:26:02.966875 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 11 00:26:02.966882 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 11 00:26:02.966890 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Sep 11 00:26:02.966899 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Sep 11 00:26:02.966906 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 11 00:26:02.966914 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 11 00:26:02.966922 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 11 00:26:02.966929 kernel: NET: Registered PF_XDP protocol family
Sep 11 00:26:02.966937 kernel: PCI: CLS 0 bytes, default 64
Sep 11 00:26:02.966944 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Sep 11 00:26:02.966952 kernel: software IO TLB: mapped [mem 0x000000003a9d3000-0x000000003e9d3000] (64MB)
Sep 11 00:26:02.966960 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer
Sep 11 00:26:02.966969 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules
Sep 11 00:26:02.966976 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns
Sep 11 00:26:02.966984 kernel: clocksource: Switched to clocksource tsc
Sep 11 00:26:02.966992 kernel: Initialise system trusted keyrings
Sep 11 00:26:02.966999 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Sep 11 00:26:02.967007 kernel: Key type asymmetric registered
Sep 11 00:26:02.967014 kernel: Asymmetric key parser 'x509' registered
Sep 11 00:26:02.967022 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 11 00:26:02.967030 kernel: io scheduler mq-deadline registered
Sep 11 00:26:02.967039 kernel: io scheduler kyber registered
Sep 11 00:26:02.967046 kernel: io scheduler bfq registered
Sep 11 00:26:02.967054 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 11 00:26:02.967062 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 11 00:26:02.967069 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 11 00:26:02.967077 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Sep 11 00:26:02.967086 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A
Sep 11 00:26:02.967095 kernel: i8042: PNP: No PS/2 controller found.
Sep 11 00:26:02.967218 kernel: rtc_cmos 00:02: registered as rtc0
Sep 11 00:26:02.967305 kernel: rtc_cmos 00:02: setting system clock to 2025-09-11T00:26:02 UTC (1757550362)
Sep 11 00:26:02.967372 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Sep 11 00:26:02.967382 kernel: intel_pstate: Intel P-state driver initializing
Sep 11 00:26:02.967391 kernel: efifb: probing for efifb
Sep 11 00:26:02.967400 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Sep 11 00:26:02.967409 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Sep 11 00:26:02.967418 kernel: efifb: scrolling: redraw
Sep 11 00:26:02.967426 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 11 00:26:02.967437 kernel: Console: switching to colour frame buffer device 128x48
Sep 11 00:26:02.967445 kernel: fb0: EFI VGA frame buffer device
Sep 11 00:26:02.967453 kernel: pstore: Using crash dump compression: deflate
Sep 11 00:26:02.967460 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 11 00:26:02.967468 kernel: NET: Registered PF_INET6 protocol family
Sep 11 00:26:02.967475 kernel: Segment Routing with IPv6
Sep 11 00:26:02.967482 kernel: In-situ OAM (IOAM) with IPv6
Sep 11 00:26:02.967490 kernel: NET: Registered PF_PACKET protocol family
Sep 11 00:26:02.967497 kernel: Key type dns_resolver registered
Sep 11 00:26:02.967506 kernel: IPI shorthand broadcast: enabled
Sep 11 00:26:02.967513 kernel: sched_clock: Marking stable (2667003191, 96492423)->(3081691494, -318195880)
Sep 11 00:26:02.967521 kernel: registered taskstats version 1
Sep 11 00:26:02.967528 kernel: Loading compiled-in X.509 certificates
Sep 11 00:26:02.967536 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.46-flatcar: 8138ce5002a1b572fd22b23ac238f29bab3f249f'
Sep 11 00:26:02.967543 kernel: Demotion targets for Node 0: null
Sep 11 00:26:02.967550 kernel: Key type .fscrypt registered
Sep 11 00:26:02.967558 kernel: Key type fscrypt-provisioning registered
Sep 11 00:26:02.967565 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 11 00:26:02.967574 kernel: ima: Allocated hash algorithm: sha1
Sep 11 00:26:02.967581 kernel: ima: No architecture policies found
Sep 11 00:26:02.967589 kernel: clk: Disabling unused clocks
Sep 11 00:26:02.967596 kernel: Warning: unable to open an initial console.
Sep 11 00:26:02.967603 kernel: Freeing unused kernel image (initmem) memory: 53832K
Sep 11 00:26:02.967611 kernel: Write protecting the kernel read-only data: 24576k
Sep 11 00:26:02.967618 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K
Sep 11 00:26:02.967625 kernel: Run /init as init process
Sep 11 00:26:02.967632 kernel: with arguments:
Sep 11 00:26:02.967641 kernel: /init
Sep 11 00:26:02.967648 kernel: with environment:
Sep 11 00:26:02.967655 kernel: HOME=/
Sep 11 00:26:02.967662 kernel: TERM=linux
Sep 11 00:26:02.967670 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 11 00:26:02.967678 systemd[1]: Successfully made /usr/ read-only.
Sep 11 00:26:02.967689 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 11 00:26:02.967699 systemd[1]: Detected virtualization microsoft.
Sep 11 00:26:02.967706 systemd[1]: Detected architecture x86-64.
Sep 11 00:26:02.967714 systemd[1]: Running in initrd.
Sep 11 00:26:02.967721 systemd[1]: No hostname configured, using default hostname.
Sep 11 00:26:02.967729 systemd[1]: Hostname set to .
Sep 11 00:26:02.967737 systemd[1]: Initializing machine ID from random generator.
Sep 11 00:26:02.967745 systemd[1]: Queued start job for default target initrd.target.
Sep 11 00:26:02.967752 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 11 00:26:02.967760 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 11 00:26:02.967770 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 11 00:26:02.967778 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 11 00:26:02.967786 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 11 00:26:02.967795 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 11 00:26:02.967803 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 11 00:26:02.967811 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 11 00:26:02.967821 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 11 00:26:02.967829 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 11 00:26:02.967836 systemd[1]: Reached target paths.target - Path Units.
Sep 11 00:26:02.967844 systemd[1]: Reached target slices.target - Slice Units.
Sep 11 00:26:02.967852 systemd[1]: Reached target swap.target - Swaps.
Sep 11 00:26:02.967860 systemd[1]: Reached target timers.target - Timer Units.
Sep 11 00:26:02.967868 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 11 00:26:02.967875 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 11 00:26:02.967883 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 11 00:26:02.967893 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 11 00:26:02.967901 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 11 00:26:02.967908 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 11 00:26:02.967916 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 11 00:26:02.967923 systemd[1]: Reached target sockets.target - Socket Units.
Sep 11 00:26:02.967930 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 11 00:26:02.967938 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 11 00:26:02.967946 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 11 00:26:02.967956 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 11 00:26:02.967965 systemd[1]: Starting systemd-fsck-usr.service...
Sep 11 00:26:02.967973 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 11 00:26:02.967990 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 11 00:26:02.968000 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 00:26:02.968008 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 11 00:26:02.968019 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 11 00:26:02.968027 systemd[1]: Finished systemd-fsck-usr.service.
Sep 11 00:26:02.968035 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 11 00:26:02.968057 systemd-journald[205]: Collecting audit messages is disabled.
Sep 11 00:26:02.968079 systemd-journald[205]: Journal started
Sep 11 00:26:02.968100 systemd-journald[205]: Runtime Journal (/run/log/journal/bdf7854d9d664f9c9b27bbe114744b43) is 8M, max 158.9M, 150.9M free.
Sep 11 00:26:02.972246 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 11 00:26:02.975336 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 11 00:26:02.976015 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 11 00:26:02.980338 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 11 00:26:02.987400 systemd-modules-load[206]: Inserted module 'overlay'
Sep 11 00:26:02.989570 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 00:26:02.994757 systemd-tmpfiles[219]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 11 00:26:02.998705 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 11 00:26:03.000838 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 11 00:26:03.009788 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 11 00:26:03.024253 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 11 00:26:03.025898 systemd-modules-load[206]: Inserted module 'br_netfilter' Sep 11 00:26:03.031179 kernel: Bridge firewalling registered Sep 11 00:26:03.026618 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 11 00:26:03.028314 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 11 00:26:03.039563 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 11 00:26:03.045412 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 11 00:26:03.048387 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 11 00:26:03.057340 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 11 00:26:03.069223 dracut-cmdline[243]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=24178014e7d1a618b6c727661dc98ca9324f7f5aeefcaa5f4996d4d839e6e63a Sep 11 00:26:03.096913 systemd-resolved[244]: Positive Trust Anchors: Sep 11 00:26:03.096926 systemd-resolved[244]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 11 00:26:03.096953 systemd-resolved[244]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 11 00:26:03.099421 systemd-resolved[244]: Defaulting to hostname 'linux'. Sep 11 00:26:03.100098 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 11 00:26:03.102744 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 11 00:26:03.150252 kernel: SCSI subsystem initialized Sep 11 00:26:03.156245 kernel: Loading iSCSI transport class v2.0-870. Sep 11 00:26:03.164252 kernel: iscsi: registered transport (tcp) Sep 11 00:26:03.180639 kernel: iscsi: registered transport (qla4xxx) Sep 11 00:26:03.180676 kernel: QLogic iSCSI HBA Driver Sep 11 00:26:03.191846 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 11 00:26:03.200894 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 11 00:26:03.204921 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 11 00:26:03.230959 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 11 00:26:03.232988 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Sep 11 00:26:03.272254 kernel: raid6: avx512x4 gen() 47094 MB/s Sep 11 00:26:03.289242 kernel: raid6: avx512x2 gen() 45744 MB/s Sep 11 00:26:03.306240 kernel: raid6: avx512x1 gen() 30382 MB/s Sep 11 00:26:03.324244 kernel: raid6: avx2x4 gen() 42218 MB/s Sep 11 00:26:03.342241 kernel: raid6: avx2x2 gen() 43327 MB/s Sep 11 00:26:03.359646 kernel: raid6: avx2x1 gen() 30290 MB/s Sep 11 00:26:03.359675 kernel: raid6: using algorithm avx512x4 gen() 47094 MB/s Sep 11 00:26:03.377621 kernel: raid6: .... xor() 8182 MB/s, rmw enabled Sep 11 00:26:03.377640 kernel: raid6: using avx512x2 recovery algorithm Sep 11 00:26:03.394255 kernel: xor: automatically using best checksumming function avx Sep 11 00:26:03.499252 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 11 00:26:03.503214 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 11 00:26:03.505190 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 11 00:26:03.526417 systemd-udevd[453]: Using default interface naming scheme 'v255'. Sep 11 00:26:03.530064 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 11 00:26:03.541866 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 11 00:26:03.557443 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation Sep 11 00:26:03.573942 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 11 00:26:03.574982 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 11 00:26:03.606321 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 11 00:26:03.615962 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 11 00:26:03.651245 kernel: cryptd: max_cpu_qlen set to 1000 Sep 11 00:26:03.669066 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Sep 11 00:26:03.670801 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 00:26:03.679197 kernel: AES CTR mode by8 optimization enabled Sep 11 00:26:03.675041 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 11 00:26:03.682379 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 11 00:26:03.694257 kernel: hv_vmbus: Vmbus version:5.3 Sep 11 00:26:03.697425 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 11 00:26:03.700330 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 00:26:03.708473 kernel: hv_vmbus: registering driver hyperv_keyboard Sep 11 00:26:03.709674 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 11 00:26:03.718866 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 11 00:26:03.718896 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 11 00:26:03.727601 kernel: hv_vmbus: registering driver hv_storvsc Sep 11 00:26:03.727639 kernel: PTP clock support registered Sep 11 00:26:03.734255 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Sep 11 00:26:03.737294 kernel: scsi host0: storvsc_host_t Sep 11 00:26:03.739636 kernel: hv_utils: Registering HyperV Utility Driver Sep 11 00:26:03.739673 kernel: hv_vmbus: registering driver hv_utils Sep 11 00:26:03.745198 kernel: hv_vmbus: registering driver hv_netvsc Sep 11 00:26:03.745258 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Sep 11 00:26:03.753249 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 11 00:26:03.755959 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 11 00:26:03.760486 kernel: hv_vmbus: registering driver hv_pci Sep 11 00:26:03.762263 kernel: hv_vmbus: registering driver hid_hyperv Sep 11 00:26:03.771537 kernel: hv_utils: Shutdown IC version 3.2 Sep 11 00:26:03.771568 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Sep 11 00:26:03.771651 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Sep 11 00:26:03.771806 kernel: hv_utils: Heartbeat IC version 3.0 Sep 11 00:26:03.775323 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Sep 11 00:26:03.777247 kernel: hv_utils: TimeSync IC version 4.0 Sep 11 00:26:04.172451 systemd-resolved[244]: Clock change detected. Flushing caches. Sep 11 00:26:04.177476 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d459561 (unnamed net_device) (uninitialized): VF slot 1 added Sep 11 00:26:04.182948 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Sep 11 00:26:04.183088 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Sep 11 00:26:04.197974 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Sep 11 00:26:04.211308 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Sep 11 00:26:04.211348 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Sep 11 00:26:04.213155 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Sep 11 00:26:04.213277 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 11 00:26:04.215398 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Sep 11 00:26:04.223404 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Sep 11 00:26:04.223518 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Sep 11 00:26:04.233404 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#88 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Sep 11 00:26:04.240606 kernel: nvme 
nvme0: pci function c05b:00:00.0 Sep 11 00:26:04.240785 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Sep 11 00:26:04.253407 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#125 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Sep 11 00:26:04.397411 kernel: nvme nvme0: 2/0/0 default/read/poll queues Sep 11 00:26:04.402403 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 11 00:26:04.648404 kernel: nvme nvme0: using unchecked data buffer Sep 11 00:26:04.838893 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Sep 11 00:26:04.849985 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Sep 11 00:26:04.860169 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Sep 11 00:26:04.875801 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A. Sep 11 00:26:04.876240 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Sep 11 00:26:04.882320 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 11 00:26:04.894844 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 11 00:26:04.898914 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 11 00:26:04.899154 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 11 00:26:04.899401 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 11 00:26:04.901498 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 11 00:26:04.919423 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 11 00:26:04.920414 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Sep 11 00:26:05.208395 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Sep 11 00:26:05.211920 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Sep 11 00:26:05.212064 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Sep 11 00:26:05.213791 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Sep 11 00:26:05.217426 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Sep 11 00:26:05.220514 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Sep 11 00:26:05.224533 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Sep 11 00:26:05.226472 kernel: pci 7870:00:00.0: enabling Extended Tags Sep 11 00:26:05.241348 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Sep 11 00:26:05.241530 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Sep 11 00:26:05.241684 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Sep 11 00:26:05.246286 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Sep 11 00:26:05.255394 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Sep 11 00:26:05.258733 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d459561 eth0: VF registering: eth1 Sep 11 00:26:05.258882 kernel: mana 7870:00:00.0 eth1: joined to eth0 Sep 11 00:26:05.262405 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Sep 11 00:26:05.930796 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 11 00:26:05.931576 disk-uuid[673]: The operation has completed successfully. Sep 11 00:26:05.972349 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 11 00:26:05.972435 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 11 00:26:06.004235 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Sep 11 00:26:06.028201 sh[717]: Success Sep 11 00:26:06.055871 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 11 00:26:06.055906 kernel: device-mapper: uevent: version 1.0.3 Sep 11 00:26:06.055924 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 11 00:26:06.064399 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 11 00:26:06.289666 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 11 00:26:06.296050 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 11 00:26:06.308249 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 11 00:26:06.318531 kernel: BTRFS: device fsid f1eb5eb7-34cc-49c0-9f2b-e603bd772d66 devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (730) Sep 11 00:26:06.318735 kernel: BTRFS info (device dm-0): first mount of filesystem f1eb5eb7-34cc-49c0-9f2b-e603bd772d66 Sep 11 00:26:06.320080 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 11 00:26:06.570049 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 11 00:26:06.570128 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 11 00:26:06.571849 kernel: BTRFS info (device dm-0): enabling free space tree Sep 11 00:26:06.600083 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 11 00:26:06.603777 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 11 00:26:06.605124 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 11 00:26:06.605802 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 11 00:26:06.614833 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Sep 11 00:26:06.635426 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:12) scanned by mount (765) Sep 11 00:26:06.635455 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e Sep 11 00:26:06.637786 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 11 00:26:06.657302 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 11 00:26:06.657339 kernel: BTRFS info (device nvme0n1p6): turning on async discard Sep 11 00:26:06.657349 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 11 00:26:06.662401 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e Sep 11 00:26:06.663069 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 11 00:26:06.669494 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 11 00:26:06.684864 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 11 00:26:06.689116 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 11 00:26:06.716983 systemd-networkd[899]: lo: Link UP Sep 11 00:26:06.721840 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Sep 11 00:26:06.716990 systemd-networkd[899]: lo: Gained carrier Sep 11 00:26:06.728454 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Sep 11 00:26:06.728573 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d459561 eth0: Data path switched to VF: enP30832s1 Sep 11 00:26:06.717890 systemd-networkd[899]: Enumeration completed Sep 11 00:26:06.718206 systemd-networkd[899]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 11 00:26:06.718209 systemd-networkd[899]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 11 00:26:06.718510 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 11 00:26:06.723763 systemd[1]: Reached target network.target - Network. Sep 11 00:26:06.727552 systemd-networkd[899]: enP30832s1: Link UP Sep 11 00:26:06.727612 systemd-networkd[899]: eth0: Link UP Sep 11 00:26:06.727703 systemd-networkd[899]: eth0: Gained carrier Sep 11 00:26:06.727713 systemd-networkd[899]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 11 00:26:06.732989 systemd-networkd[899]: enP30832s1: Gained carrier Sep 11 00:26:06.741426 systemd-networkd[899]: eth0: DHCPv4 address 10.200.8.4/24, gateway 10.200.8.1 acquired from 168.63.129.16 Sep 11 00:26:07.379914 ignition[868]: Ignition 2.21.0 Sep 11 00:26:07.379924 ignition[868]: Stage: fetch-offline Sep 11 00:26:07.382151 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 11 00:26:07.380011 ignition[868]: no configs at "/usr/lib/ignition/base.d" Sep 11 00:26:07.385320 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Sep 11 00:26:07.380017 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 11 00:26:07.380100 ignition[868]: parsed url from cmdline: "" Sep 11 00:26:07.380104 ignition[868]: no config URL provided Sep 11 00:26:07.380108 ignition[868]: reading system config file "/usr/lib/ignition/user.ign" Sep 11 00:26:07.380113 ignition[868]: no config at "/usr/lib/ignition/user.ign" Sep 11 00:26:07.380117 ignition[868]: failed to fetch config: resource requires networking Sep 11 00:26:07.381087 ignition[868]: Ignition finished successfully Sep 11 00:26:07.406957 ignition[909]: Ignition 2.21.0 Sep 11 00:26:07.406962 ignition[909]: Stage: fetch Sep 11 00:26:07.407130 ignition[909]: no configs at "/usr/lib/ignition/base.d" Sep 11 00:26:07.407137 ignition[909]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 11 00:26:07.407196 ignition[909]: parsed url from cmdline: "" Sep 11 00:26:07.407198 ignition[909]: no config URL provided Sep 11 00:26:07.407202 ignition[909]: reading system config file "/usr/lib/ignition/user.ign" Sep 11 00:26:07.407207 ignition[909]: no config at "/usr/lib/ignition/user.ign" Sep 11 00:26:07.407244 ignition[909]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Sep 11 00:26:07.454062 ignition[909]: GET result: OK Sep 11 00:26:07.454444 ignition[909]: config has been read from IMDS userdata Sep 11 00:26:07.454480 ignition[909]: parsing config with SHA512: f68d1eee01c1496b3f60b0d5e62c30674e53a7185ae66b5b362fc47f94e1a9066f17b4511cc6c55f9c82e6aac7370ff935a67d394bd2910ac48c7a2f3f5faf58 Sep 11 00:26:07.459908 unknown[909]: fetched base config from "system" Sep 11 00:26:07.459917 unknown[909]: fetched base config from "system" Sep 11 00:26:07.460198 ignition[909]: fetch: fetch complete Sep 11 00:26:07.459921 unknown[909]: fetched user config from "azure" Sep 11 00:26:07.460202 ignition[909]: fetch: fetch passed Sep 11 00:26:07.461902 systemd[1]: Finished 
ignition-fetch.service - Ignition (fetch). Sep 11 00:26:07.460232 ignition[909]: Ignition finished successfully Sep 11 00:26:07.464321 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 11 00:26:07.492783 ignition[915]: Ignition 2.21.0 Sep 11 00:26:07.492791 ignition[915]: Stage: kargs Sep 11 00:26:07.493176 ignition[915]: no configs at "/usr/lib/ignition/base.d" Sep 11 00:26:07.497776 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 11 00:26:07.493184 ignition[915]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 11 00:26:07.502312 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 11 00:26:07.495809 ignition[915]: kargs: kargs passed Sep 11 00:26:07.495858 ignition[915]: Ignition finished successfully Sep 11 00:26:07.520125 ignition[922]: Ignition 2.21.0 Sep 11 00:26:07.520134 ignition[922]: Stage: disks Sep 11 00:26:07.520300 ignition[922]: no configs at "/usr/lib/ignition/base.d" Sep 11 00:26:07.522803 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 11 00:26:07.520307 ignition[922]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 11 00:26:07.522031 ignition[922]: disks: disks passed Sep 11 00:26:07.522063 ignition[922]: Ignition finished successfully Sep 11 00:26:07.529131 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 11 00:26:07.531465 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 11 00:26:07.533924 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 11 00:26:07.536206 systemd[1]: Reached target sysinit.target - System Initialization. Sep 11 00:26:07.537000 systemd[1]: Reached target basic.target - Basic System. Sep 11 00:26:07.541954 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Sep 11 00:26:07.602564 systemd-fsck[931]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Sep 11 00:26:07.609641 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 11 00:26:07.615254 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 11 00:26:07.842396 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 6a9ce0af-81d0-4628-9791-e47488ed2744 r/w with ordered data mode. Quota mode: none. Sep 11 00:26:07.842833 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 11 00:26:07.843501 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 11 00:26:07.861885 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 11 00:26:07.866127 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 11 00:26:07.878273 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 11 00:26:07.885503 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 11 00:26:07.890779 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:12) scanned by mount (940) Sep 11 00:26:07.890810 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e Sep 11 00:26:07.887855 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 11 00:26:07.894658 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 11 00:26:07.897227 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 11 00:26:07.899316 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Sep 11 00:26:07.909481 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 11 00:26:07.910594 kernel: BTRFS info (device nvme0n1p6): turning on async discard Sep 11 00:26:07.910605 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 11 00:26:07.907412 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 11 00:26:08.336710 coreos-metadata[942]: Sep 11 00:26:08.336 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 11 00:26:08.343326 coreos-metadata[942]: Sep 11 00:26:08.343 INFO Fetch successful Sep 11 00:26:08.343326 coreos-metadata[942]: Sep 11 00:26:08.343 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Sep 11 00:26:08.352159 coreos-metadata[942]: Sep 11 00:26:08.352 INFO Fetch successful Sep 11 00:26:08.368495 coreos-metadata[942]: Sep 11 00:26:08.368 INFO wrote hostname ci-4372.1.0-n-3f8a739b41 to /sysroot/etc/hostname Sep 11 00:26:08.371746 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 11 00:26:08.482321 initrd-setup-root[971]: cut: /sysroot/etc/passwd: No such file or directory Sep 11 00:26:08.544932 initrd-setup-root[978]: cut: /sysroot/etc/group: No such file or directory Sep 11 00:26:08.549095 initrd-setup-root[985]: cut: /sysroot/etc/shadow: No such file or directory Sep 11 00:26:08.551513 systemd-networkd[899]: eth0: Gained IPv6LL Sep 11 00:26:08.554299 initrd-setup-root[992]: cut: /sysroot/etc/gshadow: No such file or directory Sep 11 00:26:09.320660 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 11 00:26:09.324191 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 11 00:26:09.327532 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 11 00:26:09.338373 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Sep 11 00:26:09.342854 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e Sep 11 00:26:09.356400 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 11 00:26:09.363449 ignition[1060]: INFO : Ignition 2.21.0 Sep 11 00:26:09.363449 ignition[1060]: INFO : Stage: mount Sep 11 00:26:09.367491 ignition[1060]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 11 00:26:09.367491 ignition[1060]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 11 00:26:09.367491 ignition[1060]: INFO : mount: mount passed Sep 11 00:26:09.367491 ignition[1060]: INFO : Ignition finished successfully Sep 11 00:26:09.366905 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 11 00:26:09.374460 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 11 00:26:09.387758 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 11 00:26:09.405397 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:12) scanned by mount (1072) Sep 11 00:26:09.405444 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e Sep 11 00:26:09.407402 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 11 00:26:09.410599 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 11 00:26:09.410695 kernel: BTRFS info (device nvme0n1p6): turning on async discard Sep 11 00:26:09.411414 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 11 00:26:09.413082 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 11 00:26:09.438609 ignition[1089]: INFO : Ignition 2.21.0 Sep 11 00:26:09.438609 ignition[1089]: INFO : Stage: files Sep 11 00:26:09.440718 ignition[1089]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 11 00:26:09.440718 ignition[1089]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 11 00:26:09.440718 ignition[1089]: DEBUG : files: compiled without relabeling support, skipping Sep 11 00:26:09.466244 ignition[1089]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 11 00:26:09.466244 ignition[1089]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 11 00:26:09.520722 ignition[1089]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 11 00:26:09.523025 ignition[1089]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 11 00:26:09.523025 ignition[1089]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 11 00:26:09.523025 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 11 00:26:09.523025 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Sep 11 00:26:09.520991 unknown[1089]: wrote ssh authorized keys file for user: core Sep 11 00:26:09.585849 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 11 00:26:09.626552 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 11 00:26:09.630474 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 11 00:26:09.630474 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Sep 11 00:26:09.630474 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 11 00:26:09.630474 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 11 00:26:09.630474 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 11 00:26:09.630474 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 11 00:26:09.630474 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 11 00:26:09.630474 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 11 00:26:09.655435 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 11 00:26:09.655435 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 11 00:26:09.655435 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 11 00:26:09.655435 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 11 00:26:09.655435 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 11 00:26:09.655435 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Sep 11 00:26:10.178112 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 11 00:26:11.459224 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 11 00:26:11.459224 ignition[1089]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 11 00:26:11.486360 ignition[1089]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 11 00:26:11.494682 ignition[1089]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 11 00:26:11.494682 ignition[1089]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 11 00:26:11.503079 ignition[1089]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Sep 11 00:26:11.503079 ignition[1089]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Sep 11 00:26:11.503079 ignition[1089]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 11 00:26:11.503079 ignition[1089]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 11 00:26:11.503079 ignition[1089]: INFO : files: files passed Sep 11 00:26:11.503079 ignition[1089]: INFO : Ignition finished successfully Sep 11 00:26:11.496401 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 11 00:26:11.499530 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 11 00:26:11.506490 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Sep 11 00:26:11.529722 initrd-setup-root-after-ignition[1118]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 11 00:26:11.529722 initrd-setup-root-after-ignition[1118]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 11 00:26:11.520651 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 11 00:26:11.541437 initrd-setup-root-after-ignition[1122]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 11 00:26:11.520727 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 11 00:26:11.530201 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 11 00:26:11.532189 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 11 00:26:11.534489 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 11 00:26:11.559517 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 11 00:26:11.560547 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 11 00:26:11.563327 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 11 00:26:11.571873 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 11 00:26:11.572103 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 11 00:26:11.572687 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 11 00:26:11.595296 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 11 00:26:11.597484 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 11 00:26:11.612043 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 11 00:26:11.612530 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 11 00:26:11.612745 systemd[1]: Stopped target timers.target - Timer Units.
Sep 11 00:26:11.618499 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 11 00:26:11.618589 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 11 00:26:11.621698 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 11 00:26:11.625543 systemd[1]: Stopped target basic.target - Basic System.
Sep 11 00:26:11.628520 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 11 00:26:11.630793 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 11 00:26:11.633356 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 11 00:26:11.636145 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 11 00:26:11.636924 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 11 00:26:11.637190 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 11 00:26:11.637481 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 11 00:26:11.637750 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 11 00:26:11.638012 systemd[1]: Stopped target swap.target - Swaps.
Sep 11 00:26:11.638264 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 11 00:26:11.638354 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 11 00:26:11.646532 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 11 00:26:11.647452 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 11 00:26:11.647678 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 11 00:26:11.648336 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 11 00:26:11.653507 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 11 00:26:11.653628 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 11 00:26:11.654055 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 11 00:26:11.654168 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 11 00:26:11.654433 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 11 00:26:11.654526 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 11 00:26:11.654968 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 11 00:26:11.655053 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 11 00:26:11.661392 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 11 00:26:11.687470 ignition[1143]: INFO : Ignition 2.21.0
Sep 11 00:26:11.687470 ignition[1143]: INFO : Stage: umount
Sep 11 00:26:11.697701 ignition[1143]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 11 00:26:11.697701 ignition[1143]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 11 00:26:11.697701 ignition[1143]: INFO : umount: umount passed
Sep 11 00:26:11.697701 ignition[1143]: INFO : Ignition finished successfully
Sep 11 00:26:11.694443 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 11 00:26:11.694611 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 11 00:26:11.698375 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 11 00:26:11.703849 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 11 00:26:11.703981 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 11 00:26:11.704189 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 11 00:26:11.704268 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 11 00:26:11.712599 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 11 00:26:11.712668 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 11 00:26:11.717755 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 11 00:26:11.717847 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 11 00:26:11.730303 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 11 00:26:11.730876 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 11 00:26:11.731076 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 11 00:26:11.731356 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 11 00:26:11.731421 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 11 00:26:11.731513 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 11 00:26:11.731535 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 11 00:26:11.731640 systemd[1]: Stopped target network.target - Network.
Sep 11 00:26:11.731783 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 11 00:26:11.731806 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 11 00:26:11.738461 systemd[1]: Stopped target paths.target - Path Units.
Sep 11 00:26:11.740476 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 11 00:26:11.744938 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 11 00:26:11.752433 systemd[1]: Stopped target slices.target - Slice Units.
Sep 11 00:26:11.754881 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 11 00:26:11.765540 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 11 00:26:11.765579 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 11 00:26:11.769452 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 11 00:26:11.769485 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 11 00:26:11.773433 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 11 00:26:11.773477 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 11 00:26:11.777441 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 11 00:26:11.777476 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 11 00:26:11.779588 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 11 00:26:11.779791 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 11 00:26:11.787701 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 11 00:26:11.787815 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 11 00:26:11.792650 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 11 00:26:11.793849 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 11 00:26:11.793886 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 11 00:26:11.799699 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 11 00:26:11.799884 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 11 00:26:11.799963 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 11 00:26:11.805163 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 11 00:26:11.805450 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 11 00:26:11.818040 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 11 00:26:11.818082 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 11 00:26:11.821973 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 11 00:26:11.823769 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 11 00:26:11.823820 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 11 00:26:11.825205 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 11 00:26:11.825242 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 11 00:26:11.826348 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 11 00:26:11.826396 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 11 00:26:11.847474 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d459561 eth0: Data path switched from VF: enP30832s1
Sep 11 00:26:11.847621 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Sep 11 00:26:11.826620 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 11 00:26:11.827380 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 11 00:26:11.846487 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 11 00:26:11.846624 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 11 00:26:11.851023 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 11 00:26:11.851083 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 11 00:26:11.853910 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 11 00:26:11.853954 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 11 00:26:11.855268 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 11 00:26:11.855297 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 11 00:26:11.861480 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 11 00:26:11.862880 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 11 00:26:11.877434 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 11 00:26:11.877485 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 11 00:26:11.879612 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 11 00:26:11.879654 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 11 00:26:11.886171 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 11 00:26:11.889519 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 11 00:26:11.889575 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 11 00:26:11.890602 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 11 00:26:11.890641 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 11 00:26:11.890878 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 11 00:26:11.890918 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 00:26:11.898055 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 11 00:26:11.898126 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 11 00:26:12.097229 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 11 00:26:12.097315 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 11 00:26:12.101649 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 11 00:26:12.105494 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 11 00:26:12.105551 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 11 00:26:12.110202 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 11 00:26:12.124873 systemd[1]: Switching root.
Sep 11 00:26:12.205530 systemd-journald[205]: Journal stopped
Sep 11 00:26:15.194024 systemd-journald[205]: Received SIGTERM from PID 1 (systemd).
Sep 11 00:26:15.194052 kernel: SELinux: policy capability network_peer_controls=1
Sep 11 00:26:15.194063 kernel: SELinux: policy capability open_perms=1
Sep 11 00:26:15.194072 kernel: SELinux: policy capability extended_socket_class=1
Sep 11 00:26:15.194079 kernel: SELinux: policy capability always_check_network=0
Sep 11 00:26:15.194087 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 11 00:26:15.194097 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 11 00:26:15.194105 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 11 00:26:15.194112 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 11 00:26:15.194120 kernel: SELinux: policy capability userspace_initial_context=0
Sep 11 00:26:15.194128 kernel: audit: type=1403 audit(1757550373.173:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 11 00:26:15.194137 systemd[1]: Successfully loaded SELinux policy in 133.813ms.
Sep 11 00:26:15.194146 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.114ms.
Sep 11 00:26:15.194157 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 11 00:26:15.194167 systemd[1]: Detected virtualization microsoft.
Sep 11 00:26:15.194175 systemd[1]: Detected architecture x86-64.
Sep 11 00:26:15.194183 systemd[1]: Detected first boot.
Sep 11 00:26:15.194193 systemd[1]: Hostname set to .
Sep 11 00:26:15.194203 systemd[1]: Initializing machine ID from random generator.
Sep 11 00:26:15.194212 zram_generator::config[1187]: No configuration found.
Sep 11 00:26:15.194221 kernel: Guest personality initialized and is inactive
Sep 11 00:26:15.194229 kernel: VMCI host device registered (name=vmci, major=10, minor=124)
Sep 11 00:26:15.194237 kernel: Initialized host personality
Sep 11 00:26:15.194244 kernel: NET: Registered PF_VSOCK protocol family
Sep 11 00:26:15.194253 systemd[1]: Populated /etc with preset unit settings.
Sep 11 00:26:15.194263 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 11 00:26:15.194272 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 11 00:26:15.194280 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 11 00:26:15.194288 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 11 00:26:15.194297 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 11 00:26:15.194306 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 11 00:26:15.194314 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 11 00:26:15.194324 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 11 00:26:15.194333 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 11 00:26:15.194342 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 11 00:26:15.194350 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 11 00:26:15.194359 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 11 00:26:15.194367 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 11 00:26:15.194377 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 11 00:26:15.194399 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 11 00:26:15.194411 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 11 00:26:15.194421 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 11 00:26:15.194431 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 11 00:26:15.194440 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 11 00:26:15.194449 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 11 00:26:15.194457 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 11 00:26:15.194466 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 11 00:26:15.194475 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 11 00:26:15.194485 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 11 00:26:15.194494 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 11 00:26:15.194503 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 11 00:26:15.194512 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 11 00:26:15.194521 systemd[1]: Reached target slices.target - Slice Units.
Sep 11 00:26:15.194530 systemd[1]: Reached target swap.target - Swaps.
Sep 11 00:26:15.194538 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 11 00:26:15.194547 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 11 00:26:15.194558 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 11 00:26:15.194567 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 11 00:26:15.194576 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 11 00:26:15.194586 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 11 00:26:15.194595 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 11 00:26:15.194605 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 11 00:26:15.194614 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 11 00:26:15.194623 systemd[1]: Mounting media.mount - External Media Directory...
Sep 11 00:26:15.194632 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 11 00:26:15.194641 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 11 00:26:15.194650 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 11 00:26:15.194658 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 11 00:26:15.194668 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 11 00:26:15.194678 systemd[1]: Reached target machines.target - Containers.
Sep 11 00:26:15.194687 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 11 00:26:15.194696 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 11 00:26:15.194705 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 11 00:26:15.194714 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 11 00:26:15.194722 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 11 00:26:15.194731 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 11 00:26:15.194740 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 11 00:26:15.194750 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 11 00:26:15.194759 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 11 00:26:15.194768 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 11 00:26:15.194778 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 11 00:26:15.194787 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 11 00:26:15.194795 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 11 00:26:15.194804 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 11 00:26:15.194812 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 11 00:26:15.194821 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 11 00:26:15.194831 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 11 00:26:15.194840 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 11 00:26:15.194848 kernel: loop: module loaded
Sep 11 00:26:15.194856 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 11 00:26:15.194864 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 11 00:26:15.194873 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 11 00:26:15.194881 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 11 00:26:15.194890 systemd[1]: Stopped verity-setup.service.
Sep 11 00:26:15.194915 systemd-journald[1270]: Collecting audit messages is disabled.
Sep 11 00:26:15.194934 systemd-journald[1270]: Journal started
Sep 11 00:26:15.194955 systemd-journald[1270]: Runtime Journal (/run/log/journal/a39c37fc88664302b3711a5ab50f3f58) is 8M, max 158.9M, 150.9M free.
Sep 11 00:26:15.199751 kernel: fuse: init (API version 7.41)
Sep 11 00:26:14.831506 systemd[1]: Queued start job for default target multi-user.target.
Sep 11 00:26:14.838757 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Sep 11 00:26:14.839081 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 11 00:26:15.213409 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 11 00:26:15.216973 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 11 00:26:15.219048 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 11 00:26:15.220789 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 11 00:26:15.224542 systemd[1]: Mounted media.mount - External Media Directory.
Sep 11 00:26:15.227516 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 11 00:26:15.228834 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 11 00:26:15.231531 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 11 00:26:15.232859 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 11 00:26:15.236658 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 11 00:26:15.238362 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 11 00:26:15.238557 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 11 00:26:15.240681 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 11 00:26:15.241580 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 11 00:26:15.244623 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 11 00:26:15.244745 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 11 00:26:15.246399 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 11 00:26:15.246532 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 11 00:26:15.249704 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 11 00:26:15.249910 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 11 00:26:15.253710 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 11 00:26:15.255967 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 11 00:26:15.258341 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 11 00:26:15.267957 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 11 00:26:15.271557 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 11 00:26:15.278458 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 11 00:26:15.282097 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 11 00:26:15.282186 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 11 00:26:15.294037 kernel: ACPI: bus type drm_connector registered
Sep 11 00:26:15.289202 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 11 00:26:15.294511 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 11 00:26:15.297723 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 11 00:26:15.299552 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 11 00:26:15.308465 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 11 00:26:15.311459 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 11 00:26:15.313509 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 11 00:26:15.316923 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 11 00:26:15.318444 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 11 00:26:15.322636 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 11 00:26:15.327449 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 11 00:26:15.332826 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 11 00:26:15.332971 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 11 00:26:15.336858 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 11 00:26:15.338625 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 11 00:26:15.340340 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 11 00:26:15.342484 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 11 00:26:15.354860 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 11 00:26:15.363853 systemd-journald[1270]: Time spent on flushing to /var/log/journal/a39c37fc88664302b3711a5ab50f3f58 is 15.015ms for 989 entries.
Sep 11 00:26:15.363853 systemd-journald[1270]: System Journal (/var/log/journal/a39c37fc88664302b3711a5ab50f3f58) is 8M, max 2.6G, 2.6G free.
Sep 11 00:26:15.407904 systemd-journald[1270]: Received client request to flush runtime journal.
Sep 11 00:26:15.407940 kernel: loop0: detected capacity change from 0 to 113872
Sep 11 00:26:15.356970 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 11 00:26:15.361492 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 11 00:26:15.397504 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 11 00:26:15.408531 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 11 00:26:15.411548 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 11 00:26:15.422984 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 11 00:26:15.424913 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 11 00:26:15.462948 systemd-tmpfiles[1343]: ACLs are not supported, ignoring.
Sep 11 00:26:15.462962 systemd-tmpfiles[1343]: ACLs are not supported, ignoring.
Sep 11 00:26:15.466023 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 11 00:26:15.695398 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 11 00:26:15.725397 kernel: loop1: detected capacity change from 0 to 28504
Sep 11 00:26:15.840291 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 11 00:26:16.033795 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 11 00:26:16.037476 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 11 00:26:16.047418 kernel: loop2: detected capacity change from 0 to 146240
Sep 11 00:26:16.063330 systemd-udevd[1350]: Using default interface naming scheme 'v255'.
Sep 11 00:26:16.183610 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 11 00:26:16.190495 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 11 00:26:16.272522 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 11 00:26:16.295840 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 11 00:26:16.307400 kernel: mousedev: PS/2 mouse device common for all mice
Sep 11 00:26:16.327455 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#106 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Sep 11 00:26:16.344407 kernel: hv_vmbus: registering driver hyperv_fb
Sep 11 00:26:16.347971 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 11 00:26:16.366474 kernel: hv_vmbus: registering driver hv_balloon
Sep 11 00:26:16.368518 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Sep 11 00:26:16.372399 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Sep 11 00:26:16.374397 kernel: Console: switching to colour dummy device 80x25
Sep 11 00:26:16.380553 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Sep 11 00:26:16.380595 kernel: Console: switching to colour frame buffer device 128x48
Sep 11 00:26:16.638490 kernel: loop3: detected capacity change from 0 to 229808
Sep 11 00:26:16.640596 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 00:26:16.650056 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 11 00:26:16.650291 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 00:26:16.654487 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 00:26:16.676404 kernel: loop4: detected capacity change from 0 to 113872 Sep 11 00:26:16.683422 kernel: loop5: detected capacity change from 0 to 28504 Sep 11 00:26:16.692418 kernel: loop6: detected capacity change from 0 to 146240 Sep 11 00:26:16.701403 kernel: loop7: detected capacity change from 0 to 229808 Sep 11 00:26:16.711082 (sd-merge)[1428]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Sep 11 00:26:16.711458 (sd-merge)[1428]: Merged extensions into '/usr'. Sep 11 00:26:16.714284 systemd[1]: Reload requested from client PID 1326 ('systemd-sysext') (unit systemd-sysext.service)... Sep 11 00:26:16.714295 systemd[1]: Reloading... Sep 11 00:26:16.775422 zram_generator::config[1454]: No configuration found. Sep 11 00:26:16.826483 systemd-networkd[1358]: lo: Link UP Sep 11 00:26:16.826505 systemd-networkd[1358]: lo: Gained carrier Sep 11 00:26:16.827808 systemd-networkd[1358]: Enumeration completed Sep 11 00:26:16.828151 systemd-networkd[1358]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 11 00:26:16.828218 systemd-networkd[1358]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 11 00:26:16.830448 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Sep 11 00:26:16.833662 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Sep 11 00:26:16.833861 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d459561 eth0: Data path switched to VF: enP30832s1 Sep 11 00:26:16.834221 systemd-networkd[1358]: enP30832s1: Link UP Sep 11 00:26:16.834340 systemd-networkd[1358]: eth0: Link UP Sep 11 00:26:16.834403 systemd-networkd[1358]: eth0: Gained carrier Sep 11 00:26:16.834444 systemd-networkd[1358]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 11 00:26:16.839586 systemd-networkd[1358]: enP30832s1: Gained carrier Sep 11 00:26:16.853500 systemd-networkd[1358]: eth0: DHCPv4 address 10.200.8.4/24, gateway 10.200.8.1 acquired from 168.63.129.16 Sep 11 00:26:17.040400 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Sep 11 00:26:17.065105 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 11 00:26:17.171976 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Sep 11 00:26:17.173633 systemd[1]: Reloading finished in 459 ms. Sep 11 00:26:17.201725 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 11 00:26:17.202992 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 11 00:26:17.229260 systemd[1]: Starting ensure-sysext.service... Sep 11 00:26:17.231538 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 11 00:26:17.234565 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 11 00:26:17.238547 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 11 00:26:17.241598 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 11 00:26:17.250688 systemd[1]: Reload requested from client PID 1527 ('systemctl') (unit ensure-sysext.service)... Sep 11 00:26:17.250753 systemd[1]: Reloading... Sep 11 00:26:17.309453 zram_generator::config[1565]: No configuration found. Sep 11 00:26:17.370419 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Sep 11 00:26:17.440346 systemd-tmpfiles[1531]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 11 00:26:17.440378 systemd-tmpfiles[1531]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 11 00:26:17.440822 systemd-tmpfiles[1531]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 11 00:26:17.441051 systemd-tmpfiles[1531]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 11 00:26:17.441675 systemd-tmpfiles[1531]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 11 00:26:17.441924 systemd-tmpfiles[1531]: ACLs are not supported, ignoring. Sep 11 00:26:17.441997 systemd-tmpfiles[1531]: ACLs are not supported, ignoring. Sep 11 00:26:17.454420 systemd[1]: Reloading finished in 203 ms. Sep 11 00:26:17.459603 systemd-tmpfiles[1531]: Detected autofs mount point /boot during canonicalization of boot. Sep 11 00:26:17.459609 systemd-tmpfiles[1531]: Skipping /boot Sep 11 00:26:17.465067 systemd-tmpfiles[1531]: Detected autofs mount point /boot during canonicalization of boot. Sep 11 00:26:17.465140 systemd-tmpfiles[1531]: Skipping /boot Sep 11 00:26:17.480748 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 11 00:26:17.482884 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 11 00:26:17.484933 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 11 00:26:17.495665 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 11 00:26:17.499798 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 11 00:26:17.504621 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Sep 11 00:26:17.515613 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 11 00:26:17.519423 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 11 00:26:17.525668 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:26:17.525815 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 11 00:26:17.527400 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 11 00:26:17.530628 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 11 00:26:17.534617 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 11 00:26:17.536615 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 11 00:26:17.536713 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 11 00:26:17.536792 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:26:17.538293 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 11 00:26:17.538466 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 11 00:26:17.541098 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 11 00:26:17.541286 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 11 00:26:17.547069 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 11 00:26:17.547260 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Sep 11 00:26:17.551912 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 11 00:26:17.557716 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:26:17.557913 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 11 00:26:17.561490 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 11 00:26:17.563524 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 11 00:26:17.568243 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 11 00:26:17.569520 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 11 00:26:17.569783 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 11 00:26:17.569810 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 11 00:26:17.569857 systemd[1]: Reached target time-set.target - System Time Set. Sep 11 00:26:17.570001 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:26:17.570350 systemd[1]: Finished ensure-sysext.service. Sep 11 00:26:17.573599 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 11 00:26:17.573720 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 11 00:26:17.579775 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 11 00:26:17.579921 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 11 00:26:17.582274 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Sep 11 00:26:17.582442 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 11 00:26:17.582973 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 11 00:26:17.596869 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 11 00:26:17.597006 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 11 00:26:17.599737 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 11 00:26:17.634380 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 11 00:26:17.859130 systemd-resolved[1628]: Positive Trust Anchors: Sep 11 00:26:17.859140 systemd-resolved[1628]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 11 00:26:17.859171 systemd-resolved[1628]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 11 00:26:17.861614 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 00:26:17.864563 systemd-resolved[1628]: Using system hostname 'ci-4372.1.0-n-3f8a739b41'. Sep 11 00:26:17.865636 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 11 00:26:17.868513 systemd[1]: Reached target network.target - Network. Sep 11 00:26:17.871471 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Sep 11 00:26:18.326683 augenrules[1670]: No rules Sep 11 00:26:18.327681 systemd[1]: audit-rules.service: Deactivated successfully. Sep 11 00:26:18.327866 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 11 00:26:18.391836 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 11 00:26:18.394687 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 11 00:26:18.535552 systemd-networkd[1358]: eth0: Gained IPv6LL Sep 11 00:26:18.537257 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 11 00:26:18.541595 systemd[1]: Reached target network-online.target - Network is Online. Sep 11 00:26:25.757741 ldconfig[1321]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 11 00:26:25.783127 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 11 00:26:25.787605 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 11 00:26:25.800868 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 11 00:26:25.802403 systemd[1]: Reached target sysinit.target - System Initialization. Sep 11 00:26:25.806523 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 11 00:26:25.807965 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 11 00:26:25.810420 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 11 00:26:25.811911 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 11 00:26:25.814503 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
Sep 11 00:26:25.817448 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 11 00:26:25.820437 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 11 00:26:25.820467 systemd[1]: Reached target paths.target - Path Units. Sep 11 00:26:25.821497 systemd[1]: Reached target timers.target - Timer Units. Sep 11 00:26:25.823303 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 11 00:26:25.825456 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 11 00:26:25.828364 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 11 00:26:25.831584 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 11 00:26:25.834428 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 11 00:26:25.843763 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 11 00:26:25.846718 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 11 00:26:25.848576 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 11 00:26:25.853034 systemd[1]: Reached target sockets.target - Socket Units. Sep 11 00:26:25.855474 systemd[1]: Reached target basic.target - Basic System. Sep 11 00:26:25.856627 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 11 00:26:25.856649 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 11 00:26:25.858206 systemd[1]: Starting chronyd.service - NTP client/server... Sep 11 00:26:25.861185 systemd[1]: Starting containerd.service - containerd container runtime... Sep 11 00:26:25.867130 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... 
Sep 11 00:26:25.870553 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 11 00:26:25.873353 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 11 00:26:25.877205 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 11 00:26:25.887189 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 11 00:26:25.890641 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 11 00:26:25.895903 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 11 00:26:25.897585 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). Sep 11 00:26:25.900232 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Sep 11 00:26:25.903516 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Sep 11 00:26:25.905563 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:26:25.906006 jq[1688]: false Sep 11 00:26:25.911577 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 11 00:26:25.917299 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 11 00:26:25.918824 (chronyd)[1683]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Sep 11 00:26:25.920076 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 11 00:26:25.924313 KVP[1694]: KVP starting; pid is:1694 Sep 11 00:26:25.926597 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 11 00:26:25.932203 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Sep 11 00:26:25.935489 kernel: hv_utils: KVP IC version 4.0 Sep 11 00:26:25.936417 KVP[1694]: KVP LIC Version: 3.1 Sep 11 00:26:25.937916 chronyd[1703]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Sep 11 00:26:25.939438 chronyd[1703]: Timezone right/UTC failed leap second check, ignoring Sep 11 00:26:25.939664 chronyd[1703]: Loaded seccomp filter (level 2) Sep 11 00:26:25.942552 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 11 00:26:25.946172 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 11 00:26:25.946553 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 11 00:26:25.949548 systemd[1]: Starting update-engine.service - Update Engine... Sep 11 00:26:25.954769 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 11 00:26:25.957229 systemd[1]: Started chronyd.service - NTP client/server. Sep 11 00:26:25.965582 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 11 00:26:25.967855 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 11 00:26:25.968023 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 11 00:26:25.971702 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 11 00:26:25.971881 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Sep 11 00:26:25.974099 jq[1710]: true Sep 11 00:26:26.001143 jq[1717]: true Sep 11 00:26:26.067610 google_oslogin_nss_cache[1693]: oslogin_cache_refresh[1693]: Refreshing passwd entry cache Sep 11 00:26:26.067800 oslogin_cache_refresh[1693]: Refreshing passwd entry cache Sep 11 00:26:26.092729 google_oslogin_nss_cache[1693]: oslogin_cache_refresh[1693]: Failure getting users, quitting Sep 11 00:26:26.092799 oslogin_cache_refresh[1693]: Failure getting users, quitting Sep 11 00:26:26.093166 google_oslogin_nss_cache[1693]: oslogin_cache_refresh[1693]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 11 00:26:26.093166 google_oslogin_nss_cache[1693]: oslogin_cache_refresh[1693]: Refreshing group entry cache Sep 11 00:26:26.092851 oslogin_cache_refresh[1693]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 11 00:26:26.092883 oslogin_cache_refresh[1693]: Refreshing group entry cache Sep 11 00:26:26.128040 google_oslogin_nss_cache[1693]: oslogin_cache_refresh[1693]: Failure getting groups, quitting Sep 11 00:26:26.128040 google_oslogin_nss_cache[1693]: oslogin_cache_refresh[1693]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 11 00:26:26.128032 oslogin_cache_refresh[1693]: Failure getting groups, quitting Sep 11 00:26:26.128040 oslogin_cache_refresh[1693]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 11 00:26:26.128923 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 11 00:26:26.129124 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 11 00:26:26.167446 extend-filesystems[1692]: Found /dev/nvme0n1p6 Sep 11 00:26:26.173267 systemd[1]: motdgen.service: Deactivated successfully. 
Sep 11 00:26:26.174624 (ntainerd)[1747]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 11 00:26:26.174837 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 11 00:26:26.186213 systemd-logind[1706]: New seat seat0. Sep 11 00:26:26.188166 systemd-logind[1706]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 11 00:26:26.188302 systemd[1]: Started systemd-logind.service - User Login Management. Sep 11 00:26:26.188629 update_engine[1707]: I20250911 00:26:26.188571 1707 main.cc:92] Flatcar Update Engine starting Sep 11 00:26:26.206538 extend-filesystems[1692]: Found /dev/nvme0n1p9 Sep 11 00:26:26.213278 extend-filesystems[1692]: Checking size of /dev/nvme0n1p9 Sep 11 00:26:26.224102 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 11 00:26:26.353641 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 11 00:26:26.656089 extend-filesystems[1692]: Old size kept for /dev/nvme0n1p9 Sep 11 00:26:26.353819 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 11 00:26:26.658601 tar[1716]: linux-amd64/LICENSE Sep 11 00:26:26.658601 tar[1716]: linux-amd64/helm Sep 11 00:26:26.665878 dbus-daemon[1686]: [system] SELinux support is enabled Sep 11 00:26:26.666072 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 11 00:26:26.671716 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 11 00:26:26.671745 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 11 00:26:26.673931 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Sep 11 00:26:26.673947 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 11 00:26:26.680190 update_engine[1707]: I20250911 00:26:26.676743 1707 update_check_scheduler.cc:74] Next update check in 7m44s Sep 11 00:26:26.678096 dbus-daemon[1686]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 11 00:26:26.677618 systemd[1]: Started update-engine.service - Update Engine. Sep 11 00:26:26.682431 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 11 00:26:27.350553 coreos-metadata[1685]: Sep 11 00:26:26.925 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 11 00:26:27.350553 coreos-metadata[1685]: Sep 11 00:26:26.926 INFO Fetch successful Sep 11 00:26:27.350553 coreos-metadata[1685]: Sep 11 00:26:26.926 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Sep 11 00:26:27.350553 coreos-metadata[1685]: Sep 11 00:26:26.929 INFO Fetch successful Sep 11 00:26:27.350553 coreos-metadata[1685]: Sep 11 00:26:26.929 INFO Fetching http://168.63.129.16/machine/9d51cbba-588f-4c0c-b001-c239aae4e230/54948bdb%2Db817%2D427e%2Da2cf%2D6c03e00657eb.%5Fci%2D4372.1.0%2Dn%2D3f8a739b41?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Sep 11 00:26:27.350553 coreos-metadata[1685]: Sep 11 00:26:26.930 INFO Fetch successful Sep 11 00:26:27.350553 coreos-metadata[1685]: Sep 11 00:26:26.930 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Sep 11 00:26:27.350553 coreos-metadata[1685]: Sep 11 00:26:26.946 INFO Fetch successful Sep 11 00:26:26.968618 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 11 00:26:26.970989 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Sep 11 00:26:27.616652 locksmithd[1786]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 11 00:26:28.111066 bash[1741]: Updated "/home/core/.ssh/authorized_keys" Sep 11 00:26:28.111898 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 11 00:26:28.112287 sshd_keygen[1743]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 11 00:26:28.116744 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 11 00:26:28.149640 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 11 00:26:28.152784 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 11 00:26:28.155077 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Sep 11 00:26:28.181661 systemd[1]: issuegen.service: Deactivated successfully. Sep 11 00:26:28.181839 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 11 00:26:28.186703 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 11 00:26:28.210737 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Sep 11 00:26:28.215096 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 11 00:26:28.220403 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 11 00:26:28.225233 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 11 00:26:28.227610 systemd[1]: Reached target getty.target - Login Prompts. Sep 11 00:26:28.441318 tar[1716]: linux-amd64/README.md Sep 11 00:26:28.452491 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 11 00:26:28.487524 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 11 00:26:28.497596 (kubelet)[1833]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:26:28.943078 kubelet[1833]: E0911 00:26:28.943039 1833 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:26:28.944990 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:26:28.945117 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:26:28.945417 systemd[1]: kubelet.service: Consumed 817ms CPU time, 268.6M memory peak. Sep 11 00:26:29.264462 containerd[1747]: time="2025-09-11T00:26:29Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 11 00:26:29.265017 containerd[1747]: time="2025-09-11T00:26:29.264986971Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Sep 11 00:26:29.270320 containerd[1747]: time="2025-09-11T00:26:29.270290060Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.574µs" Sep 11 00:26:29.270320 containerd[1747]: time="2025-09-11T00:26:29.270312222Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 11 00:26:29.270433 containerd[1747]: time="2025-09-11T00:26:29.270328141Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 11 00:26:29.270490 containerd[1747]: time="2025-09-11T00:26:29.270477161Z" level=info msg="loading plugin" 
id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 11 00:26:29.270513 containerd[1747]: time="2025-09-11T00:26:29.270490823Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 11 00:26:29.270513 containerd[1747]: time="2025-09-11T00:26:29.270509927Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 11 00:26:29.270581 containerd[1747]: time="2025-09-11T00:26:29.270551375Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 11 00:26:29.270581 containerd[1747]: time="2025-09-11T00:26:29.270577088Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 11 00:26:29.270765 containerd[1747]: time="2025-09-11T00:26:29.270750359Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 11 00:26:29.270765 containerd[1747]: time="2025-09-11T00:26:29.270760902Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 11 00:26:29.270805 containerd[1747]: time="2025-09-11T00:26:29.270770328Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 11 00:26:29.270805 containerd[1747]: time="2025-09-11T00:26:29.270777092Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 11 00:26:29.270844 containerd[1747]: time="2025-09-11T00:26:29.270829745Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 
11 00:26:29.270980 containerd[1747]: time="2025-09-11T00:26:29.270966322Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 11 00:26:29.271011 containerd[1747]: time="2025-09-11T00:26:29.270998833Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 11 00:26:29.271035 containerd[1747]: time="2025-09-11T00:26:29.271010098Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 11 00:26:29.271053 containerd[1747]: time="2025-09-11T00:26:29.271033648Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 11 00:26:29.271275 containerd[1747]: time="2025-09-11T00:26:29.271248829Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 11 00:26:29.271330 containerd[1747]: time="2025-09-11T00:26:29.271302808Z" level=info msg="metadata content store policy set" policy=shared Sep 11 00:26:29.366711 containerd[1747]: time="2025-09-11T00:26:29.366674881Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 11 00:26:29.366788 containerd[1747]: time="2025-09-11T00:26:29.366722096Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 11 00:26:29.366788 containerd[1747]: time="2025-09-11T00:26:29.366749605Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 11 00:26:29.366788 containerd[1747]: time="2025-09-11T00:26:29.366760573Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 11 00:26:29.366788 containerd[1747]: time="2025-09-11T00:26:29.366772309Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 11 00:26:29.366788 containerd[1747]: time="2025-09-11T00:26:29.366782180Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 11 00:26:29.366881 containerd[1747]: time="2025-09-11T00:26:29.366791535Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 11 00:26:29.366881 containerd[1747]: time="2025-09-11T00:26:29.366802016Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 11 00:26:29.366881 containerd[1747]: time="2025-09-11T00:26:29.366812885Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 11 00:26:29.366881 containerd[1747]: time="2025-09-11T00:26:29.366822717Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 11 00:26:29.366881 containerd[1747]: time="2025-09-11T00:26:29.366831019Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 11 00:26:29.366881 containerd[1747]: time="2025-09-11T00:26:29.366841780Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 11 00:26:29.366972 containerd[1747]: time="2025-09-11T00:26:29.366944500Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 11 00:26:29.366972 containerd[1747]: time="2025-09-11T00:26:29.366960037Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 11 00:26:29.367004 containerd[1747]: time="2025-09-11T00:26:29.366972933Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 11 00:26:29.367004 containerd[1747]: time="2025-09-11T00:26:29.366982179Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 11 00:26:29.367004 containerd[1747]: time="2025-09-11T00:26:29.366991325Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 11 00:26:29.367004 containerd[1747]: time="2025-09-11T00:26:29.367001054Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 11 00:26:29.367067 containerd[1747]: time="2025-09-11T00:26:29.367012069Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 11 00:26:29.367067 containerd[1747]: time="2025-09-11T00:26:29.367021316Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 11 00:26:29.367067 containerd[1747]: time="2025-09-11T00:26:29.367031411Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 11 00:26:29.367067 containerd[1747]: time="2025-09-11T00:26:29.367040771Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 11 00:26:29.367067 containerd[1747]: time="2025-09-11T00:26:29.367049683Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 11 00:26:29.367148 containerd[1747]: time="2025-09-11T00:26:29.367106772Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 11 00:26:29.367148 containerd[1747]: time="2025-09-11T00:26:29.367118285Z" level=info msg="Start snapshots syncer" Sep 11 00:26:29.367148 containerd[1747]: time="2025-09-11T00:26:29.367136681Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 11 00:26:29.367378 containerd[1747]: time="2025-09-11T00:26:29.367351781Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 11 00:26:29.367499 containerd[1747]: time="2025-09-11T00:26:29.367403522Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 11 00:26:29.367499 containerd[1747]: time="2025-09-11T00:26:29.367485066Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 11 00:26:29.367607 containerd[1747]: time="2025-09-11T00:26:29.367591320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 11 00:26:29.367628 containerd[1747]: time="2025-09-11T00:26:29.367610132Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 11 00:26:29.367628 containerd[1747]: time="2025-09-11T00:26:29.367619279Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 11 00:26:29.367664 containerd[1747]: time="2025-09-11T00:26:29.367628232Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 11 00:26:29.367664 containerd[1747]: time="2025-09-11T00:26:29.367644049Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 11 00:26:29.367664 containerd[1747]: time="2025-09-11T00:26:29.367658169Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 11 00:26:29.367721 containerd[1747]: time="2025-09-11T00:26:29.367669272Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 11 00:26:29.367721 containerd[1747]: time="2025-09-11T00:26:29.367689685Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 11 00:26:29.367721 containerd[1747]: time="2025-09-11T00:26:29.367700627Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 11 00:26:29.367721 containerd[1747]: time="2025-09-11T00:26:29.367713673Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 11 00:26:29.367790 containerd[1747]: time="2025-09-11T00:26:29.367743246Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 11 00:26:29.367790 containerd[1747]: time="2025-09-11T00:26:29.367756357Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 11 00:26:29.367790 containerd[1747]: time="2025-09-11T00:26:29.367764360Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 11 00:26:29.367790 containerd[1747]: time="2025-09-11T00:26:29.367772966Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 11 00:26:29.367790 containerd[1747]: time="2025-09-11T00:26:29.367780114Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 11 00:26:29.367869 containerd[1747]: time="2025-09-11T00:26:29.367815623Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 11 00:26:29.367869 containerd[1747]: time="2025-09-11T00:26:29.367825578Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 11 00:26:29.367869 containerd[1747]: time="2025-09-11T00:26:29.367840401Z" level=info msg="runtime interface created" Sep 11 00:26:29.367869 containerd[1747]: time="2025-09-11T00:26:29.367844999Z" level=info msg="created NRI interface" Sep 11 00:26:29.367869 containerd[1747]: time="2025-09-11T00:26:29.367851860Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 11 00:26:29.367869 containerd[1747]: time="2025-09-11T00:26:29.367862384Z" level=info msg="Connect containerd service" Sep 11 00:26:29.368129 containerd[1747]: time="2025-09-11T00:26:29.368089691Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 11 00:26:29.369172 
containerd[1747]: time="2025-09-11T00:26:29.368901745Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 11 00:26:31.326412 containerd[1747]: time="2025-09-11T00:26:31.325471833Z" level=info msg="Start subscribing containerd event" Sep 11 00:26:31.326412 containerd[1747]: time="2025-09-11T00:26:31.325531058Z" level=info msg="Start recovering state" Sep 11 00:26:31.326412 containerd[1747]: time="2025-09-11T00:26:31.325627973Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 11 00:26:31.326412 containerd[1747]: time="2025-09-11T00:26:31.325631183Z" level=info msg="Start event monitor" Sep 11 00:26:31.326412 containerd[1747]: time="2025-09-11T00:26:31.325657706Z" level=info msg="Start cni network conf syncer for default" Sep 11 00:26:31.326412 containerd[1747]: time="2025-09-11T00:26:31.325662597Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 11 00:26:31.326412 containerd[1747]: time="2025-09-11T00:26:31.325664631Z" level=info msg="Start streaming server" Sep 11 00:26:31.326412 containerd[1747]: time="2025-09-11T00:26:31.325682018Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 11 00:26:31.326412 containerd[1747]: time="2025-09-11T00:26:31.325689529Z" level=info msg="runtime interface starting up..." Sep 11 00:26:31.326412 containerd[1747]: time="2025-09-11T00:26:31.325695101Z" level=info msg="starting plugins..." Sep 11 00:26:31.326412 containerd[1747]: time="2025-09-11T00:26:31.325707198Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 11 00:26:31.326412 containerd[1747]: time="2025-09-11T00:26:31.325788366Z" level=info msg="containerd successfully booted in 2.061628s" Sep 11 00:26:31.325979 systemd[1]: Started containerd.service - containerd container runtime. 
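The `failed to load cni during init` error above is expected on a first boot: containerd's CRI plugin finds no network config in /etc/cni/net.d, and pod networking stays down until a CNI plugin (typically installed later by the cluster bootstrap) drops one in. A minimal sketch of such a config file, assuming the standard `bridge` and `host-local` reference plugins; the name and subnet are illustrative, not taken from this host:

```json
{
  "cniVersion": "1.0.0",
  "name": "example-net",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.0.0/16"
  }
}
```

Placed at a path like /etc/cni/net.d/10-example.conf, the CRI plugin's conf syncer (the "Start cni network conf syncer for default" thread logged above) would pick it up without a containerd restart.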
Sep 11 00:26:31.328284 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 11 00:26:31.332159 systemd[1]: Startup finished in 2.786s (kernel) + 9.941s (initrd) + 18.291s (userspace) = 31.019s. Sep 11 00:26:32.062994 login[1824]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Sep 11 00:26:32.063466 login[1823]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 11 00:26:32.068210 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 11 00:26:32.069109 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 11 00:26:32.076283 systemd-logind[1706]: New session 1 of user core. Sep 11 00:26:32.084491 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 11 00:26:32.086675 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 11 00:26:32.094089 (systemd)[1860]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 11 00:26:32.095588 systemd-logind[1706]: New session c1 of user core. Sep 11 00:26:32.241234 systemd[1860]: Queued start job for default target default.target. Sep 11 00:26:32.246997 systemd[1860]: Created slice app.slice - User Application Slice. Sep 11 00:26:32.247023 systemd[1860]: Reached target paths.target - Paths. Sep 11 00:26:32.247049 systemd[1860]: Reached target timers.target - Timers. Sep 11 00:26:32.247808 systemd[1860]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 11 00:26:32.254361 systemd[1860]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 11 00:26:32.254422 systemd[1860]: Reached target sockets.target - Sockets. Sep 11 00:26:32.254453 systemd[1860]: Reached target basic.target - Basic System. Sep 11 00:26:32.254504 systemd[1860]: Reached target default.target - Main User Target. Sep 11 00:26:32.254524 systemd[1860]: Startup finished in 155ms. 
Sep 11 00:26:32.254772 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 11 00:26:32.260483 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 11 00:26:32.592918 waagent[1821]: 2025-09-11T00:26:32.592854Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Sep 11 00:26:32.594302 waagent[1821]: 2025-09-11T00:26:32.594263Z INFO Daemon Daemon OS: flatcar 4372.1.0 Sep 11 00:26:32.595354 waagent[1821]: 2025-09-11T00:26:32.595291Z INFO Daemon Daemon Python: 3.11.12 Sep 11 00:26:32.596399 waagent[1821]: 2025-09-11T00:26:32.596348Z INFO Daemon Daemon Run daemon Sep 11 00:26:32.597322 waagent[1821]: 2025-09-11T00:26:32.597297Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4372.1.0' Sep 11 00:26:32.599016 waagent[1821]: 2025-09-11T00:26:32.598994Z INFO Daemon Daemon Using waagent for provisioning Sep 11 00:26:32.600171 waagent[1821]: 2025-09-11T00:26:32.600144Z INFO Daemon Daemon Activate resource disk Sep 11 00:26:32.601210 waagent[1821]: 2025-09-11T00:26:32.601149Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Sep 11 00:26:32.603988 waagent[1821]: 2025-09-11T00:26:32.603946Z INFO Daemon Daemon Found device: None Sep 11 00:26:32.604975 waagent[1821]: 2025-09-11T00:26:32.604907Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Sep 11 00:26:32.605943 waagent[1821]: 2025-09-11T00:26:32.605916Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Sep 11 00:26:32.609228 waagent[1821]: 2025-09-11T00:26:32.609182Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 11 00:26:32.610521 waagent[1821]: 2025-09-11T00:26:32.610491Z INFO Daemon Daemon Running default provisioning handler Sep 11 00:26:32.616519 waagent[1821]: 2025-09-11T00:26:32.616019Z INFO Daemon Daemon Unable to get cloud-init 
enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Sep 11 00:26:32.617251 waagent[1821]: 2025-09-11T00:26:32.617221Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 11 00:26:32.617724 waagent[1821]: 2025-09-11T00:26:32.617702Z INFO Daemon Daemon cloud-init is enabled: False Sep 11 00:26:32.617982 waagent[1821]: 2025-09-11T00:26:32.617967Z INFO Daemon Daemon Copying ovf-env.xml Sep 11 00:26:32.673128 waagent[1821]: 2025-09-11T00:26:32.671503Z INFO Daemon Daemon Successfully mounted dvd Sep 11 00:26:32.712866 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Sep 11 00:26:32.714720 waagent[1821]: 2025-09-11T00:26:32.714369Z INFO Daemon Daemon Detect protocol endpoint Sep 11 00:26:32.715837 waagent[1821]: 2025-09-11T00:26:32.714760Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 11 00:26:32.717044 waagent[1821]: 2025-09-11T00:26:32.717013Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Sep 11 00:26:32.718417 waagent[1821]: 2025-09-11T00:26:32.717672Z INFO Daemon Daemon Test for route to 168.63.129.16 Sep 11 00:26:32.719576 waagent[1821]: 2025-09-11T00:26:32.719549Z INFO Daemon Daemon Route to 168.63.129.16 exists Sep 11 00:26:32.720696 waagent[1821]: 2025-09-11T00:26:32.720670Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Sep 11 00:26:32.730814 waagent[1821]: 2025-09-11T00:26:32.730790Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Sep 11 00:26:32.733400 waagent[1821]: 2025-09-11T00:26:32.731422Z INFO Daemon Daemon Wire protocol version:2012-11-30 Sep 11 00:26:32.733400 waagent[1821]: 2025-09-11T00:26:32.731592Z INFO Daemon Daemon Server preferred version:2015-04-05 Sep 11 00:26:32.893893 waagent[1821]: 2025-09-11T00:26:32.893817Z INFO Daemon Daemon Initializing goal state during protocol detection Sep 11 00:26:32.898732 waagent[1821]: 2025-09-11T00:26:32.894050Z INFO Daemon Daemon Forcing an update of the goal state. Sep 11 00:26:32.898732 waagent[1821]: 2025-09-11T00:26:32.896204Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 11 00:26:32.907762 waagent[1821]: 2025-09-11T00:26:32.907732Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Sep 11 00:26:32.909126 waagent[1821]: 2025-09-11T00:26:32.909094Z INFO Daemon Sep 11 00:26:32.909883 waagent[1821]: 2025-09-11T00:26:32.909818Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: cc9333e1-f7a3-4312-8d53-3cb2d92bb0ba eTag: 12323040541324493026 source: Fabric] Sep 11 00:26:32.912130 waagent[1821]: 2025-09-11T00:26:32.912096Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Sep 11 00:26:32.913651 waagent[1821]: 2025-09-11T00:26:32.913617Z INFO Daemon Sep 11 00:26:32.914312 waagent[1821]: 2025-09-11T00:26:32.914060Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Sep 11 00:26:32.924377 waagent[1821]: 2025-09-11T00:26:32.924348Z INFO Daemon Daemon Downloading artifacts profile blob Sep 11 00:26:33.002265 waagent[1821]: 2025-09-11T00:26:33.002224Z INFO Daemon Downloaded certificate {'thumbprint': '164C43EA451E61ACDC404CAE068C0CE80D27B89B', 'hasPrivateKey': True} Sep 11 00:26:33.004192 waagent[1821]: 2025-09-11T00:26:33.004160Z INFO Daemon Fetch goal state completed Sep 11 00:26:33.015451 waagent[1821]: 2025-09-11T00:26:33.015404Z INFO Daemon Daemon Starting provisioning Sep 11 00:26:33.016026 waagent[1821]: 2025-09-11T00:26:33.015957Z INFO Daemon Daemon Handle ovf-env.xml. Sep 11 00:26:33.016414 waagent[1821]: 2025-09-11T00:26:33.016196Z INFO Daemon Daemon Set hostname [ci-4372.1.0-n-3f8a739b41] Sep 11 00:26:33.018710 waagent[1821]: 2025-09-11T00:26:33.018674Z INFO Daemon Daemon Publish hostname [ci-4372.1.0-n-3f8a739b41] Sep 11 00:26:33.019818 waagent[1821]: 2025-09-11T00:26:33.019244Z INFO Daemon Daemon Examine /proc/net/route for primary interface Sep 11 00:26:33.019818 waagent[1821]: 2025-09-11T00:26:33.019447Z INFO Daemon Daemon Primary interface is [eth0] Sep 11 00:26:33.025449 systemd-networkd[1358]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 11 00:26:33.025455 systemd-networkd[1358]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 11 00:26:33.025476 systemd-networkd[1358]: eth0: DHCP lease lost Sep 11 00:26:33.026186 waagent[1821]: 2025-09-11T00:26:33.026146Z INFO Daemon Daemon Create user account if not exists Sep 11 00:26:33.027404 waagent[1821]: 2025-09-11T00:26:33.026598Z INFO Daemon Daemon User core already exists, skip useradd Sep 11 00:26:33.027404 waagent[1821]: 2025-09-11T00:26:33.026808Z INFO Daemon Daemon Configure sudoer Sep 11 00:26:33.032782 waagent[1821]: 2025-09-11T00:26:33.032737Z INFO Daemon Daemon Configure sshd Sep 11 00:26:33.036653 waagent[1821]: 2025-09-11T00:26:33.036617Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Sep 11 00:26:33.040047 waagent[1821]: 2025-09-11T00:26:33.037117Z INFO Daemon Daemon Deploy ssh public key. Sep 11 00:26:33.052411 systemd-networkd[1358]: eth0: DHCPv4 address 10.200.8.4/24, gateway 10.200.8.1 acquired from 168.63.129.16 Sep 11 00:26:33.064366 login[1824]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 11 00:26:33.068037 systemd-logind[1706]: New session 2 of user core. Sep 11 00:26:33.074512 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 11 00:26:34.102702 waagent[1821]: 2025-09-11T00:26:34.102665Z INFO Daemon Daemon Provisioning complete Sep 11 00:26:34.118082 waagent[1821]: 2025-09-11T00:26:34.118048Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Sep 11 00:26:34.119268 waagent[1821]: 2025-09-11T00:26:34.119238Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Sep 11 00:26:34.119794 waagent[1821]: 2025-09-11T00:26:34.119769Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Sep 11 00:26:34.208746 waagent[1910]: 2025-09-11T00:26:34.208689Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Sep 11 00:26:34.208977 waagent[1910]: 2025-09-11T00:26:34.208774Z INFO ExtHandler ExtHandler OS: flatcar 4372.1.0 Sep 11 00:26:34.208977 waagent[1910]: 2025-09-11T00:26:34.208809Z INFO ExtHandler ExtHandler Python: 3.11.12 Sep 11 00:26:34.208977 waagent[1910]: 2025-09-11T00:26:34.208842Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Sep 11 00:26:34.242725 waagent[1910]: 2025-09-11T00:26:34.242679Z INFO ExtHandler ExtHandler Distro: flatcar-4372.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Sep 11 00:26:34.242847 waagent[1910]: 2025-09-11T00:26:34.242823Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 11 00:26:34.242891 waagent[1910]: 2025-09-11T00:26:34.242875Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 11 00:26:34.248087 waagent[1910]: 2025-09-11T00:26:34.248043Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 11 00:26:34.253503 waagent[1910]: 2025-09-11T00:26:34.253477Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Sep 11 00:26:34.253781 waagent[1910]: 2025-09-11T00:26:34.253757Z INFO ExtHandler Sep 11 00:26:34.253814 waagent[1910]: 2025-09-11T00:26:34.253803Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 4341c8e1-dff0-41fc-a3de-df77b88def55 eTag: 12323040541324493026 source: Fabric] Sep 11 00:26:34.253972 waagent[1910]: 2025-09-11T00:26:34.253954Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Sep 11 00:26:34.254240 waagent[1910]: 2025-09-11T00:26:34.254221Z INFO ExtHandler Sep 11 00:26:34.254279 waagent[1910]: 2025-09-11T00:26:34.254256Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Sep 11 00:26:34.257480 waagent[1910]: 2025-09-11T00:26:34.257456Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Sep 11 00:26:34.331681 waagent[1910]: 2025-09-11T00:26:34.331640Z INFO ExtHandler Downloaded certificate {'thumbprint': '164C43EA451E61ACDC404CAE068C0CE80D27B89B', 'hasPrivateKey': True} Sep 11 00:26:34.331964 waagent[1910]: 2025-09-11T00:26:34.331939Z INFO ExtHandler Fetch goal state completed Sep 11 00:26:34.343069 waagent[1910]: 2025-09-11T00:26:34.343031Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025) Sep 11 00:26:34.346785 waagent[1910]: 2025-09-11T00:26:34.346743Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1910 Sep 11 00:26:34.346878 waagent[1910]: 2025-09-11T00:26:34.346844Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Sep 11 00:26:34.347084 waagent[1910]: 2025-09-11T00:26:34.347065Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Sep 11 00:26:34.347915 waagent[1910]: 2025-09-11T00:26:34.347888Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4372.1.0', '', 'Flatcar Container Linux by Kinvolk'] Sep 11 00:26:34.348161 waagent[1910]: 2025-09-11T00:26:34.348140Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4372.1.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Sep 11 00:26:34.348240 waagent[1910]: 2025-09-11T00:26:34.348224Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Sep 11 00:26:34.348599 waagent[1910]: 2025-09-11T00:26:34.348579Z INFO ExtHandler ExtHandler 
Starting setup for Persistent firewall rules Sep 11 00:26:34.366020 waagent[1910]: 2025-09-11T00:26:34.365969Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 11 00:26:34.366115 waagent[1910]: 2025-09-11T00:26:34.366082Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 11 00:26:34.371106 waagent[1910]: 2025-09-11T00:26:34.370797Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Sep 11 00:26:34.375186 systemd[1]: Reload requested from client PID 1925 ('systemctl') (unit waagent.service)... Sep 11 00:26:34.375197 systemd[1]: Reloading... Sep 11 00:26:34.451400 zram_generator::config[1966]: No configuration found. Sep 11 00:26:34.515482 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 11 00:26:34.597227 systemd[1]: Reloading finished in 221 ms. Sep 11 00:26:34.606895 waagent[1910]: 2025-09-11T00:26:34.605290Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Sep 11 00:26:34.606895 waagent[1910]: 2025-09-11T00:26:34.605376Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Sep 11 00:26:34.848592 waagent[1910]: 2025-09-11T00:26:34.848517Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Sep 11 00:26:34.848766 waagent[1910]: 2025-09-11T00:26:34.848745Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. 
python supported: [True] Sep 11 00:26:34.849294 waagent[1910]: 2025-09-11T00:26:34.849269Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 11 00:26:34.849600 waagent[1910]: 2025-09-11T00:26:34.849571Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Sep 11 00:26:34.849837 waagent[1910]: 2025-09-11T00:26:34.849800Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 11 00:26:34.850009 waagent[1910]: 2025-09-11T00:26:34.849906Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Sep 11 00:26:34.850080 waagent[1910]: 2025-09-11T00:26:34.850046Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 11 00:26:34.850244 waagent[1910]: 2025-09-11T00:26:34.850192Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 11 00:26:34.850324 waagent[1910]: 2025-09-11T00:26:34.850304Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 11 00:26:34.850395 waagent[1910]: 2025-09-11T00:26:34.850354Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 11 00:26:34.850548 waagent[1910]: 2025-09-11T00:26:34.850521Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 11 00:26:34.850646 waagent[1910]: 2025-09-11T00:26:34.850631Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Sep 11 00:26:34.850981 waagent[1910]: 2025-09-11T00:26:34.850948Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Sep 11 00:26:34.851664 waagent[1910]: 2025-09-11T00:26:34.851644Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 11 00:26:34.851770 waagent[1910]: 2025-09-11T00:26:34.851751Z INFO EnvHandler ExtHandler Configure routes Sep 11 00:26:34.851806 waagent[1910]: 2025-09-11T00:26:34.851793Z INFO EnvHandler ExtHandler Gateway:None Sep 11 00:26:34.852448 waagent[1910]: 2025-09-11T00:26:34.852410Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 11 00:26:34.852448 waagent[1910]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 11 00:26:34.852448 waagent[1910]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Sep 11 00:26:34.852448 waagent[1910]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 11 00:26:34.852448 waagent[1910]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 11 00:26:34.852448 waagent[1910]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 11 00:26:34.852448 waagent[1910]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 11 00:26:34.852717 waagent[1910]: 2025-09-11T00:26:34.852698Z INFO EnvHandler ExtHandler Routes:None Sep 11 00:26:34.857735 waagent[1910]: 2025-09-11T00:26:34.857709Z INFO ExtHandler ExtHandler Sep 11 00:26:34.857787 waagent[1910]: 2025-09-11T00:26:34.857758Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: fb5e3b5c-6d82-44b8-8aa4-a17318bbdd6b correlation db4ee8bd-e2ad-4d2c-8490-64025445fe5f created: 2025-09-11T00:25:35.595146Z] Sep 11 00:26:34.857985 waagent[1910]: 2025-09-11T00:26:34.857968Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
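The routing table that MonitorHandler dumps above is the raw /proc/net/route format, where each address field is a 32-bit value printed as little-endian hex. A small standard-library sketch for decoding those fields; the sample values are the ones logged above:

```python
import socket
import struct

def decode_route_addr(hex_word: str) -> str:
    """Convert a little-endian hex field from /proc/net/route to dotted-quad form."""
    return socket.inet_ntoa(struct.pack("<I", int(hex_word, 16)))

# Fields from the routing table logged above:
print(decode_route_addr("0108C80A"))  # default gateway -> 10.200.8.1
print(decode_route_addr("10813FA8"))  # Azure wire server -> 168.63.129.16
print(decode_route_addr("FEA9FEA9"))  # IMDS endpoint -> 169.254.169.254
```

Decoding the entries this way shows the host routes waagent relies on: the wire server 168.63.129.16 (matching the "Route to 168.63.129.16 exists" check earlier in this log) and the link-local metadata endpoint 169.254.169.254, both via the 10.200.8.1 gateway.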
Sep 11 00:26:34.858295 waagent[1910]: 2025-09-11T00:26:34.858278Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Sep 11 00:26:34.887806 waagent[1910]: 2025-09-11T00:26:34.887770Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Sep 11 00:26:34.887806 waagent[1910]: Try `iptables -h' or 'iptables --help' for more information.) Sep 11 00:26:34.888054 waagent[1910]: 2025-09-11T00:26:34.888034Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 7A38B47D-598D-4CD8-9399-3AE80AC7A575;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Sep 11 00:26:34.892607 waagent[1910]: 2025-09-11T00:26:34.892570Z INFO MonitorHandler ExtHandler Network interfaces: Sep 11 00:26:34.892607 waagent[1910]: Executing ['ip', '-a', '-o', 'link']: Sep 11 00:26:34.892607 waagent[1910]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 11 00:26:34.892607 waagent[1910]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:45:95:61 brd ff:ff:ff:ff:ff:ff\ alias Network Device Sep 11 00:26:34.892607 waagent[1910]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:45:95:61 brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Sep 11 00:26:34.892607 waagent[1910]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 11 00:26:34.892607 waagent[1910]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 11 00:26:34.892607 waagent[1910]: 2: eth0 inet 10.200.8.4/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 11 00:26:34.892607 waagent[1910]: Executing 
['ip', '-6', '-a', '-o', 'address']: Sep 11 00:26:34.892607 waagent[1910]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Sep 11 00:26:34.892607 waagent[1910]: 2: eth0 inet6 fe80::7eed:8dff:fe45:9561/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Sep 11 00:26:34.935021 waagent[1910]: 2025-09-11T00:26:34.934980Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Sep 11 00:26:34.935021 waagent[1910]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 11 00:26:34.935021 waagent[1910]: pkts bytes target prot opt in out source destination Sep 11 00:26:34.935021 waagent[1910]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 11 00:26:34.935021 waagent[1910]: pkts bytes target prot opt in out source destination Sep 11 00:26:34.935021 waagent[1910]: Chain OUTPUT (policy ACCEPT 4 packets, 401 bytes) Sep 11 00:26:34.935021 waagent[1910]: pkts bytes target prot opt in out source destination Sep 11 00:26:34.935021 waagent[1910]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 11 00:26:34.935021 waagent[1910]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 11 00:26:34.935021 waagent[1910]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 11 00:26:34.937647 waagent[1910]: 2025-09-11T00:26:34.937618Z INFO EnvHandler ExtHandler Current Firewall rules: Sep 11 00:26:34.937647 waagent[1910]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 11 00:26:34.937647 waagent[1910]: pkts bytes target prot opt in out source destination Sep 11 00:26:34.937647 waagent[1910]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 11 00:26:34.937647 waagent[1910]: pkts bytes target prot opt in out source destination Sep 11 00:26:34.937647 waagent[1910]: Chain OUTPUT (policy ACCEPT 4 packets, 401 bytes) Sep 11 00:26:34.937647 waagent[1910]: pkts bytes target prot opt in out source destination Sep 11 00:26:34.937647 waagent[1910]: 0 0 ACCEPT tcp -- * * 
0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 11 00:26:34.937647 waagent[1910]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 11 00:26:34.937647 waagent[1910]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 11 00:26:39.195882 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 11 00:26:39.197228 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:26:47.311313 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:26:47.314073 (kubelet)[2063]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:26:47.345595 kubelet[2063]: E0911 00:26:47.345566 2063 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:26:47.348183 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:26:47.348295 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:26:47.348709 systemd[1]: kubelet.service: Consumed 119ms CPU time, 110.7M memory peak. Sep 11 00:26:49.721691 chronyd[1703]: Selected source PHC0 Sep 11 00:26:52.521549 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 11 00:26:52.522780 systemd[1]: Started sshd@0-10.200.8.4:22-10.200.16.10:55866.service - OpenSSH per-connection server daemon (10.200.16.10:55866). 
Sep 11 00:26:53.244428 sshd[2071]: Accepted publickey for core from 10.200.16.10 port 55866 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:26:53.245320 sshd-session[2071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:26:53.249049 systemd-logind[1706]: New session 3 of user core. Sep 11 00:26:53.251502 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 11 00:26:53.798588 systemd[1]: Started sshd@1-10.200.8.4:22-10.200.16.10:55876.service - OpenSSH per-connection server daemon (10.200.16.10:55876). Sep 11 00:26:54.435067 sshd[2076]: Accepted publickey for core from 10.200.16.10 port 55876 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:26:54.435978 sshd-session[2076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:26:54.439678 systemd-logind[1706]: New session 4 of user core. Sep 11 00:26:54.445511 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 11 00:26:54.882554 sshd[2078]: Connection closed by 10.200.16.10 port 55876 Sep 11 00:26:54.882917 sshd-session[2076]: pam_unix(sshd:session): session closed for user core Sep 11 00:26:54.885270 systemd[1]: sshd@1-10.200.8.4:22-10.200.16.10:55876.service: Deactivated successfully. Sep 11 00:26:54.886571 systemd[1]: session-4.scope: Deactivated successfully. Sep 11 00:26:54.887140 systemd-logind[1706]: Session 4 logged out. Waiting for processes to exit. Sep 11 00:26:54.888108 systemd-logind[1706]: Removed session 4. Sep 11 00:26:55.001455 systemd[1]: Started sshd@2-10.200.8.4:22-10.200.16.10:55884.service - OpenSSH per-connection server daemon (10.200.16.10:55884). 
Sep 11 00:26:55.643908 sshd[2084]: Accepted publickey for core from 10.200.16.10 port 55884 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:26:55.644825 sshd-session[2084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:26:55.648714 systemd-logind[1706]: New session 5 of user core. Sep 11 00:26:55.653516 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 11 00:26:56.089145 sshd[2086]: Connection closed by 10.200.16.10 port 55884 Sep 11 00:26:56.089591 sshd-session[2084]: pam_unix(sshd:session): session closed for user core Sep 11 00:26:56.092203 systemd[1]: sshd@2-10.200.8.4:22-10.200.16.10:55884.service: Deactivated successfully. Sep 11 00:26:56.093502 systemd[1]: session-5.scope: Deactivated successfully. Sep 11 00:26:56.094028 systemd-logind[1706]: Session 5 logged out. Waiting for processes to exit. Sep 11 00:26:56.095028 systemd-logind[1706]: Removed session 5. Sep 11 00:26:56.210372 systemd[1]: Started sshd@3-10.200.8.4:22-10.200.16.10:55894.service - OpenSSH per-connection server daemon (10.200.16.10:55894). Sep 11 00:26:56.847829 sshd[2092]: Accepted publickey for core from 10.200.16.10 port 55894 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:26:56.848758 sshd-session[2092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:26:56.852708 systemd-logind[1706]: New session 6 of user core. Sep 11 00:26:56.857510 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 11 00:26:57.294349 sshd[2094]: Connection closed by 10.200.16.10 port 55894 Sep 11 00:26:57.294916 sshd-session[2092]: pam_unix(sshd:session): session closed for user core Sep 11 00:26:57.296792 systemd[1]: sshd@3-10.200.8.4:22-10.200.16.10:55894.service: Deactivated successfully. Sep 11 00:26:57.298069 systemd[1]: session-6.scope: Deactivated successfully. Sep 11 00:26:57.299440 systemd-logind[1706]: Session 6 logged out. 
Waiting for processes to exit. Sep 11 00:26:57.300106 systemd-logind[1706]: Removed session 6. Sep 11 00:26:57.409012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 11 00:26:57.410108 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:26:57.412580 systemd[1]: Started sshd@4-10.200.8.4:22-10.200.16.10:55896.service - OpenSSH per-connection server daemon (10.200.16.10:55896). Sep 11 00:26:58.027734 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:26:58.030486 (kubelet)[2110]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:26:58.051204 sshd[2101]: Accepted publickey for core from 10.200.16.10 port 55896 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:26:58.052516 sshd-session[2101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:26:58.058768 systemd-logind[1706]: New session 7 of user core. Sep 11 00:26:58.062974 kubelet[2110]: E0911 00:26:58.062617 2110 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:26:58.062652 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 11 00:26:58.065553 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:26:58.065671 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:26:58.065915 systemd[1]: kubelet.service: Consumed 112ms CPU time, 110.4M memory peak. 
Sep 11 00:26:58.530918 sudo[2118]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 11 00:26:58.531115 sudo[2118]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:26:58.553185 sudo[2118]: pam_unix(sudo:session): session closed for user root Sep 11 00:26:58.654419 sshd[2117]: Connection closed by 10.200.16.10 port 55896 Sep 11 00:26:58.654898 sshd-session[2101]: pam_unix(sshd:session): session closed for user core Sep 11 00:26:58.657158 systemd[1]: sshd@4-10.200.8.4:22-10.200.16.10:55896.service: Deactivated successfully. Sep 11 00:26:58.658306 systemd[1]: session-7.scope: Deactivated successfully. Sep 11 00:26:58.659911 systemd-logind[1706]: Session 7 logged out. Waiting for processes to exit. Sep 11 00:26:58.660862 systemd-logind[1706]: Removed session 7. Sep 11 00:26:58.769655 systemd[1]: Started sshd@5-10.200.8.4:22-10.200.16.10:55904.service - OpenSSH per-connection server daemon (10.200.16.10:55904). Sep 11 00:26:59.407627 sshd[2124]: Accepted publickey for core from 10.200.16.10 port 55904 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:26:59.408621 sshd-session[2124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:26:59.412727 systemd-logind[1706]: New session 8 of user core. Sep 11 00:26:59.418560 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 11 00:26:59.754832 sudo[2128]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 11 00:26:59.755187 sudo[2128]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:26:59.761028 sudo[2128]: pam_unix(sudo:session): session closed for user root Sep 11 00:26:59.764458 sudo[2127]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 11 00:26:59.764644 sudo[2127]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:26:59.771335 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 11 00:26:59.801173 augenrules[2150]: No rules Sep 11 00:26:59.802002 systemd[1]: audit-rules.service: Deactivated successfully. Sep 11 00:26:59.802247 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 11 00:26:59.803076 sudo[2127]: pam_unix(sudo:session): session closed for user root Sep 11 00:26:59.909259 sshd[2126]: Connection closed by 10.200.16.10 port 55904 Sep 11 00:26:59.909612 sshd-session[2124]: pam_unix(sshd:session): session closed for user core Sep 11 00:26:59.911995 systemd[1]: sshd@5-10.200.8.4:22-10.200.16.10:55904.service: Deactivated successfully. Sep 11 00:26:59.913139 systemd[1]: session-8.scope: Deactivated successfully. Sep 11 00:26:59.913763 systemd-logind[1706]: Session 8 logged out. Waiting for processes to exit. Sep 11 00:26:59.914654 systemd-logind[1706]: Removed session 8. Sep 11 00:27:00.020608 systemd[1]: Started sshd@6-10.200.8.4:22-10.200.16.10:48366.service - OpenSSH per-connection server daemon (10.200.16.10:48366). 
Sep 11 00:27:00.663839 sshd[2159]: Accepted publickey for core from 10.200.16.10 port 48366 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:27:00.664792 sshd-session[2159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:27:00.668783 systemd-logind[1706]: New session 9 of user core. Sep 11 00:27:00.673532 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 11 00:27:01.010621 sudo[2162]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 11 00:27:01.010821 sudo[2162]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:27:01.897680 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 11 00:27:01.907648 (dockerd)[2181]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 11 00:27:02.454333 dockerd[2181]: time="2025-09-11T00:27:02.454293471Z" level=info msg="Starting up" Sep 11 00:27:02.455238 dockerd[2181]: time="2025-09-11T00:27:02.455216687Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 11 00:27:02.570954 dockerd[2181]: time="2025-09-11T00:27:02.570914970Z" level=info msg="Loading containers: start." Sep 11 00:27:02.595405 kernel: Initializing XFRM netlink socket Sep 11 00:27:02.879372 systemd-networkd[1358]: docker0: Link UP Sep 11 00:27:02.896559 dockerd[2181]: time="2025-09-11T00:27:02.896534704Z" level=info msg="Loading containers: done." 
Sep 11 00:27:02.917740 dockerd[2181]: time="2025-09-11T00:27:02.917713389Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 11 00:27:02.917841 dockerd[2181]: time="2025-09-11T00:27:02.917767842Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Sep 11 00:27:02.917866 dockerd[2181]: time="2025-09-11T00:27:02.917839681Z" level=info msg="Initializing buildkit" Sep 11 00:27:02.981290 dockerd[2181]: time="2025-09-11T00:27:02.981266552Z" level=info msg="Completed buildkit initialization" Sep 11 00:27:02.987696 dockerd[2181]: time="2025-09-11T00:27:02.987659825Z" level=info msg="Daemon has completed initialization" Sep 11 00:27:02.988174 dockerd[2181]: time="2025-09-11T00:27:02.988105678Z" level=info msg="API listen on /run/docker.sock" Sep 11 00:27:02.987841 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 11 00:27:04.041220 containerd[1747]: time="2025-09-11T00:27:04.041186088Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Sep 11 00:27:04.475875 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Sep 11 00:27:04.891135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2534765810.mount: Deactivated successfully. 
Sep 11 00:27:05.969154 containerd[1747]: time="2025-09-11T00:27:05.969117312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:27:05.971427 containerd[1747]: time="2025-09-11T00:27:05.971397495Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114901" Sep 11 00:27:05.974487 containerd[1747]: time="2025-09-11T00:27:05.974454138Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:27:05.977969 containerd[1747]: time="2025-09-11T00:27:05.977716233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:27:05.978352 containerd[1747]: time="2025-09-11T00:27:05.978333111Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 1.937113803s" Sep 11 00:27:05.978403 containerd[1747]: time="2025-09-11T00:27:05.978363685Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Sep 11 00:27:05.979006 containerd[1747]: time="2025-09-11T00:27:05.978988982Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Sep 11 00:27:07.315717 containerd[1747]: time="2025-09-11T00:27:07.315675416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:27:07.318003 containerd[1747]: time="2025-09-11T00:27:07.317971477Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020852" Sep 11 00:27:07.320475 containerd[1747]: time="2025-09-11T00:27:07.320436877Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:27:07.323960 containerd[1747]: time="2025-09-11T00:27:07.323817882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:27:07.324369 containerd[1747]: time="2025-09-11T00:27:07.324347789Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.345336725s" Sep 11 00:27:07.324417 containerd[1747]: time="2025-09-11T00:27:07.324377940Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Sep 11 00:27:07.325024 containerd[1747]: time="2025-09-11T00:27:07.324995658Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Sep 11 00:27:08.072683 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 11 00:27:08.076581 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:27:08.646817 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 11 00:27:08.655643 (kubelet)[2455]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:27:08.705694 kubelet[2455]: E0911 00:27:08.705195 2455 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:27:08.708597 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:27:08.708837 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:27:08.709329 systemd[1]: kubelet.service: Consumed 124ms CPU time, 107.8M memory peak. Sep 11 00:27:08.851467 containerd[1747]: time="2025-09-11T00:27:08.851436493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:27:08.853876 containerd[1747]: time="2025-09-11T00:27:08.853742419Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155576" Sep 11 00:27:08.856261 containerd[1747]: time="2025-09-11T00:27:08.856241171Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:27:08.859816 containerd[1747]: time="2025-09-11T00:27:08.859794740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:27:08.860369 containerd[1747]: time="2025-09-11T00:27:08.860346848Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id 
\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.535328012s" Sep 11 00:27:08.860423 containerd[1747]: time="2025-09-11T00:27:08.860377163Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Sep 11 00:27:08.860877 containerd[1747]: time="2025-09-11T00:27:08.860854399Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Sep 11 00:27:10.083121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3177280023.mount: Deactivated successfully. Sep 11 00:27:10.413201 containerd[1747]: time="2025-09-11T00:27:10.413116296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:27:10.415327 containerd[1747]: time="2025-09-11T00:27:10.415297929Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929477" Sep 11 00:27:10.418120 containerd[1747]: time="2025-09-11T00:27:10.418085485Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:27:10.421102 containerd[1747]: time="2025-09-11T00:27:10.421067023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:27:10.421594 containerd[1747]: time="2025-09-11T00:27:10.421311700Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag 
\"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.560429018s" Sep 11 00:27:10.421594 containerd[1747]: time="2025-09-11T00:27:10.421339703Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Sep 11 00:27:10.421823 containerd[1747]: time="2025-09-11T00:27:10.421809624Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 11 00:27:10.976682 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2991360011.mount: Deactivated successfully. Sep 11 00:27:12.287420 containerd[1747]: time="2025-09-11T00:27:12.287367916Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:27:12.289417 containerd[1747]: time="2025-09-11T00:27:12.289394701Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246" Sep 11 00:27:12.292057 containerd[1747]: time="2025-09-11T00:27:12.292022799Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:27:12.295656 containerd[1747]: time="2025-09-11T00:27:12.295618691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:27:12.296591 containerd[1747]: time="2025-09-11T00:27:12.296141102Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.874257737s" Sep 11 00:27:12.296591 containerd[1747]: time="2025-09-11T00:27:12.296172033Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 11 00:27:12.296683 containerd[1747]: time="2025-09-11T00:27:12.296629391Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 11 00:27:12.322188 update_engine[1707]: I20250911 00:27:12.322142 1707 update_attempter.cc:509] Updating boot flags... Sep 11 00:27:12.860026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount940644260.mount: Deactivated successfully. Sep 11 00:27:12.876558 containerd[1747]: time="2025-09-11T00:27:12.876520908Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 11 00:27:12.878671 containerd[1747]: time="2025-09-11T00:27:12.878650004Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Sep 11 00:27:12.881635 containerd[1747]: time="2025-09-11T00:27:12.881602020Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 11 00:27:12.884829 containerd[1747]: time="2025-09-11T00:27:12.884791906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 11 00:27:12.885498 containerd[1747]: time="2025-09-11T00:27:12.885186676Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 588.535493ms" Sep 11 00:27:12.885498 containerd[1747]: time="2025-09-11T00:27:12.885212048Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 11 00:27:12.885770 containerd[1747]: time="2025-09-11T00:27:12.885753395Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 11 00:27:13.860049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1459905920.mount: Deactivated successfully. Sep 11 00:27:18.822542 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 11 00:27:18.824053 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:27:23.661227 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:27:23.667659 (kubelet)[2581]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:27:23.698303 kubelet[2581]: E0911 00:27:23.698274 2581 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:27:23.699636 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:27:23.699727 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:27:23.699993 systemd[1]: kubelet.service: Consumed 114ms CPU time, 110.5M memory peak. 
Sep 11 00:27:28.015624 containerd[1747]: time="2025-09-11T00:27:28.015573853Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:27:28.018195 containerd[1747]: time="2025-09-11T00:27:28.018151627Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378441" Sep 11 00:27:28.020890 containerd[1747]: time="2025-09-11T00:27:28.020855297Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:27:28.024475 containerd[1747]: time="2025-09-11T00:27:28.024437447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:27:28.026332 containerd[1747]: time="2025-09-11T00:27:28.025012687Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 15.139237227s" Sep 11 00:27:28.026332 containerd[1747]: time="2025-09-11T00:27:28.025039734Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 11 00:27:30.722686 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:27:30.722817 systemd[1]: kubelet.service: Consumed 114ms CPU time, 110.5M memory peak. Sep 11 00:27:30.724635 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:27:30.745417 systemd[1]: Reload requested from client PID 2659 ('systemctl') (unit session-9.scope)... 
Sep 11 00:27:30.745429 systemd[1]: Reloading... Sep 11 00:27:30.823400 zram_generator::config[2701]: No configuration found. Sep 11 00:27:30.940124 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 11 00:27:31.021783 systemd[1]: Reloading finished in 276 ms. Sep 11 00:27:31.050958 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 11 00:27:31.051017 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 11 00:27:31.051260 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:27:31.051309 systemd[1]: kubelet.service: Consumed 60ms CPU time, 69.9M memory peak. Sep 11 00:27:31.053425 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:27:31.665152 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:27:31.672380 (kubelet)[2771]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 11 00:27:31.704675 kubelet[2771]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 00:27:31.704675 kubelet[2771]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 11 00:27:31.704675 kubelet[2771]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 11 00:27:31.704913 kubelet[2771]: I0911 00:27:31.704743    2771 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 11 00:27:32.042846 kubelet[2771]: I0911 00:27:32.042618    2771 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 11 00:27:32.042846 kubelet[2771]: I0911 00:27:32.042636    2771 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 11 00:27:32.042846 kubelet[2771]: I0911 00:27:32.042810    2771 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 11 00:27:32.071736 kubelet[2771]: E0911 00:27:32.071711    2771 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.4:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.4:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Sep 11 00:27:32.075001 kubelet[2771]: I0911 00:27:32.074725    2771 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 11 00:27:32.080274 kubelet[2771]: I0911 00:27:32.080258    2771 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 11 00:27:32.083790 kubelet[2771]: I0911 00:27:32.083768    2771 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 11 00:27:32.083935 kubelet[2771]: I0911 00:27:32.083919    2771 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 11 00:27:32.084067 kubelet[2771]: I0911 00:27:32.083936    2771 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.1.0-n-3f8a739b41","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 11 00:27:32.084186 kubelet[2771]: I0911 00:27:32.084073    2771 topology_manager.go:138] "Creating topology manager with none policy"
Sep 11 00:27:32.084186 kubelet[2771]: I0911 00:27:32.084082    2771 container_manager_linux.go:303] "Creating device plugin manager"
Sep 11 00:27:32.084186 kubelet[2771]: I0911 00:27:32.084175    2771 state_mem.go:36] "Initialized new in-memory state store"
Sep 11 00:27:32.085902 kubelet[2771]: I0911 00:27:32.085890    2771 kubelet.go:480] "Attempting to sync node with API server"
Sep 11 00:27:32.085951 kubelet[2771]: I0911 00:27:32.085904    2771 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 11 00:27:32.085951 kubelet[2771]: I0911 00:27:32.085924    2771 kubelet.go:386] "Adding apiserver pod source"
Sep 11 00:27:32.085951 kubelet[2771]: I0911 00:27:32.085935    2771 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 11 00:27:32.095408 kubelet[2771]: I0911 00:27:32.095085    2771 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Sep 11 00:27:32.095510 kubelet[2771]: I0911 00:27:32.095498    2771 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 11 00:27:32.096054 kubelet[2771]: W0911 00:27:32.096042    2771 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 11 00:27:32.098032 kubelet[2771]: I0911 00:27:32.097888    2771 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 11 00:27:32.098032 kubelet[2771]: I0911 00:27:32.097925    2771 server.go:1289] "Started kubelet"
Sep 11 00:27:32.098107 kubelet[2771]: E0911 00:27:32.098059    2771 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.1.0-n-3f8a739b41&limit=500&resourceVersion=0\": dial tcp 10.200.8.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 11 00:27:32.098146 kubelet[2771]: E0911 00:27:32.098128    2771 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 11 00:27:32.102337 kubelet[2771]: I0911 00:27:32.102299    2771 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 11 00:27:32.104085 kubelet[2771]: I0911 00:27:32.103191    2771 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 11 00:27:32.105080 kubelet[2771]: I0911 00:27:32.104435    2771 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 11 00:27:32.107181 kubelet[2771]: E0911 00:27:32.106138    2771 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.4:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.4:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372.1.0-n-3f8a739b41.186412d3041f66ed  default    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372.1.0-n-3f8a739b41,UID:ci-4372.1.0-n-3f8a739b41,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372.1.0-n-3f8a739b41,},FirstTimestamp:2025-09-11 00:27:32.097902317 +0000 UTC m=+0.422146280,LastTimestamp:2025-09-11 00:27:32.097902317 +0000 UTC m=+0.422146280,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372.1.0-n-3f8a739b41,}"
Sep 11 00:27:32.107751 kubelet[2771]: I0911 00:27:32.107732    2771 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 11 00:27:32.108709 kubelet[2771]: I0911 00:27:32.108695    2771 server.go:317] "Adding debug handlers to kubelet server"
Sep 11 00:27:32.109520 kubelet[2771]: I0911 00:27:32.109503    2771 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 11 00:27:32.112535 kubelet[2771]: I0911 00:27:32.111179    2771 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 11 00:27:32.112535 kubelet[2771]: E0911 00:27:32.111524    2771 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.1.0-n-3f8a739b41\" not found"
Sep 11 00:27:32.112535 kubelet[2771]: E0911 00:27:32.111767    2771 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.1.0-n-3f8a739b41?timeout=10s\": dial tcp 10.200.8.4:6443: connect: connection refused" interval="200ms"
Sep 11 00:27:32.112535 kubelet[2771]: I0911 00:27:32.111792    2771 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 11 00:27:32.112535 kubelet[2771]: E0911 00:27:32.112028    2771 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 11 00:27:32.112535 kubelet[2771]: I0911 00:27:32.112072    2771 reconciler.go:26] "Reconciler: start to sync state"
Sep 11 00:27:32.112895 kubelet[2771]: I0911 00:27:32.112880    2771 factory.go:223] Registration of the systemd container factory successfully
Sep 11 00:27:32.112963 kubelet[2771]: I0911 00:27:32.112950    2771 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 11 00:27:32.113365 kubelet[2771]: E0911 00:27:32.113349    2771 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 11 00:27:32.114306 kubelet[2771]: I0911 00:27:32.114291    2771 factory.go:223] Registration of the containerd container factory successfully
Sep 11 00:27:32.126642 kubelet[2771]: I0911 00:27:32.126628    2771 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 11 00:27:32.126642 kubelet[2771]: I0911 00:27:32.126638    2771 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 11 00:27:32.126721 kubelet[2771]: I0911 00:27:32.126651    2771 state_mem.go:36] "Initialized new in-memory state store"
Sep 11 00:27:32.131482 kubelet[2771]: I0911 00:27:32.131468    2771 policy_none.go:49] "None policy: Start"
Sep 11 00:27:32.131482 kubelet[2771]: I0911 00:27:32.131483    2771 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 11 00:27:32.131545 kubelet[2771]: I0911 00:27:32.131505    2771 state_mem.go:35] "Initializing new in-memory state store"
Sep 11 00:27:32.134330 kubelet[2771]: I0911 00:27:32.134245    2771 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 11 00:27:32.136162 kubelet[2771]: I0911 00:27:32.135143    2771 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 11 00:27:32.136162 kubelet[2771]: I0911 00:27:32.135164    2771 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 11 00:27:32.136162 kubelet[2771]: I0911 00:27:32.135181    2771 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 11 00:27:32.136162 kubelet[2771]: I0911 00:27:32.135187    2771 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 11 00:27:32.136162 kubelet[2771]: E0911 00:27:32.135213    2771 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 11 00:27:32.138154 kubelet[2771]: E0911 00:27:32.138067    2771 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Sep 11 00:27:32.141296 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 11 00:27:32.150785 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 11 00:27:32.153159 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 11 00:27:32.163835 kubelet[2771]: E0911 00:27:32.163821    2771 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 11 00:27:32.164032 kubelet[2771]: I0911 00:27:32.164022    2771 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 11 00:27:32.164100 kubelet[2771]: I0911 00:27:32.164079    2771 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 11 00:27:32.164783 kubelet[2771]: I0911 00:27:32.164645    2771 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 11 00:27:32.165053 kubelet[2771]: E0911 00:27:32.165042    2771 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 11 00:27:32.165413 kubelet[2771]: E0911 00:27:32.165329    2771 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4372.1.0-n-3f8a739b41\" not found"
Sep 11 00:27:32.250116 systemd[1]: Created slice kubepods-burstable-pod1a0b6a3088438a05b5849fc7004fcaff.slice - libcontainer container kubepods-burstable-pod1a0b6a3088438a05b5849fc7004fcaff.slice.
Sep 11 00:27:32.256837 kubelet[2771]: E0911 00:27:32.256815    2771 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-3f8a739b41\" not found" node="ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:32.260838 systemd[1]: Created slice kubepods-burstable-pode7d280209c796987504e24e99315def7.slice - libcontainer container kubepods-burstable-pode7d280209c796987504e24e99315def7.slice.
Sep 11 00:27:32.265462 kubelet[2771]: I0911 00:27:32.265449    2771 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:32.265663 kubelet[2771]: E0911 00:27:32.265646    2771 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.4:6443/api/v1/nodes\": dial tcp 10.200.8.4:6443: connect: connection refused" node="ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:32.267159 kubelet[2771]: E0911 00:27:32.267144    2771 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-3f8a739b41\" not found" node="ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:32.269083 systemd[1]: Created slice kubepods-burstable-podebf21d4f2445243377d8656191ace802.slice - libcontainer container kubepods-burstable-podebf21d4f2445243377d8656191ace802.slice.
Sep 11 00:27:32.270524 kubelet[2771]: E0911 00:27:32.270508    2771 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-3f8a739b41\" not found" node="ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:32.313094 kubelet[2771]: E0911 00:27:32.313016    2771 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.1.0-n-3f8a739b41?timeout=10s\": dial tcp 10.200.8.4:6443: connect: connection refused" interval="400ms"
Sep 11 00:27:32.413367 kubelet[2771]: I0911 00:27:32.413314    2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e7d280209c796987504e24e99315def7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.1.0-n-3f8a739b41\" (UID: \"e7d280209c796987504e24e99315def7\") " pod="kube-system/kube-apiserver-ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:32.413367 kubelet[2771]: I0911 00:27:32.413354    2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebf21d4f2445243377d8656191ace802-kubeconfig\") pod \"kube-controller-manager-ci-4372.1.0-n-3f8a739b41\" (UID: \"ebf21d4f2445243377d8656191ace802\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:32.413534 kubelet[2771]: I0911 00:27:32.413378    2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1a0b6a3088438a05b5849fc7004fcaff-kubeconfig\") pod \"kube-scheduler-ci-4372.1.0-n-3f8a739b41\" (UID: \"1a0b6a3088438a05b5849fc7004fcaff\") " pod="kube-system/kube-scheduler-ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:32.413534 kubelet[2771]: I0911 00:27:32.413405    2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e7d280209c796987504e24e99315def7-ca-certs\") pod \"kube-apiserver-ci-4372.1.0-n-3f8a739b41\" (UID: \"e7d280209c796987504e24e99315def7\") " pod="kube-system/kube-apiserver-ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:32.413534 kubelet[2771]: I0911 00:27:32.413419    2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ebf21d4f2445243377d8656191ace802-ca-certs\") pod \"kube-controller-manager-ci-4372.1.0-n-3f8a739b41\" (UID: \"ebf21d4f2445243377d8656191ace802\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:32.413534 kubelet[2771]: I0911 00:27:32.413434    2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ebf21d4f2445243377d8656191ace802-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.1.0-n-3f8a739b41\" (UID: \"ebf21d4f2445243377d8656191ace802\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:32.413534 kubelet[2771]: I0911 00:27:32.413464    2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ebf21d4f2445243377d8656191ace802-k8s-certs\") pod \"kube-controller-manager-ci-4372.1.0-n-3f8a739b41\" (UID: \"ebf21d4f2445243377d8656191ace802\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:32.413653 kubelet[2771]: I0911 00:27:32.413495    2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ebf21d4f2445243377d8656191ace802-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.1.0-n-3f8a739b41\" (UID: \"ebf21d4f2445243377d8656191ace802\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:32.413653 kubelet[2771]: I0911 00:27:32.413511    2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e7d280209c796987504e24e99315def7-k8s-certs\") pod \"kube-apiserver-ci-4372.1.0-n-3f8a739b41\" (UID: \"e7d280209c796987504e24e99315def7\") " pod="kube-system/kube-apiserver-ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:32.467380 kubelet[2771]: I0911 00:27:32.467341    2771 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:32.467750 kubelet[2771]: E0911 00:27:32.467727    2771 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.4:6443/api/v1/nodes\": dial tcp 10.200.8.4:6443: connect: connection refused" node="ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:32.557785 containerd[1747]: time="2025-09-11T00:27:32.557758084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.1.0-n-3f8a739b41,Uid:1a0b6a3088438a05b5849fc7004fcaff,Namespace:kube-system,Attempt:0,}"
Sep 11 00:27:32.568266 containerd[1747]: time="2025-09-11T00:27:32.568205015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.1.0-n-3f8a739b41,Uid:e7d280209c796987504e24e99315def7,Namespace:kube-system,Attempt:0,}"
Sep 11 00:27:32.571188 containerd[1747]: time="2025-09-11T00:27:32.571154411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.1.0-n-3f8a739b41,Uid:ebf21d4f2445243377d8656191ace802,Namespace:kube-system,Attempt:0,}"
Sep 11 00:27:32.713603 kubelet[2771]: E0911 00:27:32.713575    2771 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.1.0-n-3f8a739b41?timeout=10s\": dial tcp 10.200.8.4:6443: connect: connection refused" interval="800ms"
Sep 11 00:27:32.869148 kubelet[2771]: I0911 00:27:32.869130    2771 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:32.869355 kubelet[2771]: E0911 00:27:32.869335    2771 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.4:6443/api/v1/nodes\": dial tcp 10.200.8.4:6443: connect: connection refused" node="ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:32.977229 kubelet[2771]: E0911 00:27:32.977207    2771 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 11 00:27:33.040956 containerd[1747]: time="2025-09-11T00:27:33.040899846Z" level=info msg="connecting to shim dbb2a241f6a6488e8e2c71f4b61a9980cda6417f7d87afee13bf7c3478f25774" address="unix:///run/containerd/s/1c62e3794e013ae5db1a130eac851bba2145ec93724543bdd10df757d86f826a" namespace=k8s.io protocol=ttrpc version=3
Sep 11 00:27:33.059574 containerd[1747]: time="2025-09-11T00:27:33.059551496Z" level=info msg="connecting to shim 30c6fbaf915145e416601a284c611cd13c4cd164fff4eec643c2ecbf05403b27" address="unix:///run/containerd/s/3983e5a4eeb91a4abfcd77a36835612ebc31d9bf5506ab0bedf7b1c15b4ef340" namespace=k8s.io protocol=ttrpc version=3
Sep 11 00:27:33.066408 containerd[1747]: time="2025-09-11T00:27:33.066351912Z" level=info msg="connecting to shim 69aa4daaf25a570810ed61f6cac678fa060a0f7c951a33b6710f5abd4fe33b9e" address="unix:///run/containerd/s/43eaf02f3248de3b26ec8cc618144abb0e77e73910dc4042e9f741c0c2b2dcb1" namespace=k8s.io protocol=ttrpc version=3
Sep 11 00:27:33.089518 systemd[1]: Started cri-containerd-dbb2a241f6a6488e8e2c71f4b61a9980cda6417f7d87afee13bf7c3478f25774.scope - libcontainer container dbb2a241f6a6488e8e2c71f4b61a9980cda6417f7d87afee13bf7c3478f25774.
Sep 11 00:27:33.101496 systemd[1]: Started cri-containerd-30c6fbaf915145e416601a284c611cd13c4cd164fff4eec643c2ecbf05403b27.scope - libcontainer container 30c6fbaf915145e416601a284c611cd13c4cd164fff4eec643c2ecbf05403b27.
Sep 11 00:27:33.102604 systemd[1]: Started cri-containerd-69aa4daaf25a570810ed61f6cac678fa060a0f7c951a33b6710f5abd4fe33b9e.scope - libcontainer container 69aa4daaf25a570810ed61f6cac678fa060a0f7c951a33b6710f5abd4fe33b9e.
Sep 11 00:27:33.161785 containerd[1747]: time="2025-09-11T00:27:33.161691548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.1.0-n-3f8a739b41,Uid:1a0b6a3088438a05b5849fc7004fcaff,Namespace:kube-system,Attempt:0,} returns sandbox id \"dbb2a241f6a6488e8e2c71f4b61a9980cda6417f7d87afee13bf7c3478f25774\""
Sep 11 00:27:33.169113 containerd[1747]: time="2025-09-11T00:27:33.169062165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.1.0-n-3f8a739b41,Uid:e7d280209c796987504e24e99315def7,Namespace:kube-system,Attempt:0,} returns sandbox id \"30c6fbaf915145e416601a284c611cd13c4cd164fff4eec643c2ecbf05403b27\""
Sep 11 00:27:33.172902 containerd[1747]: time="2025-09-11T00:27:33.172850809Z" level=info msg="CreateContainer within sandbox \"dbb2a241f6a6488e8e2c71f4b61a9980cda6417f7d87afee13bf7c3478f25774\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 11 00:27:33.178076 containerd[1747]: time="2025-09-11T00:27:33.178051681Z" level=info msg="CreateContainer within sandbox \"30c6fbaf915145e416601a284c611cd13c4cd164fff4eec643c2ecbf05403b27\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 11 00:27:33.180544 containerd[1747]: time="2025-09-11T00:27:33.180520595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.1.0-n-3f8a739b41,Uid:ebf21d4f2445243377d8656191ace802,Namespace:kube-system,Attempt:0,} returns sandbox id \"69aa4daaf25a570810ed61f6cac678fa060a0f7c951a33b6710f5abd4fe33b9e\""
Sep 11 00:27:33.188921 containerd[1747]: time="2025-09-11T00:27:33.188853107Z" level=info msg="CreateContainer within sandbox \"69aa4daaf25a570810ed61f6cac678fa060a0f7c951a33b6710f5abd4fe33b9e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 11 00:27:33.201401 containerd[1747]: time="2025-09-11T00:27:33.201276589Z" level=info msg="Container 6cfe998d533bcd41faff8e9552ba8368d137087402da1711a21edf4bab63a738: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:27:33.217163 containerd[1747]: time="2025-09-11T00:27:33.217143261Z" level=info msg="Container 979b1b47a134f26db8147718edb32fc2bef9f720cfda2ed1dc071dfc0b5eddef: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:27:33.222164 containerd[1747]: time="2025-09-11T00:27:33.222142552Z" level=info msg="Container 4df5ce4899502ba036e44f23f5387227f76d09c52b47072b50fb885f99504b3c: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:27:33.229332 containerd[1747]: time="2025-09-11T00:27:33.229307812Z" level=info msg="CreateContainer within sandbox \"dbb2a241f6a6488e8e2c71f4b61a9980cda6417f7d87afee13bf7c3478f25774\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6cfe998d533bcd41faff8e9552ba8368d137087402da1711a21edf4bab63a738\""
Sep 11 00:27:33.229752 containerd[1747]: time="2025-09-11T00:27:33.229734046Z" level=info msg="StartContainer for \"6cfe998d533bcd41faff8e9552ba8368d137087402da1711a21edf4bab63a738\""
Sep 11 00:27:33.230402 containerd[1747]: time="2025-09-11T00:27:33.230358196Z" level=info msg="connecting to shim 6cfe998d533bcd41faff8e9552ba8368d137087402da1711a21edf4bab63a738" address="unix:///run/containerd/s/1c62e3794e013ae5db1a130eac851bba2145ec93724543bdd10df757d86f826a" protocol=ttrpc version=3
Sep 11 00:27:33.240972 containerd[1747]: time="2025-09-11T00:27:33.240949678Z" level=info msg="CreateContainer within sandbox \"30c6fbaf915145e416601a284c611cd13c4cd164fff4eec643c2ecbf05403b27\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"979b1b47a134f26db8147718edb32fc2bef9f720cfda2ed1dc071dfc0b5eddef\""
Sep 11 00:27:33.241446 containerd[1747]: time="2025-09-11T00:27:33.241428912Z" level=info msg="StartContainer for \"979b1b47a134f26db8147718edb32fc2bef9f720cfda2ed1dc071dfc0b5eddef\""
Sep 11 00:27:33.242242 containerd[1747]: time="2025-09-11T00:27:33.242221218Z" level=info msg="connecting to shim 979b1b47a134f26db8147718edb32fc2bef9f720cfda2ed1dc071dfc0b5eddef" address="unix:///run/containerd/s/3983e5a4eeb91a4abfcd77a36835612ebc31d9bf5506ab0bedf7b1c15b4ef340" protocol=ttrpc version=3
Sep 11 00:27:33.245559 systemd[1]: Started cri-containerd-6cfe998d533bcd41faff8e9552ba8368d137087402da1711a21edf4bab63a738.scope - libcontainer container 6cfe998d533bcd41faff8e9552ba8368d137087402da1711a21edf4bab63a738.
Sep 11 00:27:33.250121 containerd[1747]: time="2025-09-11T00:27:33.250094215Z" level=info msg="CreateContainer within sandbox \"69aa4daaf25a570810ed61f6cac678fa060a0f7c951a33b6710f5abd4fe33b9e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4df5ce4899502ba036e44f23f5387227f76d09c52b47072b50fb885f99504b3c\""
Sep 11 00:27:33.254008 containerd[1747]: time="2025-09-11T00:27:33.253499660Z" level=info msg="StartContainer for \"4df5ce4899502ba036e44f23f5387227f76d09c52b47072b50fb885f99504b3c\""
Sep 11 00:27:33.257116 containerd[1747]: time="2025-09-11T00:27:33.257079576Z" level=info msg="connecting to shim 4df5ce4899502ba036e44f23f5387227f76d09c52b47072b50fb885f99504b3c" address="unix:///run/containerd/s/43eaf02f3248de3b26ec8cc618144abb0e77e73910dc4042e9f741c0c2b2dcb1" protocol=ttrpc version=3
Sep 11 00:27:33.259524 systemd[1]: Started cri-containerd-979b1b47a134f26db8147718edb32fc2bef9f720cfda2ed1dc071dfc0b5eddef.scope - libcontainer container 979b1b47a134f26db8147718edb32fc2bef9f720cfda2ed1dc071dfc0b5eddef.
Sep 11 00:27:33.279085 systemd[1]: Started cri-containerd-4df5ce4899502ba036e44f23f5387227f76d09c52b47072b50fb885f99504b3c.scope - libcontainer container 4df5ce4899502ba036e44f23f5387227f76d09c52b47072b50fb885f99504b3c.
Sep 11 00:27:33.318440 containerd[1747]: time="2025-09-11T00:27:33.318418918Z" level=info msg="StartContainer for \"979b1b47a134f26db8147718edb32fc2bef9f720cfda2ed1dc071dfc0b5eddef\" returns successfully"
Sep 11 00:27:33.332516 containerd[1747]: time="2025-09-11T00:27:33.332355770Z" level=info msg="StartContainer for \"6cfe998d533bcd41faff8e9552ba8368d137087402da1711a21edf4bab63a738\" returns successfully"
Sep 11 00:27:33.349894 containerd[1747]: time="2025-09-11T00:27:33.349873828Z" level=info msg="StartContainer for \"4df5ce4899502ba036e44f23f5387227f76d09c52b47072b50fb885f99504b3c\" returns successfully"
Sep 11 00:27:33.671431 kubelet[2771]: I0911 00:27:33.671376    2771 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:34.151978 kubelet[2771]: E0911 00:27:34.151663    2771 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-3f8a739b41\" not found" node="ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:34.154359 kubelet[2771]: E0911 00:27:34.154251    2771 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-3f8a739b41\" not found" node="ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:34.158401 kubelet[2771]: E0911 00:27:34.157298    2771 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-3f8a739b41\" not found" node="ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:35.160317 kubelet[2771]: E0911 00:27:35.159941    2771 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-3f8a739b41\" not found" node="ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:35.160317 kubelet[2771]: E0911 00:27:35.160226    2771 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-3f8a739b41\" not found" node="ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:35.419617 kubelet[2771]: I0911 00:27:35.419489    2771 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:35.419617 kubelet[2771]: E0911 00:27:35.419519    2771 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4372.1.0-n-3f8a739b41\": node \"ci-4372.1.0-n-3f8a739b41\" not found"
Sep 11 00:27:35.511973 kubelet[2771]: I0911 00:27:35.511721    2771 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:35.522223 kubelet[2771]: E0911 00:27:35.522202    2771 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.1.0-n-3f8a739b41\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:35.522332 kubelet[2771]: I0911 00:27:35.522325    2771 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:35.525637 kubelet[2771]: E0911 00:27:35.525507    2771 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.1.0-n-3f8a739b41\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:35.525637 kubelet[2771]: I0911 00:27:35.525525    2771 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:35.531490 kubelet[2771]: E0911 00:27:35.531466    2771 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4372.1.0-n-3f8a739b41\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:36.094507 kubelet[2771]: I0911 00:27:36.094479    2771 apiserver.go:52] "Watching apiserver"
Sep 11 00:27:36.112374 kubelet[2771]: I0911 00:27:36.112355    2771 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 11 00:27:37.606627 systemd[1]: Reload requested from client PID 3044 ('systemctl') (unit session-9.scope)...
Sep 11 00:27:37.606640 systemd[1]: Reloading...
Sep 11 00:27:37.676404 zram_generator::config[3086]: No configuration found.
Sep 11 00:27:37.757672 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 11 00:27:37.848258 systemd[1]: Reloading finished in 241 ms.
Sep 11 00:27:37.874371 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 11 00:27:37.885085 systemd[1]: kubelet.service: Deactivated successfully.
Sep 11 00:27:37.885300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 11 00:27:37.885345 systemd[1]: kubelet.service: Consumed 677ms CPU time, 131M memory peak.
Sep 11 00:27:37.886848 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 11 00:27:39.028270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 11 00:27:39.036694 (kubelet)[3157]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 11 00:27:39.071542 kubelet[3157]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 11 00:27:39.071542 kubelet[3157]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 11 00:27:39.071542 kubelet[3157]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 11 00:27:39.072407 kubelet[3157]: I0911 00:27:39.071916 3157 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 11 00:27:39.078778 kubelet[3157]: I0911 00:27:39.078756 3157 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 11 00:27:39.078778 kubelet[3157]: I0911 00:27:39.078772 3157 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 11 00:27:39.078943 kubelet[3157]: I0911 00:27:39.078931 3157 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 11 00:27:39.079712 kubelet[3157]: I0911 00:27:39.079698 3157 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Sep 11 00:27:39.082346 kubelet[3157]: I0911 00:27:39.082086 3157 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 11 00:27:39.086243 kubelet[3157]: I0911 00:27:39.086229 3157 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 11 00:27:39.089131 kubelet[3157]: I0911 00:27:39.089116 3157 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 11 00:27:39.089284 kubelet[3157]: I0911 00:27:39.089263 3157 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 11 00:27:39.089413 kubelet[3157]: I0911 00:27:39.089285 3157 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.1.0-n-3f8a739b41","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 11 00:27:39.089502 kubelet[3157]: I0911 00:27:39.089420 3157 topology_manager.go:138] "Creating topology manager with none policy"
Sep 11 00:27:39.089502 kubelet[3157]: I0911 00:27:39.089428 3157 container_manager_linux.go:303] "Creating device plugin manager"
Sep 11 00:27:39.089502 kubelet[3157]: I0911 00:27:39.089467 3157 state_mem.go:36] "Initialized new in-memory state store"
Sep 11 00:27:39.089582 kubelet[3157]: I0911 00:27:39.089573 3157 kubelet.go:480] "Attempting to sync node with API server"
Sep 11 00:27:39.089601 kubelet[3157]: I0911 00:27:39.089584 3157 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 11 00:27:39.090403 kubelet[3157]: I0911 00:27:39.090312 3157 kubelet.go:386] "Adding apiserver pod source"
Sep 11 00:27:39.090403 kubelet[3157]: I0911 00:27:39.090334 3157 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 11 00:27:39.097670 kubelet[3157]: I0911 00:27:39.097641 3157 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Sep 11 00:27:39.098430 kubelet[3157]: I0911 00:27:39.098419 3157 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 11 00:27:39.103308 kubelet[3157]: I0911 00:27:39.103296 3157 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 11 00:27:39.103456 kubelet[3157]: I0911 00:27:39.103449 3157 server.go:1289] "Started kubelet"
Sep 11 00:27:39.105219 kubelet[3157]: I0911 00:27:39.105067 3157 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 11 00:27:39.106620 kubelet[3157]: I0911 00:27:39.106339 3157 server.go:317] "Adding debug handlers to kubelet server"
Sep 11 00:27:39.108228 kubelet[3157]: I0911 00:27:39.107895 3157 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 11 00:27:39.118401 kubelet[3157]: I0911 00:27:39.105555 3157 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 11 00:27:39.118606 kubelet[3157]: I0911 00:27:39.118596 3157 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 11 00:27:39.118671 kubelet[3157]: I0911 00:27:39.116905 3157 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 11 00:27:39.118812 kubelet[3157]: I0911 00:27:39.118803 3157 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 11 00:27:39.120727 kubelet[3157]: I0911 00:27:39.116895 3157 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 11 00:27:39.121354 kubelet[3157]: I0911 00:27:39.121054 3157 reconciler.go:26] "Reconciler: start to sync state"
Sep 11 00:27:39.122582 kubelet[3157]: I0911 00:27:39.122529 3157 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 11 00:27:39.124167 kubelet[3157]: I0911 00:27:39.124144 3157 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 11 00:27:39.124167 kubelet[3157]: I0911 00:27:39.124165 3157 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 11 00:27:39.124247 kubelet[3157]: I0911 00:27:39.124178 3157 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 11 00:27:39.124247 kubelet[3157]: I0911 00:27:39.124184 3157 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 11 00:27:39.124247 kubelet[3157]: E0911 00:27:39.124212 3157 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 11 00:27:39.126845 kubelet[3157]: I0911 00:27:39.126721 3157 factory.go:223] Registration of the systemd container factory successfully
Sep 11 00:27:39.126845 kubelet[3157]: I0911 00:27:39.126786 3157 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 11 00:27:39.133826 kubelet[3157]: E0911 00:27:39.132850 3157 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 11 00:27:39.134176 kubelet[3157]: I0911 00:27:39.134163 3157 factory.go:223] Registration of the containerd container factory successfully
Sep 11 00:27:39.176026 kubelet[3157]: I0911 00:27:39.176014 3157 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 11 00:27:39.176026 kubelet[3157]: I0911 00:27:39.176025 3157 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 11 00:27:39.176116 kubelet[3157]: I0911 00:27:39.176039 3157 state_mem.go:36] "Initialized new in-memory state store"
Sep 11 00:27:39.176139 kubelet[3157]: I0911 00:27:39.176129 3157 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 11 00:27:39.176158 kubelet[3157]: I0911 00:27:39.176136 3157 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 11 00:27:39.176158 kubelet[3157]: I0911 00:27:39.176148 3157 policy_none.go:49] "None policy: Start"
Sep 11 00:27:39.176158 kubelet[3157]: I0911 00:27:39.176156 3157 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 11 00:27:39.177034 kubelet[3157]: I0911 00:27:39.176164 3157 state_mem.go:35] "Initializing new in-memory state store"
Sep 11 00:27:39.177034 kubelet[3157]: I0911 00:27:39.176245 3157 state_mem.go:75] "Updated machine memory state"
Sep 11 00:27:39.181096 kubelet[3157]: E0911 00:27:39.180558 3157 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 11 00:27:39.181096 kubelet[3157]: I0911 00:27:39.180664 3157 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 11 00:27:39.181096 kubelet[3157]: I0911 00:27:39.180672 3157 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 11 00:27:39.181404 kubelet[3157]: I0911 00:27:39.181377 3157 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 11 00:27:39.184250 kubelet[3157]: E0911 00:27:39.184236 3157 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 11 00:27:39.225914 kubelet[3157]: I0911 00:27:39.225739 3157 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:39.226557 kubelet[3157]: I0911 00:27:39.226412 3157 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:39.226740 kubelet[3157]: I0911 00:27:39.226675 3157 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:39.233728 kubelet[3157]: I0911 00:27:39.233692 3157 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Sep 11 00:27:39.236824 kubelet[3157]: I0911 00:27:39.236809 3157 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Sep 11 00:27:39.237818 kubelet[3157]: I0911 00:27:39.237683 3157 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Sep 11 00:27:39.287009 kubelet[3157]: I0911 00:27:39.286964 3157 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:39.300160 kubelet[3157]: I0911 00:27:39.299239 3157 kubelet_node_status.go:124] "Node was previously registered" node="ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:39.300160 kubelet[3157]: I0911 00:27:39.299287 3157 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:39.321999 kubelet[3157]: I0911 00:27:39.321973 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ebf21d4f2445243377d8656191ace802-ca-certs\") pod \"kube-controller-manager-ci-4372.1.0-n-3f8a739b41\" (UID: \"ebf21d4f2445243377d8656191ace802\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:39.322123 kubelet[3157]: I0911 00:27:39.322111 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ebf21d4f2445243377d8656191ace802-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.1.0-n-3f8a739b41\" (UID: \"ebf21d4f2445243377d8656191ace802\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:39.322187 kubelet[3157]: I0911 00:27:39.322178 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e7d280209c796987504e24e99315def7-k8s-certs\") pod \"kube-apiserver-ci-4372.1.0-n-3f8a739b41\" (UID: \"e7d280209c796987504e24e99315def7\") " pod="kube-system/kube-apiserver-ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:39.322255 kubelet[3157]: I0911 00:27:39.322247 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e7d280209c796987504e24e99315def7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.1.0-n-3f8a739b41\" (UID: \"e7d280209c796987504e24e99315def7\") " pod="kube-system/kube-apiserver-ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:39.322322 kubelet[3157]: I0911 00:27:39.322314 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ebf21d4f2445243377d8656191ace802-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.1.0-n-3f8a739b41\" (UID: \"ebf21d4f2445243377d8656191ace802\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:39.322379 kubelet[3157]: I0911 00:27:39.322358 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ebf21d4f2445243377d8656191ace802-k8s-certs\") pod \"kube-controller-manager-ci-4372.1.0-n-3f8a739b41\" (UID: \"ebf21d4f2445243377d8656191ace802\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:39.322460 kubelet[3157]: I0911 00:27:39.322447 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebf21d4f2445243377d8656191ace802-kubeconfig\") pod \"kube-controller-manager-ci-4372.1.0-n-3f8a739b41\" (UID: \"ebf21d4f2445243377d8656191ace802\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:39.322520 kubelet[3157]: I0911 00:27:39.322513 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1a0b6a3088438a05b5849fc7004fcaff-kubeconfig\") pod \"kube-scheduler-ci-4372.1.0-n-3f8a739b41\" (UID: \"1a0b6a3088438a05b5849fc7004fcaff\") " pod="kube-system/kube-scheduler-ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:39.322588 kubelet[3157]: I0911 00:27:39.322557 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e7d280209c796987504e24e99315def7-ca-certs\") pod \"kube-apiserver-ci-4372.1.0-n-3f8a739b41\" (UID: \"e7d280209c796987504e24e99315def7\") " pod="kube-system/kube-apiserver-ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:40.094441 kubelet[3157]: I0911 00:27:40.094416 3157 apiserver.go:52] "Watching apiserver"
Sep 11 00:27:40.119499 kubelet[3157]: I0911 00:27:40.119476 3157 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 11 00:27:40.155397 kubelet[3157]: I0911 00:27:40.155319 3157 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:40.156311 kubelet[3157]: I0911 00:27:40.156296 3157 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:40.167398 kubelet[3157]: I0911 00:27:40.167210 3157 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Sep 11 00:27:40.167398 kubelet[3157]: E0911 00:27:40.167251 3157 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4372.1.0-n-3f8a739b41\" already exists" pod="kube-system/kube-controller-manager-ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:40.168424 kubelet[3157]: I0911 00:27:40.168410 3157 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Sep 11 00:27:40.168588 kubelet[3157]: E0911 00:27:40.168548 3157 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.1.0-n-3f8a739b41\" already exists" pod="kube-system/kube-apiserver-ci-4372.1.0-n-3f8a739b41"
Sep 11 00:27:40.176141 kubelet[3157]: I0911 00:27:40.176103 3157 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4372.1.0-n-3f8a739b41" podStartSLOduration=1.176092127 podStartE2EDuration="1.176092127s" podCreationTimestamp="2025-09-11 00:27:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:27:40.174886333 +0000 UTC m=+1.134712175" watchObservedRunningTime="2025-09-11 00:27:40.176092127 +0000 UTC m=+1.135917974"
Sep 11 00:27:40.189263 kubelet[3157]: I0911 00:27:40.189122 3157 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4372.1.0-n-3f8a739b41" podStartSLOduration=1.189110887 podStartE2EDuration="1.189110887s" podCreationTimestamp="2025-09-11 00:27:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:27:40.188689742 +0000 UTC m=+1.148515583" watchObservedRunningTime="2025-09-11 00:27:40.189110887 +0000 UTC m=+1.148936726"
Sep 11 00:27:40.207599 kubelet[3157]: I0911 00:27:40.207504 3157 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4372.1.0-n-3f8a739b41" podStartSLOduration=1.207494927 podStartE2EDuration="1.207494927s" podCreationTimestamp="2025-09-11 00:27:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:27:40.197477677 +0000 UTC m=+1.157303537" watchObservedRunningTime="2025-09-11 00:27:40.207494927 +0000 UTC m=+1.167320810"
Sep 11 00:27:44.911059 kubelet[3157]: I0911 00:27:44.911030 3157 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 11 00:27:44.911568 containerd[1747]: time="2025-09-11T00:27:44.911540368Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 11 00:27:44.911918 kubelet[3157]: I0911 00:27:44.911683 3157 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 11 00:27:45.885233 systemd[1]: Created slice kubepods-besteffort-pod487c4bbe_2dcc_4be1_b399_c752299a43ce.slice - libcontainer container kubepods-besteffort-pod487c4bbe_2dcc_4be1_b399_c752299a43ce.slice.
Sep 11 00:27:45.966258 kubelet[3157]: I0911 00:27:45.966219 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgnlt\" (UniqueName: \"kubernetes.io/projected/487c4bbe-2dcc-4be1-b399-c752299a43ce-kube-api-access-zgnlt\") pod \"kube-proxy-8xwl4\" (UID: \"487c4bbe-2dcc-4be1-b399-c752299a43ce\") " pod="kube-system/kube-proxy-8xwl4"
Sep 11 00:27:45.966552 kubelet[3157]: I0911 00:27:45.966260 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/487c4bbe-2dcc-4be1-b399-c752299a43ce-kube-proxy\") pod \"kube-proxy-8xwl4\" (UID: \"487c4bbe-2dcc-4be1-b399-c752299a43ce\") " pod="kube-system/kube-proxy-8xwl4"
Sep 11 00:27:45.966552 kubelet[3157]: I0911 00:27:45.966280 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/487c4bbe-2dcc-4be1-b399-c752299a43ce-xtables-lock\") pod \"kube-proxy-8xwl4\" (UID: \"487c4bbe-2dcc-4be1-b399-c752299a43ce\") " pod="kube-system/kube-proxy-8xwl4"
Sep 11 00:27:45.966552 kubelet[3157]: I0911 00:27:45.966294 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/487c4bbe-2dcc-4be1-b399-c752299a43ce-lib-modules\") pod \"kube-proxy-8xwl4\" (UID: \"487c4bbe-2dcc-4be1-b399-c752299a43ce\") " pod="kube-system/kube-proxy-8xwl4"
Sep 11 00:27:46.145236 systemd[1]: Created slice kubepods-besteffort-pod1b4d42ac_b6b7_4bf8_be0f_963e91bdb397.slice - libcontainer container kubepods-besteffort-pod1b4d42ac_b6b7_4bf8_be0f_963e91bdb397.slice.
Sep 11 00:27:46.167702 kubelet[3157]: I0911 00:27:46.167670 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xst5k\" (UniqueName: \"kubernetes.io/projected/1b4d42ac-b6b7-4bf8-be0f-963e91bdb397-kube-api-access-xst5k\") pod \"tigera-operator-755d956888-x25v7\" (UID: \"1b4d42ac-b6b7-4bf8-be0f-963e91bdb397\") " pod="tigera-operator/tigera-operator-755d956888-x25v7"
Sep 11 00:27:46.167702 kubelet[3157]: I0911 00:27:46.167697 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1b4d42ac-b6b7-4bf8-be0f-963e91bdb397-var-lib-calico\") pod \"tigera-operator-755d956888-x25v7\" (UID: \"1b4d42ac-b6b7-4bf8-be0f-963e91bdb397\") " pod="tigera-operator/tigera-operator-755d956888-x25v7"
Sep 11 00:27:46.197449 containerd[1747]: time="2025-09-11T00:27:46.197380301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8xwl4,Uid:487c4bbe-2dcc-4be1-b399-c752299a43ce,Namespace:kube-system,Attempt:0,}"
Sep 11 00:27:46.233972 containerd[1747]: time="2025-09-11T00:27:46.233948155Z" level=info msg="connecting to shim d46eec4f9498f36ce1148e6263abbe35a1ac38406df1a938a12cc82c8b96f9ed" address="unix:///run/containerd/s/a9431e44d07e5d40f6b34ea0a9bafafc811af44f308deed70004335215467b9d" namespace=k8s.io protocol=ttrpc version=3
Sep 11 00:27:46.258513 systemd[1]: Started cri-containerd-d46eec4f9498f36ce1148e6263abbe35a1ac38406df1a938a12cc82c8b96f9ed.scope - libcontainer container d46eec4f9498f36ce1148e6263abbe35a1ac38406df1a938a12cc82c8b96f9ed.
Sep 11 00:27:46.291570 containerd[1747]: time="2025-09-11T00:27:46.291544704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8xwl4,Uid:487c4bbe-2dcc-4be1-b399-c752299a43ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"d46eec4f9498f36ce1148e6263abbe35a1ac38406df1a938a12cc82c8b96f9ed\""
Sep 11 00:27:46.299105 containerd[1747]: time="2025-09-11T00:27:46.299085211Z" level=info msg="CreateContainer within sandbox \"d46eec4f9498f36ce1148e6263abbe35a1ac38406df1a938a12cc82c8b96f9ed\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 11 00:27:46.318721 containerd[1747]: time="2025-09-11T00:27:46.318700114Z" level=info msg="Container 2fa9c74513bd3afbb8d4c9a378d6db5573972660de7addd5799061d1e8aa9070: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:27:46.332610 containerd[1747]: time="2025-09-11T00:27:46.332588151Z" level=info msg="CreateContainer within sandbox \"d46eec4f9498f36ce1148e6263abbe35a1ac38406df1a938a12cc82c8b96f9ed\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2fa9c74513bd3afbb8d4c9a378d6db5573972660de7addd5799061d1e8aa9070\""
Sep 11 00:27:46.332931 containerd[1747]: time="2025-09-11T00:27:46.332919634Z" level=info msg="StartContainer for \"2fa9c74513bd3afbb8d4c9a378d6db5573972660de7addd5799061d1e8aa9070\""
Sep 11 00:27:46.334098 containerd[1747]: time="2025-09-11T00:27:46.334070077Z" level=info msg="connecting to shim 2fa9c74513bd3afbb8d4c9a378d6db5573972660de7addd5799061d1e8aa9070" address="unix:///run/containerd/s/a9431e44d07e5d40f6b34ea0a9bafafc811af44f308deed70004335215467b9d" protocol=ttrpc version=3
Sep 11 00:27:46.348537 systemd[1]: Started cri-containerd-2fa9c74513bd3afbb8d4c9a378d6db5573972660de7addd5799061d1e8aa9070.scope - libcontainer container 2fa9c74513bd3afbb8d4c9a378d6db5573972660de7addd5799061d1e8aa9070.
Sep 11 00:27:46.378613 containerd[1747]: time="2025-09-11T00:27:46.378593285Z" level=info msg="StartContainer for \"2fa9c74513bd3afbb8d4c9a378d6db5573972660de7addd5799061d1e8aa9070\" returns successfully"
Sep 11 00:27:46.447925 containerd[1747]: time="2025-09-11T00:27:46.447579752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-x25v7,Uid:1b4d42ac-b6b7-4bf8-be0f-963e91bdb397,Namespace:tigera-operator,Attempt:0,}"
Sep 11 00:27:46.478620 containerd[1747]: time="2025-09-11T00:27:46.478597635Z" level=info msg="connecting to shim 82a3b9ad0a2db7cc97c14c4d74a865223eed3ea6a2b29dd5b656f86c6ebea5c5" address="unix:///run/containerd/s/dcaedc41feb852ebfa8064916bca78fbf2d2260a022db21a38de1ad2de44baaa" namespace=k8s.io protocol=ttrpc version=3
Sep 11 00:27:46.496518 systemd[1]: Started cri-containerd-82a3b9ad0a2db7cc97c14c4d74a865223eed3ea6a2b29dd5b656f86c6ebea5c5.scope - libcontainer container 82a3b9ad0a2db7cc97c14c4d74a865223eed3ea6a2b29dd5b656f86c6ebea5c5.
Sep 11 00:27:46.527351 containerd[1747]: time="2025-09-11T00:27:46.527330794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-x25v7,Uid:1b4d42ac-b6b7-4bf8-be0f-963e91bdb397,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"82a3b9ad0a2db7cc97c14c4d74a865223eed3ea6a2b29dd5b656f86c6ebea5c5\""
Sep 11 00:27:46.528349 containerd[1747]: time="2025-09-11T00:27:46.528322319Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 11 00:27:47.176518 kubelet[3157]: I0911 00:27:47.176447 3157 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8xwl4" podStartSLOduration=2.176374935 podStartE2EDuration="2.176374935s" podCreationTimestamp="2025-09-11 00:27:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:27:47.176310419 +0000 UTC m=+8.136136260" watchObservedRunningTime="2025-09-11 00:27:47.176374935 +0000 UTC m=+8.136200773"
Sep 11 00:27:48.131046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3181643807.mount: Deactivated successfully.
Sep 11 00:27:48.592766 containerd[1747]: time="2025-09-11T00:27:48.592735387Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:27:48.595407 containerd[1747]: time="2025-09-11T00:27:48.595333204Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609"
Sep 11 00:27:48.598199 containerd[1747]: time="2025-09-11T00:27:48.598161922Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:27:48.601871 containerd[1747]: time="2025-09-11T00:27:48.601827179Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:27:48.602457 containerd[1747]: time="2025-09-11T00:27:48.602191566Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.073845139s"
Sep 11 00:27:48.602457 containerd[1747]: time="2025-09-11T00:27:48.602217603Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\""
Sep 11 00:27:48.608262 containerd[1747]: time="2025-09-11T00:27:48.608240206Z" level=info msg="CreateContainer within sandbox \"82a3b9ad0a2db7cc97c14c4d74a865223eed3ea6a2b29dd5b656f86c6ebea5c5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 11 00:27:48.628414 containerd[1747]: time="2025-09-11T00:27:48.627256905Z" level=info msg="Container 91cde1bf52f2474aa17fbe3f02a82ba63e0eb080c40333a6ce90551eb235a586: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:27:48.639981 containerd[1747]: time="2025-09-11T00:27:48.639960399Z" level=info msg="CreateContainer within sandbox \"82a3b9ad0a2db7cc97c14c4d74a865223eed3ea6a2b29dd5b656f86c6ebea5c5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"91cde1bf52f2474aa17fbe3f02a82ba63e0eb080c40333a6ce90551eb235a586\""
Sep 11 00:27:48.640607 containerd[1747]: time="2025-09-11T00:27:48.640373285Z" level=info msg="StartContainer for \"91cde1bf52f2474aa17fbe3f02a82ba63e0eb080c40333a6ce90551eb235a586\""
Sep 11 00:27:48.641235 containerd[1747]: time="2025-09-11T00:27:48.641213192Z" level=info msg="connecting to shim 91cde1bf52f2474aa17fbe3f02a82ba63e0eb080c40333a6ce90551eb235a586" address="unix:///run/containerd/s/dcaedc41feb852ebfa8064916bca78fbf2d2260a022db21a38de1ad2de44baaa" protocol=ttrpc version=3
Sep 11 00:27:48.659508 systemd[1]: Started cri-containerd-91cde1bf52f2474aa17fbe3f02a82ba63e0eb080c40333a6ce90551eb235a586.scope - libcontainer container 91cde1bf52f2474aa17fbe3f02a82ba63e0eb080c40333a6ce90551eb235a586.
Sep 11 00:27:48.686285 containerd[1747]: time="2025-09-11T00:27:48.685634310Z" level=info msg="StartContainer for \"91cde1bf52f2474aa17fbe3f02a82ba63e0eb080c40333a6ce90551eb235a586\" returns successfully"
Sep 11 00:27:49.908012 kubelet[3157]: I0911 00:27:49.907959 3157 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-x25v7" podStartSLOduration=1.833060867 podStartE2EDuration="3.907945263s" podCreationTimestamp="2025-09-11 00:27:46 +0000 UTC" firstStartedPulling="2025-09-11 00:27:46.528029903 +0000 UTC m=+7.487855739" lastFinishedPulling="2025-09-11 00:27:48.602914292 +0000 UTC m=+9.562740135" observedRunningTime="2025-09-11 00:27:49.180135322 +0000 UTC m=+10.139961164" watchObservedRunningTime="2025-09-11 00:27:49.907945263 +0000 UTC m=+10.867771110"
Sep 11 00:27:54.206553 sudo[2162]: pam_unix(sudo:session): session closed for user root
Sep 11 00:27:54.310287 sshd[2161]: Connection closed by 10.200.16.10 port 48366
Sep 11 00:27:54.310769 sshd-session[2159]: pam_unix(sshd:session): session closed for user core
Sep 11 00:27:54.313477 systemd-logind[1706]: Session 9 logged out. Waiting for processes to exit.
Sep 11 00:27:54.314258 systemd[1]: sshd@6-10.200.8.4:22-10.200.16.10:48366.service: Deactivated successfully.
Sep 11 00:27:54.316857 systemd[1]: session-9.scope: Deactivated successfully.
Sep 11 00:27:54.317129 systemd[1]: session-9.scope: Consumed 3.597s CPU time, 230.2M memory peak.
Sep 11 00:27:54.320318 systemd-logind[1706]: Removed session 9.
Sep 11 00:27:56.978860 systemd[1]: Created slice kubepods-besteffort-pod0731bf32_8fd1_4cb0_9f44_875736719d08.slice - libcontainer container kubepods-besteffort-pod0731bf32_8fd1_4cb0_9f44_875736719d08.slice.
Sep 11 00:27:57.037312 kubelet[3157]: I0911 00:27:57.037276 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0731bf32-8fd1-4cb0-9f44-875736719d08-typha-certs\") pod \"calico-typha-54f78fbb55-srd2x\" (UID: \"0731bf32-8fd1-4cb0-9f44-875736719d08\") " pod="calico-system/calico-typha-54f78fbb55-srd2x" Sep 11 00:27:57.037312 kubelet[3157]: I0911 00:27:57.037308 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7v8j\" (UniqueName: \"kubernetes.io/projected/0731bf32-8fd1-4cb0-9f44-875736719d08-kube-api-access-r7v8j\") pod \"calico-typha-54f78fbb55-srd2x\" (UID: \"0731bf32-8fd1-4cb0-9f44-875736719d08\") " pod="calico-system/calico-typha-54f78fbb55-srd2x" Sep 11 00:27:57.037602 kubelet[3157]: I0911 00:27:57.037326 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0731bf32-8fd1-4cb0-9f44-875736719d08-tigera-ca-bundle\") pod \"calico-typha-54f78fbb55-srd2x\" (UID: \"0731bf32-8fd1-4cb0-9f44-875736719d08\") " pod="calico-system/calico-typha-54f78fbb55-srd2x" Sep 11 00:27:57.283802 containerd[1747]: time="2025-09-11T00:27:57.283464140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54f78fbb55-srd2x,Uid:0731bf32-8fd1-4cb0-9f44-875736719d08,Namespace:calico-system,Attempt:0,}" Sep 11 00:27:57.329643 containerd[1747]: time="2025-09-11T00:27:57.329606532Z" level=info msg="connecting to shim ef8829e215198afa1caa5cc49819ea0957c8dec6c0fde97efa74b6e83b8010cc" address="unix:///run/containerd/s/b9989d94ef7f342610458ddd3b76c3a96d1db28d2e0e22fe2e09f58a429e23e2" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:27:57.354736 systemd[1]: Created slice kubepods-besteffort-poda74b6cd0_b7e1_4423_b083_204bf9e3c1ae.slice - libcontainer container 
kubepods-besteffort-poda74b6cd0_b7e1_4423_b083_204bf9e3c1ae.slice. Sep 11 00:27:57.368524 systemd[1]: Started cri-containerd-ef8829e215198afa1caa5cc49819ea0957c8dec6c0fde97efa74b6e83b8010cc.scope - libcontainer container ef8829e215198afa1caa5cc49819ea0957c8dec6c0fde97efa74b6e83b8010cc. Sep 11 00:27:57.409295 containerd[1747]: time="2025-09-11T00:27:57.409207165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54f78fbb55-srd2x,Uid:0731bf32-8fd1-4cb0-9f44-875736719d08,Namespace:calico-system,Attempt:0,} returns sandbox id \"ef8829e215198afa1caa5cc49819ea0957c8dec6c0fde97efa74b6e83b8010cc\"" Sep 11 00:27:57.410962 containerd[1747]: time="2025-09-11T00:27:57.410924454Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 11 00:27:57.439263 kubelet[3157]: I0911 00:27:57.439209 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a74b6cd0-b7e1-4423-b083-204bf9e3c1ae-var-run-calico\") pod \"calico-node-jqrjt\" (UID: \"a74b6cd0-b7e1-4423-b083-204bf9e3c1ae\") " pod="calico-system/calico-node-jqrjt" Sep 11 00:27:57.439612 kubelet[3157]: I0911 00:27:57.439360 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a74b6cd0-b7e1-4423-b083-204bf9e3c1ae-xtables-lock\") pod \"calico-node-jqrjt\" (UID: \"a74b6cd0-b7e1-4423-b083-204bf9e3c1ae\") " pod="calico-system/calico-node-jqrjt" Sep 11 00:27:57.439612 kubelet[3157]: I0911 00:27:57.439379 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a74b6cd0-b7e1-4423-b083-204bf9e3c1ae-cni-bin-dir\") pod \"calico-node-jqrjt\" (UID: \"a74b6cd0-b7e1-4423-b083-204bf9e3c1ae\") " pod="calico-system/calico-node-jqrjt" Sep 11 00:27:57.439612 kubelet[3157]: I0911 00:27:57.439404 3157 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a74b6cd0-b7e1-4423-b083-204bf9e3c1ae-var-lib-calico\") pod \"calico-node-jqrjt\" (UID: \"a74b6cd0-b7e1-4423-b083-204bf9e3c1ae\") " pod="calico-system/calico-node-jqrjt" Sep 11 00:27:57.439612 kubelet[3157]: I0911 00:27:57.439420 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a74b6cd0-b7e1-4423-b083-204bf9e3c1ae-cni-log-dir\") pod \"calico-node-jqrjt\" (UID: \"a74b6cd0-b7e1-4423-b083-204bf9e3c1ae\") " pod="calico-system/calico-node-jqrjt" Sep 11 00:27:57.439612 kubelet[3157]: I0911 00:27:57.439434 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a74b6cd0-b7e1-4423-b083-204bf9e3c1ae-cni-net-dir\") pod \"calico-node-jqrjt\" (UID: \"a74b6cd0-b7e1-4423-b083-204bf9e3c1ae\") " pod="calico-system/calico-node-jqrjt" Sep 11 00:27:57.439730 kubelet[3157]: I0911 00:27:57.439448 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a74b6cd0-b7e1-4423-b083-204bf9e3c1ae-node-certs\") pod \"calico-node-jqrjt\" (UID: \"a74b6cd0-b7e1-4423-b083-204bf9e3c1ae\") " pod="calico-system/calico-node-jqrjt" Sep 11 00:27:57.439730 kubelet[3157]: I0911 00:27:57.439461 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a74b6cd0-b7e1-4423-b083-204bf9e3c1ae-tigera-ca-bundle\") pod \"calico-node-jqrjt\" (UID: \"a74b6cd0-b7e1-4423-b083-204bf9e3c1ae\") " pod="calico-system/calico-node-jqrjt" Sep 11 00:27:57.439730 kubelet[3157]: I0911 00:27:57.439477 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a74b6cd0-b7e1-4423-b083-204bf9e3c1ae-flexvol-driver-host\") pod \"calico-node-jqrjt\" (UID: \"a74b6cd0-b7e1-4423-b083-204bf9e3c1ae\") " pod="calico-system/calico-node-jqrjt" Sep 11 00:27:57.439730 kubelet[3157]: I0911 00:27:57.439492 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a74b6cd0-b7e1-4423-b083-204bf9e3c1ae-policysync\") pod \"calico-node-jqrjt\" (UID: \"a74b6cd0-b7e1-4423-b083-204bf9e3c1ae\") " pod="calico-system/calico-node-jqrjt" Sep 11 00:27:57.439730 kubelet[3157]: I0911 00:27:57.439506 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a74b6cd0-b7e1-4423-b083-204bf9e3c1ae-lib-modules\") pod \"calico-node-jqrjt\" (UID: \"a74b6cd0-b7e1-4423-b083-204bf9e3c1ae\") " pod="calico-system/calico-node-jqrjt" Sep 11 00:27:57.439831 kubelet[3157]: I0911 00:27:57.439520 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm7cb\" (UniqueName: \"kubernetes.io/projected/a74b6cd0-b7e1-4423-b083-204bf9e3c1ae-kube-api-access-sm7cb\") pod \"calico-node-jqrjt\" (UID: \"a74b6cd0-b7e1-4423-b083-204bf9e3c1ae\") " pod="calico-system/calico-node-jqrjt" Sep 11 00:27:57.545819 kubelet[3157]: E0911 00:27:57.545770 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:27:57.545819 kubelet[3157]: W0911 00:27:57.545784 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:27:57.545819 kubelet[3157]: E0911 00:27:57.545800 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating 
Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:27:57.547496 kubelet[3157]: E0911 00:27:57.546999 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:27:57.547496 kubelet[3157]: W0911 00:27:57.547017 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:27:57.547496 kubelet[3157]: E0911 00:27:57.547027 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:27:57.641313 kubelet[3157]: E0911 00:27:57.641286 3157 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ldmgv" podUID="6bbf56be-c717-41c6-9b0e-bbe3b830a307" Sep 11 00:27:57.659578 containerd[1747]: time="2025-09-11T00:27:57.659530476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jqrjt,Uid:a74b6cd0-b7e1-4423-b083-204bf9e3c1ae,Namespace:calico-system,Attempt:0,}" Sep 11 00:27:57.699543 containerd[1747]: time="2025-09-11T00:27:57.699516675Z" level=info msg="connecting to shim 092e05b26e345041681455cdb1c620f17ef833196ded4bbd624bff3da5f85ca4" address="unix:///run/containerd/s/e53aa3d0c6c8dfc60dae01437bcc8979b156f7838b610723f9a66242bce9a902" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:27:57.720645 systemd[1]: Started cri-containerd-092e05b26e345041681455cdb1c620f17ef833196ded4bbd624bff3da5f85ca4.scope - libcontainer container 092e05b26e345041681455cdb1c620f17ef833196ded4bbd624bff3da5f85ca4. 
Sep 11 00:27:57.727993 kubelet[3157]: E0911 00:27:57.727925 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:27:57.728398 kubelet[3157]: W0911 00:27:57.728216 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:27:57.728398 kubelet[3157]: E0911 00:27:57.728236 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:27:57.728804 kubelet[3157]: E0911 00:27:57.728788 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:27:57.728804 kubelet[3157]: W0911 00:27:57.728803 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:27:57.729091 kubelet[3157]: E0911 00:27:57.728815 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:27:57.729091 kubelet[3157]: E0911 00:27:57.728931 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:27:57.729091 kubelet[3157]: W0911 00:27:57.728935 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:27:57.729091 kubelet[3157]: E0911 00:27:57.728944 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:27:57.741654 kubelet[3157]: I0911 00:27:57.741363 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6bbf56be-c717-41c6-9b0e-bbe3b830a307-kubelet-dir\") pod \"csi-node-driver-ldmgv\" (UID: \"6bbf56be-c717-41c6-9b0e-bbe3b830a307\") " pod="calico-system/csi-node-driver-ldmgv" Sep 11 00:27:57.741874 kubelet[3157]: E0911 00:27:57.741653 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:27:57.741874 kubelet[3157]: W0911 00:27:57.741664 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:27:57.741874 kubelet[3157]: E0911 00:27:57.741675 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:27:57.741874 kubelet[3157]: I0911 00:27:57.741711 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fll7h\" (UniqueName: \"kubernetes.io/projected/6bbf56be-c717-41c6-9b0e-bbe3b830a307-kube-api-access-fll7h\") pod \"csi-node-driver-ldmgv\" (UID: \"6bbf56be-c717-41c6-9b0e-bbe3b830a307\") " pod="calico-system/csi-node-driver-ldmgv" Sep 11 00:27:57.741964 kubelet[3157]: E0911 00:27:57.741957 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:27:57.741985 kubelet[3157]: W0911 00:27:57.741965 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:27:57.742549 kubelet[3157]: E0911 00:27:57.741974 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:27:57.742549 kubelet[3157]: I0911 00:27:57.742022 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6bbf56be-c717-41c6-9b0e-bbe3b830a307-varrun\") pod \"csi-node-driver-ldmgv\" (UID: \"6bbf56be-c717-41c6-9b0e-bbe3b830a307\") " pod="calico-system/csi-node-driver-ldmgv" Sep 11 00:27:57.742549 kubelet[3157]: E0911 00:27:57.742262 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:27:57.742549 kubelet[3157]: W0911 00:27:57.742276 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:27:57.742549 kubelet[3157]: E0911 00:27:57.742286 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:27:57.742549 kubelet[3157]: I0911 00:27:57.742345 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6bbf56be-c717-41c6-9b0e-bbe3b830a307-registration-dir\") pod \"csi-node-driver-ldmgv\" (UID: \"6bbf56be-c717-41c6-9b0e-bbe3b830a307\") " pod="calico-system/csi-node-driver-ldmgv" Sep 11 00:27:57.742998 kubelet[3157]: E0911 00:27:57.742914 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:27:57.742998 kubelet[3157]: W0911 00:27:57.742927 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:27:57.742998 kubelet[3157]: E0911 00:27:57.742939 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:27:57.742998 kubelet[3157]: I0911 00:27:57.742960 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6bbf56be-c717-41c6-9b0e-bbe3b830a307-socket-dir\") pod \"csi-node-driver-ldmgv\" (UID: \"6bbf56be-c717-41c6-9b0e-bbe3b830a307\") " pod="calico-system/csi-node-driver-ldmgv" Sep 11 00:27:57.743855 kubelet[3157]: E0911 00:27:57.743843 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:27:57.743855 kubelet[3157]: W0911 00:27:57.743854 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:27:57.744139 kubelet[3157]: E0911 00:27:57.743878 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:27:57.744139 kubelet[3157]: E0911 00:27:57.743981 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:27:57.744139 kubelet[3157]: W0911 00:27:57.743986 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:27:57.744139 kubelet[3157]: E0911 00:27:57.743993 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:27:57.744483 kubelet[3157]: E0911 00:27:57.744402 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:27:57.744483 kubelet[3157]: W0911 00:27:57.744412 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:27:57.744483 kubelet[3157]: E0911 00:27:57.744422 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:27:57.746021 kubelet[3157]: E0911 00:27:57.746004 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:27:57.746021 kubelet[3157]: W0911 00:27:57.746022 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:27:57.746233 kubelet[3157]: E0911 00:27:57.746034 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:27:57.746233 kubelet[3157]: E0911 00:27:57.746176 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:27:57.746233 kubelet[3157]: W0911 00:27:57.746182 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:27:57.746233 kubelet[3157]: E0911 00:27:57.746190 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:27:57.746547 kubelet[3157]: E0911 00:27:57.746304 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:27:57.746547 kubelet[3157]: W0911 00:27:57.746308 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:27:57.746547 kubelet[3157]: E0911 00:27:57.746313 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:27:57.746819 kubelet[3157]: E0911 00:27:57.746605 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:27:57.746819 kubelet[3157]: W0911 00:27:57.746614 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:27:57.746819 kubelet[3157]: E0911 00:27:57.746624 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:27:57.746819 kubelet[3157]: E0911 00:27:57.746759 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:27:57.746819 kubelet[3157]: W0911 00:27:57.746764 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:27:57.746819 kubelet[3157]: E0911 00:27:57.746771 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:27:57.747424 kubelet[3157]: E0911 00:27:57.747233 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:27:57.747424 kubelet[3157]: W0911 00:27:57.747242 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:27:57.747424 kubelet[3157]: E0911 00:27:57.747252 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:27:57.747424 kubelet[3157]: E0911 00:27:57.747366 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:27:57.747424 kubelet[3157]: W0911 00:27:57.747370 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:27:57.747424 kubelet[3157]: E0911 00:27:57.747395 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:27:57.758187 containerd[1747]: time="2025-09-11T00:27:57.758160077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jqrjt,Uid:a74b6cd0-b7e1-4423-b083-204bf9e3c1ae,Namespace:calico-system,Attempt:0,} returns sandbox id \"092e05b26e345041681455cdb1c620f17ef833196ded4bbd624bff3da5f85ca4\"" Sep 11 00:27:57.844083 kubelet[3157]: E0911 00:27:57.844057 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:27:57.844172 kubelet[3157]: W0911 00:27:57.844085 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:27:57.844172 kubelet[3157]: E0911 00:27:57.844096 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:27:57.844362 kubelet[3157]: E0911 00:27:57.844210 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:27:57.844362 kubelet[3157]: W0911 00:27:57.844215 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:27:57.844362 kubelet[3157]: E0911 00:27:57.844221 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:27:57.844572 kubelet[3157]: E0911 00:27:57.844486 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:27:57.844572 kubelet[3157]: W0911 00:27:57.844496 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:27:57.844572 kubelet[3157]: E0911 00:27:57.844506 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:27:57.844737 kubelet[3157]: E0911 00:27:57.844706 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:27:57.844737 kubelet[3157]: W0911 00:27:57.844727 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:27:57.844737 kubelet[3157]: E0911 00:27:57.844733 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:27:57.844853 kubelet[3157]: E0911 00:27:57.844850 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:27:57.844882 kubelet[3157]: W0911 00:27:57.844855 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:27:57.844882 kubelet[3157]: E0911 00:27:57.844862 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:27:57.845028 kubelet[3157]: E0911 00:27:57.845007 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:27:57.845028 kubelet[3157]: W0911 00:27:57.845027 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:27:57.845095 kubelet[3157]: E0911 00:27:57.845034 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:27:57.845137 kubelet[3157]: E0911 00:27:57.845122 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:27:57.845137 kubelet[3157]: W0911 00:27:57.845127 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:27:57.845137 kubelet[3157]: E0911 00:27:57.845133 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:27:57.845307 kubelet[3157]: E0911 00:27:57.845225 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:27:57.845307 kubelet[3157]: W0911 00:27:57.845230 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:27:57.845307 kubelet[3157]: E0911 00:27:57.845235 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:27:57.845483 kubelet[3157]: E0911 00:27:57.845460 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:27:57.845522 kubelet[3157]: W0911 00:27:57.845482 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:27:57.845522 kubelet[3157]: E0911 00:27:57.845493 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:27:57.845582 kubelet[3157]: E0911 00:27:57.845573 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:27:57.845582 kubelet[3157]: W0911 00:27:57.845580 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:27:57.845644 kubelet[3157]: E0911 00:27:57.845585 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:27:57.845680 kubelet[3157]: E0911 00:27:57.845669 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:27:57.845680 kubelet[3157]: W0911 00:27:57.845674 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:27:57.845724 kubelet[3157]: E0911 00:27:57.845680 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:27:57.845853 kubelet[3157]: E0911 00:27:57.845848 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:27:57.845880 kubelet[3157]: W0911 00:27:57.845854 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:27:57.845880 kubelet[3157]: E0911 00:27:57.845865 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:27:57.846006 kubelet[3157]: E0911 00:27:57.845984 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:27:57.846035 kubelet[3157]: W0911 00:27:57.846020 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:27:57.846035 kubelet[3157]: E0911 00:27:57.846026 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:27:57.846179 kubelet[3157]: E0911 00:27:57.846164 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:27:57.846204 kubelet[3157]: W0911 00:27:57.846177 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:27:57.846204 kubelet[3157]: E0911 00:27:57.846189 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:27:57.846283 kubelet[3157]: E0911 00:27:57.846274 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:27:57.846283 kubelet[3157]: W0911 00:27:57.846280 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:27:57.846326 kubelet[3157]: E0911 00:27:57.846286 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:27:57.846494 kubelet[3157]: E0911 00:27:57.846464 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:27:57.846494 kubelet[3157]: W0911 00:27:57.846490 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:27:57.846541 kubelet[3157]: E0911 00:27:57.846496 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:27:57.846647 kubelet[3157]: E0911 00:27:57.846638 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:27:57.846647 kubelet[3157]: W0911 00:27:57.846644 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:27:57.846710 kubelet[3157]: E0911 00:27:57.846650 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:27:57.846748 kubelet[3157]: E0911 00:27:57.846739 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:27:57.846748 kubelet[3157]: W0911 00:27:57.846743 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:27:57.846848 kubelet[3157]: E0911 00:27:57.846749 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:27:57.846886 kubelet[3157]: E0911 00:27:57.846860 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:27:57.846886 kubelet[3157]: W0911 00:27:57.846865 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:27:57.846886 kubelet[3157]: E0911 00:27:57.846871 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:27:57.846972 kubelet[3157]: E0911 00:27:57.846960 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:27:57.846972 kubelet[3157]: W0911 00:27:57.846964 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:27:57.847018 kubelet[3157]: E0911 00:27:57.846970 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:27:57.847160 kubelet[3157]: E0911 00:27:57.847142 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:27:57.847160 kubelet[3157]: W0911 00:27:57.847158 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:27:57.847223 kubelet[3157]: E0911 00:27:57.847166 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:27:57.847292 kubelet[3157]: E0911 00:27:57.847283 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:27:57.847292 kubelet[3157]: W0911 00:27:57.847290 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:27:57.847338 kubelet[3157]: E0911 00:27:57.847296 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:27:57.847465 kubelet[3157]: E0911 00:27:57.847411 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:27:57.847465 kubelet[3157]: W0911 00:27:57.847418 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:27:57.847465 kubelet[3157]: E0911 00:27:57.847424 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:27:57.848392 kubelet[3157]: E0911 00:27:57.848363 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:27:57.848392 kubelet[3157]: W0911 00:27:57.848375 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:27:57.848590 kubelet[3157]: E0911 00:27:57.848483 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:27:57.848700 kubelet[3157]: E0911 00:27:57.848689 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:27:57.848778 kubelet[3157]: W0911 00:27:57.848752 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:27:57.848778 kubelet[3157]: E0911 00:27:57.848779 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:27:57.853543 kubelet[3157]: E0911 00:27:57.853528 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:27:57.853543 kubelet[3157]: W0911 00:27:57.853543 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:27:57.853634 kubelet[3157]: E0911 00:27:57.853555 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:27:58.868712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount40588997.mount: Deactivated successfully. 
Sep 11 00:27:59.125925 kubelet[3157]: E0911 00:27:59.125300 3157 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ldmgv" podUID="6bbf56be-c717-41c6-9b0e-bbe3b830a307" Sep 11 00:27:59.781855 containerd[1747]: time="2025-09-11T00:27:59.781818349Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:27:59.784208 containerd[1747]: time="2025-09-11T00:27:59.784177809Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 11 00:27:59.786704 containerd[1747]: time="2025-09-11T00:27:59.786667730Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:27:59.790746 containerd[1747]: time="2025-09-11T00:27:59.790034129Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:27:59.790746 containerd[1747]: time="2025-09-11T00:27:59.790401916Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.379452312s" Sep 11 00:27:59.790746 containerd[1747]: time="2025-09-11T00:27:59.790423550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference 
\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 11 00:27:59.791419 containerd[1747]: time="2025-09-11T00:27:59.791375566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 11 00:27:59.809706 containerd[1747]: time="2025-09-11T00:27:59.809472102Z" level=info msg="CreateContainer within sandbox \"ef8829e215198afa1caa5cc49819ea0957c8dec6c0fde97efa74b6e83b8010cc\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 11 00:27:59.831911 containerd[1747]: time="2025-09-11T00:27:59.831888668Z" level=info msg="Container 99c354c62104d398abb7641ecfd948e499557cebb992222d7c1dae5c307d2941: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:27:59.852157 containerd[1747]: time="2025-09-11T00:27:59.852133599Z" level=info msg="CreateContainer within sandbox \"ef8829e215198afa1caa5cc49819ea0957c8dec6c0fde97efa74b6e83b8010cc\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"99c354c62104d398abb7641ecfd948e499557cebb992222d7c1dae5c307d2941\"" Sep 11 00:27:59.852785 containerd[1747]: time="2025-09-11T00:27:59.852767216Z" level=info msg="StartContainer for \"99c354c62104d398abb7641ecfd948e499557cebb992222d7c1dae5c307d2941\"" Sep 11 00:27:59.853792 containerd[1747]: time="2025-09-11T00:27:59.853771098Z" level=info msg="connecting to shim 99c354c62104d398abb7641ecfd948e499557cebb992222d7c1dae5c307d2941" address="unix:///run/containerd/s/b9989d94ef7f342610458ddd3b76c3a96d1db28d2e0e22fe2e09f58a429e23e2" protocol=ttrpc version=3 Sep 11 00:27:59.870644 systemd[1]: Started cri-containerd-99c354c62104d398abb7641ecfd948e499557cebb992222d7c1dae5c307d2941.scope - libcontainer container 99c354c62104d398abb7641ecfd948e499557cebb992222d7c1dae5c307d2941. 
Sep 11 00:27:59.909914 containerd[1747]: time="2025-09-11T00:27:59.909890314Z" level=info msg="StartContainer for \"99c354c62104d398abb7641ecfd948e499557cebb992222d7c1dae5c307d2941\" returns successfully" Sep 11 00:28:00.247594 kubelet[3157]: E0911 00:28:00.247573 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:00.247594 kubelet[3157]: W0911 00:28:00.247589 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:00.248013 kubelet[3157]: E0911 00:28:00.247604 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:28:00.248013 kubelet[3157]: E0911 00:28:00.247704 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:00.248013 kubelet[3157]: W0911 00:28:00.247709 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:00.248013 kubelet[3157]: E0911 00:28:00.247716 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:28:00.248013 kubelet[3157]: E0911 00:28:00.247802 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:00.248013 kubelet[3157]: W0911 00:28:00.247807 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:00.248013 kubelet[3157]: E0911 00:28:00.247813 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:28:00.248013 kubelet[3157]: E0911 00:28:00.247934 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:00.248013 kubelet[3157]: W0911 00:28:00.247940 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:00.248013 kubelet[3157]: E0911 00:28:00.247947 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:28:00.248276 kubelet[3157]: E0911 00:28:00.248029 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:00.248276 kubelet[3157]: W0911 00:28:00.248033 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:00.248276 kubelet[3157]: E0911 00:28:00.248039 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:28:00.248276 kubelet[3157]: E0911 00:28:00.248110 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:00.248276 kubelet[3157]: W0911 00:28:00.248114 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:00.248276 kubelet[3157]: E0911 00:28:00.248119 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:28:00.248276 kubelet[3157]: E0911 00:28:00.248188 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:00.248276 kubelet[3157]: W0911 00:28:00.248192 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:00.248276 kubelet[3157]: E0911 00:28:00.248197 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:28:00.248276 kubelet[3157]: E0911 00:28:00.248266 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:00.248589 kubelet[3157]: W0911 00:28:00.248270 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:00.248589 kubelet[3157]: E0911 00:28:00.248276 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:28:00.248589 kubelet[3157]: E0911 00:28:00.248353 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:00.248589 kubelet[3157]: W0911 00:28:00.248357 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:00.248589 kubelet[3157]: E0911 00:28:00.248363 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:28:00.248589 kubelet[3157]: E0911 00:28:00.248449 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:00.248589 kubelet[3157]: W0911 00:28:00.248453 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:00.248589 kubelet[3157]: E0911 00:28:00.248459 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:28:00.248589 kubelet[3157]: E0911 00:28:00.248544 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:00.248589 kubelet[3157]: W0911 00:28:00.248549 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:00.248827 kubelet[3157]: E0911 00:28:00.248555 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:28:00.248827 kubelet[3157]: E0911 00:28:00.248636 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:00.248827 kubelet[3157]: W0911 00:28:00.248642 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:00.248827 kubelet[3157]: E0911 00:28:00.248647 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:28:00.248827 kubelet[3157]: E0911 00:28:00.248732 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:00.248827 kubelet[3157]: W0911 00:28:00.248736 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:00.248827 kubelet[3157]: E0911 00:28:00.248741 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:28:00.248827 kubelet[3157]: E0911 00:28:00.248816 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:00.248827 kubelet[3157]: W0911 00:28:00.248820 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:00.248827 kubelet[3157]: E0911 00:28:00.248824 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:28:00.249049 kubelet[3157]: E0911 00:28:00.248896 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:00.249049 kubelet[3157]: W0911 00:28:00.248900 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:00.249049 kubelet[3157]: E0911 00:28:00.248904 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:28:00.264329 kubelet[3157]: E0911 00:28:00.264297 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:00.264329 kubelet[3157]: W0911 00:28:00.264324 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:00.264461 kubelet[3157]: E0911 00:28:00.264339 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:28:00.264483 kubelet[3157]: E0911 00:28:00.264467 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:00.264483 kubelet[3157]: W0911 00:28:00.264471 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:00.264483 kubelet[3157]: E0911 00:28:00.264478 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:28:00.264615 kubelet[3157]: E0911 00:28:00.264604 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:00.264615 kubelet[3157]: W0911 00:28:00.264613 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:00.264667 kubelet[3157]: E0911 00:28:00.264620 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:28:00.264779 kubelet[3157]: E0911 00:28:00.264752 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:00.264832 kubelet[3157]: W0911 00:28:00.264820 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:00.264890 kubelet[3157]: E0911 00:28:00.264831 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:28:00.264941 kubelet[3157]: E0911 00:28:00.264935 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:00.264941 kubelet[3157]: W0911 00:28:00.264940 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:00.265010 kubelet[3157]: E0911 00:28:00.264947 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:28:00.265111 kubelet[3157]: E0911 00:28:00.265086 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:00.265111 kubelet[3157]: W0911 00:28:00.265109 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:00.265154 kubelet[3157]: E0911 00:28:00.265116 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:28:00.265377 kubelet[3157]: E0911 00:28:00.265361 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:00.265418 kubelet[3157]: W0911 00:28:00.265375 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:00.265418 kubelet[3157]: E0911 00:28:00.265403 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:28:00.265531 kubelet[3157]: E0911 00:28:00.265510 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:00.265531 kubelet[3157]: W0911 00:28:00.265530 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:00.265579 kubelet[3157]: E0911 00:28:00.265536 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:28:00.265720 kubelet[3157]: E0911 00:28:00.265691 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:00.265720 kubelet[3157]: W0911 00:28:00.265713 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:00.265769 kubelet[3157]: E0911 00:28:00.265726 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:28:00.265972 kubelet[3157]: E0911 00:28:00.265889 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:00.265972 kubelet[3157]: W0911 00:28:00.265896 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:00.265972 kubelet[3157]: E0911 00:28:00.265915 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:28:00.266081 kubelet[3157]: E0911 00:28:00.266062 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:00.266081 kubelet[3157]: W0911 00:28:00.266079 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:00.266154 kubelet[3157]: E0911 00:28:00.266084 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:28:00.266207 kubelet[3157]: E0911 00:28:00.266199 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:00.266207 kubelet[3157]: W0911 00:28:00.266204 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:00.266275 kubelet[3157]: E0911 00:28:00.266211 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:28:00.266317 kubelet[3157]: E0911 00:28:00.266306 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:00.266317 kubelet[3157]: W0911 00:28:00.266315 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:00.266464 kubelet[3157]: E0911 00:28:00.266321 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:28:00.266619 kubelet[3157]: E0911 00:28:00.266585 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:00.266619 kubelet[3157]: W0911 00:28:00.266612 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:00.266688 kubelet[3157]: E0911 00:28:00.266624 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:28:00.266740 kubelet[3157]: E0911 00:28:00.266718 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:00.266740 kubelet[3157]: W0911 00:28:00.266736 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:00.266784 kubelet[3157]: E0911 00:28:00.266742 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:28:00.266851 kubelet[3157]: E0911 00:28:00.266843 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:00.266851 kubelet[3157]: W0911 00:28:00.266849 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:00.266893 kubelet[3157]: E0911 00:28:00.266855 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:28:00.267028 kubelet[3157]: E0911 00:28:00.267005 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:00.267028 kubelet[3157]: W0911 00:28:00.267025 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:00.267069 kubelet[3157]: E0911 00:28:00.267031 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:28:00.267259 kubelet[3157]: E0911 00:28:00.267250 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:00.267259 kubelet[3157]: W0911 00:28:00.267257 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:00.267309 kubelet[3157]: E0911 00:28:00.267263 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:28:01.124997 kubelet[3157]: E0911 00:28:01.124714 3157 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ldmgv" podUID="6bbf56be-c717-41c6-9b0e-bbe3b830a307" Sep 11 00:28:01.191469 kubelet[3157]: I0911 00:28:01.191446 3157 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 11 00:28:01.200045 containerd[1747]: time="2025-09-11T00:28:01.200009323Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:01.202413 containerd[1747]: time="2025-09-11T00:28:01.202281599Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 11 00:28:01.205013 containerd[1747]: time="2025-09-11T00:28:01.204984876Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:01.208981 containerd[1747]: 
time="2025-09-11T00:28:01.208560721Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:01.208981 containerd[1747]: time="2025-09-11T00:28:01.208856032Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.417326202s" Sep 11 00:28:01.208981 containerd[1747]: time="2025-09-11T00:28:01.208877842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 11 00:28:01.215580 containerd[1747]: time="2025-09-11T00:28:01.215557837Z" level=info msg="CreateContainer within sandbox \"092e05b26e345041681455cdb1c620f17ef833196ded4bbd624bff3da5f85ca4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 11 00:28:01.234113 containerd[1747]: time="2025-09-11T00:28:01.233468856Z" level=info msg="Container 6ce1b95fa2ebf162e8cd16006d8f59670f2dba786ca86c202baf051e9a2226c1: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:28:01.251551 containerd[1747]: time="2025-09-11T00:28:01.251523846Z" level=info msg="CreateContainer within sandbox \"092e05b26e345041681455cdb1c620f17ef833196ded4bbd624bff3da5f85ca4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6ce1b95fa2ebf162e8cd16006d8f59670f2dba786ca86c202baf051e9a2226c1\"" Sep 11 00:28:01.252456 containerd[1747]: time="2025-09-11T00:28:01.252245631Z" level=info msg="StartContainer for \"6ce1b95fa2ebf162e8cd16006d8f59670f2dba786ca86c202baf051e9a2226c1\"" 
Sep 11 00:28:01.253838 containerd[1747]: time="2025-09-11T00:28:01.253817201Z" level=info msg="connecting to shim 6ce1b95fa2ebf162e8cd16006d8f59670f2dba786ca86c202baf051e9a2226c1" address="unix:///run/containerd/s/e53aa3d0c6c8dfc60dae01437bcc8979b156f7838b610723f9a66242bce9a902" protocol=ttrpc version=3 Sep 11 00:28:01.254181 kubelet[3157]: E0911 00:28:01.254098 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:01.254181 kubelet[3157]: W0911 00:28:01.254134 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:01.254181 kubelet[3157]: E0911 00:28:01.254150 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:28:01.255068 kubelet[3157]: E0911 00:28:01.254801 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:01.255068 kubelet[3157]: W0911 00:28:01.254814 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:01.255068 kubelet[3157]: E0911 00:28:01.254828 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:28:01.255068 kubelet[3157]: E0911 00:28:01.254957 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:01.255068 kubelet[3157]: W0911 00:28:01.254962 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:01.255068 kubelet[3157]: E0911 00:28:01.254969 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:28:01.255209 kubelet[3157]: E0911 00:28:01.255137 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:01.255209 kubelet[3157]: W0911 00:28:01.255142 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:01.255209 kubelet[3157]: E0911 00:28:01.255149 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:28:01.255274 kubelet[3157]: E0911 00:28:01.255259 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:01.255274 kubelet[3157]: W0911 00:28:01.255264 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:01.255317 kubelet[3157]: E0911 00:28:01.255278 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:28:01.255614 kubelet[3157]: E0911 00:28:01.255368 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:01.255614 kubelet[3157]: W0911 00:28:01.255374 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:01.255614 kubelet[3157]: E0911 00:28:01.255380 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:28:01.255614 kubelet[3157]: E0911 00:28:01.255481 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:01.255614 kubelet[3157]: W0911 00:28:01.255484 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:01.255614 kubelet[3157]: E0911 00:28:01.255489 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:28:01.255830 kubelet[3157]: E0911 00:28:01.255728 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:01.255830 kubelet[3157]: W0911 00:28:01.255736 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:01.255830 kubelet[3157]: E0911 00:28:01.255746 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:28:01.256272 kubelet[3157]: E0911 00:28:01.256244 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:01.256272 kubelet[3157]: W0911 00:28:01.256270 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:01.256421 kubelet[3157]: E0911 00:28:01.256411 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:28:01.257170 kubelet[3157]: E0911 00:28:01.256595 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:01.257170 kubelet[3157]: W0911 00:28:01.257121 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:01.257170 kubelet[3157]: E0911 00:28:01.257137 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:28:01.257445 kubelet[3157]: E0911 00:28:01.257414 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:01.257445 kubelet[3157]: W0911 00:28:01.257422 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:01.257445 kubelet[3157]: E0911 00:28:01.257430 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:28:01.257707 kubelet[3157]: E0911 00:28:01.257648 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:01.257707 kubelet[3157]: W0911 00:28:01.257655 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:01.257707 kubelet[3157]: E0911 00:28:01.257663 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:28:01.257860 kubelet[3157]: E0911 00:28:01.257835 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:01.257860 kubelet[3157]: W0911 00:28:01.257841 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:01.257860 kubelet[3157]: E0911 00:28:01.257847 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:28:01.258085 kubelet[3157]: E0911 00:28:01.258041 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:01.258085 kubelet[3157]: W0911 00:28:01.258048 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:01.258085 kubelet[3157]: E0911 00:28:01.258056 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:28:01.258309 kubelet[3157]: E0911 00:28:01.258264 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:01.258309 kubelet[3157]: W0911 00:28:01.258271 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:01.258309 kubelet[3157]: E0911 00:28:01.258278 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:28:01.271692 kubelet[3157]: E0911 00:28:01.271681 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:01.271854 kubelet[3157]: W0911 00:28:01.271721 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:01.271854 kubelet[3157]: E0911 00:28:01.271733 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:28:01.272099 kubelet[3157]: E0911 00:28:01.272091 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:01.272393 kubelet[3157]: W0911 00:28:01.272326 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:01.272393 kubelet[3157]: E0911 00:28:01.272343 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:28:01.272717 kubelet[3157]: E0911 00:28:01.272704 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:01.272916 kubelet[3157]: W0911 00:28:01.272765 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:01.272998 kubelet[3157]: E0911 00:28:01.272989 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:28:01.273253 kubelet[3157]: E0911 00:28:01.273245 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:01.273644 kubelet[3157]: W0911 00:28:01.273555 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:01.273644 kubelet[3157]: E0911 00:28:01.273569 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:28:01.273816 kubelet[3157]: E0911 00:28:01.273803 3157 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:01.273816 kubelet[3157]: W0911 00:28:01.273814 3157 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:01.273872 kubelet[3157]: E0911 00:28:01.273824 3157 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:28:01.278573 systemd[1]: Started cri-containerd-6ce1b95fa2ebf162e8cd16006d8f59670f2dba786ca86c202baf051e9a2226c1.scope - libcontainer container 6ce1b95fa2ebf162e8cd16006d8f59670f2dba786ca86c202baf051e9a2226c1. Sep 11 00:28:01.307573 containerd[1747]: time="2025-09-11T00:28:01.307498608Z" level=info msg="StartContainer for \"6ce1b95fa2ebf162e8cd16006d8f59670f2dba786ca86c202baf051e9a2226c1\" returns successfully" Sep 11 00:28:01.313773 systemd[1]: cri-containerd-6ce1b95fa2ebf162e8cd16006d8f59670f2dba786ca86c202baf051e9a2226c1.scope: Deactivated successfully. 
Sep 11 00:28:01.316683 containerd[1747]: time="2025-09-11T00:28:01.316662252Z" level=info msg="received exit event container_id:\"6ce1b95fa2ebf162e8cd16006d8f59670f2dba786ca86c202baf051e9a2226c1\" id:\"6ce1b95fa2ebf162e8cd16006d8f59670f2dba786ca86c202baf051e9a2226c1\" pid:3857 exited_at:{seconds:1757550481 nanos:316247578}" Sep 11 00:28:01.316818 containerd[1747]: time="2025-09-11T00:28:01.316732144Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6ce1b95fa2ebf162e8cd16006d8f59670f2dba786ca86c202baf051e9a2226c1\" id:\"6ce1b95fa2ebf162e8cd16006d8f59670f2dba786ca86c202baf051e9a2226c1\" pid:3857 exited_at:{seconds:1757550481 nanos:316247578}" Sep 11 00:28:01.332074 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ce1b95fa2ebf162e8cd16006d8f59670f2dba786ca86c202baf051e9a2226c1-rootfs.mount: Deactivated successfully. Sep 11 00:28:02.208124 kubelet[3157]: I0911 00:28:02.208071 3157 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-54f78fbb55-srd2x" podStartSLOduration=3.827186563 podStartE2EDuration="6.208055982s" podCreationTimestamp="2025-09-11 00:27:56 +0000 UTC" firstStartedPulling="2025-09-11 00:27:57.410194068 +0000 UTC m=+18.370019911" lastFinishedPulling="2025-09-11 00:27:59.791063489 +0000 UTC m=+20.750889330" observedRunningTime="2025-09-11 00:28:00.200749367 +0000 UTC m=+21.160575243" watchObservedRunningTime="2025-09-11 00:28:02.208055982 +0000 UTC m=+23.167881821" Sep 11 00:28:03.125199 kubelet[3157]: E0911 00:28:03.124747 3157 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ldmgv" podUID="6bbf56be-c717-41c6-9b0e-bbe3b830a307" Sep 11 00:28:04.199886 containerd[1747]: time="2025-09-11T00:28:04.199844710Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 11 00:28:05.124921 kubelet[3157]: E0911 00:28:05.124601 3157 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ldmgv" podUID="6bbf56be-c717-41c6-9b0e-bbe3b830a307" Sep 11 00:28:07.125455 kubelet[3157]: E0911 00:28:07.124690 3157 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ldmgv" podUID="6bbf56be-c717-41c6-9b0e-bbe3b830a307" Sep 11 00:28:07.688500 containerd[1747]: time="2025-09-11T00:28:07.688466359Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:07.690744 containerd[1747]: time="2025-09-11T00:28:07.690713364Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 11 00:28:07.693625 containerd[1747]: time="2025-09-11T00:28:07.693588783Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:07.696987 containerd[1747]: time="2025-09-11T00:28:07.696950347Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:07.697394 containerd[1747]: time="2025-09-11T00:28:07.697247244Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag 
\"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.497363679s" Sep 11 00:28:07.697394 containerd[1747]: time="2025-09-11T00:28:07.697273237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 11 00:28:07.707568 containerd[1747]: time="2025-09-11T00:28:07.707546754Z" level=info msg="CreateContainer within sandbox \"092e05b26e345041681455cdb1c620f17ef833196ded4bbd624bff3da5f85ca4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 11 00:28:07.726493 containerd[1747]: time="2025-09-11T00:28:07.725435997Z" level=info msg="Container 732c7cb3864d14b1c78abff11184665e6f74356c2d2482464b906c58de0f4b15: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:28:07.740835 containerd[1747]: time="2025-09-11T00:28:07.740815903Z" level=info msg="CreateContainer within sandbox \"092e05b26e345041681455cdb1c620f17ef833196ded4bbd624bff3da5f85ca4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"732c7cb3864d14b1c78abff11184665e6f74356c2d2482464b906c58de0f4b15\"" Sep 11 00:28:07.741971 containerd[1747]: time="2025-09-11T00:28:07.741230853Z" level=info msg="StartContainer for \"732c7cb3864d14b1c78abff11184665e6f74356c2d2482464b906c58de0f4b15\"" Sep 11 00:28:07.743308 containerd[1747]: time="2025-09-11T00:28:07.743267718Z" level=info msg="connecting to shim 732c7cb3864d14b1c78abff11184665e6f74356c2d2482464b906c58de0f4b15" address="unix:///run/containerd/s/e53aa3d0c6c8dfc60dae01437bcc8979b156f7838b610723f9a66242bce9a902" protocol=ttrpc version=3 Sep 11 00:28:07.764534 systemd[1]: Started cri-containerd-732c7cb3864d14b1c78abff11184665e6f74356c2d2482464b906c58de0f4b15.scope - libcontainer container 732c7cb3864d14b1c78abff11184665e6f74356c2d2482464b906c58de0f4b15. 
Sep 11 00:28:07.793288 containerd[1747]: time="2025-09-11T00:28:07.793273059Z" level=info msg="StartContainer for \"732c7cb3864d14b1c78abff11184665e6f74356c2d2482464b906c58de0f4b15\" returns successfully" Sep 11 00:28:08.968248 containerd[1747]: time="2025-09-11T00:28:08.968211791Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 11 00:28:08.971006 systemd[1]: cri-containerd-732c7cb3864d14b1c78abff11184665e6f74356c2d2482464b906c58de0f4b15.scope: Deactivated successfully. Sep 11 00:28:08.971618 systemd[1]: cri-containerd-732c7cb3864d14b1c78abff11184665e6f74356c2d2482464b906c58de0f4b15.scope: Consumed 332ms CPU time, 192.6M memory peak, 171.3M written to disk. Sep 11 00:28:08.972778 containerd[1747]: time="2025-09-11T00:28:08.972721297Z" level=info msg="received exit event container_id:\"732c7cb3864d14b1c78abff11184665e6f74356c2d2482464b906c58de0f4b15\" id:\"732c7cb3864d14b1c78abff11184665e6f74356c2d2482464b906c58de0f4b15\" pid:3917 exited_at:{seconds:1757550488 nanos:971501774}" Sep 11 00:28:08.972953 containerd[1747]: time="2025-09-11T00:28:08.972921265Z" level=info msg="TaskExit event in podsandbox handler container_id:\"732c7cb3864d14b1c78abff11184665e6f74356c2d2482464b906c58de0f4b15\" id:\"732c7cb3864d14b1c78abff11184665e6f74356c2d2482464b906c58de0f4b15\" pid:3917 exited_at:{seconds:1757550488 nanos:971501774}" Sep 11 00:28:08.989911 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-732c7cb3864d14b1c78abff11184665e6f74356c2d2482464b906c58de0f4b15-rootfs.mount: Deactivated successfully. 
Sep 11 00:28:09.034331 kubelet[3157]: I0911 00:28:09.034260 3157 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 11 00:28:09.322709 kubelet[3157]: I0911 00:28:09.322675 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dww7\" (UniqueName: \"kubernetes.io/projected/7896f4f6-30fb-414f-8fc9-01181638003f-kube-api-access-9dww7\") pod \"coredns-674b8bbfcf-mkpsv\" (UID: \"7896f4f6-30fb-414f-8fc9-01181638003f\") " pod="kube-system/coredns-674b8bbfcf-mkpsv" Sep 11 00:28:09.322800 kubelet[3157]: I0911 00:28:09.322718 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7896f4f6-30fb-414f-8fc9-01181638003f-config-volume\") pod \"coredns-674b8bbfcf-mkpsv\" (UID: \"7896f4f6-30fb-414f-8fc9-01181638003f\") " pod="kube-system/coredns-674b8bbfcf-mkpsv" Sep 11 00:28:09.399633 systemd[1]: Created slice kubepods-besteffort-pod6bbf56be_c717_41c6_9b0e_bbe3b830a307.slice - libcontainer container kubepods-besteffort-pod6bbf56be_c717_41c6_9b0e_bbe3b830a307.slice. Sep 11 00:28:09.409690 systemd[1]: Created slice kubepods-besteffort-pod399d91b5_6a95_457c_b82d_2a4a35d507ad.slice - libcontainer container kubepods-besteffort-pod399d91b5_6a95_457c_b82d_2a4a35d507ad.slice. 
Sep 11 00:28:09.439578 kubelet[3157]: I0911 00:28:09.423373 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/399d91b5-6a95-457c-b82d-2a4a35d507ad-tigera-ca-bundle\") pod \"calico-kube-controllers-6c99b56578-4pbpk\" (UID: \"399d91b5-6a95-457c-b82d-2a4a35d507ad\") " pod="calico-system/calico-kube-controllers-6c99b56578-4pbpk" Sep 11 00:28:09.439578 kubelet[3157]: I0911 00:28:09.423425 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7j47\" (UniqueName: \"kubernetes.io/projected/399d91b5-6a95-457c-b82d-2a4a35d507ad-kube-api-access-k7j47\") pod \"calico-kube-controllers-6c99b56578-4pbpk\" (UID: \"399d91b5-6a95-457c-b82d-2a4a35d507ad\") " pod="calico-system/calico-kube-controllers-6c99b56578-4pbpk" Sep 11 00:28:09.439578 kubelet[3157]: E0911 00:28:09.423474 3157 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered Sep 11 00:28:09.439578 kubelet[3157]: E0911 00:28:09.423526 3157 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7896f4f6-30fb-414f-8fc9-01181638003f-config-volume podName:7896f4f6-30fb-414f-8fc9-01181638003f nodeName:}" failed. No retries permitted until 2025-09-11 00:28:09.923502442 +0000 UTC m=+30.883328280 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7896f4f6-30fb-414f-8fc9-01181638003f-config-volume") pod "coredns-674b8bbfcf-mkpsv" (UID: "7896f4f6-30fb-414f-8fc9-01181638003f") : object "kube-system"/"coredns" not registered Sep 11 00:28:09.440088 containerd[1747]: time="2025-09-11T00:28:09.440066565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ldmgv,Uid:6bbf56be-c717-41c6-9b0e-bbe3b830a307,Namespace:calico-system,Attempt:0,}" Sep 11 00:28:09.571270 systemd[1]: Created slice kubepods-burstable-pod7896f4f6_30fb_414f_8fc9_01181638003f.slice - libcontainer container kubepods-burstable-pod7896f4f6_30fb_414f_8fc9_01181638003f.slice. Sep 11 00:28:09.575347 systemd[1]: Created slice kubepods-besteffort-pode4a2905f_a3b1_4fce_89a6_6dcc04516c41.slice - libcontainer container kubepods-besteffort-pode4a2905f_a3b1_4fce_89a6_6dcc04516c41.slice. Sep 11 00:28:09.624844 kubelet[3157]: I0911 00:28:09.624813 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jprrp\" (UniqueName: \"kubernetes.io/projected/e4a2905f-a3b1-4fce-89a6-6dcc04516c41-kube-api-access-jprrp\") pod \"calico-apiserver-75d779f9f-s7nnc\" (UID: \"e4a2905f-a3b1-4fce-89a6-6dcc04516c41\") " pod="calico-apiserver/calico-apiserver-75d779f9f-s7nnc" Sep 11 00:28:09.624940 kubelet[3157]: I0911 00:28:09.624858 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e4a2905f-a3b1-4fce-89a6-6dcc04516c41-calico-apiserver-certs\") pod \"calico-apiserver-75d779f9f-s7nnc\" (UID: \"e4a2905f-a3b1-4fce-89a6-6dcc04516c41\") " pod="calico-apiserver/calico-apiserver-75d779f9f-s7nnc" Sep 11 00:28:09.863471 containerd[1747]: time="2025-09-11T00:28:09.862029540Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-6c99b56578-4pbpk,Uid:399d91b5-6a95-457c-b82d-2a4a35d507ad,Namespace:calico-system,Attempt:0,}" Sep 11 00:28:09.879510 containerd[1747]: time="2025-09-11T00:28:09.879485420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75d779f9f-s7nnc,Uid:e4a2905f-a3b1-4fce-89a6-6dcc04516c41,Namespace:calico-apiserver,Attempt:0,}" Sep 11 00:28:09.908156 systemd[1]: Created slice kubepods-burstable-podf50c5ad7_5f27_4da4_a84f_88c5aeb97826.slice - libcontainer container kubepods-burstable-podf50c5ad7_5f27_4da4_a84f_88c5aeb97826.slice. Sep 11 00:28:09.927584 kubelet[3157]: I0911 00:28:09.927497 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f50c5ad7-5f27-4da4-a84f-88c5aeb97826-config-volume\") pod \"coredns-674b8bbfcf-9s5qx\" (UID: \"f50c5ad7-5f27-4da4-a84f-88c5aeb97826\") " pod="kube-system/coredns-674b8bbfcf-9s5qx" Sep 11 00:28:09.928376 kubelet[3157]: I0911 00:28:09.928333 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhbfw\" (UniqueName: \"kubernetes.io/projected/f50c5ad7-5f27-4da4-a84f-88c5aeb97826-kube-api-access-xhbfw\") pod \"coredns-674b8bbfcf-9s5qx\" (UID: \"f50c5ad7-5f27-4da4-a84f-88c5aeb97826\") " pod="kube-system/coredns-674b8bbfcf-9s5qx" Sep 11 00:28:09.939078 systemd[1]: Created slice kubepods-besteffort-pod9e4921f3_2a97_4fd2_b436_8e2b54a05b3f.slice - libcontainer container kubepods-besteffort-pod9e4921f3_2a97_4fd2_b436_8e2b54a05b3f.slice. Sep 11 00:28:09.947430 systemd[1]: Created slice kubepods-besteffort-podb5d91706_bc49_4025_ac5e_31192f7804cc.slice - libcontainer container kubepods-besteffort-podb5d91706_bc49_4025_ac5e_31192f7804cc.slice. 
Sep 11 00:28:09.958642 systemd[1]: Created slice kubepods-besteffort-pode7ff1367_64cd_423f_a7ae_69bcd41ce72a.slice - libcontainer container kubepods-besteffort-pode7ff1367_64cd_423f_a7ae_69bcd41ce72a.slice. Sep 11 00:28:09.967517 systemd[1]: Created slice kubepods-besteffort-pod50700c67_8777_495e_a0c8_f24ef4739246.slice - libcontainer container kubepods-besteffort-pod50700c67_8777_495e_a0c8_f24ef4739246.slice. Sep 11 00:28:10.017712 containerd[1747]: time="2025-09-11T00:28:10.017674021Z" level=error msg="Failed to destroy network for sandbox \"61bb388e74d8704e0abe6ac5f2c2d1da6d96461bd5264f8f75222f20af1c315d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:10.020296 systemd[1]: run-netns-cni\x2dd6765262\x2d0d05\x2d41eb\x2d8995\x2d8175bddd8bde.mount: Deactivated successfully. Sep 11 00:28:10.024331 containerd[1747]: time="2025-09-11T00:28:10.024280520Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ldmgv,Uid:6bbf56be-c717-41c6-9b0e-bbe3b830a307,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"61bb388e74d8704e0abe6ac5f2c2d1da6d96461bd5264f8f75222f20af1c315d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:10.027777 kubelet[3157]: E0911 00:28:10.027744 3157 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61bb388e74d8704e0abe6ac5f2c2d1da6d96461bd5264f8f75222f20af1c315d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:10.027843 containerd[1747]: 
time="2025-09-11T00:28:10.027781272Z" level=error msg="Failed to destroy network for sandbox \"188742c5616097fedd535a4c6d807834051bd11d5424384a8b840662e264103c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:10.028902 kubelet[3157]: I0911 00:28:10.028880 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prlmp\" (UniqueName: \"kubernetes.io/projected/50700c67-8777-495e-a0c8-f24ef4739246-kube-api-access-prlmp\") pod \"calico-apiserver-7488f5d565-n2rfl\" (UID: \"50700c67-8777-495e-a0c8-f24ef4739246\") " pod="calico-apiserver/calico-apiserver-7488f5d565-n2rfl" Sep 11 00:28:10.028972 kubelet[3157]: I0911 00:28:10.028912 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e7ff1367-64cd-423f-a7ae-69bcd41ce72a-calico-apiserver-certs\") pod \"calico-apiserver-75d779f9f-bb974\" (UID: \"e7ff1367-64cd-423f-a7ae-69bcd41ce72a\") " pod="calico-apiserver/calico-apiserver-75d779f9f-bb974" Sep 11 00:28:10.028972 kubelet[3157]: I0911 00:28:10.028960 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/9e4921f3-2a97-4fd2-b436-8e2b54a05b3f-goldmane-key-pair\") pod \"goldmane-54d579b49d-h8dt2\" (UID: \"9e4921f3-2a97-4fd2-b436-8e2b54a05b3f\") " pod="calico-system/goldmane-54d579b49d-h8dt2" Sep 11 00:28:10.029019 kubelet[3157]: I0911 00:28:10.028978 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfphz\" (UniqueName: \"kubernetes.io/projected/9e4921f3-2a97-4fd2-b436-8e2b54a05b3f-kube-api-access-sfphz\") pod \"goldmane-54d579b49d-h8dt2\" (UID: \"9e4921f3-2a97-4fd2-b436-8e2b54a05b3f\") " 
pod="calico-system/goldmane-54d579b49d-h8dt2" Sep 11 00:28:10.029019 kubelet[3157]: I0911 00:28:10.029002 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e4921f3-2a97-4fd2-b436-8e2b54a05b3f-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-h8dt2\" (UID: \"9e4921f3-2a97-4fd2-b436-8e2b54a05b3f\") " pod="calico-system/goldmane-54d579b49d-h8dt2" Sep 11 00:28:10.029066 kubelet[3157]: I0911 00:28:10.029021 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqpfj\" (UniqueName: \"kubernetes.io/projected/e7ff1367-64cd-423f-a7ae-69bcd41ce72a-kube-api-access-lqpfj\") pod \"calico-apiserver-75d779f9f-bb974\" (UID: \"e7ff1367-64cd-423f-a7ae-69bcd41ce72a\") " pod="calico-apiserver/calico-apiserver-75d779f9f-bb974" Sep 11 00:28:10.029066 kubelet[3157]: I0911 00:28:10.029038 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6p8b\" (UniqueName: \"kubernetes.io/projected/b5d91706-bc49-4025-ac5e-31192f7804cc-kube-api-access-j6p8b\") pod \"whisker-6995f8c8b5-g9blm\" (UID: \"b5d91706-bc49-4025-ac5e-31192f7804cc\") " pod="calico-system/whisker-6995f8c8b5-g9blm" Sep 11 00:28:10.029066 kubelet[3157]: I0911 00:28:10.029057 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e4921f3-2a97-4fd2-b436-8e2b54a05b3f-config\") pod \"goldmane-54d579b49d-h8dt2\" (UID: \"9e4921f3-2a97-4fd2-b436-8e2b54a05b3f\") " pod="calico-system/goldmane-54d579b49d-h8dt2" Sep 11 00:28:10.029127 kubelet[3157]: I0911 00:28:10.029078 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/50700c67-8777-495e-a0c8-f24ef4739246-calico-apiserver-certs\") pod 
\"calico-apiserver-7488f5d565-n2rfl\" (UID: \"50700c67-8777-495e-a0c8-f24ef4739246\") " pod="calico-apiserver/calico-apiserver-7488f5d565-n2rfl" Sep 11 00:28:10.029127 kubelet[3157]: I0911 00:28:10.029094 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5d91706-bc49-4025-ac5e-31192f7804cc-whisker-ca-bundle\") pod \"whisker-6995f8c8b5-g9blm\" (UID: \"b5d91706-bc49-4025-ac5e-31192f7804cc\") " pod="calico-system/whisker-6995f8c8b5-g9blm" Sep 11 00:28:10.029127 kubelet[3157]: I0911 00:28:10.029112 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b5d91706-bc49-4025-ac5e-31192f7804cc-whisker-backend-key-pair\") pod \"whisker-6995f8c8b5-g9blm\" (UID: \"b5d91706-bc49-4025-ac5e-31192f7804cc\") " pod="calico-system/whisker-6995f8c8b5-g9blm" Sep 11 00:28:10.029190 kubelet[3157]: E0911 00:28:10.028881 3157 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61bb388e74d8704e0abe6ac5f2c2d1da6d96461bd5264f8f75222f20af1c315d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ldmgv" Sep 11 00:28:10.029190 kubelet[3157]: E0911 00:28:10.029166 3157 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61bb388e74d8704e0abe6ac5f2c2d1da6d96461bd5264f8f75222f20af1c315d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ldmgv" Sep 11 00:28:10.029234 kubelet[3157]: E0911 
00:28:10.029210 3157 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ldmgv_calico-system(6bbf56be-c717-41c6-9b0e-bbe3b830a307)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ldmgv_calico-system(6bbf56be-c717-41c6-9b0e-bbe3b830a307)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"61bb388e74d8704e0abe6ac5f2c2d1da6d96461bd5264f8f75222f20af1c315d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ldmgv" podUID="6bbf56be-c717-41c6-9b0e-bbe3b830a307" Sep 11 00:28:10.030119 systemd[1]: run-netns-cni\x2d05b8b0da\x2d600d\x2db6c5\x2dd2da\x2d0a29d10aa463.mount: Deactivated successfully. Sep 11 00:28:10.034601 containerd[1747]: time="2025-09-11T00:28:10.034568763Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75d779f9f-s7nnc,Uid:e4a2905f-a3b1-4fce-89a6-6dcc04516c41,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"188742c5616097fedd535a4c6d807834051bd11d5424384a8b840662e264103c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:10.035171 kubelet[3157]: E0911 00:28:10.034704 3157 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"188742c5616097fedd535a4c6d807834051bd11d5424384a8b840662e264103c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:10.035171 kubelet[3157]: E0911 00:28:10.034742 3157 
kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"188742c5616097fedd535a4c6d807834051bd11d5424384a8b840662e264103c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75d779f9f-s7nnc" Sep 11 00:28:10.035171 kubelet[3157]: E0911 00:28:10.034755 3157 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"188742c5616097fedd535a4c6d807834051bd11d5424384a8b840662e264103c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75d779f9f-s7nnc" Sep 11 00:28:10.036981 kubelet[3157]: E0911 00:28:10.034837 3157 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-75d779f9f-s7nnc_calico-apiserver(e4a2905f-a3b1-4fce-89a6-6dcc04516c41)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-75d779f9f-s7nnc_calico-apiserver(e4a2905f-a3b1-4fce-89a6-6dcc04516c41)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"188742c5616097fedd535a4c6d807834051bd11d5424384a8b840662e264103c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-75d779f9f-s7nnc" podUID="e4a2905f-a3b1-4fce-89a6-6dcc04516c41" Sep 11 00:28:10.048551 containerd[1747]: time="2025-09-11T00:28:10.048518231Z" level=error msg="Failed to destroy network for sandbox \"111fa97ea47c055ed9ad03f995859cbf7cac34fbfd04ca8dfed85252ffe64ecb\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:10.050266 systemd[1]: run-netns-cni\x2d18dab917\x2d8aa1\x2d32e3\x2db497\x2d750693d5df65.mount: Deactivated successfully. Sep 11 00:28:10.052014 containerd[1747]: time="2025-09-11T00:28:10.051990162Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c99b56578-4pbpk,Uid:399d91b5-6a95-457c-b82d-2a4a35d507ad,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"111fa97ea47c055ed9ad03f995859cbf7cac34fbfd04ca8dfed85252ffe64ecb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:10.052173 kubelet[3157]: E0911 00:28:10.052151 3157 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"111fa97ea47c055ed9ad03f995859cbf7cac34fbfd04ca8dfed85252ffe64ecb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:10.052209 kubelet[3157]: E0911 00:28:10.052191 3157 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"111fa97ea47c055ed9ad03f995859cbf7cac34fbfd04ca8dfed85252ffe64ecb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c99b56578-4pbpk" Sep 11 00:28:10.052232 kubelet[3157]: E0911 00:28:10.052208 3157 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"111fa97ea47c055ed9ad03f995859cbf7cac34fbfd04ca8dfed85252ffe64ecb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c99b56578-4pbpk" Sep 11 00:28:10.052273 kubelet[3157]: E0911 00:28:10.052250 3157 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6c99b56578-4pbpk_calico-system(399d91b5-6a95-457c-b82d-2a4a35d507ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6c99b56578-4pbpk_calico-system(399d91b5-6a95-457c-b82d-2a4a35d507ad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"111fa97ea47c055ed9ad03f995859cbf7cac34fbfd04ca8dfed85252ffe64ecb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c99b56578-4pbpk" podUID="399d91b5-6a95-457c-b82d-2a4a35d507ad" Sep 11 00:28:10.175050 containerd[1747]: time="2025-09-11T00:28:10.174982066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mkpsv,Uid:7896f4f6-30fb-414f-8fc9-01181638003f,Namespace:kube-system,Attempt:0,}" Sep 11 00:28:10.213644 containerd[1747]: time="2025-09-11T00:28:10.213616230Z" level=error msg="Failed to destroy network for sandbox \"e5a74ebfd3df42e1f81796947f23a60cb9627edce4925fe6cc5994fa5c0ebaa9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:10.216011 containerd[1747]: time="2025-09-11T00:28:10.215964916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 11 
00:28:10.216463 containerd[1747]: time="2025-09-11T00:28:10.216138774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9s5qx,Uid:f50c5ad7-5f27-4da4-a84f-88c5aeb97826,Namespace:kube-system,Attempt:0,}" Sep 11 00:28:10.216993 containerd[1747]: time="2025-09-11T00:28:10.216942154Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mkpsv,Uid:7896f4f6-30fb-414f-8fc9-01181638003f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5a74ebfd3df42e1f81796947f23a60cb9627edce4925fe6cc5994fa5c0ebaa9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:10.217294 kubelet[3157]: E0911 00:28:10.217269 3157 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5a74ebfd3df42e1f81796947f23a60cb9627edce4925fe6cc5994fa5c0ebaa9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:10.217497 kubelet[3157]: E0911 00:28:10.217410 3157 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5a74ebfd3df42e1f81796947f23a60cb9627edce4925fe6cc5994fa5c0ebaa9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mkpsv" Sep 11 00:28:10.217497 kubelet[3157]: E0911 00:28:10.217433 3157 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e5a74ebfd3df42e1f81796947f23a60cb9627edce4925fe6cc5994fa5c0ebaa9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mkpsv" Sep 11 00:28:10.217497 kubelet[3157]: E0911 00:28:10.217471 3157 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-mkpsv_kube-system(7896f4f6-30fb-414f-8fc9-01181638003f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-mkpsv_kube-system(7896f4f6-30fb-414f-8fc9-01181638003f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e5a74ebfd3df42e1f81796947f23a60cb9627edce4925fe6cc5994fa5c0ebaa9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-mkpsv" podUID="7896f4f6-30fb-414f-8fc9-01181638003f" Sep 11 00:28:10.248036 containerd[1747]: time="2025-09-11T00:28:10.247609370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-h8dt2,Uid:9e4921f3-2a97-4fd2-b436-8e2b54a05b3f,Namespace:calico-system,Attempt:0,}" Sep 11 00:28:10.256482 containerd[1747]: time="2025-09-11T00:28:10.256458594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6995f8c8b5-g9blm,Uid:b5d91706-bc49-4025-ac5e-31192f7804cc,Namespace:calico-system,Attempt:0,}" Sep 11 00:28:10.263263 containerd[1747]: time="2025-09-11T00:28:10.263242580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75d779f9f-bb974,Uid:e7ff1367-64cd-423f-a7ae-69bcd41ce72a,Namespace:calico-apiserver,Attempt:0,}" Sep 11 00:28:10.270800 containerd[1747]: time="2025-09-11T00:28:10.270781726Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7488f5d565-n2rfl,Uid:50700c67-8777-495e-a0c8-f24ef4739246,Namespace:calico-apiserver,Attempt:0,}" Sep 11 00:28:10.332716 containerd[1747]: time="2025-09-11T00:28:10.332691504Z" level=error msg="Failed to destroy network for sandbox \"163ea24ec5a7220e3a32e9fe97e49963a47e377d96e7c43dafe0e7e2e738fb9e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:10.336936 containerd[1747]: time="2025-09-11T00:28:10.336908446Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9s5qx,Uid:f50c5ad7-5f27-4da4-a84f-88c5aeb97826,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"163ea24ec5a7220e3a32e9fe97e49963a47e377d96e7c43dafe0e7e2e738fb9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:10.337215 kubelet[3157]: E0911 00:28:10.337039 3157 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"163ea24ec5a7220e3a32e9fe97e49963a47e377d96e7c43dafe0e7e2e738fb9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:10.337215 kubelet[3157]: E0911 00:28:10.337079 3157 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"163ea24ec5a7220e3a32e9fe97e49963a47e377d96e7c43dafe0e7e2e738fb9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-9s5qx" Sep 11 00:28:10.337215 kubelet[3157]: E0911 00:28:10.337096 3157 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"163ea24ec5a7220e3a32e9fe97e49963a47e377d96e7c43dafe0e7e2e738fb9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-9s5qx" Sep 11 00:28:10.337304 kubelet[3157]: E0911 00:28:10.337132 3157 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-9s5qx_kube-system(f50c5ad7-5f27-4da4-a84f-88c5aeb97826)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-9s5qx_kube-system(f50c5ad7-5f27-4da4-a84f-88c5aeb97826)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"163ea24ec5a7220e3a32e9fe97e49963a47e377d96e7c43dafe0e7e2e738fb9e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-9s5qx" podUID="f50c5ad7-5f27-4da4-a84f-88c5aeb97826" Sep 11 00:28:10.353221 containerd[1747]: time="2025-09-11T00:28:10.353195302Z" level=error msg="Failed to destroy network for sandbox \"9a2f3f3d4c81ba6d85ffed15f904c5f4d126a94ff1ffe8ceec504a432ac72d63\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:10.356096 containerd[1747]: time="2025-09-11T00:28:10.356027371Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-h8dt2,Uid:9e4921f3-2a97-4fd2-b436-8e2b54a05b3f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"9a2f3f3d4c81ba6d85ffed15f904c5f4d126a94ff1ffe8ceec504a432ac72d63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:10.356376 kubelet[3157]: E0911 00:28:10.356282 3157 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a2f3f3d4c81ba6d85ffed15f904c5f4d126a94ff1ffe8ceec504a432ac72d63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:10.356376 kubelet[3157]: E0911 00:28:10.356353 3157 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a2f3f3d4c81ba6d85ffed15f904c5f4d126a94ff1ffe8ceec504a432ac72d63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-h8dt2" Sep 11 00:28:10.356543 kubelet[3157]: E0911 00:28:10.356468 3157 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a2f3f3d4c81ba6d85ffed15f904c5f4d126a94ff1ffe8ceec504a432ac72d63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-h8dt2" Sep 11 00:28:10.356543 kubelet[3157]: E0911 00:28:10.356519 3157 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-h8dt2_calico-system(9e4921f3-2a97-4fd2-b436-8e2b54a05b3f)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-h8dt2_calico-system(9e4921f3-2a97-4fd2-b436-8e2b54a05b3f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a2f3f3d4c81ba6d85ffed15f904c5f4d126a94ff1ffe8ceec504a432ac72d63\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-h8dt2" podUID="9e4921f3-2a97-4fd2-b436-8e2b54a05b3f" Sep 11 00:28:10.360079 containerd[1747]: time="2025-09-11T00:28:10.360045062Z" level=error msg="Failed to destroy network for sandbox \"574799e7663ed4ead5750109b9e17ac55f78bf9216c4fa3d06ae7b194326f2f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:10.365492 containerd[1747]: time="2025-09-11T00:28:10.365456367Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6995f8c8b5-g9blm,Uid:b5d91706-bc49-4025-ac5e-31192f7804cc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"574799e7663ed4ead5750109b9e17ac55f78bf9216c4fa3d06ae7b194326f2f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:10.365745 kubelet[3157]: E0911 00:28:10.365710 3157 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"574799e7663ed4ead5750109b9e17ac55f78bf9216c4fa3d06ae7b194326f2f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:10.365856 
kubelet[3157]: E0911 00:28:10.365809 3157 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"574799e7663ed4ead5750109b9e17ac55f78bf9216c4fa3d06ae7b194326f2f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6995f8c8b5-g9blm" Sep 11 00:28:10.365856 kubelet[3157]: E0911 00:28:10.365833 3157 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"574799e7663ed4ead5750109b9e17ac55f78bf9216c4fa3d06ae7b194326f2f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6995f8c8b5-g9blm" Sep 11 00:28:10.365989 kubelet[3157]: E0911 00:28:10.365946 3157 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6995f8c8b5-g9blm_calico-system(b5d91706-bc49-4025-ac5e-31192f7804cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6995f8c8b5-g9blm_calico-system(b5d91706-bc49-4025-ac5e-31192f7804cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"574799e7663ed4ead5750109b9e17ac55f78bf9216c4fa3d06ae7b194326f2f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6995f8c8b5-g9blm" podUID="b5d91706-bc49-4025-ac5e-31192f7804cc" Sep 11 00:28:10.375871 containerd[1747]: time="2025-09-11T00:28:10.375845660Z" level=error msg="Failed to destroy network for sandbox \"08c9440820f6240c2523d5d704835a33cfc3dca73d488f94c54282d72dcb60fc\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:10.376903 containerd[1747]: time="2025-09-11T00:28:10.376876931Z" level=error msg="Failed to destroy network for sandbox \"7b17412c7477ee1929761f102534e313b1c2fa8bf65f5d6e2cddbb8fc8ee5999\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:10.379328 containerd[1747]: time="2025-09-11T00:28:10.379275921Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7488f5d565-n2rfl,Uid:50700c67-8777-495e-a0c8-f24ef4739246,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"08c9440820f6240c2523d5d704835a33cfc3dca73d488f94c54282d72dcb60fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:10.379454 kubelet[3157]: E0911 00:28:10.379432 3157 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08c9440820f6240c2523d5d704835a33cfc3dca73d488f94c54282d72dcb60fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:10.379504 kubelet[3157]: E0911 00:28:10.379480 3157 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08c9440820f6240c2523d5d704835a33cfc3dca73d488f94c54282d72dcb60fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7488f5d565-n2rfl" Sep 11 00:28:10.379504 kubelet[3157]: E0911 00:28:10.379497 3157 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08c9440820f6240c2523d5d704835a33cfc3dca73d488f94c54282d72dcb60fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7488f5d565-n2rfl" Sep 11 00:28:10.379694 kubelet[3157]: E0911 00:28:10.379671 3157 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7488f5d565-n2rfl_calico-apiserver(50700c67-8777-495e-a0c8-f24ef4739246)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7488f5d565-n2rfl_calico-apiserver(50700c67-8777-495e-a0c8-f24ef4739246)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"08c9440820f6240c2523d5d704835a33cfc3dca73d488f94c54282d72dcb60fc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7488f5d565-n2rfl" podUID="50700c67-8777-495e-a0c8-f24ef4739246" Sep 11 00:28:10.381683 containerd[1747]: time="2025-09-11T00:28:10.381613564Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75d779f9f-bb974,Uid:e7ff1367-64cd-423f-a7ae-69bcd41ce72a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b17412c7477ee1929761f102534e313b1c2fa8bf65f5d6e2cddbb8fc8ee5999\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Sep 11 00:28:10.381812 kubelet[3157]: E0911 00:28:10.381769 3157 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b17412c7477ee1929761f102534e313b1c2fa8bf65f5d6e2cddbb8fc8ee5999\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:10.381848 kubelet[3157]: E0911 00:28:10.381824 3157 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b17412c7477ee1929761f102534e313b1c2fa8bf65f5d6e2cddbb8fc8ee5999\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75d779f9f-bb974" Sep 11 00:28:10.381848 kubelet[3157]: E0911 00:28:10.381841 3157 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b17412c7477ee1929761f102534e313b1c2fa8bf65f5d6e2cddbb8fc8ee5999\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75d779f9f-bb974" Sep 11 00:28:10.381907 kubelet[3157]: E0911 00:28:10.381888 3157 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-75d779f9f-bb974_calico-apiserver(e7ff1367-64cd-423f-a7ae-69bcd41ce72a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-75d779f9f-bb974_calico-apiserver(e7ff1367-64cd-423f-a7ae-69bcd41ce72a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"7b17412c7477ee1929761f102534e313b1c2fa8bf65f5d6e2cddbb8fc8ee5999\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-75d779f9f-bb974" podUID="e7ff1367-64cd-423f-a7ae-69bcd41ce72a" Sep 11 00:28:12.003718 kubelet[3157]: I0911 00:28:12.003653 3157 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 11 00:28:16.629044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1782444366.mount: Deactivated successfully. Sep 11 00:28:16.656376 containerd[1747]: time="2025-09-11T00:28:16.656339194Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:16.658680 containerd[1747]: time="2025-09-11T00:28:16.658620732Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 11 00:28:16.661544 containerd[1747]: time="2025-09-11T00:28:16.661522780Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:16.667812 containerd[1747]: time="2025-09-11T00:28:16.667770193Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:16.668121 containerd[1747]: time="2025-09-11T00:28:16.668026052Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 
6.452033397s" Sep 11 00:28:16.668121 containerd[1747]: time="2025-09-11T00:28:16.668050812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 11 00:28:16.684304 containerd[1747]: time="2025-09-11T00:28:16.684270200Z" level=info msg="CreateContainer within sandbox \"092e05b26e345041681455cdb1c620f17ef833196ded4bbd624bff3da5f85ca4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 11 00:28:16.708420 containerd[1747]: time="2025-09-11T00:28:16.706611436Z" level=info msg="Container 501b940f048afec0cb344c0bbe460a3694c330693b0af9da65daf0188458dc24: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:28:16.710651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2490705500.mount: Deactivated successfully. Sep 11 00:28:16.725293 containerd[1747]: time="2025-09-11T00:28:16.725270044Z" level=info msg="CreateContainer within sandbox \"092e05b26e345041681455cdb1c620f17ef833196ded4bbd624bff3da5f85ca4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"501b940f048afec0cb344c0bbe460a3694c330693b0af9da65daf0188458dc24\"" Sep 11 00:28:16.725800 containerd[1747]: time="2025-09-11T00:28:16.725658601Z" level=info msg="StartContainer for \"501b940f048afec0cb344c0bbe460a3694c330693b0af9da65daf0188458dc24\"" Sep 11 00:28:16.727107 containerd[1747]: time="2025-09-11T00:28:16.727070239Z" level=info msg="connecting to shim 501b940f048afec0cb344c0bbe460a3694c330693b0af9da65daf0188458dc24" address="unix:///run/containerd/s/e53aa3d0c6c8dfc60dae01437bcc8979b156f7838b610723f9a66242bce9a902" protocol=ttrpc version=3 Sep 11 00:28:16.744515 systemd[1]: Started cri-containerd-501b940f048afec0cb344c0bbe460a3694c330693b0af9da65daf0188458dc24.scope - libcontainer container 501b940f048afec0cb344c0bbe460a3694c330693b0af9da65daf0188458dc24. 
Sep 11 00:28:16.777159 containerd[1747]: time="2025-09-11T00:28:16.777099463Z" level=info msg="StartContainer for \"501b940f048afec0cb344c0bbe460a3694c330693b0af9da65daf0188458dc24\" returns successfully" Sep 11 00:28:17.150417 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 11 00:28:17.150482 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 11 00:28:17.250096 kubelet[3157]: I0911 00:28:17.250041 3157 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jqrjt" podStartSLOduration=1.3404051510000001 podStartE2EDuration="20.250026772s" podCreationTimestamp="2025-09-11 00:27:57 +0000 UTC" firstStartedPulling="2025-09-11 00:27:57.75895178 +0000 UTC m=+18.718777618" lastFinishedPulling="2025-09-11 00:28:16.668573408 +0000 UTC m=+37.628399239" observedRunningTime="2025-09-11 00:28:17.249713495 +0000 UTC m=+38.209539343" watchObservedRunningTime="2025-09-11 00:28:17.250026772 +0000 UTC m=+38.209852617" Sep 11 00:28:17.349285 containerd[1747]: time="2025-09-11T00:28:17.349209935Z" level=info msg="TaskExit event in podsandbox handler container_id:\"501b940f048afec0cb344c0bbe460a3694c330693b0af9da65daf0188458dc24\" id:\"f3bfa75b03876f473f2d5b04b1a535014f93e2d2690cb690e957bb7bf74d44a2\" pid:4264 exit_status:1 exited_at:{seconds:1757550497 nanos:348873311}" Sep 11 00:28:17.367982 kubelet[3157]: I0911 00:28:17.367957 3157 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6p8b\" (UniqueName: \"kubernetes.io/projected/b5d91706-bc49-4025-ac5e-31192f7804cc-kube-api-access-j6p8b\") pod \"b5d91706-bc49-4025-ac5e-31192f7804cc\" (UID: \"b5d91706-bc49-4025-ac5e-31192f7804cc\") " Sep 11 00:28:17.368070 kubelet[3157]: I0911 00:28:17.367998 3157 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/b5d91706-bc49-4025-ac5e-31192f7804cc-whisker-ca-bundle\") pod \"b5d91706-bc49-4025-ac5e-31192f7804cc\" (UID: \"b5d91706-bc49-4025-ac5e-31192f7804cc\") " Sep 11 00:28:17.368070 kubelet[3157]: I0911 00:28:17.368018 3157 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b5d91706-bc49-4025-ac5e-31192f7804cc-whisker-backend-key-pair\") pod \"b5d91706-bc49-4025-ac5e-31192f7804cc\" (UID: \"b5d91706-bc49-4025-ac5e-31192f7804cc\") " Sep 11 00:28:17.370414 kubelet[3157]: I0911 00:28:17.369242 3157 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5d91706-bc49-4025-ac5e-31192f7804cc-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "b5d91706-bc49-4025-ac5e-31192f7804cc" (UID: "b5d91706-bc49-4025-ac5e-31192f7804cc"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 11 00:28:17.371848 kubelet[3157]: I0911 00:28:17.371824 3157 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5d91706-bc49-4025-ac5e-31192f7804cc-kube-api-access-j6p8b" (OuterVolumeSpecName: "kube-api-access-j6p8b") pod "b5d91706-bc49-4025-ac5e-31192f7804cc" (UID: "b5d91706-bc49-4025-ac5e-31192f7804cc"). InnerVolumeSpecName "kube-api-access-j6p8b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 11 00:28:17.371920 kubelet[3157]: I0911 00:28:17.371869 3157 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5d91706-bc49-4025-ac5e-31192f7804cc-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "b5d91706-bc49-4025-ac5e-31192f7804cc" (UID: "b5d91706-bc49-4025-ac5e-31192f7804cc"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 11 00:28:17.468372 kubelet[3157]: I0911 00:28:17.468310 3157 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5d91706-bc49-4025-ac5e-31192f7804cc-whisker-ca-bundle\") on node \"ci-4372.1.0-n-3f8a739b41\" DevicePath \"\"" Sep 11 00:28:17.468372 kubelet[3157]: I0911 00:28:17.468336 3157 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b5d91706-bc49-4025-ac5e-31192f7804cc-whisker-backend-key-pair\") on node \"ci-4372.1.0-n-3f8a739b41\" DevicePath \"\"" Sep 11 00:28:17.468372 kubelet[3157]: I0911 00:28:17.468347 3157 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j6p8b\" (UniqueName: \"kubernetes.io/projected/b5d91706-bc49-4025-ac5e-31192f7804cc-kube-api-access-j6p8b\") on node \"ci-4372.1.0-n-3f8a739b41\" DevicePath \"\"" Sep 11 00:28:17.628883 systemd[1]: var-lib-kubelet-pods-b5d91706\x2dbc49\x2d4025\x2dac5e\x2d31192f7804cc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj6p8b.mount: Deactivated successfully. Sep 11 00:28:17.628965 systemd[1]: var-lib-kubelet-pods-b5d91706\x2dbc49\x2d4025\x2dac5e\x2d31192f7804cc-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 11 00:28:18.237023 systemd[1]: Removed slice kubepods-besteffort-podb5d91706_bc49_4025_ac5e_31192f7804cc.slice - libcontainer container kubepods-besteffort-podb5d91706_bc49_4025_ac5e_31192f7804cc.slice. Sep 11 00:28:18.318064 systemd[1]: Created slice kubepods-besteffort-pod9b8e1e2b_097b_4f54_b8a7_5b567897e4c7.slice - libcontainer container kubepods-besteffort-pod9b8e1e2b_097b_4f54_b8a7_5b567897e4c7.slice. 
Sep 11 00:28:18.326962 containerd[1747]: time="2025-09-11T00:28:18.326900376Z" level=info msg="TaskExit event in podsandbox handler container_id:\"501b940f048afec0cb344c0bbe460a3694c330693b0af9da65daf0188458dc24\" id:\"fcaa136b02aef50bcbb00f4cd9b1df6acfdc4342e5e689124c70207aad5233ec\" pid:4310 exit_status:1 exited_at:{seconds:1757550498 nanos:326690967}" Sep 11 00:28:18.373789 kubelet[3157]: I0911 00:28:18.373716 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b8e1e2b-097b-4f54-b8a7-5b567897e4c7-whisker-ca-bundle\") pod \"whisker-69657f7f84-fdhs5\" (UID: \"9b8e1e2b-097b-4f54-b8a7-5b567897e4c7\") " pod="calico-system/whisker-69657f7f84-fdhs5" Sep 11 00:28:18.373789 kubelet[3157]: I0911 00:28:18.373752 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9b8e1e2b-097b-4f54-b8a7-5b567897e4c7-whisker-backend-key-pair\") pod \"whisker-69657f7f84-fdhs5\" (UID: \"9b8e1e2b-097b-4f54-b8a7-5b567897e4c7\") " pod="calico-system/whisker-69657f7f84-fdhs5" Sep 11 00:28:18.373789 kubelet[3157]: I0911 00:28:18.373765 3157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvs78\" (UniqueName: \"kubernetes.io/projected/9b8e1e2b-097b-4f54-b8a7-5b567897e4c7-kube-api-access-mvs78\") pod \"whisker-69657f7f84-fdhs5\" (UID: \"9b8e1e2b-097b-4f54-b8a7-5b567897e4c7\") " pod="calico-system/whisker-69657f7f84-fdhs5" Sep 11 00:28:18.624196 containerd[1747]: time="2025-09-11T00:28:18.624155809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69657f7f84-fdhs5,Uid:9b8e1e2b-097b-4f54-b8a7-5b567897e4c7,Namespace:calico-system,Attempt:0,}" Sep 11 00:28:18.774625 systemd-networkd[1358]: cali9fe0983c394: Link UP Sep 11 00:28:18.775318 systemd-networkd[1358]: cali9fe0983c394: Gained carrier Sep 11 
00:28:18.789116 containerd[1747]: 2025-09-11 00:28:18.665 [INFO][4412] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 11 00:28:18.789116 containerd[1747]: 2025-09-11 00:28:18.676 [INFO][4412] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--n--3f8a739b41-k8s-whisker--69657f7f84--fdhs5-eth0 whisker-69657f7f84- calico-system 9b8e1e2b-097b-4f54-b8a7-5b567897e4c7 948 0 2025-09-11 00:28:18 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:69657f7f84 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4372.1.0-n-3f8a739b41 whisker-69657f7f84-fdhs5 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali9fe0983c394 [] [] }} ContainerID="ed7675eb64afc6301aae094ba23921b593875af0074c35194b80c08a0d073159" Namespace="calico-system" Pod="whisker-69657f7f84-fdhs5" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-whisker--69657f7f84--fdhs5-" Sep 11 00:28:18.789116 containerd[1747]: 2025-09-11 00:28:18.676 [INFO][4412] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ed7675eb64afc6301aae094ba23921b593875af0074c35194b80c08a0d073159" Namespace="calico-system" Pod="whisker-69657f7f84-fdhs5" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-whisker--69657f7f84--fdhs5-eth0" Sep 11 00:28:18.789116 containerd[1747]: 2025-09-11 00:28:18.714 [INFO][4425] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ed7675eb64afc6301aae094ba23921b593875af0074c35194b80c08a0d073159" HandleID="k8s-pod-network.ed7675eb64afc6301aae094ba23921b593875af0074c35194b80c08a0d073159" Workload="ci--4372.1.0--n--3f8a739b41-k8s-whisker--69657f7f84--fdhs5-eth0" Sep 11 00:28:18.789461 containerd[1747]: 2025-09-11 00:28:18.714 [INFO][4425] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="ed7675eb64afc6301aae094ba23921b593875af0074c35194b80c08a0d073159" HandleID="k8s-pod-network.ed7675eb64afc6301aae094ba23921b593875af0074c35194b80c08a0d073159" Workload="ci--4372.1.0--n--3f8a739b41-k8s-whisker--69657f7f84--fdhs5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f0a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.1.0-n-3f8a739b41", "pod":"whisker-69657f7f84-fdhs5", "timestamp":"2025-09-11 00:28:18.714377534 +0000 UTC"}, Hostname:"ci-4372.1.0-n-3f8a739b41", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:28:18.789461 containerd[1747]: 2025-09-11 00:28:18.714 [INFO][4425] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:28:18.789461 containerd[1747]: 2025-09-11 00:28:18.714 [INFO][4425] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:28:18.789461 containerd[1747]: 2025-09-11 00:28:18.714 [INFO][4425] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-n-3f8a739b41' Sep 11 00:28:18.789461 containerd[1747]: 2025-09-11 00:28:18.720 [INFO][4425] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ed7675eb64afc6301aae094ba23921b593875af0074c35194b80c08a0d073159" host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:18.789461 containerd[1747]: 2025-09-11 00:28:18.724 [INFO][4425] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:18.789461 containerd[1747]: 2025-09-11 00:28:18.731 [INFO][4425] ipam/ipam.go 511: Trying affinity for 192.168.87.0/26 host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:18.789461 containerd[1747]: 2025-09-11 00:28:18.732 [INFO][4425] ipam/ipam.go 158: Attempting to load block cidr=192.168.87.0/26 host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:18.789461 containerd[1747]: 2025-09-11 00:28:18.734 [INFO][4425] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.87.0/26 host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:18.789693 containerd[1747]: 2025-09-11 00:28:18.734 [INFO][4425] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.87.0/26 handle="k8s-pod-network.ed7675eb64afc6301aae094ba23921b593875af0074c35194b80c08a0d073159" host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:18.789693 containerd[1747]: 2025-09-11 00:28:18.735 [INFO][4425] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ed7675eb64afc6301aae094ba23921b593875af0074c35194b80c08a0d073159 Sep 11 00:28:18.789693 containerd[1747]: 2025-09-11 00:28:18.739 [INFO][4425] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.87.0/26 handle="k8s-pod-network.ed7675eb64afc6301aae094ba23921b593875af0074c35194b80c08a0d073159" host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:18.789693 containerd[1747]: 2025-09-11 00:28:18.750 [INFO][4425] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.87.1/26] block=192.168.87.0/26 handle="k8s-pod-network.ed7675eb64afc6301aae094ba23921b593875af0074c35194b80c08a0d073159" host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:18.789693 containerd[1747]: 2025-09-11 00:28:18.750 [INFO][4425] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.87.1/26] handle="k8s-pod-network.ed7675eb64afc6301aae094ba23921b593875af0074c35194b80c08a0d073159" host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:18.789693 containerd[1747]: 2025-09-11 00:28:18.750 [INFO][4425] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:28:18.789693 containerd[1747]: 2025-09-11 00:28:18.750 [INFO][4425] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.1/26] IPv6=[] ContainerID="ed7675eb64afc6301aae094ba23921b593875af0074c35194b80c08a0d073159" HandleID="k8s-pod-network.ed7675eb64afc6301aae094ba23921b593875af0074c35194b80c08a0d073159" Workload="ci--4372.1.0--n--3f8a739b41-k8s-whisker--69657f7f84--fdhs5-eth0" Sep 11 00:28:18.789896 containerd[1747]: 2025-09-11 00:28:18.754 [INFO][4412] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ed7675eb64afc6301aae094ba23921b593875af0074c35194b80c08a0d073159" Namespace="calico-system" Pod="whisker-69657f7f84-fdhs5" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-whisker--69657f7f84--fdhs5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--3f8a739b41-k8s-whisker--69657f7f84--fdhs5-eth0", GenerateName:"whisker-69657f7f84-", Namespace:"calico-system", SelfLink:"", UID:"9b8e1e2b-097b-4f54-b8a7-5b567897e4c7", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 28, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"69657f7f84", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-3f8a739b41", ContainerID:"", Pod:"whisker-69657f7f84-fdhs5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.87.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9fe0983c394", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:28:18.789896 containerd[1747]: 2025-09-11 00:28:18.754 [INFO][4412] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.1/32] ContainerID="ed7675eb64afc6301aae094ba23921b593875af0074c35194b80c08a0d073159" Namespace="calico-system" Pod="whisker-69657f7f84-fdhs5" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-whisker--69657f7f84--fdhs5-eth0" Sep 11 00:28:18.789995 containerd[1747]: 2025-09-11 00:28:18.754 [INFO][4412] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9fe0983c394 ContainerID="ed7675eb64afc6301aae094ba23921b593875af0074c35194b80c08a0d073159" Namespace="calico-system" Pod="whisker-69657f7f84-fdhs5" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-whisker--69657f7f84--fdhs5-eth0" Sep 11 00:28:18.789995 containerd[1747]: 2025-09-11 00:28:18.775 [INFO][4412] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ed7675eb64afc6301aae094ba23921b593875af0074c35194b80c08a0d073159" Namespace="calico-system" Pod="whisker-69657f7f84-fdhs5" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-whisker--69657f7f84--fdhs5-eth0" Sep 11 00:28:18.790043 containerd[1747]: 2025-09-11 00:28:18.775 [INFO][4412] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="ed7675eb64afc6301aae094ba23921b593875af0074c35194b80c08a0d073159" Namespace="calico-system" Pod="whisker-69657f7f84-fdhs5" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-whisker--69657f7f84--fdhs5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--3f8a739b41-k8s-whisker--69657f7f84--fdhs5-eth0", GenerateName:"whisker-69657f7f84-", Namespace:"calico-system", SelfLink:"", UID:"9b8e1e2b-097b-4f54-b8a7-5b567897e4c7", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 28, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"69657f7f84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-3f8a739b41", ContainerID:"ed7675eb64afc6301aae094ba23921b593875af0074c35194b80c08a0d073159", Pod:"whisker-69657f7f84-fdhs5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.87.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9fe0983c394", MAC:"62:3a:67:14:b0:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:28:18.790118 containerd[1747]: 2025-09-11 00:28:18.788 [INFO][4412] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ed7675eb64afc6301aae094ba23921b593875af0074c35194b80c08a0d073159" 
Namespace="calico-system" Pod="whisker-69657f7f84-fdhs5" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-whisker--69657f7f84--fdhs5-eth0" Sep 11 00:28:18.824099 containerd[1747]: time="2025-09-11T00:28:18.824055493Z" level=info msg="connecting to shim ed7675eb64afc6301aae094ba23921b593875af0074c35194b80c08a0d073159" address="unix:///run/containerd/s/9bd1c76c33e0dd9d425ce8e53ea95b2ce5c7cc7c64a322194196127b397058f5" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:28:18.859674 systemd[1]: Started cri-containerd-ed7675eb64afc6301aae094ba23921b593875af0074c35194b80c08a0d073159.scope - libcontainer container ed7675eb64afc6301aae094ba23921b593875af0074c35194b80c08a0d073159. Sep 11 00:28:18.908884 containerd[1747]: time="2025-09-11T00:28:18.908228230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69657f7f84-fdhs5,Uid:9b8e1e2b-097b-4f54-b8a7-5b567897e4c7,Namespace:calico-system,Attempt:0,} returns sandbox id \"ed7675eb64afc6301aae094ba23921b593875af0074c35194b80c08a0d073159\"" Sep 11 00:28:18.911524 containerd[1747]: time="2025-09-11T00:28:18.911470936Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 11 00:28:19.121184 systemd-networkd[1358]: vxlan.calico: Link UP Sep 11 00:28:19.121189 systemd-networkd[1358]: vxlan.calico: Gained carrier Sep 11 00:28:19.130986 kubelet[3157]: I0911 00:28:19.130963 3157 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5d91706-bc49-4025-ac5e-31192f7804cc" path="/var/lib/kubelet/pods/b5d91706-bc49-4025-ac5e-31192f7804cc/volumes" Sep 11 00:28:20.330768 containerd[1747]: time="2025-09-11T00:28:20.330729741Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:20.333324 containerd[1747]: time="2025-09-11T00:28:20.333292627Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 11 00:28:20.335998 
containerd[1747]: time="2025-09-11T00:28:20.335960593Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:20.340178 containerd[1747]: time="2025-09-11T00:28:20.339810075Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:20.340178 containerd[1747]: time="2025-09-11T00:28:20.340090155Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.428594546s" Sep 11 00:28:20.340178 containerd[1747]: time="2025-09-11T00:28:20.340113925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 11 00:28:20.346895 containerd[1747]: time="2025-09-11T00:28:20.346869312Z" level=info msg="CreateContainer within sandbox \"ed7675eb64afc6301aae094ba23921b593875af0074c35194b80c08a0d073159\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 11 00:28:20.364196 containerd[1747]: time="2025-09-11T00:28:20.363492470Z" level=info msg="Container 701645639fc07e43ae2cbbf32a9f5b9ef7d6be08796aa0484b443cde590af615: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:28:20.378059 containerd[1747]: time="2025-09-11T00:28:20.378036462Z" level=info msg="CreateContainer within sandbox \"ed7675eb64afc6301aae094ba23921b593875af0074c35194b80c08a0d073159\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id 
\"701645639fc07e43ae2cbbf32a9f5b9ef7d6be08796aa0484b443cde590af615\"" Sep 11 00:28:20.378603 containerd[1747]: time="2025-09-11T00:28:20.378580754Z" level=info msg="StartContainer for \"701645639fc07e43ae2cbbf32a9f5b9ef7d6be08796aa0484b443cde590af615\"" Sep 11 00:28:20.379781 containerd[1747]: time="2025-09-11T00:28:20.379759591Z" level=info msg="connecting to shim 701645639fc07e43ae2cbbf32a9f5b9ef7d6be08796aa0484b443cde590af615" address="unix:///run/containerd/s/9bd1c76c33e0dd9d425ce8e53ea95b2ce5c7cc7c64a322194196127b397058f5" protocol=ttrpc version=3 Sep 11 00:28:20.405521 systemd[1]: Started cri-containerd-701645639fc07e43ae2cbbf32a9f5b9ef7d6be08796aa0484b443cde590af615.scope - libcontainer container 701645639fc07e43ae2cbbf32a9f5b9ef7d6be08796aa0484b443cde590af615. Sep 11 00:28:20.447409 containerd[1747]: time="2025-09-11T00:28:20.447110059Z" level=info msg="StartContainer for \"701645639fc07e43ae2cbbf32a9f5b9ef7d6be08796aa0484b443cde590af615\" returns successfully" Sep 11 00:28:20.449563 containerd[1747]: time="2025-09-11T00:28:20.449539003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 11 00:28:20.647493 systemd-networkd[1358]: cali9fe0983c394: Gained IPv6LL Sep 11 00:28:20.839488 systemd-networkd[1358]: vxlan.calico: Gained IPv6LL Sep 11 00:28:22.126821 containerd[1747]: time="2025-09-11T00:28:22.126567796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ldmgv,Uid:6bbf56be-c717-41c6-9b0e-bbe3b830a307,Namespace:calico-system,Attempt:0,}" Sep 11 00:28:22.220820 systemd-networkd[1358]: caliad12baca645: Link UP Sep 11 00:28:22.221001 systemd-networkd[1358]: caliad12baca645: Gained carrier Sep 11 00:28:22.235011 containerd[1747]: 2025-09-11 00:28:22.165 [INFO][4626] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--n--3f8a739b41-k8s-csi--node--driver--ldmgv-eth0 csi-node-driver- calico-system 6bbf56be-c717-41c6-9b0e-bbe3b830a307 757 
0 2025-09-11 00:27:57 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4372.1.0-n-3f8a739b41 csi-node-driver-ldmgv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliad12baca645 [] [] }} ContainerID="a5775b106b8f07546e060b453b3e18338fbf53bcd1c4d8e1ba7ce23cfe0cb631" Namespace="calico-system" Pod="csi-node-driver-ldmgv" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-csi--node--driver--ldmgv-" Sep 11 00:28:22.235011 containerd[1747]: 2025-09-11 00:28:22.165 [INFO][4626] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a5775b106b8f07546e060b453b3e18338fbf53bcd1c4d8e1ba7ce23cfe0cb631" Namespace="calico-system" Pod="csi-node-driver-ldmgv" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-csi--node--driver--ldmgv-eth0" Sep 11 00:28:22.235011 containerd[1747]: 2025-09-11 00:28:22.187 [INFO][4639] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a5775b106b8f07546e060b453b3e18338fbf53bcd1c4d8e1ba7ce23cfe0cb631" HandleID="k8s-pod-network.a5775b106b8f07546e060b453b3e18338fbf53bcd1c4d8e1ba7ce23cfe0cb631" Workload="ci--4372.1.0--n--3f8a739b41-k8s-csi--node--driver--ldmgv-eth0" Sep 11 00:28:22.235497 containerd[1747]: 2025-09-11 00:28:22.187 [INFO][4639] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a5775b106b8f07546e060b453b3e18338fbf53bcd1c4d8e1ba7ce23cfe0cb631" HandleID="k8s-pod-network.a5775b106b8f07546e060b453b3e18338fbf53bcd1c4d8e1ba7ce23cfe0cb631" Workload="ci--4372.1.0--n--3f8a739b41-k8s-csi--node--driver--ldmgv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.1.0-n-3f8a739b41", "pod":"csi-node-driver-ldmgv", 
"timestamp":"2025-09-11 00:28:22.187680037 +0000 UTC"}, Hostname:"ci-4372.1.0-n-3f8a739b41", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:28:22.235497 containerd[1747]: 2025-09-11 00:28:22.187 [INFO][4639] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:28:22.235497 containerd[1747]: 2025-09-11 00:28:22.187 [INFO][4639] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:28:22.235497 containerd[1747]: 2025-09-11 00:28:22.187 [INFO][4639] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-n-3f8a739b41' Sep 11 00:28:22.235497 containerd[1747]: 2025-09-11 00:28:22.192 [INFO][4639] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a5775b106b8f07546e060b453b3e18338fbf53bcd1c4d8e1ba7ce23cfe0cb631" host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:22.235497 containerd[1747]: 2025-09-11 00:28:22.195 [INFO][4639] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:22.235497 containerd[1747]: 2025-09-11 00:28:22.198 [INFO][4639] ipam/ipam.go 511: Trying affinity for 192.168.87.0/26 host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:22.235497 containerd[1747]: 2025-09-11 00:28:22.199 [INFO][4639] ipam/ipam.go 158: Attempting to load block cidr=192.168.87.0/26 host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:22.235497 containerd[1747]: 2025-09-11 00:28:22.200 [INFO][4639] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.87.0/26 host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:22.235695 containerd[1747]: 2025-09-11 00:28:22.200 [INFO][4639] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.87.0/26 handle="k8s-pod-network.a5775b106b8f07546e060b453b3e18338fbf53bcd1c4d8e1ba7ce23cfe0cb631" host="ci-4372.1.0-n-3f8a739b41" Sep 11 
00:28:22.235695 containerd[1747]: 2025-09-11 00:28:22.201 [INFO][4639] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a5775b106b8f07546e060b453b3e18338fbf53bcd1c4d8e1ba7ce23cfe0cb631 Sep 11 00:28:22.235695 containerd[1747]: 2025-09-11 00:28:22.210 [INFO][4639] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.87.0/26 handle="k8s-pod-network.a5775b106b8f07546e060b453b3e18338fbf53bcd1c4d8e1ba7ce23cfe0cb631" host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:22.235695 containerd[1747]: 2025-09-11 00:28:22.214 [INFO][4639] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.87.2/26] block=192.168.87.0/26 handle="k8s-pod-network.a5775b106b8f07546e060b453b3e18338fbf53bcd1c4d8e1ba7ce23cfe0cb631" host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:22.235695 containerd[1747]: 2025-09-11 00:28:22.215 [INFO][4639] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.87.2/26] handle="k8s-pod-network.a5775b106b8f07546e060b453b3e18338fbf53bcd1c4d8e1ba7ce23cfe0cb631" host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:22.235695 containerd[1747]: 2025-09-11 00:28:22.215 [INFO][4639] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 11 00:28:22.235695 containerd[1747]: 2025-09-11 00:28:22.215 [INFO][4639] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.2/26] IPv6=[] ContainerID="a5775b106b8f07546e060b453b3e18338fbf53bcd1c4d8e1ba7ce23cfe0cb631" HandleID="k8s-pod-network.a5775b106b8f07546e060b453b3e18338fbf53bcd1c4d8e1ba7ce23cfe0cb631" Workload="ci--4372.1.0--n--3f8a739b41-k8s-csi--node--driver--ldmgv-eth0" Sep 11 00:28:22.235821 containerd[1747]: 2025-09-11 00:28:22.216 [INFO][4626] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a5775b106b8f07546e060b453b3e18338fbf53bcd1c4d8e1ba7ce23cfe0cb631" Namespace="calico-system" Pod="csi-node-driver-ldmgv" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-csi--node--driver--ldmgv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--3f8a739b41-k8s-csi--node--driver--ldmgv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6bbf56be-c717-41c6-9b0e-bbe3b830a307", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 27, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-3f8a739b41", ContainerID:"", Pod:"csi-node-driver-ldmgv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.87.2/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliad12baca645", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:28:22.236536 containerd[1747]: 2025-09-11 00:28:22.216 [INFO][4626] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.2/32] ContainerID="a5775b106b8f07546e060b453b3e18338fbf53bcd1c4d8e1ba7ce23cfe0cb631" Namespace="calico-system" Pod="csi-node-driver-ldmgv" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-csi--node--driver--ldmgv-eth0" Sep 11 00:28:22.236536 containerd[1747]: 2025-09-11 00:28:22.216 [INFO][4626] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliad12baca645 ContainerID="a5775b106b8f07546e060b453b3e18338fbf53bcd1c4d8e1ba7ce23cfe0cb631" Namespace="calico-system" Pod="csi-node-driver-ldmgv" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-csi--node--driver--ldmgv-eth0" Sep 11 00:28:22.236536 containerd[1747]: 2025-09-11 00:28:22.220 [INFO][4626] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a5775b106b8f07546e060b453b3e18338fbf53bcd1c4d8e1ba7ce23cfe0cb631" Namespace="calico-system" Pod="csi-node-driver-ldmgv" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-csi--node--driver--ldmgv-eth0" Sep 11 00:28:22.236603 containerd[1747]: 2025-09-11 00:28:22.221 [INFO][4626] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a5775b106b8f07546e060b453b3e18338fbf53bcd1c4d8e1ba7ce23cfe0cb631" Namespace="calico-system" Pod="csi-node-driver-ldmgv" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-csi--node--driver--ldmgv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--3f8a739b41-k8s-csi--node--driver--ldmgv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", 
SelfLink:"", UID:"6bbf56be-c717-41c6-9b0e-bbe3b830a307", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 27, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-3f8a739b41", ContainerID:"a5775b106b8f07546e060b453b3e18338fbf53bcd1c4d8e1ba7ce23cfe0cb631", Pod:"csi-node-driver-ldmgv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.87.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliad12baca645", MAC:"a6:b3:e6:8e:c1:b2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:28:22.236651 containerd[1747]: 2025-09-11 00:28:22.232 [INFO][4626] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a5775b106b8f07546e060b453b3e18338fbf53bcd1c4d8e1ba7ce23cfe0cb631" Namespace="calico-system" Pod="csi-node-driver-ldmgv" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-csi--node--driver--ldmgv-eth0" Sep 11 00:28:22.273660 containerd[1747]: time="2025-09-11T00:28:22.273632446Z" level=info msg="connecting to shim a5775b106b8f07546e060b453b3e18338fbf53bcd1c4d8e1ba7ce23cfe0cb631" address="unix:///run/containerd/s/20a54408b0b1ff1c30dfffae29ac4f4422c611c7078979dd00cdcfffd6e7a40d" namespace=k8s.io protocol=ttrpc 
version=3 Sep 11 00:28:22.294522 systemd[1]: Started cri-containerd-a5775b106b8f07546e060b453b3e18338fbf53bcd1c4d8e1ba7ce23cfe0cb631.scope - libcontainer container a5775b106b8f07546e060b453b3e18338fbf53bcd1c4d8e1ba7ce23cfe0cb631. Sep 11 00:28:22.320670 containerd[1747]: time="2025-09-11T00:28:22.320605681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ldmgv,Uid:6bbf56be-c717-41c6-9b0e-bbe3b830a307,Namespace:calico-system,Attempt:0,} returns sandbox id \"a5775b106b8f07546e060b453b3e18338fbf53bcd1c4d8e1ba7ce23cfe0cb631\"" Sep 11 00:28:22.827940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount863827777.mount: Deactivated successfully. Sep 11 00:28:22.872309 containerd[1747]: time="2025-09-11T00:28:22.872280628Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:22.874528 containerd[1747]: time="2025-09-11T00:28:22.874398635Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 11 00:28:22.877525 containerd[1747]: time="2025-09-11T00:28:22.877504262Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:22.880932 containerd[1747]: time="2025-09-11T00:28:22.880904409Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:22.881433 containerd[1747]: time="2025-09-11T00:28:22.881414486Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.43184213s" Sep 11 00:28:22.881507 containerd[1747]: time="2025-09-11T00:28:22.881496078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 11 00:28:22.883470 containerd[1747]: time="2025-09-11T00:28:22.882872765Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 11 00:28:22.888193 containerd[1747]: time="2025-09-11T00:28:22.888160632Z" level=info msg="CreateContainer within sandbox \"ed7675eb64afc6301aae094ba23921b593875af0074c35194b80c08a0d073159\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 11 00:28:22.903564 containerd[1747]: time="2025-09-11T00:28:22.903541932Z" level=info msg="Container 2ff236f7d6d05510d3eab90d92ca24a243f5b8049401601f72a30a301219b733: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:28:22.927370 containerd[1747]: time="2025-09-11T00:28:22.927350664Z" level=info msg="CreateContainer within sandbox \"ed7675eb64afc6301aae094ba23921b593875af0074c35194b80c08a0d073159\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"2ff236f7d6d05510d3eab90d92ca24a243f5b8049401601f72a30a301219b733\"" Sep 11 00:28:22.927740 containerd[1747]: time="2025-09-11T00:28:22.927690568Z" level=info msg="StartContainer for \"2ff236f7d6d05510d3eab90d92ca24a243f5b8049401601f72a30a301219b733\"" Sep 11 00:28:22.928941 containerd[1747]: time="2025-09-11T00:28:22.928732250Z" level=info msg="connecting to shim 2ff236f7d6d05510d3eab90d92ca24a243f5b8049401601f72a30a301219b733" address="unix:///run/containerd/s/9bd1c76c33e0dd9d425ce8e53ea95b2ce5c7cc7c64a322194196127b397058f5" protocol=ttrpc version=3 Sep 11 00:28:22.945536 systemd[1]: Started cri-containerd-2ff236f7d6d05510d3eab90d92ca24a243f5b8049401601f72a30a301219b733.scope - 
libcontainer container 2ff236f7d6d05510d3eab90d92ca24a243f5b8049401601f72a30a301219b733. Sep 11 00:28:22.985694 containerd[1747]: time="2025-09-11T00:28:22.985459206Z" level=info msg="StartContainer for \"2ff236f7d6d05510d3eab90d92ca24a243f5b8049401601f72a30a301219b733\" returns successfully" Sep 11 00:28:23.125578 containerd[1747]: time="2025-09-11T00:28:23.125519246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7488f5d565-n2rfl,Uid:50700c67-8777-495e-a0c8-f24ef4739246,Namespace:calico-apiserver,Attempt:0,}" Sep 11 00:28:23.126353 containerd[1747]: time="2025-09-11T00:28:23.125997407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75d779f9f-s7nnc,Uid:e4a2905f-a3b1-4fce-89a6-6dcc04516c41,Namespace:calico-apiserver,Attempt:0,}" Sep 11 00:28:23.126604 containerd[1747]: time="2025-09-11T00:28:23.126588709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75d779f9f-bb974,Uid:e7ff1367-64cd-423f-a7ae-69bcd41ce72a,Namespace:calico-apiserver,Attempt:0,}" Sep 11 00:28:23.261993 kubelet[3157]: I0911 00:28:23.261540 3157 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-69657f7f84-fdhs5" podStartSLOduration=1.289590974 podStartE2EDuration="5.261524698s" podCreationTimestamp="2025-09-11 00:28:18 +0000 UTC" firstStartedPulling="2025-09-11 00:28:18.910195394 +0000 UTC m=+39.870021236" lastFinishedPulling="2025-09-11 00:28:22.882129122 +0000 UTC m=+43.841954960" observedRunningTime="2025-09-11 00:28:23.261427675 +0000 UTC m=+44.221253538" watchObservedRunningTime="2025-09-11 00:28:23.261524698 +0000 UTC m=+44.221350541" Sep 11 00:28:23.284233 systemd-networkd[1358]: cali69a695ac666: Link UP Sep 11 00:28:23.285782 systemd-networkd[1358]: cali69a695ac666: Gained carrier Sep 11 00:28:23.303944 containerd[1747]: 2025-09-11 00:28:23.190 [INFO][4753] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--7488f5d565--n2rfl-eth0 calico-apiserver-7488f5d565- calico-apiserver 50700c67-8777-495e-a0c8-f24ef4739246 871 0 2025-09-11 00:27:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7488f5d565 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372.1.0-n-3f8a739b41 calico-apiserver-7488f5d565-n2rfl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali69a695ac666 [] [] }} ContainerID="13e45be8155bf480af9f0d73b455e1f52591f7b60b65a242a4adb324aac9fa2a" Namespace="calico-apiserver" Pod="calico-apiserver-7488f5d565-n2rfl" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--7488f5d565--n2rfl-" Sep 11 00:28:23.303944 containerd[1747]: 2025-09-11 00:28:23.191 [INFO][4753] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="13e45be8155bf480af9f0d73b455e1f52591f7b60b65a242a4adb324aac9fa2a" Namespace="calico-apiserver" Pod="calico-apiserver-7488f5d565-n2rfl" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--7488f5d565--n2rfl-eth0" Sep 11 00:28:23.303944 containerd[1747]: 2025-09-11 00:28:23.233 [INFO][4779] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="13e45be8155bf480af9f0d73b455e1f52591f7b60b65a242a4adb324aac9fa2a" HandleID="k8s-pod-network.13e45be8155bf480af9f0d73b455e1f52591f7b60b65a242a4adb324aac9fa2a" Workload="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--7488f5d565--n2rfl-eth0" Sep 11 00:28:23.304248 containerd[1747]: 2025-09-11 00:28:23.233 [INFO][4779] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="13e45be8155bf480af9f0d73b455e1f52591f7b60b65a242a4adb324aac9fa2a" HandleID="k8s-pod-network.13e45be8155bf480af9f0d73b455e1f52591f7b60b65a242a4adb324aac9fa2a" 
Workload="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--7488f5d565--n2rfl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd6c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372.1.0-n-3f8a739b41", "pod":"calico-apiserver-7488f5d565-n2rfl", "timestamp":"2025-09-11 00:28:23.23381324 +0000 UTC"}, Hostname:"ci-4372.1.0-n-3f8a739b41", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:28:23.304248 containerd[1747]: 2025-09-11 00:28:23.234 [INFO][4779] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:28:23.304248 containerd[1747]: 2025-09-11 00:28:23.234 [INFO][4779] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:28:23.304248 containerd[1747]: 2025-09-11 00:28:23.234 [INFO][4779] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-n-3f8a739b41' Sep 11 00:28:23.304248 containerd[1747]: 2025-09-11 00:28:23.239 [INFO][4779] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.13e45be8155bf480af9f0d73b455e1f52591f7b60b65a242a4adb324aac9fa2a" host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:23.304248 containerd[1747]: 2025-09-11 00:28:23.243 [INFO][4779] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:23.304248 containerd[1747]: 2025-09-11 00:28:23.249 [INFO][4779] ipam/ipam.go 511: Trying affinity for 192.168.87.0/26 host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:23.304248 containerd[1747]: 2025-09-11 00:28:23.250 [INFO][4779] ipam/ipam.go 158: Attempting to load block cidr=192.168.87.0/26 host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:23.304248 containerd[1747]: 2025-09-11 00:28:23.252 [INFO][4779] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.87.0/26 host="ci-4372.1.0-n-3f8a739b41" Sep 11 
00:28:23.305628 containerd[1747]: 2025-09-11 00:28:23.252 [INFO][4779] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.87.0/26 handle="k8s-pod-network.13e45be8155bf480af9f0d73b455e1f52591f7b60b65a242a4adb324aac9fa2a" host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:23.305628 containerd[1747]: 2025-09-11 00:28:23.259 [INFO][4779] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.13e45be8155bf480af9f0d73b455e1f52591f7b60b65a242a4adb324aac9fa2a Sep 11 00:28:23.305628 containerd[1747]: 2025-09-11 00:28:23.266 [INFO][4779] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.87.0/26 handle="k8s-pod-network.13e45be8155bf480af9f0d73b455e1f52591f7b60b65a242a4adb324aac9fa2a" host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:23.305628 containerd[1747]: 2025-09-11 00:28:23.278 [INFO][4779] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.87.3/26] block=192.168.87.0/26 handle="k8s-pod-network.13e45be8155bf480af9f0d73b455e1f52591f7b60b65a242a4adb324aac9fa2a" host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:23.305628 containerd[1747]: 2025-09-11 00:28:23.278 [INFO][4779] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.87.3/26] handle="k8s-pod-network.13e45be8155bf480af9f0d73b455e1f52591f7b60b65a242a4adb324aac9fa2a" host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:23.305628 containerd[1747]: 2025-09-11 00:28:23.278 [INFO][4779] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 11 00:28:23.305628 containerd[1747]: 2025-09-11 00:28:23.278 [INFO][4779] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.3/26] IPv6=[] ContainerID="13e45be8155bf480af9f0d73b455e1f52591f7b60b65a242a4adb324aac9fa2a" HandleID="k8s-pod-network.13e45be8155bf480af9f0d73b455e1f52591f7b60b65a242a4adb324aac9fa2a" Workload="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--7488f5d565--n2rfl-eth0" Sep 11 00:28:23.306035 containerd[1747]: 2025-09-11 00:28:23.281 [INFO][4753] cni-plugin/k8s.go 418: Populated endpoint ContainerID="13e45be8155bf480af9f0d73b455e1f52591f7b60b65a242a4adb324aac9fa2a" Namespace="calico-apiserver" Pod="calico-apiserver-7488f5d565-n2rfl" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--7488f5d565--n2rfl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--7488f5d565--n2rfl-eth0", GenerateName:"calico-apiserver-7488f5d565-", Namespace:"calico-apiserver", SelfLink:"", UID:"50700c67-8777-495e-a0c8-f24ef4739246", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 27, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7488f5d565", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-3f8a739b41", ContainerID:"", Pod:"calico-apiserver-7488f5d565-n2rfl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.87.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali69a695ac666", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:28:23.306280 containerd[1747]: 2025-09-11 00:28:23.281 [INFO][4753] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.3/32] ContainerID="13e45be8155bf480af9f0d73b455e1f52591f7b60b65a242a4adb324aac9fa2a" Namespace="calico-apiserver" Pod="calico-apiserver-7488f5d565-n2rfl" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--7488f5d565--n2rfl-eth0" Sep 11 00:28:23.306280 containerd[1747]: 2025-09-11 00:28:23.281 [INFO][4753] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali69a695ac666 ContainerID="13e45be8155bf480af9f0d73b455e1f52591f7b60b65a242a4adb324aac9fa2a" Namespace="calico-apiserver" Pod="calico-apiserver-7488f5d565-n2rfl" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--7488f5d565--n2rfl-eth0" Sep 11 00:28:23.306280 containerd[1747]: 2025-09-11 00:28:23.285 [INFO][4753] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="13e45be8155bf480af9f0d73b455e1f52591f7b60b65a242a4adb324aac9fa2a" Namespace="calico-apiserver" Pod="calico-apiserver-7488f5d565-n2rfl" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--7488f5d565--n2rfl-eth0" Sep 11 00:28:23.306510 containerd[1747]: 2025-09-11 00:28:23.287 [INFO][4753] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="13e45be8155bf480af9f0d73b455e1f52591f7b60b65a242a4adb324aac9fa2a" Namespace="calico-apiserver" Pod="calico-apiserver-7488f5d565-n2rfl" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--7488f5d565--n2rfl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--7488f5d565--n2rfl-eth0", GenerateName:"calico-apiserver-7488f5d565-", Namespace:"calico-apiserver", SelfLink:"", UID:"50700c67-8777-495e-a0c8-f24ef4739246", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 27, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7488f5d565", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-3f8a739b41", ContainerID:"13e45be8155bf480af9f0d73b455e1f52591f7b60b65a242a4adb324aac9fa2a", Pod:"calico-apiserver-7488f5d565-n2rfl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali69a695ac666", MAC:"56:71:41:12:1d:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:28:23.306770 containerd[1747]: 2025-09-11 00:28:23.302 [INFO][4753] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="13e45be8155bf480af9f0d73b455e1f52591f7b60b65a242a4adb324aac9fa2a" Namespace="calico-apiserver" Pod="calico-apiserver-7488f5d565-n2rfl" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--7488f5d565--n2rfl-eth0" Sep 11 00:28:23.351183 containerd[1747]: time="2025-09-11T00:28:23.350889368Z" level=info 
msg="connecting to shim 13e45be8155bf480af9f0d73b455e1f52591f7b60b65a242a4adb324aac9fa2a" address="unix:///run/containerd/s/9dfe8872f606964839a7f6c31df57fc3774557080faea928ede7c5d397d3e394" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:28:23.378643 systemd[1]: Started cri-containerd-13e45be8155bf480af9f0d73b455e1f52591f7b60b65a242a4adb324aac9fa2a.scope - libcontainer container 13e45be8155bf480af9f0d73b455e1f52591f7b60b65a242a4adb324aac9fa2a. Sep 11 00:28:23.388733 systemd-networkd[1358]: calic5672151f5f: Link UP Sep 11 00:28:23.390661 systemd-networkd[1358]: calic5672151f5f: Gained carrier Sep 11 00:28:23.410750 containerd[1747]: 2025-09-11 00:28:23.192 [INFO][4742] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--s7nnc-eth0 calico-apiserver-75d779f9f- calico-apiserver e4a2905f-a3b1-4fce-89a6-6dcc04516c41 866 0 2025-09-11 00:27:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:75d779f9f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372.1.0-n-3f8a739b41 calico-apiserver-75d779f9f-s7nnc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic5672151f5f [] [] }} ContainerID="bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" Namespace="calico-apiserver" Pod="calico-apiserver-75d779f9f-s7nnc" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--s7nnc-" Sep 11 00:28:23.410750 containerd[1747]: 2025-09-11 00:28:23.192 [INFO][4742] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" Namespace="calico-apiserver" Pod="calico-apiserver-75d779f9f-s7nnc" 
WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--s7nnc-eth0" Sep 11 00:28:23.410750 containerd[1747]: 2025-09-11 00:28:23.238 [INFO][4787] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" HandleID="k8s-pod-network.bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" Workload="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--s7nnc-eth0" Sep 11 00:28:23.410889 containerd[1747]: 2025-09-11 00:28:23.238 [INFO][4787] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" HandleID="k8s-pod-network.bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" Workload="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--s7nnc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5600), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372.1.0-n-3f8a739b41", "pod":"calico-apiserver-75d779f9f-s7nnc", "timestamp":"2025-09-11 00:28:23.238660541 +0000 UTC"}, Hostname:"ci-4372.1.0-n-3f8a739b41", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:28:23.410889 containerd[1747]: 2025-09-11 00:28:23.239 [INFO][4787] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:28:23.410889 containerd[1747]: 2025-09-11 00:28:23.278 [INFO][4787] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:28:23.410889 containerd[1747]: 2025-09-11 00:28:23.278 [INFO][4787] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-n-3f8a739b41' Sep 11 00:28:23.410889 containerd[1747]: 2025-09-11 00:28:23.340 [INFO][4787] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:23.410889 containerd[1747]: 2025-09-11 00:28:23.346 [INFO][4787] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:23.410889 containerd[1747]: 2025-09-11 00:28:23.350 [INFO][4787] ipam/ipam.go 511: Trying affinity for 192.168.87.0/26 host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:23.410889 containerd[1747]: 2025-09-11 00:28:23.352 [INFO][4787] ipam/ipam.go 158: Attempting to load block cidr=192.168.87.0/26 host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:23.410889 containerd[1747]: 2025-09-11 00:28:23.354 [INFO][4787] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.87.0/26 host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:23.411090 containerd[1747]: 2025-09-11 00:28:23.354 [INFO][4787] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.87.0/26 handle="k8s-pod-network.bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:23.411090 containerd[1747]: 2025-09-11 00:28:23.356 [INFO][4787] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d Sep 11 00:28:23.411090 containerd[1747]: 2025-09-11 00:28:23.366 [INFO][4787] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.87.0/26 handle="k8s-pod-network.bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:23.411090 containerd[1747]: 2025-09-11 00:28:23.373 [INFO][4787] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.87.4/26] block=192.168.87.0/26 handle="k8s-pod-network.bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:23.411090 containerd[1747]: 2025-09-11 00:28:23.374 [INFO][4787] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.87.4/26] handle="k8s-pod-network.bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:23.411090 containerd[1747]: 2025-09-11 00:28:23.374 [INFO][4787] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:28:23.411090 containerd[1747]: 2025-09-11 00:28:23.374 [INFO][4787] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.4/26] IPv6=[] ContainerID="bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" HandleID="k8s-pod-network.bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" Workload="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--s7nnc-eth0" Sep 11 00:28:23.411340 containerd[1747]: 2025-09-11 00:28:23.379 [INFO][4742] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" Namespace="calico-apiserver" Pod="calico-apiserver-75d779f9f-s7nnc" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--s7nnc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--s7nnc-eth0", GenerateName:"calico-apiserver-75d779f9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e4a2905f-a3b1-4fce-89a6-6dcc04516c41", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 27, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"75d779f9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-3f8a739b41", ContainerID:"", Pod:"calico-apiserver-75d779f9f-s7nnc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic5672151f5f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:28:23.411444 containerd[1747]: 2025-09-11 00:28:23.379 [INFO][4742] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.4/32] ContainerID="bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" Namespace="calico-apiserver" Pod="calico-apiserver-75d779f9f-s7nnc" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--s7nnc-eth0" Sep 11 00:28:23.411444 containerd[1747]: 2025-09-11 00:28:23.379 [INFO][4742] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic5672151f5f ContainerID="bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" Namespace="calico-apiserver" Pod="calico-apiserver-75d779f9f-s7nnc" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--s7nnc-eth0" Sep 11 00:28:23.411444 containerd[1747]: 2025-09-11 00:28:23.395 [INFO][4742] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" Namespace="calico-apiserver" Pod="calico-apiserver-75d779f9f-s7nnc" 
WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--s7nnc-eth0" Sep 11 00:28:23.411512 containerd[1747]: 2025-09-11 00:28:23.395 [INFO][4742] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" Namespace="calico-apiserver" Pod="calico-apiserver-75d779f9f-s7nnc" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--s7nnc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--s7nnc-eth0", GenerateName:"calico-apiserver-75d779f9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e4a2905f-a3b1-4fce-89a6-6dcc04516c41", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 27, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75d779f9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-3f8a739b41", ContainerID:"bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d", Pod:"calico-apiserver-75d779f9f-s7nnc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic5672151f5f", MAC:"0e:43:48:88:43:5c", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:28:23.411570 containerd[1747]: 2025-09-11 00:28:23.408 [INFO][4742] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" Namespace="calico-apiserver" Pod="calico-apiserver-75d779f9f-s7nnc" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--s7nnc-eth0" Sep 11 00:28:23.438225 containerd[1747]: time="2025-09-11T00:28:23.438202437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7488f5d565-n2rfl,Uid:50700c67-8777-495e-a0c8-f24ef4739246,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"13e45be8155bf480af9f0d73b455e1f52591f7b60b65a242a4adb324aac9fa2a\"" Sep 11 00:28:23.458412 containerd[1747]: time="2025-09-11T00:28:23.458229221Z" level=info msg="connecting to shim bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" address="unix:///run/containerd/s/c92681ab8bad0a3878f636a147ab4b21504e8065f4402a73db958b5790815c86" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:28:23.463938 systemd-networkd[1358]: caliad12baca645: Gained IPv6LL Sep 11 00:28:23.478775 systemd[1]: Started cri-containerd-bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d.scope - libcontainer container bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d. 
Sep 11 00:28:23.490136 systemd-networkd[1358]: cali3edc6671ea0: Link UP Sep 11 00:28:23.491043 systemd-networkd[1358]: cali3edc6671ea0: Gained carrier Sep 11 00:28:23.508419 containerd[1747]: 2025-09-11 00:28:23.210 [INFO][4766] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--bb974-eth0 calico-apiserver-75d779f9f- calico-apiserver e7ff1367-64cd-423f-a7ae-69bcd41ce72a 873 0 2025-09-11 00:27:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:75d779f9f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372.1.0-n-3f8a739b41 calico-apiserver-75d779f9f-bb974 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3edc6671ea0 [] [] }} ContainerID="2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" Namespace="calico-apiserver" Pod="calico-apiserver-75d779f9f-bb974" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--bb974-" Sep 11 00:28:23.508419 containerd[1747]: 2025-09-11 00:28:23.210 [INFO][4766] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" Namespace="calico-apiserver" Pod="calico-apiserver-75d779f9f-bb974" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--bb974-eth0" Sep 11 00:28:23.508419 containerd[1747]: 2025-09-11 00:28:23.241 [INFO][4792] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" HandleID="k8s-pod-network.2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" Workload="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--bb974-eth0" Sep 11 00:28:23.508560 
containerd[1747]: 2025-09-11 00:28:23.241 [INFO][4792] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" HandleID="k8s-pod-network.2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" Workload="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--bb974-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002b6040), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372.1.0-n-3f8a739b41", "pod":"calico-apiserver-75d779f9f-bb974", "timestamp":"2025-09-11 00:28:23.241756864 +0000 UTC"}, Hostname:"ci-4372.1.0-n-3f8a739b41", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:28:23.508560 containerd[1747]: 2025-09-11 00:28:23.242 [INFO][4792] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:28:23.508560 containerd[1747]: 2025-09-11 00:28:23.374 [INFO][4792] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:28:23.508560 containerd[1747]: 2025-09-11 00:28:23.374 [INFO][4792] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-n-3f8a739b41' Sep 11 00:28:23.508560 containerd[1747]: 2025-09-11 00:28:23.440 [INFO][4792] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:23.508560 containerd[1747]: 2025-09-11 00:28:23.447 [INFO][4792] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:23.508560 containerd[1747]: 2025-09-11 00:28:23.453 [INFO][4792] ipam/ipam.go 511: Trying affinity for 192.168.87.0/26 host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:23.508560 containerd[1747]: 2025-09-11 00:28:23.454 [INFO][4792] ipam/ipam.go 158: Attempting to load block cidr=192.168.87.0/26 host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:23.508560 containerd[1747]: 2025-09-11 00:28:23.457 [INFO][4792] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.87.0/26 host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:23.508767 containerd[1747]: 2025-09-11 00:28:23.457 [INFO][4792] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.87.0/26 handle="k8s-pod-network.2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:23.508767 containerd[1747]: 2025-09-11 00:28:23.458 [INFO][4792] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978 Sep 11 00:28:23.508767 containerd[1747]: 2025-09-11 00:28:23.464 [INFO][4792] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.87.0/26 handle="k8s-pod-network.2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:23.508767 containerd[1747]: 2025-09-11 00:28:23.474 [INFO][4792] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.87.5/26] block=192.168.87.0/26 handle="k8s-pod-network.2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:23.508767 containerd[1747]: 2025-09-11 00:28:23.474 [INFO][4792] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.87.5/26] handle="k8s-pod-network.2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:23.508767 containerd[1747]: 2025-09-11 00:28:23.474 [INFO][4792] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:28:23.508767 containerd[1747]: 2025-09-11 00:28:23.474 [INFO][4792] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.5/26] IPv6=[] ContainerID="2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" HandleID="k8s-pod-network.2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" Workload="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--bb974-eth0" Sep 11 00:28:23.508909 containerd[1747]: 2025-09-11 00:28:23.476 [INFO][4766] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" Namespace="calico-apiserver" Pod="calico-apiserver-75d779f9f-bb974" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--bb974-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--bb974-eth0", GenerateName:"calico-apiserver-75d779f9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e7ff1367-64cd-423f-a7ae-69bcd41ce72a", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 27, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"75d779f9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-3f8a739b41", ContainerID:"", Pod:"calico-apiserver-75d779f9f-bb974", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3edc6671ea0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:28:23.508968 containerd[1747]: 2025-09-11 00:28:23.476 [INFO][4766] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.5/32] ContainerID="2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" Namespace="calico-apiserver" Pod="calico-apiserver-75d779f9f-bb974" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--bb974-eth0" Sep 11 00:28:23.508968 containerd[1747]: 2025-09-11 00:28:23.476 [INFO][4766] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3edc6671ea0 ContainerID="2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" Namespace="calico-apiserver" Pod="calico-apiserver-75d779f9f-bb974" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--bb974-eth0" Sep 11 00:28:23.508968 containerd[1747]: 2025-09-11 00:28:23.491 [INFO][4766] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" Namespace="calico-apiserver" Pod="calico-apiserver-75d779f9f-bb974" 
WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--bb974-eth0" Sep 11 00:28:23.509033 containerd[1747]: 2025-09-11 00:28:23.492 [INFO][4766] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" Namespace="calico-apiserver" Pod="calico-apiserver-75d779f9f-bb974" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--bb974-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--bb974-eth0", GenerateName:"calico-apiserver-75d779f9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e7ff1367-64cd-423f-a7ae-69bcd41ce72a", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 27, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75d779f9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-3f8a739b41", ContainerID:"2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978", Pod:"calico-apiserver-75d779f9f-bb974", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3edc6671ea0", MAC:"86:01:9a:c4:75:51", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:28:23.509090 containerd[1747]: 2025-09-11 00:28:23.505 [INFO][4766] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" Namespace="calico-apiserver" Pod="calico-apiserver-75d779f9f-bb974" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--bb974-eth0" Sep 11 00:28:23.535033 containerd[1747]: time="2025-09-11T00:28:23.535014516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75d779f9f-s7nnc,Uid:e4a2905f-a3b1-4fce-89a6-6dcc04516c41,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d\"" Sep 11 00:28:23.545734 containerd[1747]: time="2025-09-11T00:28:23.545707849Z" level=info msg="connecting to shim 2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" address="unix:///run/containerd/s/3b524c75f2967cc9544a5863b337ac814e6e6e6db80241cb52758d0f0441f3f3" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:28:23.559500 systemd[1]: Started cri-containerd-2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978.scope - libcontainer container 2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978. 
Sep 11 00:28:23.592348 containerd[1747]: time="2025-09-11T00:28:23.592316097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75d779f9f-bb974,Uid:e7ff1367-64cd-423f-a7ae-69bcd41ce72a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978\"" Sep 11 00:28:24.125684 containerd[1747]: time="2025-09-11T00:28:24.125639225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9s5qx,Uid:f50c5ad7-5f27-4da4-a84f-88c5aeb97826,Namespace:kube-system,Attempt:0,}" Sep 11 00:28:24.125823 containerd[1747]: time="2025-09-11T00:28:24.125638713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c99b56578-4pbpk,Uid:399d91b5-6a95-457c-b82d-2a4a35d507ad,Namespace:calico-system,Attempt:0,}" Sep 11 00:28:24.227673 systemd-networkd[1358]: cali4b24d7d05e8: Link UP Sep 11 00:28:24.228372 systemd-networkd[1358]: cali4b24d7d05e8: Gained carrier Sep 11 00:28:24.244428 containerd[1747]: 2025-09-11 00:28:24.168 [INFO][4972] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--n--3f8a739b41-k8s-coredns--674b8bbfcf--9s5qx-eth0 coredns-674b8bbfcf- kube-system f50c5ad7-5f27-4da4-a84f-88c5aeb97826 869 0 2025-09-11 00:27:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372.1.0-n-3f8a739b41 coredns-674b8bbfcf-9s5qx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4b24d7d05e8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="02a6066542c67bc73d4960b83c257b104f8148320d4d28e0bf81ecac506cdf4b" Namespace="kube-system" Pod="coredns-674b8bbfcf-9s5qx" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-coredns--674b8bbfcf--9s5qx-" Sep 11 00:28:24.244428 containerd[1747]: 2025-09-11 00:28:24.168 
[INFO][4972] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="02a6066542c67bc73d4960b83c257b104f8148320d4d28e0bf81ecac506cdf4b" Namespace="kube-system" Pod="coredns-674b8bbfcf-9s5qx" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-coredns--674b8bbfcf--9s5qx-eth0" Sep 11 00:28:24.244428 containerd[1747]: 2025-09-11 00:28:24.192 [INFO][4998] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="02a6066542c67bc73d4960b83c257b104f8148320d4d28e0bf81ecac506cdf4b" HandleID="k8s-pod-network.02a6066542c67bc73d4960b83c257b104f8148320d4d28e0bf81ecac506cdf4b" Workload="ci--4372.1.0--n--3f8a739b41-k8s-coredns--674b8bbfcf--9s5qx-eth0" Sep 11 00:28:24.244552 containerd[1747]: 2025-09-11 00:28:24.194 [INFO][4998] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="02a6066542c67bc73d4960b83c257b104f8148320d4d28e0bf81ecac506cdf4b" HandleID="k8s-pod-network.02a6066542c67bc73d4960b83c257b104f8148320d4d28e0bf81ecac506cdf4b" Workload="ci--4372.1.0--n--3f8a739b41-k8s-coredns--674b8bbfcf--9s5qx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5730), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372.1.0-n-3f8a739b41", "pod":"coredns-674b8bbfcf-9s5qx", "timestamp":"2025-09-11 00:28:24.192910506 +0000 UTC"}, Hostname:"ci-4372.1.0-n-3f8a739b41", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:28:24.244552 containerd[1747]: 2025-09-11 00:28:24.195 [INFO][4998] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:28:24.244552 containerd[1747]: 2025-09-11 00:28:24.195 [INFO][4998] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:28:24.244552 containerd[1747]: 2025-09-11 00:28:24.195 [INFO][4998] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-n-3f8a739b41' Sep 11 00:28:24.244552 containerd[1747]: 2025-09-11 00:28:24.201 [INFO][4998] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.02a6066542c67bc73d4960b83c257b104f8148320d4d28e0bf81ecac506cdf4b" host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:24.244552 containerd[1747]: 2025-09-11 00:28:24.204 [INFO][4998] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:24.244552 containerd[1747]: 2025-09-11 00:28:24.207 [INFO][4998] ipam/ipam.go 511: Trying affinity for 192.168.87.0/26 host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:24.244552 containerd[1747]: 2025-09-11 00:28:24.208 [INFO][4998] ipam/ipam.go 158: Attempting to load block cidr=192.168.87.0/26 host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:24.244552 containerd[1747]: 2025-09-11 00:28:24.209 [INFO][4998] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.87.0/26 host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:24.244683 containerd[1747]: 2025-09-11 00:28:24.209 [INFO][4998] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.87.0/26 handle="k8s-pod-network.02a6066542c67bc73d4960b83c257b104f8148320d4d28e0bf81ecac506cdf4b" host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:24.244683 containerd[1747]: 2025-09-11 00:28:24.211 [INFO][4998] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.02a6066542c67bc73d4960b83c257b104f8148320d4d28e0bf81ecac506cdf4b Sep 11 00:28:24.244683 containerd[1747]: 2025-09-11 00:28:24.215 [INFO][4998] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.87.0/26 handle="k8s-pod-network.02a6066542c67bc73d4960b83c257b104f8148320d4d28e0bf81ecac506cdf4b" host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:24.244683 containerd[1747]: 2025-09-11 00:28:24.222 [INFO][4998] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.87.6/26] block=192.168.87.0/26 handle="k8s-pod-network.02a6066542c67bc73d4960b83c257b104f8148320d4d28e0bf81ecac506cdf4b" host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:24.244683 containerd[1747]: 2025-09-11 00:28:24.222 [INFO][4998] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.87.6/26] handle="k8s-pod-network.02a6066542c67bc73d4960b83c257b104f8148320d4d28e0bf81ecac506cdf4b" host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:24.244683 containerd[1747]: 2025-09-11 00:28:24.222 [INFO][4998] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:28:24.244683 containerd[1747]: 2025-09-11 00:28:24.222 [INFO][4998] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.6/26] IPv6=[] ContainerID="02a6066542c67bc73d4960b83c257b104f8148320d4d28e0bf81ecac506cdf4b" HandleID="k8s-pod-network.02a6066542c67bc73d4960b83c257b104f8148320d4d28e0bf81ecac506cdf4b" Workload="ci--4372.1.0--n--3f8a739b41-k8s-coredns--674b8bbfcf--9s5qx-eth0" Sep 11 00:28:24.244774 containerd[1747]: 2025-09-11 00:28:24.225 [INFO][4972] cni-plugin/k8s.go 418: Populated endpoint ContainerID="02a6066542c67bc73d4960b83c257b104f8148320d4d28e0bf81ecac506cdf4b" Namespace="kube-system" Pod="coredns-674b8bbfcf-9s5qx" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-coredns--674b8bbfcf--9s5qx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--3f8a739b41-k8s-coredns--674b8bbfcf--9s5qx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f50c5ad7-5f27-4da4-a84f-88c5aeb97826", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 27, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-3f8a739b41", ContainerID:"", Pod:"coredns-674b8bbfcf-9s5qx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4b24d7d05e8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:28:24.244774 containerd[1747]: 2025-09-11 00:28:24.225 [INFO][4972] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.6/32] ContainerID="02a6066542c67bc73d4960b83c257b104f8148320d4d28e0bf81ecac506cdf4b" Namespace="kube-system" Pod="coredns-674b8bbfcf-9s5qx" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-coredns--674b8bbfcf--9s5qx-eth0" Sep 11 00:28:24.244774 containerd[1747]: 2025-09-11 00:28:24.225 [INFO][4972] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4b24d7d05e8 ContainerID="02a6066542c67bc73d4960b83c257b104f8148320d4d28e0bf81ecac506cdf4b" Namespace="kube-system" Pod="coredns-674b8bbfcf-9s5qx" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-coredns--674b8bbfcf--9s5qx-eth0" Sep 11 00:28:24.244774 containerd[1747]: 2025-09-11 00:28:24.228 [INFO][4972] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="02a6066542c67bc73d4960b83c257b104f8148320d4d28e0bf81ecac506cdf4b" Namespace="kube-system" Pod="coredns-674b8bbfcf-9s5qx" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-coredns--674b8bbfcf--9s5qx-eth0" Sep 11 00:28:24.244774 containerd[1747]: 2025-09-11 00:28:24.229 [INFO][4972] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="02a6066542c67bc73d4960b83c257b104f8148320d4d28e0bf81ecac506cdf4b" Namespace="kube-system" Pod="coredns-674b8bbfcf-9s5qx" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-coredns--674b8bbfcf--9s5qx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--3f8a739b41-k8s-coredns--674b8bbfcf--9s5qx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f50c5ad7-5f27-4da4-a84f-88c5aeb97826", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 27, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-3f8a739b41", ContainerID:"02a6066542c67bc73d4960b83c257b104f8148320d4d28e0bf81ecac506cdf4b", Pod:"coredns-674b8bbfcf-9s5qx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4b24d7d05e8", 
MAC:"26:8f:02:24:77:5a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:28:24.244774 containerd[1747]: 2025-09-11 00:28:24.242 [INFO][4972] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="02a6066542c67bc73d4960b83c257b104f8148320d4d28e0bf81ecac506cdf4b" Namespace="kube-system" Pod="coredns-674b8bbfcf-9s5qx" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-coredns--674b8bbfcf--9s5qx-eth0" Sep 11 00:28:24.290152 containerd[1747]: time="2025-09-11T00:28:24.290108803Z" level=info msg="connecting to shim 02a6066542c67bc73d4960b83c257b104f8148320d4d28e0bf81ecac506cdf4b" address="unix:///run/containerd/s/e5571bf0b560479ce56635b9824829438805e448815a11f66a79e6f2fc35ca03" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:28:24.309543 systemd[1]: Started cri-containerd-02a6066542c67bc73d4960b83c257b104f8148320d4d28e0bf81ecac506cdf4b.scope - libcontainer container 02a6066542c67bc73d4960b83c257b104f8148320d4d28e0bf81ecac506cdf4b. 
Sep 11 00:28:24.346037 systemd-networkd[1358]: caliaab0a635be2: Link UP Sep 11 00:28:24.347732 systemd-networkd[1358]: caliaab0a635be2: Gained carrier Sep 11 00:28:24.367018 containerd[1747]: time="2025-09-11T00:28:24.366984637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9s5qx,Uid:f50c5ad7-5f27-4da4-a84f-88c5aeb97826,Namespace:kube-system,Attempt:0,} returns sandbox id \"02a6066542c67bc73d4960b83c257b104f8148320d4d28e0bf81ecac506cdf4b\"" Sep 11 00:28:24.367697 containerd[1747]: 2025-09-11 00:28:24.173 [INFO][4982] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--n--3f8a739b41-k8s-calico--kube--controllers--6c99b56578--4pbpk-eth0 calico-kube-controllers-6c99b56578- calico-system 399d91b5-6a95-457c-b82d-2a4a35d507ad 862 0 2025-09-11 00:27:57 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6c99b56578 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4372.1.0-n-3f8a739b41 calico-kube-controllers-6c99b56578-4pbpk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliaab0a635be2 [] [] }} ContainerID="0b5bed727c712bbf1c33c4bc4fa04f0f2c0282fcab0e9a2dcde901ef7814d989" Namespace="calico-system" Pod="calico-kube-controllers-6c99b56578-4pbpk" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-calico--kube--controllers--6c99b56578--4pbpk-" Sep 11 00:28:24.367697 containerd[1747]: 2025-09-11 00:28:24.173 [INFO][4982] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0b5bed727c712bbf1c33c4bc4fa04f0f2c0282fcab0e9a2dcde901ef7814d989" Namespace="calico-system" Pod="calico-kube-controllers-6c99b56578-4pbpk" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-calico--kube--controllers--6c99b56578--4pbpk-eth0" Sep 11 00:28:24.367697 
containerd[1747]: 2025-09-11 00:28:24.201 [INFO][5003] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0b5bed727c712bbf1c33c4bc4fa04f0f2c0282fcab0e9a2dcde901ef7814d989" HandleID="k8s-pod-network.0b5bed727c712bbf1c33c4bc4fa04f0f2c0282fcab0e9a2dcde901ef7814d989" Workload="ci--4372.1.0--n--3f8a739b41-k8s-calico--kube--controllers--6c99b56578--4pbpk-eth0" Sep 11 00:28:24.367697 containerd[1747]: 2025-09-11 00:28:24.201 [INFO][5003] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0b5bed727c712bbf1c33c4bc4fa04f0f2c0282fcab0e9a2dcde901ef7814d989" HandleID="k8s-pod-network.0b5bed727c712bbf1c33c4bc4fa04f0f2c0282fcab0e9a2dcde901ef7814d989" Workload="ci--4372.1.0--n--3f8a739b41-k8s-calico--kube--controllers--6c99b56578--4pbpk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd0a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.1.0-n-3f8a739b41", "pod":"calico-kube-controllers-6c99b56578-4pbpk", "timestamp":"2025-09-11 00:28:24.201359304 +0000 UTC"}, Hostname:"ci-4372.1.0-n-3f8a739b41", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:28:24.367697 containerd[1747]: 2025-09-11 00:28:24.201 [INFO][5003] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:28:24.367697 containerd[1747]: 2025-09-11 00:28:24.222 [INFO][5003] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:28:24.367697 containerd[1747]: 2025-09-11 00:28:24.222 [INFO][5003] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-n-3f8a739b41' Sep 11 00:28:24.367697 containerd[1747]: 2025-09-11 00:28:24.303 [INFO][5003] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0b5bed727c712bbf1c33c4bc4fa04f0f2c0282fcab0e9a2dcde901ef7814d989" host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:24.367697 containerd[1747]: 2025-09-11 00:28:24.308 [INFO][5003] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:24.367697 containerd[1747]: 2025-09-11 00:28:24.312 [INFO][5003] ipam/ipam.go 511: Trying affinity for 192.168.87.0/26 host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:24.367697 containerd[1747]: 2025-09-11 00:28:24.313 [INFO][5003] ipam/ipam.go 158: Attempting to load block cidr=192.168.87.0/26 host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:24.367697 containerd[1747]: 2025-09-11 00:28:24.316 [INFO][5003] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.87.0/26 host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:24.367697 containerd[1747]: 2025-09-11 00:28:24.317 [INFO][5003] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.87.0/26 handle="k8s-pod-network.0b5bed727c712bbf1c33c4bc4fa04f0f2c0282fcab0e9a2dcde901ef7814d989" host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:24.367697 containerd[1747]: 2025-09-11 00:28:24.319 [INFO][5003] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0b5bed727c712bbf1c33c4bc4fa04f0f2c0282fcab0e9a2dcde901ef7814d989 Sep 11 00:28:24.367697 containerd[1747]: 2025-09-11 00:28:24.325 [INFO][5003] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.87.0/26 handle="k8s-pod-network.0b5bed727c712bbf1c33c4bc4fa04f0f2c0282fcab0e9a2dcde901ef7814d989" host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:24.367697 containerd[1747]: 2025-09-11 00:28:24.340 [INFO][5003] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.87.7/26] block=192.168.87.0/26 handle="k8s-pod-network.0b5bed727c712bbf1c33c4bc4fa04f0f2c0282fcab0e9a2dcde901ef7814d989" host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:24.367697 containerd[1747]: 2025-09-11 00:28:24.340 [INFO][5003] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.87.7/26] handle="k8s-pod-network.0b5bed727c712bbf1c33c4bc4fa04f0f2c0282fcab0e9a2dcde901ef7814d989" host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:24.367697 containerd[1747]: 2025-09-11 00:28:24.340 [INFO][5003] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:28:24.367697 containerd[1747]: 2025-09-11 00:28:24.340 [INFO][5003] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.7/26] IPv6=[] ContainerID="0b5bed727c712bbf1c33c4bc4fa04f0f2c0282fcab0e9a2dcde901ef7814d989" HandleID="k8s-pod-network.0b5bed727c712bbf1c33c4bc4fa04f0f2c0282fcab0e9a2dcde901ef7814d989" Workload="ci--4372.1.0--n--3f8a739b41-k8s-calico--kube--controllers--6c99b56578--4pbpk-eth0" Sep 11 00:28:24.368893 containerd[1747]: 2025-09-11 00:28:24.342 [INFO][4982] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0b5bed727c712bbf1c33c4bc4fa04f0f2c0282fcab0e9a2dcde901ef7814d989" Namespace="calico-system" Pod="calico-kube-controllers-6c99b56578-4pbpk" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-calico--kube--controllers--6c99b56578--4pbpk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--3f8a739b41-k8s-calico--kube--controllers--6c99b56578--4pbpk-eth0", GenerateName:"calico-kube-controllers-6c99b56578-", Namespace:"calico-system", SelfLink:"", UID:"399d91b5-6a95-457c-b82d-2a4a35d507ad", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 27, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c99b56578", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-3f8a739b41", ContainerID:"", Pod:"calico-kube-controllers-6c99b56578-4pbpk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.87.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaab0a635be2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:28:24.368893 containerd[1747]: 2025-09-11 00:28:24.342 [INFO][4982] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.7/32] ContainerID="0b5bed727c712bbf1c33c4bc4fa04f0f2c0282fcab0e9a2dcde901ef7814d989" Namespace="calico-system" Pod="calico-kube-controllers-6c99b56578-4pbpk" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-calico--kube--controllers--6c99b56578--4pbpk-eth0" Sep 11 00:28:24.368893 containerd[1747]: 2025-09-11 00:28:24.342 [INFO][4982] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaab0a635be2 ContainerID="0b5bed727c712bbf1c33c4bc4fa04f0f2c0282fcab0e9a2dcde901ef7814d989" Namespace="calico-system" Pod="calico-kube-controllers-6c99b56578-4pbpk" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-calico--kube--controllers--6c99b56578--4pbpk-eth0" Sep 11 00:28:24.368893 containerd[1747]: 2025-09-11 00:28:24.349 [INFO][4982] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="0b5bed727c712bbf1c33c4bc4fa04f0f2c0282fcab0e9a2dcde901ef7814d989" Namespace="calico-system" Pod="calico-kube-controllers-6c99b56578-4pbpk" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-calico--kube--controllers--6c99b56578--4pbpk-eth0" Sep 11 00:28:24.368893 containerd[1747]: 2025-09-11 00:28:24.350 [INFO][4982] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0b5bed727c712bbf1c33c4bc4fa04f0f2c0282fcab0e9a2dcde901ef7814d989" Namespace="calico-system" Pod="calico-kube-controllers-6c99b56578-4pbpk" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-calico--kube--controllers--6c99b56578--4pbpk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--3f8a739b41-k8s-calico--kube--controllers--6c99b56578--4pbpk-eth0", GenerateName:"calico-kube-controllers-6c99b56578-", Namespace:"calico-system", SelfLink:"", UID:"399d91b5-6a95-457c-b82d-2a4a35d507ad", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 27, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c99b56578", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-3f8a739b41", ContainerID:"0b5bed727c712bbf1c33c4bc4fa04f0f2c0282fcab0e9a2dcde901ef7814d989", Pod:"calico-kube-controllers-6c99b56578-4pbpk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.87.7/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaab0a635be2", MAC:"7a:9c:08:8e:df:5d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:28:24.368893 containerd[1747]: 2025-09-11 00:28:24.364 [INFO][4982] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0b5bed727c712bbf1c33c4bc4fa04f0f2c0282fcab0e9a2dcde901ef7814d989" Namespace="calico-system" Pod="calico-kube-controllers-6c99b56578-4pbpk" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-calico--kube--controllers--6c99b56578--4pbpk-eth0" Sep 11 00:28:24.376077 containerd[1747]: time="2025-09-11T00:28:24.376014931Z" level=info msg="CreateContainer within sandbox \"02a6066542c67bc73d4960b83c257b104f8148320d4d28e0bf81ecac506cdf4b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 11 00:28:24.409362 containerd[1747]: time="2025-09-11T00:28:24.408732445Z" level=info msg="Container 09b80a1d6fceb3aceb4217c752d8e9df26cd1b5005a8f810bcbad52be8924329: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:28:24.412115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount259241724.mount: Deactivated successfully. 
Sep 11 00:28:24.422460 containerd[1747]: time="2025-09-11T00:28:24.422436819Z" level=info msg="connecting to shim 0b5bed727c712bbf1c33c4bc4fa04f0f2c0282fcab0e9a2dcde901ef7814d989" address="unix:///run/containerd/s/0d75a9f785270d565cd3327afc03203a9dc087bb35e84b35d48dc7bff7d8ae49" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:28:24.424690 systemd-networkd[1358]: calic5672151f5f: Gained IPv6LL Sep 11 00:28:24.431408 containerd[1747]: time="2025-09-11T00:28:24.431286467Z" level=info msg="CreateContainer within sandbox \"02a6066542c67bc73d4960b83c257b104f8148320d4d28e0bf81ecac506cdf4b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"09b80a1d6fceb3aceb4217c752d8e9df26cd1b5005a8f810bcbad52be8924329\"" Sep 11 00:28:24.431854 containerd[1747]: time="2025-09-11T00:28:24.431835814Z" level=info msg="StartContainer for \"09b80a1d6fceb3aceb4217c752d8e9df26cd1b5005a8f810bcbad52be8924329\"" Sep 11 00:28:24.432833 containerd[1747]: time="2025-09-11T00:28:24.432764253Z" level=info msg="connecting to shim 09b80a1d6fceb3aceb4217c752d8e9df26cd1b5005a8f810bcbad52be8924329" address="unix:///run/containerd/s/e5571bf0b560479ce56635b9824829438805e448815a11f66a79e6f2fc35ca03" protocol=ttrpc version=3 Sep 11 00:28:24.438664 systemd[1]: Started cri-containerd-0b5bed727c712bbf1c33c4bc4fa04f0f2c0282fcab0e9a2dcde901ef7814d989.scope - libcontainer container 0b5bed727c712bbf1c33c4bc4fa04f0f2c0282fcab0e9a2dcde901ef7814d989. Sep 11 00:28:24.452529 systemd[1]: Started cri-containerd-09b80a1d6fceb3aceb4217c752d8e9df26cd1b5005a8f810bcbad52be8924329.scope - libcontainer container 09b80a1d6fceb3aceb4217c752d8e9df26cd1b5005a8f810bcbad52be8924329. 
Sep 11 00:28:24.483775 containerd[1747]: time="2025-09-11T00:28:24.483751357Z" level=info msg="StartContainer for \"09b80a1d6fceb3aceb4217c752d8e9df26cd1b5005a8f810bcbad52be8924329\" returns successfully" Sep 11 00:28:24.521467 containerd[1747]: time="2025-09-11T00:28:24.520915714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c99b56578-4pbpk,Uid:399d91b5-6a95-457c-b82d-2a4a35d507ad,Namespace:calico-system,Attempt:0,} returns sandbox id \"0b5bed727c712bbf1c33c4bc4fa04f0f2c0282fcab0e9a2dcde901ef7814d989\"" Sep 11 00:28:24.700705 containerd[1747]: time="2025-09-11T00:28:24.700639615Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:24.703309 containerd[1747]: time="2025-09-11T00:28:24.703277127Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 11 00:28:24.706473 containerd[1747]: time="2025-09-11T00:28:24.706434144Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:24.709966 containerd[1747]: time="2025-09-11T00:28:24.709769304Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:24.710394 containerd[1747]: time="2025-09-11T00:28:24.710185766Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.827287892s" Sep 11 00:28:24.710394 containerd[1747]: 
time="2025-09-11T00:28:24.710212884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 11 00:28:24.712401 containerd[1747]: time="2025-09-11T00:28:24.711449122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 11 00:28:24.717227 containerd[1747]: time="2025-09-11T00:28:24.717210435Z" level=info msg="CreateContainer within sandbox \"a5775b106b8f07546e060b453b3e18338fbf53bcd1c4d8e1ba7ce23cfe0cb631\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 11 00:28:24.733426 containerd[1747]: time="2025-09-11T00:28:24.733404699Z" level=info msg="Container 02b899abddd6ee621b1bad69fac26da8038d344f38cf61374d6b167fb7895292: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:28:24.753239 containerd[1747]: time="2025-09-11T00:28:24.753218709Z" level=info msg="CreateContainer within sandbox \"a5775b106b8f07546e060b453b3e18338fbf53bcd1c4d8e1ba7ce23cfe0cb631\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"02b899abddd6ee621b1bad69fac26da8038d344f38cf61374d6b167fb7895292\"" Sep 11 00:28:24.754219 containerd[1747]: time="2025-09-11T00:28:24.754188694Z" level=info msg="StartContainer for \"02b899abddd6ee621b1bad69fac26da8038d344f38cf61374d6b167fb7895292\"" Sep 11 00:28:24.755269 containerd[1747]: time="2025-09-11T00:28:24.755247165Z" level=info msg="connecting to shim 02b899abddd6ee621b1bad69fac26da8038d344f38cf61374d6b167fb7895292" address="unix:///run/containerd/s/20a54408b0b1ff1c30dfffae29ac4f4422c611c7078979dd00cdcfffd6e7a40d" protocol=ttrpc version=3 Sep 11 00:28:24.770533 systemd[1]: Started cri-containerd-02b899abddd6ee621b1bad69fac26da8038d344f38cf61374d6b167fb7895292.scope - libcontainer container 02b899abddd6ee621b1bad69fac26da8038d344f38cf61374d6b167fb7895292. 
Sep 11 00:28:24.801378 containerd[1747]: time="2025-09-11T00:28:24.801315905Z" level=info msg="StartContainer for \"02b899abddd6ee621b1bad69fac26da8038d344f38cf61374d6b167fb7895292\" returns successfully" Sep 11 00:28:24.936521 systemd-networkd[1358]: cali3edc6671ea0: Gained IPv6LL Sep 11 00:28:25.063553 systemd-networkd[1358]: cali69a695ac666: Gained IPv6LL Sep 11 00:28:25.126028 containerd[1747]: time="2025-09-11T00:28:25.125968053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mkpsv,Uid:7896f4f6-30fb-414f-8fc9-01181638003f,Namespace:kube-system,Attempt:0,}" Sep 11 00:28:25.126181 containerd[1747]: time="2025-09-11T00:28:25.125967845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-h8dt2,Uid:9e4921f3-2a97-4fd2-b436-8e2b54a05b3f,Namespace:calico-system,Attempt:0,}" Sep 11 00:28:25.233258 systemd-networkd[1358]: cali681441ca811: Link UP Sep 11 00:28:25.235365 systemd-networkd[1358]: cali681441ca811: Gained carrier Sep 11 00:28:25.250686 containerd[1747]: 2025-09-11 00:28:25.170 [INFO][5192] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--n--3f8a739b41-k8s-goldmane--54d579b49d--h8dt2-eth0 goldmane-54d579b49d- calico-system 9e4921f3-2a97-4fd2-b436-8e2b54a05b3f 870 0 2025-09-11 00:27:56 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4372.1.0-n-3f8a739b41 goldmane-54d579b49d-h8dt2 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali681441ca811 [] [] }} ContainerID="dafdf7e10b0687292ed0841e10ad820b462027d5c82f369d3209029a8c9e962f" Namespace="calico-system" Pod="goldmane-54d579b49d-h8dt2" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-goldmane--54d579b49d--h8dt2-" Sep 11 00:28:25.250686 containerd[1747]: 2025-09-11 00:28:25.170 
[INFO][5192] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dafdf7e10b0687292ed0841e10ad820b462027d5c82f369d3209029a8c9e962f" Namespace="calico-system" Pod="goldmane-54d579b49d-h8dt2" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-goldmane--54d579b49d--h8dt2-eth0" Sep 11 00:28:25.250686 containerd[1747]: 2025-09-11 00:28:25.200 [INFO][5216] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dafdf7e10b0687292ed0841e10ad820b462027d5c82f369d3209029a8c9e962f" HandleID="k8s-pod-network.dafdf7e10b0687292ed0841e10ad820b462027d5c82f369d3209029a8c9e962f" Workload="ci--4372.1.0--n--3f8a739b41-k8s-goldmane--54d579b49d--h8dt2-eth0" Sep 11 00:28:25.250686 containerd[1747]: 2025-09-11 00:28:25.202 [INFO][5216] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dafdf7e10b0687292ed0841e10ad820b462027d5c82f369d3209029a8c9e962f" HandleID="k8s-pod-network.dafdf7e10b0687292ed0841e10ad820b462027d5c82f369d3209029a8c9e962f" Workload="ci--4372.1.0--n--3f8a739b41-k8s-goldmane--54d579b49d--h8dt2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5600), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.1.0-n-3f8a739b41", "pod":"goldmane-54d579b49d-h8dt2", "timestamp":"2025-09-11 00:28:25.20069394 +0000 UTC"}, Hostname:"ci-4372.1.0-n-3f8a739b41", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:28:25.250686 containerd[1747]: 2025-09-11 00:28:25.202 [INFO][5216] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:28:25.250686 containerd[1747]: 2025-09-11 00:28:25.202 [INFO][5216] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:28:25.250686 containerd[1747]: 2025-09-11 00:28:25.202 [INFO][5216] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-n-3f8a739b41' Sep 11 00:28:25.250686 containerd[1747]: 2025-09-11 00:28:25.207 [INFO][5216] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dafdf7e10b0687292ed0841e10ad820b462027d5c82f369d3209029a8c9e962f" host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:25.250686 containerd[1747]: 2025-09-11 00:28:25.209 [INFO][5216] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:25.250686 containerd[1747]: 2025-09-11 00:28:25.212 [INFO][5216] ipam/ipam.go 511: Trying affinity for 192.168.87.0/26 host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:25.250686 containerd[1747]: 2025-09-11 00:28:25.213 [INFO][5216] ipam/ipam.go 158: Attempting to load block cidr=192.168.87.0/26 host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:25.250686 containerd[1747]: 2025-09-11 00:28:25.215 [INFO][5216] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.87.0/26 host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:25.250686 containerd[1747]: 2025-09-11 00:28:25.215 [INFO][5216] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.87.0/26 handle="k8s-pod-network.dafdf7e10b0687292ed0841e10ad820b462027d5c82f369d3209029a8c9e962f" host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:25.250686 containerd[1747]: 2025-09-11 00:28:25.216 [INFO][5216] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dafdf7e10b0687292ed0841e10ad820b462027d5c82f369d3209029a8c9e962f Sep 11 00:28:25.250686 containerd[1747]: 2025-09-11 00:28:25.221 [INFO][5216] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.87.0/26 handle="k8s-pod-network.dafdf7e10b0687292ed0841e10ad820b462027d5c82f369d3209029a8c9e962f" host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:25.250686 containerd[1747]: 2025-09-11 00:28:25.227 [INFO][5216] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.87.8/26] block=192.168.87.0/26 handle="k8s-pod-network.dafdf7e10b0687292ed0841e10ad820b462027d5c82f369d3209029a8c9e962f" host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:25.250686 containerd[1747]: 2025-09-11 00:28:25.227 [INFO][5216] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.87.8/26] handle="k8s-pod-network.dafdf7e10b0687292ed0841e10ad820b462027d5c82f369d3209029a8c9e962f" host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:25.250686 containerd[1747]: 2025-09-11 00:28:25.227 [INFO][5216] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:28:25.250686 containerd[1747]: 2025-09-11 00:28:25.227 [INFO][5216] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.8/26] IPv6=[] ContainerID="dafdf7e10b0687292ed0841e10ad820b462027d5c82f369d3209029a8c9e962f" HandleID="k8s-pod-network.dafdf7e10b0687292ed0841e10ad820b462027d5c82f369d3209029a8c9e962f" Workload="ci--4372.1.0--n--3f8a739b41-k8s-goldmane--54d579b49d--h8dt2-eth0" Sep 11 00:28:25.251493 containerd[1747]: 2025-09-11 00:28:25.230 [INFO][5192] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dafdf7e10b0687292ed0841e10ad820b462027d5c82f369d3209029a8c9e962f" Namespace="calico-system" Pod="goldmane-54d579b49d-h8dt2" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-goldmane--54d579b49d--h8dt2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--3f8a739b41-k8s-goldmane--54d579b49d--h8dt2-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"9e4921f3-2a97-4fd2-b436-8e2b54a05b3f", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 27, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-3f8a739b41", ContainerID:"", Pod:"goldmane-54d579b49d-h8dt2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.87.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali681441ca811", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:28:25.251493 containerd[1747]: 2025-09-11 00:28:25.230 [INFO][5192] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.8/32] ContainerID="dafdf7e10b0687292ed0841e10ad820b462027d5c82f369d3209029a8c9e962f" Namespace="calico-system" Pod="goldmane-54d579b49d-h8dt2" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-goldmane--54d579b49d--h8dt2-eth0" Sep 11 00:28:25.251493 containerd[1747]: 2025-09-11 00:28:25.230 [INFO][5192] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali681441ca811 ContainerID="dafdf7e10b0687292ed0841e10ad820b462027d5c82f369d3209029a8c9e962f" Namespace="calico-system" Pod="goldmane-54d579b49d-h8dt2" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-goldmane--54d579b49d--h8dt2-eth0" Sep 11 00:28:25.251493 containerd[1747]: 2025-09-11 00:28:25.233 [INFO][5192] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dafdf7e10b0687292ed0841e10ad820b462027d5c82f369d3209029a8c9e962f" Namespace="calico-system" Pod="goldmane-54d579b49d-h8dt2" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-goldmane--54d579b49d--h8dt2-eth0" Sep 11 00:28:25.251493 containerd[1747]: 2025-09-11 00:28:25.236 [INFO][5192] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dafdf7e10b0687292ed0841e10ad820b462027d5c82f369d3209029a8c9e962f" Namespace="calico-system" Pod="goldmane-54d579b49d-h8dt2" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-goldmane--54d579b49d--h8dt2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--3f8a739b41-k8s-goldmane--54d579b49d--h8dt2-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"9e4921f3-2a97-4fd2-b436-8e2b54a05b3f", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 27, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-3f8a739b41", ContainerID:"dafdf7e10b0687292ed0841e10ad820b462027d5c82f369d3209029a8c9e962f", Pod:"goldmane-54d579b49d-h8dt2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.87.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali681441ca811", MAC:"aa:aa:e5:4a:58:ae", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:28:25.251493 containerd[1747]: 2025-09-11 00:28:25.247 [INFO][5192] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="dafdf7e10b0687292ed0841e10ad820b462027d5c82f369d3209029a8c9e962f" Namespace="calico-system" Pod="goldmane-54d579b49d-h8dt2" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-goldmane--54d579b49d--h8dt2-eth0" Sep 11 00:28:25.278194 kubelet[3157]: I0911 00:28:25.278132 3157 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-9s5qx" podStartSLOduration=39.278109958 podStartE2EDuration="39.278109958s" podCreationTimestamp="2025-09-11 00:27:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:28:25.277567602 +0000 UTC m=+46.237393467" watchObservedRunningTime="2025-09-11 00:28:25.278109958 +0000 UTC m=+46.237935801" Sep 11 00:28:25.294890 containerd[1747]: time="2025-09-11T00:28:25.294866387Z" level=info msg="connecting to shim dafdf7e10b0687292ed0841e10ad820b462027d5c82f369d3209029a8c9e962f" address="unix:///run/containerd/s/3b83042e1d6d753f2b5885ee25d0906d17634d70d1c19ebd299aa1334d898e3c" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:28:25.325896 systemd[1]: Started cri-containerd-dafdf7e10b0687292ed0841e10ad820b462027d5c82f369d3209029a8c9e962f.scope - libcontainer container dafdf7e10b0687292ed0841e10ad820b462027d5c82f369d3209029a8c9e962f. 
Sep 11 00:28:25.371165 systemd-networkd[1358]: califa858ae0d0d: Link UP Sep 11 00:28:25.372671 systemd-networkd[1358]: califa858ae0d0d: Gained carrier Sep 11 00:28:25.384901 containerd[1747]: 2025-09-11 00:28:25.179 [INFO][5201] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--n--3f8a739b41-k8s-coredns--674b8bbfcf--mkpsv-eth0 coredns-674b8bbfcf- kube-system 7896f4f6-30fb-414f-8fc9-01181638003f 865 0 2025-09-11 00:27:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372.1.0-n-3f8a739b41 coredns-674b8bbfcf-mkpsv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califa858ae0d0d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e8b7a9f1eb238bfb7acf3c3b7ed58f09108ad05814062937fafc5bfb2ae48023" Namespace="kube-system" Pod="coredns-674b8bbfcf-mkpsv" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-coredns--674b8bbfcf--mkpsv-" Sep 11 00:28:25.384901 containerd[1747]: 2025-09-11 00:28:25.179 [INFO][5201] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e8b7a9f1eb238bfb7acf3c3b7ed58f09108ad05814062937fafc5bfb2ae48023" Namespace="kube-system" Pod="coredns-674b8bbfcf-mkpsv" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-coredns--674b8bbfcf--mkpsv-eth0" Sep 11 00:28:25.384901 containerd[1747]: 2025-09-11 00:28:25.203 [INFO][5221] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e8b7a9f1eb238bfb7acf3c3b7ed58f09108ad05814062937fafc5bfb2ae48023" HandleID="k8s-pod-network.e8b7a9f1eb238bfb7acf3c3b7ed58f09108ad05814062937fafc5bfb2ae48023" Workload="ci--4372.1.0--n--3f8a739b41-k8s-coredns--674b8bbfcf--mkpsv-eth0" Sep 11 00:28:25.384901 containerd[1747]: 2025-09-11 00:28:25.204 [INFO][5221] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="e8b7a9f1eb238bfb7acf3c3b7ed58f09108ad05814062937fafc5bfb2ae48023" HandleID="k8s-pod-network.e8b7a9f1eb238bfb7acf3c3b7ed58f09108ad05814062937fafc5bfb2ae48023" Workload="ci--4372.1.0--n--3f8a739b41-k8s-coredns--674b8bbfcf--mkpsv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372.1.0-n-3f8a739b41", "pod":"coredns-674b8bbfcf-mkpsv", "timestamp":"2025-09-11 00:28:25.203656294 +0000 UTC"}, Hostname:"ci-4372.1.0-n-3f8a739b41", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:28:25.384901 containerd[1747]: 2025-09-11 00:28:25.204 [INFO][5221] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:28:25.384901 containerd[1747]: 2025-09-11 00:28:25.228 [INFO][5221] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:28:25.384901 containerd[1747]: 2025-09-11 00:28:25.228 [INFO][5221] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-n-3f8a739b41' Sep 11 00:28:25.384901 containerd[1747]: 2025-09-11 00:28:25.310 [INFO][5221] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e8b7a9f1eb238bfb7acf3c3b7ed58f09108ad05814062937fafc5bfb2ae48023" host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:25.384901 containerd[1747]: 2025-09-11 00:28:25.321 [INFO][5221] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:25.384901 containerd[1747]: 2025-09-11 00:28:25.329 [INFO][5221] ipam/ipam.go 511: Trying affinity for 192.168.87.0/26 host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:25.384901 containerd[1747]: 2025-09-11 00:28:25.335 [INFO][5221] ipam/ipam.go 158: Attempting to load block cidr=192.168.87.0/26 host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:25.384901 containerd[1747]: 2025-09-11 00:28:25.342 [INFO][5221] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.87.0/26 host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:25.384901 containerd[1747]: 2025-09-11 00:28:25.342 [INFO][5221] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.87.0/26 handle="k8s-pod-network.e8b7a9f1eb238bfb7acf3c3b7ed58f09108ad05814062937fafc5bfb2ae48023" host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:25.384901 containerd[1747]: 2025-09-11 00:28:25.343 [INFO][5221] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e8b7a9f1eb238bfb7acf3c3b7ed58f09108ad05814062937fafc5bfb2ae48023 Sep 11 00:28:25.384901 containerd[1747]: 2025-09-11 00:28:25.348 [INFO][5221] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.87.0/26 handle="k8s-pod-network.e8b7a9f1eb238bfb7acf3c3b7ed58f09108ad05814062937fafc5bfb2ae48023" host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:25.384901 containerd[1747]: 2025-09-11 00:28:25.357 [INFO][5221] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.87.9/26] block=192.168.87.0/26 handle="k8s-pod-network.e8b7a9f1eb238bfb7acf3c3b7ed58f09108ad05814062937fafc5bfb2ae48023" host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:25.384901 containerd[1747]: 2025-09-11 00:28:25.357 [INFO][5221] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.87.9/26] handle="k8s-pod-network.e8b7a9f1eb238bfb7acf3c3b7ed58f09108ad05814062937fafc5bfb2ae48023" host="ci-4372.1.0-n-3f8a739b41" Sep 11 00:28:25.384901 containerd[1747]: 2025-09-11 00:28:25.357 [INFO][5221] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:28:25.384901 containerd[1747]: 2025-09-11 00:28:25.357 [INFO][5221] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.9/26] IPv6=[] ContainerID="e8b7a9f1eb238bfb7acf3c3b7ed58f09108ad05814062937fafc5bfb2ae48023" HandleID="k8s-pod-network.e8b7a9f1eb238bfb7acf3c3b7ed58f09108ad05814062937fafc5bfb2ae48023" Workload="ci--4372.1.0--n--3f8a739b41-k8s-coredns--674b8bbfcf--mkpsv-eth0" Sep 11 00:28:25.386822 containerd[1747]: 2025-09-11 00:28:25.359 [INFO][5201] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e8b7a9f1eb238bfb7acf3c3b7ed58f09108ad05814062937fafc5bfb2ae48023" Namespace="kube-system" Pod="coredns-674b8bbfcf-mkpsv" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-coredns--674b8bbfcf--mkpsv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--3f8a739b41-k8s-coredns--674b8bbfcf--mkpsv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7896f4f6-30fb-414f-8fc9-01181638003f", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 27, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-3f8a739b41", ContainerID:"", Pod:"coredns-674b8bbfcf-mkpsv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa858ae0d0d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:28:25.386822 containerd[1747]: 2025-09-11 00:28:25.359 [INFO][5201] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.9/32] ContainerID="e8b7a9f1eb238bfb7acf3c3b7ed58f09108ad05814062937fafc5bfb2ae48023" Namespace="kube-system" Pod="coredns-674b8bbfcf-mkpsv" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-coredns--674b8bbfcf--mkpsv-eth0" Sep 11 00:28:25.386822 containerd[1747]: 2025-09-11 00:28:25.359 [INFO][5201] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califa858ae0d0d ContainerID="e8b7a9f1eb238bfb7acf3c3b7ed58f09108ad05814062937fafc5bfb2ae48023" Namespace="kube-system" Pod="coredns-674b8bbfcf-mkpsv" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-coredns--674b8bbfcf--mkpsv-eth0" Sep 11 00:28:25.386822 containerd[1747]: 2025-09-11 00:28:25.371 [INFO][5201] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e8b7a9f1eb238bfb7acf3c3b7ed58f09108ad05814062937fafc5bfb2ae48023" Namespace="kube-system" Pod="coredns-674b8bbfcf-mkpsv" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-coredns--674b8bbfcf--mkpsv-eth0" Sep 11 00:28:25.386822 containerd[1747]: 2025-09-11 00:28:25.371 [INFO][5201] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e8b7a9f1eb238bfb7acf3c3b7ed58f09108ad05814062937fafc5bfb2ae48023" Namespace="kube-system" Pod="coredns-674b8bbfcf-mkpsv" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-coredns--674b8bbfcf--mkpsv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--3f8a739b41-k8s-coredns--674b8bbfcf--mkpsv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7896f4f6-30fb-414f-8fc9-01181638003f", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 27, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-3f8a739b41", ContainerID:"e8b7a9f1eb238bfb7acf3c3b7ed58f09108ad05814062937fafc5bfb2ae48023", Pod:"coredns-674b8bbfcf-mkpsv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa858ae0d0d", 
MAC:"2e:97:d7:1e:63:bc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:28:25.386822 containerd[1747]: 2025-09-11 00:28:25.382 [INFO][5201] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e8b7a9f1eb238bfb7acf3c3b7ed58f09108ad05814062937fafc5bfb2ae48023" Namespace="kube-system" Pod="coredns-674b8bbfcf-mkpsv" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-coredns--674b8bbfcf--mkpsv-eth0" Sep 11 00:28:25.438379 containerd[1747]: time="2025-09-11T00:28:25.438353103Z" level=info msg="connecting to shim e8b7a9f1eb238bfb7acf3c3b7ed58f09108ad05814062937fafc5bfb2ae48023" address="unix:///run/containerd/s/e6849f177243a6a06916da79e8b94c6d94e7a67ac5f3df9a0b279f8b78f30400" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:28:25.443185 containerd[1747]: time="2025-09-11T00:28:25.443164141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-h8dt2,Uid:9e4921f3-2a97-4fd2-b436-8e2b54a05b3f,Namespace:calico-system,Attempt:0,} returns sandbox id \"dafdf7e10b0687292ed0841e10ad820b462027d5c82f369d3209029a8c9e962f\"" Sep 11 00:28:25.447530 systemd-networkd[1358]: cali4b24d7d05e8: Gained IPv6LL Sep 11 00:28:25.469525 systemd[1]: Started cri-containerd-e8b7a9f1eb238bfb7acf3c3b7ed58f09108ad05814062937fafc5bfb2ae48023.scope - libcontainer container e8b7a9f1eb238bfb7acf3c3b7ed58f09108ad05814062937fafc5bfb2ae48023. 
Sep 11 00:28:25.505339 containerd[1747]: time="2025-09-11T00:28:25.505323377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mkpsv,Uid:7896f4f6-30fb-414f-8fc9-01181638003f,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8b7a9f1eb238bfb7acf3c3b7ed58f09108ad05814062937fafc5bfb2ae48023\"" Sep 11 00:28:25.513715 containerd[1747]: time="2025-09-11T00:28:25.513544646Z" level=info msg="CreateContainer within sandbox \"e8b7a9f1eb238bfb7acf3c3b7ed58f09108ad05814062937fafc5bfb2ae48023\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 11 00:28:25.533931 containerd[1747]: time="2025-09-11T00:28:25.533915510Z" level=info msg="Container 8c3d98a8da76b45b276e870c26d2a66023aa5977144c7ade102454864d907f67: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:28:25.544924 containerd[1747]: time="2025-09-11T00:28:25.544897705Z" level=info msg="CreateContainer within sandbox \"e8b7a9f1eb238bfb7acf3c3b7ed58f09108ad05814062937fafc5bfb2ae48023\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8c3d98a8da76b45b276e870c26d2a66023aa5977144c7ade102454864d907f67\"" Sep 11 00:28:25.548307 containerd[1747]: time="2025-09-11T00:28:25.548070600Z" level=info msg="StartContainer for \"8c3d98a8da76b45b276e870c26d2a66023aa5977144c7ade102454864d907f67\"" Sep 11 00:28:25.549483 containerd[1747]: time="2025-09-11T00:28:25.549458813Z" level=info msg="connecting to shim 8c3d98a8da76b45b276e870c26d2a66023aa5977144c7ade102454864d907f67" address="unix:///run/containerd/s/e6849f177243a6a06916da79e8b94c6d94e7a67ac5f3df9a0b279f8b78f30400" protocol=ttrpc version=3 Sep 11 00:28:25.565504 systemd[1]: Started cri-containerd-8c3d98a8da76b45b276e870c26d2a66023aa5977144c7ade102454864d907f67.scope - libcontainer container 8c3d98a8da76b45b276e870c26d2a66023aa5977144c7ade102454864d907f67. 
Sep 11 00:28:25.586820 containerd[1747]: time="2025-09-11T00:28:25.586650295Z" level=info msg="StartContainer for \"8c3d98a8da76b45b276e870c26d2a66023aa5977144c7ade102454864d907f67\" returns successfully" Sep 11 00:28:26.087570 systemd-networkd[1358]: caliaab0a635be2: Gained IPv6LL Sep 11 00:28:26.284186 kubelet[3157]: I0911 00:28:26.284136 3157 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-mkpsv" podStartSLOduration=40.284119249 podStartE2EDuration="40.284119249s" podCreationTimestamp="2025-09-11 00:27:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:28:26.283124232 +0000 UTC m=+47.242950075" watchObservedRunningTime="2025-09-11 00:28:26.284119249 +0000 UTC m=+47.243945090" Sep 11 00:28:26.983994 systemd-networkd[1358]: cali681441ca811: Gained IPv6LL Sep 11 00:28:27.111785 systemd-networkd[1358]: califa858ae0d0d: Gained IPv6LL Sep 11 00:28:27.754482 containerd[1747]: time="2025-09-11T00:28:27.754448663Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:27.756912 containerd[1747]: time="2025-09-11T00:28:27.756880824Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 11 00:28:27.759722 containerd[1747]: time="2025-09-11T00:28:27.759673477Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:27.763228 containerd[1747]: time="2025-09-11T00:28:27.763187531Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:27.763822 
containerd[1747]: time="2025-09-11T00:28:27.763474581Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.051248897s" Sep 11 00:28:27.763822 containerd[1747]: time="2025-09-11T00:28:27.763500986Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 11 00:28:27.764339 containerd[1747]: time="2025-09-11T00:28:27.764324422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 11 00:28:27.769572 containerd[1747]: time="2025-09-11T00:28:27.769551199Z" level=info msg="CreateContainer within sandbox \"13e45be8155bf480af9f0d73b455e1f52591f7b60b65a242a4adb324aac9fa2a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 11 00:28:27.788182 containerd[1747]: time="2025-09-11T00:28:27.784715382Z" level=info msg="Container 9bfc430a50a8aa7aa656db04f685979df8fbde9f1c8df8b3d3773a5eb3a2ec8b: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:28:27.801657 containerd[1747]: time="2025-09-11T00:28:27.801634472Z" level=info msg="CreateContainer within sandbox \"13e45be8155bf480af9f0d73b455e1f52591f7b60b65a242a4adb324aac9fa2a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9bfc430a50a8aa7aa656db04f685979df8fbde9f1c8df8b3d3773a5eb3a2ec8b\"" Sep 11 00:28:27.802042 containerd[1747]: time="2025-09-11T00:28:27.802022408Z" level=info msg="StartContainer for \"9bfc430a50a8aa7aa656db04f685979df8fbde9f1c8df8b3d3773a5eb3a2ec8b\"" Sep 11 00:28:27.803104 containerd[1747]: time="2025-09-11T00:28:27.803078032Z" level=info msg="connecting to shim 
9bfc430a50a8aa7aa656db04f685979df8fbde9f1c8df8b3d3773a5eb3a2ec8b" address="unix:///run/containerd/s/9dfe8872f606964839a7f6c31df57fc3774557080faea928ede7c5d397d3e394" protocol=ttrpc version=3 Sep 11 00:28:27.827508 systemd[1]: Started cri-containerd-9bfc430a50a8aa7aa656db04f685979df8fbde9f1c8df8b3d3773a5eb3a2ec8b.scope - libcontainer container 9bfc430a50a8aa7aa656db04f685979df8fbde9f1c8df8b3d3773a5eb3a2ec8b. Sep 11 00:28:27.869729 containerd[1747]: time="2025-09-11T00:28:27.869708868Z" level=info msg="StartContainer for \"9bfc430a50a8aa7aa656db04f685979df8fbde9f1c8df8b3d3773a5eb3a2ec8b\" returns successfully" Sep 11 00:28:28.112989 containerd[1747]: time="2025-09-11T00:28:28.112962706Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:28.116285 containerd[1747]: time="2025-09-11T00:28:28.116264514Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 11 00:28:28.117522 containerd[1747]: time="2025-09-11T00:28:28.117500670Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 353.057702ms" Sep 11 00:28:28.117569 containerd[1747]: time="2025-09-11T00:28:28.117527612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 11 00:28:28.119222 containerd[1747]: time="2025-09-11T00:28:28.119196261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 11 00:28:28.125082 containerd[1747]: time="2025-09-11T00:28:28.125057774Z" level=info msg="CreateContainer 
within sandbox \"bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 11 00:28:28.142395 containerd[1747]: time="2025-09-11T00:28:28.139774123Z" level=info msg="Container 50a36f9e8f9a750eb3ef666c067bfebc6f15f0e92e5a3eecd8f6a0d0eb4490c6: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:28:28.156491 containerd[1747]: time="2025-09-11T00:28:28.156469258Z" level=info msg="CreateContainer within sandbox \"bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"50a36f9e8f9a750eb3ef666c067bfebc6f15f0e92e5a3eecd8f6a0d0eb4490c6\"" Sep 11 00:28:28.156885 containerd[1747]: time="2025-09-11T00:28:28.156868404Z" level=info msg="StartContainer for \"50a36f9e8f9a750eb3ef666c067bfebc6f15f0e92e5a3eecd8f6a0d0eb4490c6\"" Sep 11 00:28:28.157843 containerd[1747]: time="2025-09-11T00:28:28.157772816Z" level=info msg="connecting to shim 50a36f9e8f9a750eb3ef666c067bfebc6f15f0e92e5a3eecd8f6a0d0eb4490c6" address="unix:///run/containerd/s/c92681ab8bad0a3878f636a147ab4b21504e8065f4402a73db958b5790815c86" protocol=ttrpc version=3 Sep 11 00:28:28.176520 systemd[1]: Started cri-containerd-50a36f9e8f9a750eb3ef666c067bfebc6f15f0e92e5a3eecd8f6a0d0eb4490c6.scope - libcontainer container 50a36f9e8f9a750eb3ef666c067bfebc6f15f0e92e5a3eecd8f6a0d0eb4490c6. 
Sep 11 00:28:28.227398 containerd[1747]: time="2025-09-11T00:28:28.227364567Z" level=info msg="StartContainer for \"50a36f9e8f9a750eb3ef666c067bfebc6f15f0e92e5a3eecd8f6a0d0eb4490c6\" returns successfully" Sep 11 00:28:28.306710 kubelet[3157]: I0911 00:28:28.306661 3157 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-75d779f9f-s7nnc" podStartSLOduration=29.724234391 podStartE2EDuration="34.30664555s" podCreationTimestamp="2025-09-11 00:27:54 +0000 UTC" firstStartedPulling="2025-09-11 00:28:23.535720403 +0000 UTC m=+44.495546240" lastFinishedPulling="2025-09-11 00:28:28.118131559 +0000 UTC m=+49.077957399" observedRunningTime="2025-09-11 00:28:28.292632978 +0000 UTC m=+49.252458820" watchObservedRunningTime="2025-09-11 00:28:28.30664555 +0000 UTC m=+49.266471389" Sep 11 00:28:28.307012 kubelet[3157]: I0911 00:28:28.306736 3157 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7488f5d565-n2rfl" podStartSLOduration=28.981460883 podStartE2EDuration="33.306732273s" podCreationTimestamp="2025-09-11 00:27:55 +0000 UTC" firstStartedPulling="2025-09-11 00:28:23.438943322 +0000 UTC m=+44.398769157" lastFinishedPulling="2025-09-11 00:28:27.764214704 +0000 UTC m=+48.724040547" observedRunningTime="2025-09-11 00:28:28.306324253 +0000 UTC m=+49.266150097" watchObservedRunningTime="2025-09-11 00:28:28.306732273 +0000 UTC m=+49.266558107" Sep 11 00:28:28.447865 containerd[1747]: time="2025-09-11T00:28:28.447802781Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:28.450395 containerd[1747]: time="2025-09-11T00:28:28.450364930Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 11 00:28:28.451311 containerd[1747]: time="2025-09-11T00:28:28.451291218Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 332.064577ms" Sep 11 00:28:28.451344 containerd[1747]: time="2025-09-11T00:28:28.451316400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 11 00:28:28.452986 containerd[1747]: time="2025-09-11T00:28:28.452966167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 11 00:28:28.459350 containerd[1747]: time="2025-09-11T00:28:28.459329094Z" level=info msg="CreateContainer within sandbox \"2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 11 00:28:28.478523 containerd[1747]: time="2025-09-11T00:28:28.478503117Z" level=info msg="Container 6bb12726319eb36efc456c1a8f7fddb1f0d890037d2398dc12b8309353c73015: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:28:28.495819 containerd[1747]: time="2025-09-11T00:28:28.495786863Z" level=info msg="CreateContainer within sandbox \"2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6bb12726319eb36efc456c1a8f7fddb1f0d890037d2398dc12b8309353c73015\"" Sep 11 00:28:28.497406 containerd[1747]: time="2025-09-11T00:28:28.497077984Z" level=info msg="StartContainer for \"6bb12726319eb36efc456c1a8f7fddb1f0d890037d2398dc12b8309353c73015\"" Sep 11 00:28:28.498913 containerd[1747]: time="2025-09-11T00:28:28.498887902Z" level=info msg="connecting to shim 6bb12726319eb36efc456c1a8f7fddb1f0d890037d2398dc12b8309353c73015" 
address="unix:///run/containerd/s/3b524c75f2967cc9544a5863b337ac814e6e6e6db80241cb52758d0f0441f3f3" protocol=ttrpc version=3 Sep 11 00:28:28.523193 systemd[1]: Started cri-containerd-6bb12726319eb36efc456c1a8f7fddb1f0d890037d2398dc12b8309353c73015.scope - libcontainer container 6bb12726319eb36efc456c1a8f7fddb1f0d890037d2398dc12b8309353c73015. Sep 11 00:28:28.648792 containerd[1747]: time="2025-09-11T00:28:28.648775278Z" level=info msg="StartContainer for \"6bb12726319eb36efc456c1a8f7fddb1f0d890037d2398dc12b8309353c73015\" returns successfully" Sep 11 00:28:29.285510 kubelet[3157]: I0911 00:28:29.285006 3157 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 11 00:28:29.286342 kubelet[3157]: I0911 00:28:29.286307 3157 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 11 00:28:29.302217 kubelet[3157]: I0911 00:28:29.302172 3157 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-75d779f9f-bb974" podStartSLOduration=30.442554927 podStartE2EDuration="35.302158188s" podCreationTimestamp="2025-09-11 00:27:54 +0000 UTC" firstStartedPulling="2025-09-11 00:28:23.593085303 +0000 UTC m=+44.552911150" lastFinishedPulling="2025-09-11 00:28:28.45268856 +0000 UTC m=+49.412514411" observedRunningTime="2025-09-11 00:28:29.301826217 +0000 UTC m=+50.261652064" watchObservedRunningTime="2025-09-11 00:28:29.302158188 +0000 UTC m=+50.261984029" Sep 11 00:28:31.726873 containerd[1747]: time="2025-09-11T00:28:31.726835151Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:31.729548 containerd[1747]: time="2025-09-11T00:28:31.729442455Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 11 00:28:31.732765 containerd[1747]: time="2025-09-11T00:28:31.732742169Z" level=info msg="ImageCreate 
event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:31.736846 containerd[1747]: time="2025-09-11T00:28:31.736428639Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:31.736846 containerd[1747]: time="2025-09-11T00:28:31.736753441Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.283759847s" Sep 11 00:28:31.736846 containerd[1747]: time="2025-09-11T00:28:31.736778048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 11 00:28:31.737970 containerd[1747]: time="2025-09-11T00:28:31.737947199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 11 00:28:31.754026 containerd[1747]: time="2025-09-11T00:28:31.753992682Z" level=info msg="CreateContainer within sandbox \"0b5bed727c712bbf1c33c4bc4fa04f0f2c0282fcab0e9a2dcde901ef7814d989\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 11 00:28:31.773362 containerd[1747]: time="2025-09-11T00:28:31.772584026Z" level=info msg="Container 1b25a9188689ccf61661536b56ca66d2f88f57af69274c24773c3d2bd6b33069: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:28:31.787299 containerd[1747]: time="2025-09-11T00:28:31.787262184Z" level=info msg="CreateContainer within sandbox 
\"0b5bed727c712bbf1c33c4bc4fa04f0f2c0282fcab0e9a2dcde901ef7814d989\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1b25a9188689ccf61661536b56ca66d2f88f57af69274c24773c3d2bd6b33069\"" Sep 11 00:28:31.787894 containerd[1747]: time="2025-09-11T00:28:31.787836191Z" level=info msg="StartContainer for \"1b25a9188689ccf61661536b56ca66d2f88f57af69274c24773c3d2bd6b33069\"" Sep 11 00:28:31.789072 containerd[1747]: time="2025-09-11T00:28:31.789031798Z" level=info msg="connecting to shim 1b25a9188689ccf61661536b56ca66d2f88f57af69274c24773c3d2bd6b33069" address="unix:///run/containerd/s/0d75a9f785270d565cd3327afc03203a9dc087bb35e84b35d48dc7bff7d8ae49" protocol=ttrpc version=3 Sep 11 00:28:31.810541 systemd[1]: Started cri-containerd-1b25a9188689ccf61661536b56ca66d2f88f57af69274c24773c3d2bd6b33069.scope - libcontainer container 1b25a9188689ccf61661536b56ca66d2f88f57af69274c24773c3d2bd6b33069. Sep 11 00:28:31.857418 containerd[1747]: time="2025-09-11T00:28:31.857054830Z" level=info msg="StartContainer for \"1b25a9188689ccf61661536b56ca66d2f88f57af69274c24773c3d2bd6b33069\" returns successfully" Sep 11 00:28:32.307032 kubelet[3157]: I0911 00:28:32.306257 3157 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6c99b56578-4pbpk" podStartSLOduration=28.09216419 podStartE2EDuration="35.306240107s" podCreationTimestamp="2025-09-11 00:27:57 +0000 UTC" firstStartedPulling="2025-09-11 00:28:24.52346151 +0000 UTC m=+45.483287355" lastFinishedPulling="2025-09-11 00:28:31.737537425 +0000 UTC m=+52.697363272" observedRunningTime="2025-09-11 00:28:32.304800194 +0000 UTC m=+53.264626038" watchObservedRunningTime="2025-09-11 00:28:32.306240107 +0000 UTC m=+53.266065953" Sep 11 00:28:32.333239 containerd[1747]: time="2025-09-11T00:28:32.333215267Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1b25a9188689ccf61661536b56ca66d2f88f57af69274c24773c3d2bd6b33069\" 
id:\"e798e3b25765fc386f3ba0a1ad10fe8135a6ddbd712aa83ca1c3ceb2197c93e0\" pid:5579 exited_at:{seconds:1757550512 nanos:332856908}" Sep 11 00:28:33.556486 containerd[1747]: time="2025-09-11T00:28:33.556446385Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:33.558891 containerd[1747]: time="2025-09-11T00:28:33.558859615Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 11 00:28:33.561335 containerd[1747]: time="2025-09-11T00:28:33.561297384Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:33.564660 containerd[1747]: time="2025-09-11T00:28:33.564617759Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:33.565258 containerd[1747]: time="2025-09-11T00:28:33.564988975Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 1.827011543s" Sep 11 00:28:33.565258 containerd[1747]: time="2025-09-11T00:28:33.565015224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 11 00:28:33.567774 containerd[1747]: time="2025-09-11T00:28:33.567738022Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 11 00:28:33.574437 containerd[1747]: time="2025-09-11T00:28:33.574380647Z" level=info msg="CreateContainer within sandbox \"a5775b106b8f07546e060b453b3e18338fbf53bcd1c4d8e1ba7ce23cfe0cb631\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 11 00:28:33.592402 containerd[1747]: time="2025-09-11T00:28:33.592316773Z" level=info msg="Container 61d19c586a1fa740eedf9125e02e6cba8fd458bdbf995cc011299dd66a1b4423: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:28:33.614675 containerd[1747]: time="2025-09-11T00:28:33.614643730Z" level=info msg="CreateContainer within sandbox \"a5775b106b8f07546e060b453b3e18338fbf53bcd1c4d8e1ba7ce23cfe0cb631\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"61d19c586a1fa740eedf9125e02e6cba8fd458bdbf995cc011299dd66a1b4423\"" Sep 11 00:28:33.615617 containerd[1747]: time="2025-09-11T00:28:33.615535271Z" level=info msg="StartContainer for \"61d19c586a1fa740eedf9125e02e6cba8fd458bdbf995cc011299dd66a1b4423\"" Sep 11 00:28:33.617142 containerd[1747]: time="2025-09-11T00:28:33.617119741Z" level=info msg="connecting to shim 61d19c586a1fa740eedf9125e02e6cba8fd458bdbf995cc011299dd66a1b4423" address="unix:///run/containerd/s/20a54408b0b1ff1c30dfffae29ac4f4422c611c7078979dd00cdcfffd6e7a40d" protocol=ttrpc version=3 Sep 11 00:28:33.643561 systemd[1]: Started cri-containerd-61d19c586a1fa740eedf9125e02e6cba8fd458bdbf995cc011299dd66a1b4423.scope - libcontainer container 61d19c586a1fa740eedf9125e02e6cba8fd458bdbf995cc011299dd66a1b4423. 
Sep 11 00:28:33.689757 containerd[1747]: time="2025-09-11T00:28:33.689715906Z" level=info msg="StartContainer for \"61d19c586a1fa740eedf9125e02e6cba8fd458bdbf995cc011299dd66a1b4423\" returns successfully" Sep 11 00:28:34.213893 kubelet[3157]: I0911 00:28:34.213853 3157 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 11 00:28:34.213893 kubelet[3157]: I0911 00:28:34.213898 3157 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 11 00:28:36.632585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2133681288.mount: Deactivated successfully. Sep 11 00:28:37.002847 containerd[1747]: time="2025-09-11T00:28:37.002763742Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:37.005024 containerd[1747]: time="2025-09-11T00:28:37.004949717Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 11 00:28:37.007489 containerd[1747]: time="2025-09-11T00:28:37.007467578Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:37.010864 containerd[1747]: time="2025-09-11T00:28:37.010821636Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:37.011353 containerd[1747]: time="2025-09-11T00:28:37.011196253Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag 
\"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 3.443415151s" Sep 11 00:28:37.011353 containerd[1747]: time="2025-09-11T00:28:37.011223649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 11 00:28:37.017367 containerd[1747]: time="2025-09-11T00:28:37.017342828Z" level=info msg="CreateContainer within sandbox \"dafdf7e10b0687292ed0841e10ad820b462027d5c82f369d3209029a8c9e962f\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 11 00:28:37.029457 containerd[1747]: time="2025-09-11T00:28:37.029430007Z" level=info msg="Container f42c10df8b9ccce1e4116ee4af2906b4977caee3bb769ff96ebc6c7ed58b8a51: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:28:37.046273 containerd[1747]: time="2025-09-11T00:28:37.046248821Z" level=info msg="CreateContainer within sandbox \"dafdf7e10b0687292ed0841e10ad820b462027d5c82f369d3209029a8c9e962f\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"f42c10df8b9ccce1e4116ee4af2906b4977caee3bb769ff96ebc6c7ed58b8a51\"" Sep 11 00:28:37.046718 containerd[1747]: time="2025-09-11T00:28:37.046688649Z" level=info msg="StartContainer for \"f42c10df8b9ccce1e4116ee4af2906b4977caee3bb769ff96ebc6c7ed58b8a51\"" Sep 11 00:28:37.047535 containerd[1747]: time="2025-09-11T00:28:37.047513549Z" level=info msg="connecting to shim f42c10df8b9ccce1e4116ee4af2906b4977caee3bb769ff96ebc6c7ed58b8a51" address="unix:///run/containerd/s/3b83042e1d6d753f2b5885ee25d0906d17634d70d1c19ebd299aa1334d898e3c" protocol=ttrpc version=3 Sep 11 00:28:37.069537 systemd[1]: Started cri-containerd-f42c10df8b9ccce1e4116ee4af2906b4977caee3bb769ff96ebc6c7ed58b8a51.scope - libcontainer container f42c10df8b9ccce1e4116ee4af2906b4977caee3bb769ff96ebc6c7ed58b8a51. 
Sep 11 00:28:37.109701 containerd[1747]: time="2025-09-11T00:28:37.109677997Z" level=info msg="StartContainer for \"f42c10df8b9ccce1e4116ee4af2906b4977caee3bb769ff96ebc6c7ed58b8a51\" returns successfully" Sep 11 00:28:37.321001 kubelet[3157]: I0911 00:28:37.320435 3157 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-h8dt2" podStartSLOduration=29.754402786 podStartE2EDuration="41.320297926s" podCreationTimestamp="2025-09-11 00:27:56 +0000 UTC" firstStartedPulling="2025-09-11 00:28:25.445959198 +0000 UTC m=+46.405785039" lastFinishedPulling="2025-09-11 00:28:37.011854333 +0000 UTC m=+57.971680179" observedRunningTime="2025-09-11 00:28:37.320144618 +0000 UTC m=+58.279970462" watchObservedRunningTime="2025-09-11 00:28:37.320297926 +0000 UTC m=+58.280123802" Sep 11 00:28:37.321287 kubelet[3157]: I0911 00:28:37.321096 3157 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-ldmgv" podStartSLOduration=29.077332681 podStartE2EDuration="40.3210827s" podCreationTimestamp="2025-09-11 00:27:57 +0000 UTC" firstStartedPulling="2025-09-11 00:28:22.322074307 +0000 UTC m=+43.281900150" lastFinishedPulling="2025-09-11 00:28:33.56582433 +0000 UTC m=+54.525650169" observedRunningTime="2025-09-11 00:28:34.310337737 +0000 UTC m=+55.270163579" watchObservedRunningTime="2025-09-11 00:28:37.3210827 +0000 UTC m=+58.280908542" Sep 11 00:28:37.375580 containerd[1747]: time="2025-09-11T00:28:37.375559268Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f42c10df8b9ccce1e4116ee4af2906b4977caee3bb769ff96ebc6c7ed58b8a51\" id:\"3b0ca9a952fbdb364a0cc3bc136d109abe4b6519574b05f92a3194bd91cac691\" pid:5686 exit_status:1 exited_at:{seconds:1757550517 nanos:375231212}" Sep 11 00:28:38.366488 containerd[1747]: time="2025-09-11T00:28:38.366378172Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"f42c10df8b9ccce1e4116ee4af2906b4977caee3bb769ff96ebc6c7ed58b8a51\" id:\"f0c9ce7a6e0317371838bad4a3f466b0882e7cfc5a9c862d724046c3848962dd\" pid:5712 exit_status:1 exited_at:{seconds:1757550518 nanos:366181807}" Sep 11 00:28:39.521203 containerd[1747]: time="2025-09-11T00:28:39.521157248Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f42c10df8b9ccce1e4116ee4af2906b4977caee3bb769ff96ebc6c7ed58b8a51\" id:\"75436a6e12d2beae073d70ecd0519f3b266327c67b435a825bd3f552f24fcb20\" pid:5743 exited_at:{seconds:1757550519 nanos:520944623}" Sep 11 00:28:48.289724 containerd[1747]: time="2025-09-11T00:28:48.289675919Z" level=info msg="TaskExit event in podsandbox handler container_id:\"501b940f048afec0cb344c0bbe460a3694c330693b0af9da65daf0188458dc24\" id:\"e601ef4deb1f1e6bad32edcd1ec6bfe1b71094fc8772fa1f807321f21667383f\" pid:5769 exited_at:{seconds:1757550528 nanos:289233295}" Sep 11 00:28:51.571063 kubelet[3157]: I0911 00:28:51.570709 3157 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 11 00:28:58.035613 systemd[1]: Started sshd@7-10.200.8.4:22-10.200.16.10:33066.service - OpenSSH per-connection server daemon (10.200.16.10:33066). Sep 11 00:28:58.677924 sshd[5788]: Accepted publickey for core from 10.200.16.10 port 33066 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:28:58.679063 sshd-session[5788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:28:58.683212 systemd-logind[1706]: New session 10 of user core. Sep 11 00:28:58.688510 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 11 00:28:59.188478 sshd[5790]: Connection closed by 10.200.16.10 port 33066 Sep 11 00:28:59.189778 sshd-session[5788]: pam_unix(sshd:session): session closed for user core Sep 11 00:28:59.194816 systemd[1]: sshd@7-10.200.8.4:22-10.200.16.10:33066.service: Deactivated successfully. 
Sep 11 00:28:59.199169 systemd[1]: session-10.scope: Deactivated successfully. Sep 11 00:28:59.200337 systemd-logind[1706]: Session 10 logged out. Waiting for processes to exit. Sep 11 00:28:59.205488 systemd-logind[1706]: Removed session 10. Sep 11 00:29:02.415064 containerd[1747]: time="2025-09-11T00:29:02.415021582Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1b25a9188689ccf61661536b56ca66d2f88f57af69274c24773c3d2bd6b33069\" id:\"64e7e472b7ac5fd5efb628c0d29cc022279f23d4170a818b8c2d0c4e08789268\" pid:5819 exited_at:{seconds:1757550542 nanos:414708845}" Sep 11 00:29:04.302500 systemd[1]: Started sshd@8-10.200.8.4:22-10.200.16.10:34824.service - OpenSSH per-connection server daemon (10.200.16.10:34824). Sep 11 00:29:04.953197 sshd[5829]: Accepted publickey for core from 10.200.16.10 port 34824 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:29:04.954166 sshd-session[5829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:29:04.958207 systemd-logind[1706]: New session 11 of user core. Sep 11 00:29:04.962517 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 11 00:29:05.473399 sshd[5831]: Connection closed by 10.200.16.10 port 34824 Sep 11 00:29:05.476534 sshd-session[5829]: pam_unix(sshd:session): session closed for user core Sep 11 00:29:05.479255 systemd[1]: sshd@8-10.200.8.4:22-10.200.16.10:34824.service: Deactivated successfully. Sep 11 00:29:05.482333 systemd[1]: session-11.scope: Deactivated successfully. Sep 11 00:29:05.485539 systemd-logind[1706]: Session 11 logged out. Waiting for processes to exit. Sep 11 00:29:05.487255 systemd-logind[1706]: Removed session 11. 
Sep 11 00:29:08.403081 containerd[1747]: time="2025-09-11T00:29:08.402925547Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f42c10df8b9ccce1e4116ee4af2906b4977caee3bb769ff96ebc6c7ed58b8a51\" id:\"ac45cbf9e16f93c6fd412d0463ff8d0c3c10afb41d73852b6bf2cde0894fb274\" pid:5858 exited_at:{seconds:1757550548 nanos:402551758}" Sep 11 00:29:09.959889 kubelet[3157]: I0911 00:29:09.959812 3157 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 11 00:29:10.020480 containerd[1747]: time="2025-09-11T00:29:10.020439418Z" level=info msg="StopContainer for \"50a36f9e8f9a750eb3ef666c067bfebc6f15f0e92e5a3eecd8f6a0d0eb4490c6\" with timeout 30 (s)" Sep 11 00:29:10.021483 containerd[1747]: time="2025-09-11T00:29:10.021448069Z" level=info msg="Stop container \"50a36f9e8f9a750eb3ef666c067bfebc6f15f0e92e5a3eecd8f6a0d0eb4490c6\" with signal terminated" Sep 11 00:29:10.058354 containerd[1747]: time="2025-09-11T00:29:10.058257585Z" level=info msg="received exit event container_id:\"50a36f9e8f9a750eb3ef666c067bfebc6f15f0e92e5a3eecd8f6a0d0eb4490c6\" id:\"50a36f9e8f9a750eb3ef666c067bfebc6f15f0e92e5a3eecd8f6a0d0eb4490c6\" pid:5438 exit_status:1 exited_at:{seconds:1757550550 nanos:58022417}" Sep 11 00:29:10.058305 systemd[1]: cri-containerd-50a36f9e8f9a750eb3ef666c067bfebc6f15f0e92e5a3eecd8f6a0d0eb4490c6.scope: Deactivated successfully. Sep 11 00:29:10.058796 containerd[1747]: time="2025-09-11T00:29:10.058736872Z" level=info msg="TaskExit event in podsandbox handler container_id:\"50a36f9e8f9a750eb3ef666c067bfebc6f15f0e92e5a3eecd8f6a0d0eb4490c6\" id:\"50a36f9e8f9a750eb3ef666c067bfebc6f15f0e92e5a3eecd8f6a0d0eb4490c6\" pid:5438 exit_status:1 exited_at:{seconds:1757550550 nanos:58022417}" Sep 11 00:29:10.090881 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50a36f9e8f9a750eb3ef666c067bfebc6f15f0e92e5a3eecd8f6a0d0eb4490c6-rootfs.mount: Deactivated successfully. 
Sep 11 00:29:10.591230 systemd[1]: Started sshd@9-10.200.8.4:22-10.200.16.10:57484.service - OpenSSH per-connection server daemon (10.200.16.10:57484). Sep 11 00:29:10.959619 containerd[1747]: time="2025-09-11T00:29:10.959509639Z" level=info msg="StopContainer for \"50a36f9e8f9a750eb3ef666c067bfebc6f15f0e92e5a3eecd8f6a0d0eb4490c6\" returns successfully" Sep 11 00:29:10.961748 containerd[1747]: time="2025-09-11T00:29:10.961723712Z" level=info msg="StopPodSandbox for \"bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d\"" Sep 11 00:29:10.962135 containerd[1747]: time="2025-09-11T00:29:10.962078550Z" level=info msg="Container to stop \"50a36f9e8f9a750eb3ef666c067bfebc6f15f0e92e5a3eecd8f6a0d0eb4490c6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 11 00:29:10.968194 systemd[1]: cri-containerd-bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d.scope: Deactivated successfully. Sep 11 00:29:10.970000 containerd[1747]: time="2025-09-11T00:29:10.969966634Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d\" id:\"bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d\" pid:4903 exit_status:137 exited_at:{seconds:1757550550 nanos:969477817}" Sep 11 00:29:10.990129 containerd[1747]: time="2025-09-11T00:29:10.990098385Z" level=info msg="shim disconnected" id=bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d namespace=k8s.io Sep 11 00:29:10.990360 containerd[1747]: time="2025-09-11T00:29:10.990116755Z" level=warning msg="cleaning up after shim disconnected" id=bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d namespace=k8s.io Sep 11 00:29:10.990360 containerd[1747]: time="2025-09-11T00:29:10.990293337Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 11 00:29:10.995282 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d-rootfs.mount: Deactivated successfully. Sep 11 00:29:11.011995 containerd[1747]: time="2025-09-11T00:29:11.011938577Z" level=info msg="received exit event sandbox_id:\"bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d\" exit_status:137 exited_at:{seconds:1757550550 nanos:969477817}" Sep 11 00:29:11.016045 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d-shm.mount: Deactivated successfully. Sep 11 00:29:11.051061 systemd-networkd[1358]: calic5672151f5f: Link DOWN Sep 11 00:29:11.051066 systemd-networkd[1358]: calic5672151f5f: Lost carrier Sep 11 00:29:11.114154 containerd[1747]: 2025-09-11 00:29:11.048 [INFO][5942] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" Sep 11 00:29:11.114154 containerd[1747]: 2025-09-11 00:29:11.048 [INFO][5942] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" iface="eth0" netns="/var/run/netns/cni-84143cab-303e-b9de-1c3d-e078fe037974" Sep 11 00:29:11.114154 containerd[1747]: 2025-09-11 00:29:11.049 [INFO][5942] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" iface="eth0" netns="/var/run/netns/cni-84143cab-303e-b9de-1c3d-e078fe037974" Sep 11 00:29:11.114154 containerd[1747]: 2025-09-11 00:29:11.055 [INFO][5942] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" after=6.529348ms iface="eth0" netns="/var/run/netns/cni-84143cab-303e-b9de-1c3d-e078fe037974" Sep 11 00:29:11.114154 containerd[1747]: 2025-09-11 00:29:11.056 [INFO][5942] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" Sep 11 00:29:11.114154 containerd[1747]: 2025-09-11 00:29:11.056 [INFO][5942] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" Sep 11 00:29:11.114154 containerd[1747]: 2025-09-11 00:29:11.082 [INFO][5949] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" HandleID="k8s-pod-network.bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" Workload="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--s7nnc-eth0" Sep 11 00:29:11.114154 containerd[1747]: 2025-09-11 00:29:11.082 [INFO][5949] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:29:11.114154 containerd[1747]: 2025-09-11 00:29:11.082 [INFO][5949] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:29:11.114154 containerd[1747]: 2025-09-11 00:29:11.111 [INFO][5949] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" HandleID="k8s-pod-network.bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" Workload="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--s7nnc-eth0" Sep 11 00:29:11.114154 containerd[1747]: 2025-09-11 00:29:11.111 [INFO][5949] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" HandleID="k8s-pod-network.bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" Workload="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--s7nnc-eth0" Sep 11 00:29:11.114154 containerd[1747]: 2025-09-11 00:29:11.112 [INFO][5949] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:29:11.114154 containerd[1747]: 2025-09-11 00:29:11.113 [INFO][5942] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" Sep 11 00:29:11.114690 containerd[1747]: time="2025-09-11T00:29:11.114659221Z" level=info msg="TearDown network for sandbox \"bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d\" successfully" Sep 11 00:29:11.114809 containerd[1747]: time="2025-09-11T00:29:11.114678152Z" level=info msg="StopPodSandbox for \"bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d\" returns successfully" Sep 11 00:29:11.116337 systemd[1]: run-netns-cni\x2d84143cab\x2d303e\x2db9de\x2d1c3d\x2de078fe037974.mount: Deactivated successfully. 
Sep 11 00:29:11.175685 kubelet[3157]: I0911 00:29:11.175664 3157 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jprrp\" (UniqueName: \"kubernetes.io/projected/e4a2905f-a3b1-4fce-89a6-6dcc04516c41-kube-api-access-jprrp\") pod \"e4a2905f-a3b1-4fce-89a6-6dcc04516c41\" (UID: \"e4a2905f-a3b1-4fce-89a6-6dcc04516c41\") " Sep 11 00:29:11.176110 kubelet[3157]: I0911 00:29:11.175695 3157 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e4a2905f-a3b1-4fce-89a6-6dcc04516c41-calico-apiserver-certs\") pod \"e4a2905f-a3b1-4fce-89a6-6dcc04516c41\" (UID: \"e4a2905f-a3b1-4fce-89a6-6dcc04516c41\") " Sep 11 00:29:11.177870 kubelet[3157]: I0911 00:29:11.177825 3157 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4a2905f-a3b1-4fce-89a6-6dcc04516c41-kube-api-access-jprrp" (OuterVolumeSpecName: "kube-api-access-jprrp") pod "e4a2905f-a3b1-4fce-89a6-6dcc04516c41" (UID: "e4a2905f-a3b1-4fce-89a6-6dcc04516c41"). InnerVolumeSpecName "kube-api-access-jprrp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 11 00:29:11.180533 systemd[1]: var-lib-kubelet-pods-e4a2905f\x2da3b1\x2d4fce\x2d89a6\x2d6dcc04516c41-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djprrp.mount: Deactivated successfully. Sep 11 00:29:11.181740 kubelet[3157]: I0911 00:29:11.181614 3157 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4a2905f-a3b1-4fce-89a6-6dcc04516c41-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "e4a2905f-a3b1-4fce-89a6-6dcc04516c41" (UID: "e4a2905f-a3b1-4fce-89a6-6dcc04516c41"). InnerVolumeSpecName "calico-apiserver-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 11 00:29:11.183048 systemd[1]: var-lib-kubelet-pods-e4a2905f\x2da3b1\x2d4fce\x2d89a6\x2d6dcc04516c41-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Sep 11 00:29:11.234747 sshd[5893]: Accepted publickey for core from 10.200.16.10 port 57484 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:29:11.235217 sshd-session[5893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:29:11.238766 systemd-logind[1706]: New session 12 of user core. Sep 11 00:29:11.242509 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 11 00:29:11.276296 kubelet[3157]: I0911 00:29:11.276277 3157 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jprrp\" (UniqueName: \"kubernetes.io/projected/e4a2905f-a3b1-4fce-89a6-6dcc04516c41-kube-api-access-jprrp\") on node \"ci-4372.1.0-n-3f8a739b41\" DevicePath \"\"" Sep 11 00:29:11.276296 kubelet[3157]: I0911 00:29:11.276295 3157 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e4a2905f-a3b1-4fce-89a6-6dcc04516c41-calico-apiserver-certs\") on node \"ci-4372.1.0-n-3f8a739b41\" DevicePath \"\"" Sep 11 00:29:11.372073 kubelet[3157]: I0911 00:29:11.372036 3157 scope.go:117] "RemoveContainer" containerID="50a36f9e8f9a750eb3ef666c067bfebc6f15f0e92e5a3eecd8f6a0d0eb4490c6" Sep 11 00:29:11.377348 systemd[1]: Removed slice kubepods-besteffort-pode4a2905f_a3b1_4fce_89a6_6dcc04516c41.slice - libcontainer container kubepods-besteffort-pode4a2905f_a3b1_4fce_89a6_6dcc04516c41.slice. 
Sep 11 00:29:11.378181 containerd[1747]: time="2025-09-11T00:29:11.378160249Z" level=info msg="RemoveContainer for \"50a36f9e8f9a750eb3ef666c067bfebc6f15f0e92e5a3eecd8f6a0d0eb4490c6\"" Sep 11 00:29:11.390822 containerd[1747]: time="2025-09-11T00:29:11.390236545Z" level=info msg="RemoveContainer for \"50a36f9e8f9a750eb3ef666c067bfebc6f15f0e92e5a3eecd8f6a0d0eb4490c6\" returns successfully" Sep 11 00:29:11.390905 kubelet[3157]: I0911 00:29:11.390788 3157 scope.go:117] "RemoveContainer" containerID="50a36f9e8f9a750eb3ef666c067bfebc6f15f0e92e5a3eecd8f6a0d0eb4490c6" Sep 11 00:29:11.391012 containerd[1747]: time="2025-09-11T00:29:11.390957376Z" level=error msg="ContainerStatus for \"50a36f9e8f9a750eb3ef666c067bfebc6f15f0e92e5a3eecd8f6a0d0eb4490c6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"50a36f9e8f9a750eb3ef666c067bfebc6f15f0e92e5a3eecd8f6a0d0eb4490c6\": not found" Sep 11 00:29:11.391207 kubelet[3157]: E0911 00:29:11.391189 3157 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"50a36f9e8f9a750eb3ef666c067bfebc6f15f0e92e5a3eecd8f6a0d0eb4490c6\": not found" containerID="50a36f9e8f9a750eb3ef666c067bfebc6f15f0e92e5a3eecd8f6a0d0eb4490c6" Sep 11 00:29:11.391240 kubelet[3157]: I0911 00:29:11.391213 3157 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"50a36f9e8f9a750eb3ef666c067bfebc6f15f0e92e5a3eecd8f6a0d0eb4490c6"} err="failed to get container status \"50a36f9e8f9a750eb3ef666c067bfebc6f15f0e92e5a3eecd8f6a0d0eb4490c6\": rpc error: code = NotFound desc = an error occurred when try to find container \"50a36f9e8f9a750eb3ef666c067bfebc6f15f0e92e5a3eecd8f6a0d0eb4490c6\": not found" Sep 11 00:29:11.732079 sshd[5966]: Connection closed by 10.200.16.10 port 57484 Sep 11 00:29:11.732522 sshd-session[5893]: pam_unix(sshd:session): session closed for user core Sep 11 00:29:11.735487 
systemd[1]: sshd@9-10.200.8.4:22-10.200.16.10:57484.service: Deactivated successfully. Sep 11 00:29:11.737033 systemd[1]: session-12.scope: Deactivated successfully. Sep 11 00:29:11.737741 systemd-logind[1706]: Session 12 logged out. Waiting for processes to exit. Sep 11 00:29:11.739226 systemd-logind[1706]: Removed session 12. Sep 11 00:29:11.851466 systemd[1]: Started sshd@10-10.200.8.4:22-10.200.16.10:57486.service - OpenSSH per-connection server daemon (10.200.16.10:57486). Sep 11 00:29:12.506185 sshd[5979]: Accepted publickey for core from 10.200.16.10 port 57486 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:29:12.507026 sshd-session[5979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:29:12.513864 systemd-logind[1706]: New session 13 of user core. Sep 11 00:29:12.519565 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 11 00:29:12.928134 containerd[1747]: time="2025-09-11T00:29:12.928093707Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1b25a9188689ccf61661536b56ca66d2f88f57af69274c24773c3d2bd6b33069\" id:\"fe3721e50e6b2fa54809c7a625904b4c6c7eb817cac1038a459bae6ee7522b90\" pid:5999 exited_at:{seconds:1757550552 nanos:927676460}" Sep 11 00:29:13.055533 sshd[5981]: Connection closed by 10.200.16.10 port 57486 Sep 11 00:29:13.056712 sshd-session[5979]: pam_unix(sshd:session): session closed for user core Sep 11 00:29:13.060140 systemd[1]: sshd@10-10.200.8.4:22-10.200.16.10:57486.service: Deactivated successfully. Sep 11 00:29:13.061800 systemd[1]: session-13.scope: Deactivated successfully. Sep 11 00:29:13.062431 systemd-logind[1706]: Session 13 logged out. Waiting for processes to exit. Sep 11 00:29:13.064298 systemd-logind[1706]: Removed session 13. 
Sep 11 00:29:13.127276 kubelet[3157]: I0911 00:29:13.127250 3157 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4a2905f-a3b1-4fce-89a6-6dcc04516c41" path="/var/lib/kubelet/pods/e4a2905f-a3b1-4fce-89a6-6dcc04516c41/volumes" Sep 11 00:29:13.168046 systemd[1]: Started sshd@11-10.200.8.4:22-10.200.16.10:57500.service - OpenSSH per-connection server daemon (10.200.16.10:57500). Sep 11 00:29:13.809409 sshd[6013]: Accepted publickey for core from 10.200.16.10 port 57500 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:29:13.810952 sshd-session[6013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:29:13.816572 systemd-logind[1706]: New session 14 of user core. Sep 11 00:29:13.822543 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 11 00:29:14.311036 sshd[6015]: Connection closed by 10.200.16.10 port 57500 Sep 11 00:29:14.311518 sshd-session[6013]: pam_unix(sshd:session): session closed for user core Sep 11 00:29:14.314708 systemd[1]: sshd@11-10.200.8.4:22-10.200.16.10:57500.service: Deactivated successfully. Sep 11 00:29:14.314942 systemd-logind[1706]: Session 14 logged out. Waiting for processes to exit. Sep 11 00:29:14.316835 systemd[1]: session-14.scope: Deactivated successfully. Sep 11 00:29:14.318358 systemd-logind[1706]: Removed session 14. Sep 11 00:29:18.336719 containerd[1747]: time="2025-09-11T00:29:18.336636231Z" level=info msg="TaskExit event in podsandbox handler container_id:\"501b940f048afec0cb344c0bbe460a3694c330693b0af9da65daf0188458dc24\" id:\"504473272a2ee0b18d79cdf8b5311e91de1c0502bbd3d69eb6ab2f33e6f348a0\" pid:6043 exited_at:{seconds:1757550558 nanos:336370870}" Sep 11 00:29:19.428089 systemd[1]: Started sshd@12-10.200.8.4:22-10.200.16.10:57516.service - OpenSSH per-connection server daemon (10.200.16.10:57516). 
Sep 11 00:29:20.071403 sshd[6057]: Accepted publickey for core from 10.200.16.10 port 57516 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:29:20.074177 sshd-session[6057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:29:20.082731 systemd-logind[1706]: New session 15 of user core. Sep 11 00:29:20.089745 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 11 00:29:20.564993 sshd[6059]: Connection closed by 10.200.16.10 port 57516 Sep 11 00:29:20.565441 sshd-session[6057]: pam_unix(sshd:session): session closed for user core Sep 11 00:29:20.568033 systemd[1]: sshd@12-10.200.8.4:22-10.200.16.10:57516.service: Deactivated successfully. Sep 11 00:29:20.569742 systemd[1]: session-15.scope: Deactivated successfully. Sep 11 00:29:20.570446 systemd-logind[1706]: Session 15 logged out. Waiting for processes to exit. Sep 11 00:29:20.572147 systemd-logind[1706]: Removed session 15. Sep 11 00:29:25.679788 systemd[1]: Started sshd@13-10.200.8.4:22-10.200.16.10:44498.service - OpenSSH per-connection server daemon (10.200.16.10:44498). Sep 11 00:29:26.331298 sshd[6073]: Accepted publickey for core from 10.200.16.10 port 44498 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:29:26.332052 sshd-session[6073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:29:26.336012 systemd-logind[1706]: New session 16 of user core. Sep 11 00:29:26.342644 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 11 00:29:26.852567 sshd[6079]: Connection closed by 10.200.16.10 port 44498 Sep 11 00:29:26.853923 sshd-session[6073]: pam_unix(sshd:session): session closed for user core Sep 11 00:29:26.857006 systemd-logind[1706]: Session 16 logged out. Waiting for processes to exit. Sep 11 00:29:26.858513 systemd[1]: sshd@13-10.200.8.4:22-10.200.16.10:44498.service: Deactivated successfully. 
Sep 11 00:29:26.861919 systemd[1]: session-16.scope: Deactivated successfully. Sep 11 00:29:26.863902 systemd-logind[1706]: Removed session 16. Sep 11 00:29:31.970426 systemd[1]: Started sshd@14-10.200.8.4:22-10.200.16.10:40928.service - OpenSSH per-connection server daemon (10.200.16.10:40928). Sep 11 00:29:32.331483 containerd[1747]: time="2025-09-11T00:29:32.331427073Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1b25a9188689ccf61661536b56ca66d2f88f57af69274c24773c3d2bd6b33069\" id:\"4272b5ae60c213e6b637cb7b7953aef4bebe255df427568e2c7306e62f80dab4\" pid:6107 exited_at:{seconds:1757550572 nanos:331035138}" Sep 11 00:29:32.610590 sshd[6092]: Accepted publickey for core from 10.200.16.10 port 40928 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:29:32.612663 sshd-session[6092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:29:32.621914 systemd-logind[1706]: New session 17 of user core. Sep 11 00:29:32.627754 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 11 00:29:33.140682 sshd[6116]: Connection closed by 10.200.16.10 port 40928 Sep 11 00:29:33.142226 sshd-session[6092]: pam_unix(sshd:session): session closed for user core Sep 11 00:29:33.145763 systemd-logind[1706]: Session 17 logged out. Waiting for processes to exit. Sep 11 00:29:33.147324 systemd[1]: sshd@14-10.200.8.4:22-10.200.16.10:40928.service: Deactivated successfully. Sep 11 00:29:33.150988 systemd[1]: session-17.scope: Deactivated successfully. Sep 11 00:29:33.153732 systemd-logind[1706]: Removed session 17. 
Sep 11 00:29:36.771976 containerd[1747]: time="2025-09-11T00:29:36.771927739Z" level=info msg="StopContainer for \"6bb12726319eb36efc456c1a8f7fddb1f0d890037d2398dc12b8309353c73015\" with timeout 30 (s)" Sep 11 00:29:36.772603 containerd[1747]: time="2025-09-11T00:29:36.772466173Z" level=info msg="Stop container \"6bb12726319eb36efc456c1a8f7fddb1f0d890037d2398dc12b8309353c73015\" with signal terminated" Sep 11 00:29:36.802419 systemd[1]: cri-containerd-6bb12726319eb36efc456c1a8f7fddb1f0d890037d2398dc12b8309353c73015.scope: Deactivated successfully. Sep 11 00:29:36.807727 containerd[1747]: time="2025-09-11T00:29:36.807373567Z" level=info msg="received exit event container_id:\"6bb12726319eb36efc456c1a8f7fddb1f0d890037d2398dc12b8309353c73015\" id:\"6bb12726319eb36efc456c1a8f7fddb1f0d890037d2398dc12b8309353c73015\" pid:5474 exit_status:1 exited_at:{seconds:1757550576 nanos:804714208}" Sep 11 00:29:36.808488 containerd[1747]: time="2025-09-11T00:29:36.808462863Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6bb12726319eb36efc456c1a8f7fddb1f0d890037d2398dc12b8309353c73015\" id:\"6bb12726319eb36efc456c1a8f7fddb1f0d890037d2398dc12b8309353c73015\" pid:5474 exit_status:1 exited_at:{seconds:1757550576 nanos:804714208}" Sep 11 00:29:36.838313 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6bb12726319eb36efc456c1a8f7fddb1f0d890037d2398dc12b8309353c73015-rootfs.mount: Deactivated successfully. 
Sep 11 00:29:36.915291 containerd[1747]: time="2025-09-11T00:29:36.915270227Z" level=info msg="StopContainer for \"6bb12726319eb36efc456c1a8f7fddb1f0d890037d2398dc12b8309353c73015\" returns successfully" Sep 11 00:29:36.915774 containerd[1747]: time="2025-09-11T00:29:36.915756667Z" level=info msg="StopPodSandbox for \"2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978\"" Sep 11 00:29:36.915832 containerd[1747]: time="2025-09-11T00:29:36.915812384Z" level=info msg="Container to stop \"6bb12726319eb36efc456c1a8f7fddb1f0d890037d2398dc12b8309353c73015\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 11 00:29:36.921841 systemd[1]: cri-containerd-2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978.scope: Deactivated successfully. Sep 11 00:29:36.924034 containerd[1747]: time="2025-09-11T00:29:36.924014860Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978\" id:\"2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978\" pid:4959 exit_status:137 exited_at:{seconds:1757550576 nanos:923270039}" Sep 11 00:29:36.943576 containerd[1747]: time="2025-09-11T00:29:36.940794961Z" level=info msg="shim disconnected" id=2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978 namespace=k8s.io Sep 11 00:29:36.943576 containerd[1747]: time="2025-09-11T00:29:36.940816295Z" level=warning msg="cleaning up after shim disconnected" id=2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978 namespace=k8s.io Sep 11 00:29:36.943576 containerd[1747]: time="2025-09-11T00:29:36.940822815Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 11 00:29:36.943576 containerd[1747]: time="2025-09-11T00:29:36.942477806Z" level=error msg="failed sending message on channel" error="write unix /run/containerd/s/3b524c75f2967cc9544a5863b337ac814e6e6e6db80241cb52758d0f0441f3f3->@: write: broken pipe" runtime=io.containerd.runc.v2 Sep 11 
00:29:36.942325 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978-rootfs.mount: Deactivated successfully. Sep 11 00:29:36.963708 containerd[1747]: time="2025-09-11T00:29:36.963654344Z" level=info msg="received exit event sandbox_id:\"2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978\" exit_status:137 exited_at:{seconds:1757550576 nanos:923270039}" Sep 11 00:29:36.963708 containerd[1747]: time="2025-09-11T00:29:36.963670771Z" level=error msg="Failed to handle event container_id:\"2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978\" id:\"2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978\" pid:4959 exit_status:137 exited_at:{seconds:1757550576 nanos:923270039} for 2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" error="failed to handle container TaskExit event: failed to stop sandbox: failed to delete task: ttrpc: closed" Sep 11 00:29:36.967251 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978-shm.mount: Deactivated successfully. Sep 11 00:29:37.016638 systemd-networkd[1358]: cali3edc6671ea0: Link DOWN Sep 11 00:29:37.017812 systemd-networkd[1358]: cali3edc6671ea0: Lost carrier Sep 11 00:29:37.131435 containerd[1747]: 2025-09-11 00:29:37.014 [INFO][6197] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" Sep 11 00:29:37.131435 containerd[1747]: 2025-09-11 00:29:37.015 [INFO][6197] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" iface="eth0" netns="/var/run/netns/cni-c6d85359-0d09-56f3-1829-9f6cdc096ceb" Sep 11 00:29:37.131435 containerd[1747]: 2025-09-11 00:29:37.015 [INFO][6197] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" iface="eth0" netns="/var/run/netns/cni-c6d85359-0d09-56f3-1829-9f6cdc096ceb" Sep 11 00:29:37.131435 containerd[1747]: 2025-09-11 00:29:37.021 [INFO][6197] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" after=6.064974ms iface="eth0" netns="/var/run/netns/cni-c6d85359-0d09-56f3-1829-9f6cdc096ceb" Sep 11 00:29:37.131435 containerd[1747]: 2025-09-11 00:29:37.021 [INFO][6197] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" Sep 11 00:29:37.131435 containerd[1747]: 2025-09-11 00:29:37.021 [INFO][6197] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" Sep 11 00:29:37.131435 containerd[1747]: 2025-09-11 00:29:37.073 [INFO][6204] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" HandleID="k8s-pod-network.2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" Workload="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--bb974-eth0" Sep 11 00:29:37.131435 containerd[1747]: 2025-09-11 00:29:37.073 [INFO][6204] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:29:37.131435 containerd[1747]: 2025-09-11 00:29:37.073 [INFO][6204] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:29:37.131435 containerd[1747]: 2025-09-11 00:29:37.121 [INFO][6204] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" HandleID="k8s-pod-network.2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" Workload="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--bb974-eth0" Sep 11 00:29:37.131435 containerd[1747]: 2025-09-11 00:29:37.121 [INFO][6204] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" HandleID="k8s-pod-network.2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" Workload="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--bb974-eth0" Sep 11 00:29:37.131435 containerd[1747]: 2025-09-11 00:29:37.122 [INFO][6204] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:29:37.131435 containerd[1747]: 2025-09-11 00:29:37.123 [INFO][6197] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" Sep 11 00:29:37.131217 systemd[1]: run-netns-cni\x2dc6d85359\x2d0d09\x2d56f3\x2d1829\x2d9f6cdc096ceb.mount: Deactivated successfully. 
Sep 11 00:29:37.132171 containerd[1747]: time="2025-09-11T00:29:37.132118687Z" level=info msg="TearDown network for sandbox \"2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978\" successfully" Sep 11 00:29:37.132171 containerd[1747]: time="2025-09-11T00:29:37.132144184Z" level=info msg="StopPodSandbox for \"2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978\" returns successfully" Sep 11 00:29:37.231617 kubelet[3157]: I0911 00:29:37.231596 3157 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e7ff1367-64cd-423f-a7ae-69bcd41ce72a-calico-apiserver-certs\") pod \"e7ff1367-64cd-423f-a7ae-69bcd41ce72a\" (UID: \"e7ff1367-64cd-423f-a7ae-69bcd41ce72a\") " Sep 11 00:29:37.232445 kubelet[3157]: I0911 00:29:37.231632 3157 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqpfj\" (UniqueName: \"kubernetes.io/projected/e7ff1367-64cd-423f-a7ae-69bcd41ce72a-kube-api-access-lqpfj\") pod \"e7ff1367-64cd-423f-a7ae-69bcd41ce72a\" (UID: \"e7ff1367-64cd-423f-a7ae-69bcd41ce72a\") " Sep 11 00:29:37.235330 systemd[1]: var-lib-kubelet-pods-e7ff1367\x2d64cd\x2d423f\x2da7ae\x2d69bcd41ce72a-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Sep 11 00:29:37.235950 kubelet[3157]: I0911 00:29:37.235927 3157 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7ff1367-64cd-423f-a7ae-69bcd41ce72a-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "e7ff1367-64cd-423f-a7ae-69bcd41ce72a" (UID: "e7ff1367-64cd-423f-a7ae-69bcd41ce72a"). InnerVolumeSpecName "calico-apiserver-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 11 00:29:37.236462 kubelet[3157]: I0911 00:29:37.236443 3157 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7ff1367-64cd-423f-a7ae-69bcd41ce72a-kube-api-access-lqpfj" (OuterVolumeSpecName: "kube-api-access-lqpfj") pod "e7ff1367-64cd-423f-a7ae-69bcd41ce72a" (UID: "e7ff1367-64cd-423f-a7ae-69bcd41ce72a"). InnerVolumeSpecName "kube-api-access-lqpfj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 11 00:29:37.332728 kubelet[3157]: I0911 00:29:37.332699 3157 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e7ff1367-64cd-423f-a7ae-69bcd41ce72a-calico-apiserver-certs\") on node \"ci-4372.1.0-n-3f8a739b41\" DevicePath \"\"" Sep 11 00:29:37.332728 kubelet[3157]: I0911 00:29:37.332720 3157 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lqpfj\" (UniqueName: \"kubernetes.io/projected/e7ff1367-64cd-423f-a7ae-69bcd41ce72a-kube-api-access-lqpfj\") on node \"ci-4372.1.0-n-3f8a739b41\" DevicePath \"\"" Sep 11 00:29:37.425933 kubelet[3157]: I0911 00:29:37.425868 3157 scope.go:117] "RemoveContainer" containerID="6bb12726319eb36efc456c1a8f7fddb1f0d890037d2398dc12b8309353c73015" Sep 11 00:29:37.431856 containerd[1747]: time="2025-09-11T00:29:37.431802629Z" level=info msg="RemoveContainer for \"6bb12726319eb36efc456c1a8f7fddb1f0d890037d2398dc12b8309353c73015\"" Sep 11 00:29:37.434443 systemd[1]: Removed slice kubepods-besteffort-pode7ff1367_64cd_423f_a7ae_69bcd41ce72a.slice - libcontainer container kubepods-besteffort-pode7ff1367_64cd_423f_a7ae_69bcd41ce72a.slice. 
Sep 11 00:29:37.442131 containerd[1747]: time="2025-09-11T00:29:37.442018485Z" level=info msg="RemoveContainer for \"6bb12726319eb36efc456c1a8f7fddb1f0d890037d2398dc12b8309353c73015\" returns successfully" Sep 11 00:29:37.442557 kubelet[3157]: I0911 00:29:37.442540 3157 scope.go:117] "RemoveContainer" containerID="6bb12726319eb36efc456c1a8f7fddb1f0d890037d2398dc12b8309353c73015" Sep 11 00:29:37.442715 containerd[1747]: time="2025-09-11T00:29:37.442687931Z" level=error msg="ContainerStatus for \"6bb12726319eb36efc456c1a8f7fddb1f0d890037d2398dc12b8309353c73015\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6bb12726319eb36efc456c1a8f7fddb1f0d890037d2398dc12b8309353c73015\": not found" Sep 11 00:29:37.442977 kubelet[3157]: E0911 00:29:37.442955 3157 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6bb12726319eb36efc456c1a8f7fddb1f0d890037d2398dc12b8309353c73015\": not found" containerID="6bb12726319eb36efc456c1a8f7fddb1f0d890037d2398dc12b8309353c73015" Sep 11 00:29:37.443055 kubelet[3157]: I0911 00:29:37.443029 3157 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6bb12726319eb36efc456c1a8f7fddb1f0d890037d2398dc12b8309353c73015"} err="failed to get container status \"6bb12726319eb36efc456c1a8f7fddb1f0d890037d2398dc12b8309353c73015\": rpc error: code = NotFound desc = an error occurred when try to find container \"6bb12726319eb36efc456c1a8f7fddb1f0d890037d2398dc12b8309353c73015\": not found" Sep 11 00:29:37.834454 systemd[1]: var-lib-kubelet-pods-e7ff1367\x2d64cd\x2d423f\x2da7ae\x2d69bcd41ce72a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlqpfj.mount: Deactivated successfully. Sep 11 00:29:38.258695 systemd[1]: Started sshd@15-10.200.8.4:22-10.200.16.10:40932.service - OpenSSH per-connection server daemon (10.200.16.10:40932). 
Sep 11 00:29:38.367623 containerd[1747]: time="2025-09-11T00:29:38.367585384Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978\" id:\"2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978\" pid:4959 exit_status:137 exited_at:{seconds:1757550576 nanos:923270039}" Sep 11 00:29:38.383764 containerd[1747]: time="2025-09-11T00:29:38.383741361Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f42c10df8b9ccce1e4116ee4af2906b4977caee3bb769ff96ebc6c7ed58b8a51\" id:\"1575a85540862ec56745fb82486e5717cf9731dce898e712b0d5e203ccd53de6\" pid:6236 exited_at:{seconds:1757550578 nanos:383562663}" Sep 11 00:29:38.903414 sshd[6222]: Accepted publickey for core from 10.200.16.10 port 40932 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:29:38.905255 sshd-session[6222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:29:38.911519 systemd-logind[1706]: New session 18 of user core. Sep 11 00:29:38.916628 systemd[1]: Started session-18.scope - Session 18 of User core. 
Sep 11 00:29:39.129002 kubelet[3157]: I0911 00:29:39.128610 3157 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7ff1367-64cd-423f-a7ae-69bcd41ce72a" path="/var/lib/kubelet/pods/e7ff1367-64cd-423f-a7ae-69bcd41ce72a/volumes" Sep 11 00:29:39.140619 containerd[1747]: time="2025-09-11T00:29:39.140588463Z" level=info msg="StopPodSandbox for \"2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978\"" Sep 11 00:29:39.244968 containerd[1747]: 2025-09-11 00:29:39.181 [WARNING][6257] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--bb974-eth0" Sep 11 00:29:39.244968 containerd[1747]: 2025-09-11 00:29:39.181 [INFO][6257] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" Sep 11 00:29:39.244968 containerd[1747]: 2025-09-11 00:29:39.181 [INFO][6257] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" iface="eth0" netns="" Sep 11 00:29:39.244968 containerd[1747]: 2025-09-11 00:29:39.181 [INFO][6257] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" Sep 11 00:29:39.244968 containerd[1747]: 2025-09-11 00:29:39.181 [INFO][6257] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" Sep 11 00:29:39.244968 containerd[1747]: 2025-09-11 00:29:39.226 [INFO][6265] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" HandleID="k8s-pod-network.2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" Workload="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--bb974-eth0" Sep 11 00:29:39.244968 containerd[1747]: 2025-09-11 00:29:39.228 [INFO][6265] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:29:39.244968 containerd[1747]: 2025-09-11 00:29:39.228 [INFO][6265] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:29:39.244968 containerd[1747]: 2025-09-11 00:29:39.238 [WARNING][6265] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" HandleID="k8s-pod-network.2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" Workload="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--bb974-eth0" Sep 11 00:29:39.244968 containerd[1747]: 2025-09-11 00:29:39.239 [INFO][6265] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" HandleID="k8s-pod-network.2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" Workload="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--bb974-eth0" Sep 11 00:29:39.244968 containerd[1747]: 2025-09-11 00:29:39.240 [INFO][6265] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:29:39.244968 containerd[1747]: 2025-09-11 00:29:39.242 [INFO][6257] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" Sep 11 00:29:39.244968 containerd[1747]: time="2025-09-11T00:29:39.244645591Z" level=info msg="TearDown network for sandbox \"2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978\" successfully" Sep 11 00:29:39.244968 containerd[1747]: time="2025-09-11T00:29:39.244664751Z" level=info msg="StopPodSandbox for \"2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978\" returns successfully" Sep 11 00:29:39.247455 containerd[1747]: time="2025-09-11T00:29:39.246981536Z" level=info msg="RemovePodSandbox for \"2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978\"" Sep 11 00:29:39.247455 containerd[1747]: time="2025-09-11T00:29:39.247095309Z" level=info msg="Forcibly stopping sandbox \"2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978\"" Sep 11 00:29:39.401451 containerd[1747]: 2025-09-11 00:29:39.325 [WARNING][6283] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--bb974-eth0" Sep 11 00:29:39.401451 containerd[1747]: 2025-09-11 00:29:39.326 [INFO][6283] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" Sep 11 00:29:39.401451 containerd[1747]: 2025-09-11 00:29:39.326 [INFO][6283] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" iface="eth0" netns="" Sep 11 00:29:39.401451 containerd[1747]: 2025-09-11 00:29:39.326 [INFO][6283] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" Sep 11 00:29:39.401451 containerd[1747]: 2025-09-11 00:29:39.326 [INFO][6283] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" Sep 11 00:29:39.401451 containerd[1747]: 2025-09-11 00:29:39.358 [INFO][6295] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" HandleID="k8s-pod-network.2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" Workload="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--bb974-eth0" Sep 11 00:29:39.401451 containerd[1747]: 2025-09-11 00:29:39.359 [INFO][6295] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:29:39.401451 containerd[1747]: 2025-09-11 00:29:39.361 [INFO][6295] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:29:39.401451 containerd[1747]: 2025-09-11 00:29:39.393 [WARNING][6295] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" HandleID="k8s-pod-network.2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" Workload="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--bb974-eth0" Sep 11 00:29:39.401451 containerd[1747]: 2025-09-11 00:29:39.393 [INFO][6295] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" HandleID="k8s-pod-network.2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" Workload="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--bb974-eth0" Sep 11 00:29:39.401451 containerd[1747]: 2025-09-11 00:29:39.395 [INFO][6295] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:29:39.401451 containerd[1747]: 2025-09-11 00:29:39.397 [INFO][6283] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978" Sep 11 00:29:39.401451 containerd[1747]: time="2025-09-11T00:29:39.400943086Z" level=info msg="TearDown network for sandbox \"2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978\" successfully" Sep 11 00:29:39.405244 containerd[1747]: time="2025-09-11T00:29:39.405218825Z" level=info msg="Ensure that sandbox 2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978 in task-service has been cleanup successfully" Sep 11 00:29:39.415224 containerd[1747]: time="2025-09-11T00:29:39.415199970Z" level=info msg="RemovePodSandbox \"2e00446fcc65c2aa3d65276d401ac323ba32ac8ed278c594962bd7d49d00b978\" returns successfully" Sep 11 00:29:39.416414 containerd[1747]: time="2025-09-11T00:29:39.415639363Z" level=info msg="StopPodSandbox for \"bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d\"" Sep 11 00:29:39.505419 sshd[6246]: Connection closed by 10.200.16.10 port 40932 Sep 11 00:29:39.507545 sshd-session[6222]: pam_unix(sshd:session): session closed for user core 
Sep 11 00:29:39.512471 systemd-logind[1706]: Session 18 logged out. Waiting for processes to exit. Sep 11 00:29:39.513135 systemd[1]: sshd@15-10.200.8.4:22-10.200.16.10:40932.service: Deactivated successfully. Sep 11 00:29:39.516232 systemd[1]: session-18.scope: Deactivated successfully. Sep 11 00:29:39.520721 containerd[1747]: 2025-09-11 00:29:39.479 [WARNING][6310] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--s7nnc-eth0" Sep 11 00:29:39.520721 containerd[1747]: 2025-09-11 00:29:39.479 [INFO][6310] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" Sep 11 00:29:39.520721 containerd[1747]: 2025-09-11 00:29:39.479 [INFO][6310] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" iface="eth0" netns="" Sep 11 00:29:39.520721 containerd[1747]: 2025-09-11 00:29:39.479 [INFO][6310] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" Sep 11 00:29:39.520721 containerd[1747]: 2025-09-11 00:29:39.479 [INFO][6310] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" Sep 11 00:29:39.520721 containerd[1747]: 2025-09-11 00:29:39.504 [INFO][6327] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" HandleID="k8s-pod-network.bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" Workload="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--s7nnc-eth0" Sep 11 00:29:39.520721 containerd[1747]: 2025-09-11 00:29:39.505 [INFO][6327] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:29:39.520721 containerd[1747]: 2025-09-11 00:29:39.506 [INFO][6327] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:29:39.520721 containerd[1747]: 2025-09-11 00:29:39.514 [WARNING][6327] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" HandleID="k8s-pod-network.bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" Workload="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--s7nnc-eth0" Sep 11 00:29:39.520721 containerd[1747]: 2025-09-11 00:29:39.514 [INFO][6327] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" HandleID="k8s-pod-network.bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" Workload="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--s7nnc-eth0" Sep 11 00:29:39.520721 containerd[1747]: 2025-09-11 00:29:39.516 [INFO][6327] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:29:39.520721 containerd[1747]: 2025-09-11 00:29:39.517 [INFO][6310] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" Sep 11 00:29:39.522001 containerd[1747]: time="2025-09-11T00:29:39.521799113Z" level=info msg="TearDown network for sandbox \"bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d\" successfully" Sep 11 00:29:39.522001 containerd[1747]: time="2025-09-11T00:29:39.521818456Z" level=info msg="StopPodSandbox for \"bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d\" returns successfully" Sep 11 00:29:39.522461 containerd[1747]: time="2025-09-11T00:29:39.522354425Z" level=info msg="RemovePodSandbox for \"bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d\"" Sep 11 00:29:39.523436 systemd-logind[1706]: Removed session 18. 
Sep 11 00:29:39.524042 containerd[1747]: time="2025-09-11T00:29:39.523879657Z" level=info msg="Forcibly stopping sandbox \"bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d\"" Sep 11 00:29:39.609482 containerd[1747]: 2025-09-11 00:29:39.570 [WARNING][6353] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" WorkloadEndpoint="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--s7nnc-eth0" Sep 11 00:29:39.609482 containerd[1747]: 2025-09-11 00:29:39.570 [INFO][6353] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" Sep 11 00:29:39.609482 containerd[1747]: 2025-09-11 00:29:39.570 [INFO][6353] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" iface="eth0" netns="" Sep 11 00:29:39.609482 containerd[1747]: 2025-09-11 00:29:39.570 [INFO][6353] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" Sep 11 00:29:39.609482 containerd[1747]: 2025-09-11 00:29:39.570 [INFO][6353] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" Sep 11 00:29:39.609482 containerd[1747]: 2025-09-11 00:29:39.600 [INFO][6361] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" HandleID="k8s-pod-network.bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" Workload="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--s7nnc-eth0" Sep 11 00:29:39.609482 containerd[1747]: 2025-09-11 00:29:39.600 [INFO][6361] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 11 00:29:39.609482 containerd[1747]: 2025-09-11 00:29:39.600 [INFO][6361] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:29:39.609482 containerd[1747]: 2025-09-11 00:29:39.606 [WARNING][6361] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" HandleID="k8s-pod-network.bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" Workload="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--s7nnc-eth0" Sep 11 00:29:39.609482 containerd[1747]: 2025-09-11 00:29:39.606 [INFO][6361] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" HandleID="k8s-pod-network.bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" Workload="ci--4372.1.0--n--3f8a739b41-k8s-calico--apiserver--75d779f9f--s7nnc-eth0" Sep 11 00:29:39.609482 containerd[1747]: 2025-09-11 00:29:39.607 [INFO][6361] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:29:39.609482 containerd[1747]: 2025-09-11 00:29:39.608 [INFO][6353] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d" Sep 11 00:29:39.610059 containerd[1747]: time="2025-09-11T00:29:39.609811378Z" level=info msg="TearDown network for sandbox \"bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d\" successfully" Sep 11 00:29:39.613518 containerd[1747]: time="2025-09-11T00:29:39.613235236Z" level=info msg="Ensure that sandbox bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d in task-service has been cleanup successfully" Sep 11 00:29:39.621515 systemd[1]: Started sshd@16-10.200.8.4:22-10.200.16.10:40938.service - OpenSSH per-connection server daemon (10.200.16.10:40938). 
Sep 11 00:29:39.624079 containerd[1747]: time="2025-09-11T00:29:39.623972158Z" level=info msg="RemovePodSandbox \"bbc97d8e5c9d849dd7512306a7d8b3d90fa38823e1f620aa281d3b787f4cfa8d\" returns successfully" Sep 11 00:29:39.673441 containerd[1747]: time="2025-09-11T00:29:39.673418118Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f42c10df8b9ccce1e4116ee4af2906b4977caee3bb769ff96ebc6c7ed58b8a51\" id:\"38f2dbb382e66b2bdc644f81bed949d1be899f96b991a8b95361b8cf9de99574\" pid:6335 exited_at:{seconds:1757550579 nanos:672913239}" Sep 11 00:29:40.274122 sshd[6374]: Accepted publickey for core from 10.200.16.10 port 40938 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:29:40.275117 sshd-session[6374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:29:40.279349 systemd-logind[1706]: New session 19 of user core. Sep 11 00:29:40.283528 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 11 00:29:40.853591 sshd[6377]: Connection closed by 10.200.16.10 port 40938 Sep 11 00:29:40.854128 sshd-session[6374]: pam_unix(sshd:session): session closed for user core Sep 11 00:29:40.857291 systemd[1]: sshd@16-10.200.8.4:22-10.200.16.10:40938.service: Deactivated successfully. Sep 11 00:29:40.859865 systemd[1]: session-19.scope: Deactivated successfully. Sep 11 00:29:40.862087 systemd-logind[1706]: Session 19 logged out. Waiting for processes to exit. Sep 11 00:29:40.863165 systemd-logind[1706]: Removed session 19. Sep 11 00:29:40.970788 systemd[1]: Started sshd@17-10.200.8.4:22-10.200.16.10:43988.service - OpenSSH per-connection server daemon (10.200.16.10:43988). 
Sep 11 00:29:41.615833 sshd[6387]: Accepted publickey for core from 10.200.16.10 port 43988 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:29:41.616826 sshd-session[6387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:29:41.620918 systemd-logind[1706]: New session 20 of user core. Sep 11 00:29:41.630567 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 11 00:29:42.684820 sshd[6389]: Connection closed by 10.200.16.10 port 43988 Sep 11 00:29:42.683764 sshd-session[6387]: pam_unix(sshd:session): session closed for user core Sep 11 00:29:42.687022 systemd[1]: sshd@17-10.200.8.4:22-10.200.16.10:43988.service: Deactivated successfully. Sep 11 00:29:42.691312 systemd[1]: session-20.scope: Deactivated successfully. Sep 11 00:29:42.692576 systemd-logind[1706]: Session 20 logged out. Waiting for processes to exit. Sep 11 00:29:42.696165 systemd-logind[1706]: Removed session 20. Sep 11 00:29:42.797727 systemd[1]: Started sshd@18-10.200.8.4:22-10.200.16.10:44000.service - OpenSSH per-connection server daemon (10.200.16.10:44000). Sep 11 00:29:43.445456 sshd[6409]: Accepted publickey for core from 10.200.16.10 port 44000 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:29:43.446532 sshd-session[6409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:29:43.453473 systemd-logind[1706]: New session 21 of user core. Sep 11 00:29:43.461527 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 11 00:29:44.080843 sshd[6411]: Connection closed by 10.200.16.10 port 44000 Sep 11 00:29:44.082550 sshd-session[6409]: pam_unix(sshd:session): session closed for user core Sep 11 00:29:44.085766 systemd-logind[1706]: Session 21 logged out. Waiting for processes to exit. Sep 11 00:29:44.088781 systemd[1]: sshd@18-10.200.8.4:22-10.200.16.10:44000.service: Deactivated successfully. 
Sep 11 00:29:44.091339 systemd[1]: session-21.scope: Deactivated successfully. Sep 11 00:29:44.095118 systemd-logind[1706]: Removed session 21. Sep 11 00:29:44.197582 systemd[1]: Started sshd@19-10.200.8.4:22-10.200.16.10:44012.service - OpenSSH per-connection server daemon (10.200.16.10:44012). Sep 11 00:29:44.839796 sshd[6421]: Accepted publickey for core from 10.200.16.10 port 44012 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:29:44.840808 sshd-session[6421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:29:44.845208 systemd-logind[1706]: New session 22 of user core. Sep 11 00:29:44.850686 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 11 00:29:45.331137 sshd[6423]: Connection closed by 10.200.16.10 port 44012 Sep 11 00:29:45.331577 sshd-session[6421]: pam_unix(sshd:session): session closed for user core Sep 11 00:29:45.334186 systemd[1]: sshd@19-10.200.8.4:22-10.200.16.10:44012.service: Deactivated successfully. Sep 11 00:29:45.335912 systemd[1]: session-22.scope: Deactivated successfully. Sep 11 00:29:45.336699 systemd-logind[1706]: Session 22 logged out. Waiting for processes to exit. Sep 11 00:29:45.337941 systemd-logind[1706]: Removed session 22. Sep 11 00:29:48.353360 containerd[1747]: time="2025-09-11T00:29:48.353320878Z" level=info msg="TaskExit event in podsandbox handler container_id:\"501b940f048afec0cb344c0bbe460a3694c330693b0af9da65daf0188458dc24\" id:\"84baca071ab700c8add51222d0915a4366021f6aee1bd8add205b6681d70b927\" pid:6450 exited_at:{seconds:1757550588 nanos:353061338}" Sep 11 00:29:50.452482 systemd[1]: Started sshd@20-10.200.8.4:22-10.200.16.10:47364.service - OpenSSH per-connection server daemon (10.200.16.10:47364). 
Sep 11 00:29:51.104205 sshd[6465]: Accepted publickey for core from 10.200.16.10 port 47364 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:29:51.105303 sshd-session[6465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:29:51.109906 systemd-logind[1706]: New session 23 of user core. Sep 11 00:29:51.114597 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 11 00:29:51.595032 sshd[6467]: Connection closed by 10.200.16.10 port 47364 Sep 11 00:29:51.596513 sshd-session[6465]: pam_unix(sshd:session): session closed for user core Sep 11 00:29:51.599503 systemd[1]: sshd@20-10.200.8.4:22-10.200.16.10:47364.service: Deactivated successfully. Sep 11 00:29:51.602150 systemd[1]: session-23.scope: Deactivated successfully. Sep 11 00:29:51.603044 systemd-logind[1706]: Session 23 logged out. Waiting for processes to exit. Sep 11 00:29:51.604533 systemd-logind[1706]: Removed session 23. Sep 11 00:29:56.718989 systemd[1]: Started sshd@21-10.200.8.4:22-10.200.16.10:47380.service - OpenSSH per-connection server daemon (10.200.16.10:47380). Sep 11 00:29:57.368311 sshd[6499]: Accepted publickey for core from 10.200.16.10 port 47380 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:29:57.369335 sshd-session[6499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:29:57.373313 systemd-logind[1706]: New session 24 of user core. Sep 11 00:29:57.377547 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 11 00:29:57.867475 sshd[6503]: Connection closed by 10.200.16.10 port 47380 Sep 11 00:29:57.868682 sshd-session[6499]: pam_unix(sshd:session): session closed for user core Sep 11 00:29:57.870994 systemd-logind[1706]: Session 24 logged out. Waiting for processes to exit. Sep 11 00:29:57.872621 systemd[1]: sshd@21-10.200.8.4:22-10.200.16.10:47380.service: Deactivated successfully. 
Sep 11 00:29:57.875477 systemd[1]: session-24.scope: Deactivated successfully. Sep 11 00:29:57.877522 systemd-logind[1706]: Removed session 24. Sep 11 00:30:02.329819 containerd[1747]: time="2025-09-11T00:30:02.329755904Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1b25a9188689ccf61661536b56ca66d2f88f57af69274c24773c3d2bd6b33069\" id:\"fb573dc203e39d538b05f36f54289f2f00322f9f6cc8c7127015f239f6221da7\" pid:6526 exited_at:{seconds:1757550602 nanos:329529052}" Sep 11 00:30:02.986602 systemd[1]: Started sshd@22-10.200.8.4:22-10.200.16.10:35468.service - OpenSSH per-connection server daemon (10.200.16.10:35468). Sep 11 00:30:03.654424 sshd[6536]: Accepted publickey for core from 10.200.16.10 port 35468 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:30:03.655054 sshd-session[6536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:30:03.658794 systemd-logind[1706]: New session 25 of user core. Sep 11 00:30:03.667542 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 11 00:30:04.152434 sshd[6538]: Connection closed by 10.200.16.10 port 35468 Sep 11 00:30:04.152914 sshd-session[6536]: pam_unix(sshd:session): session closed for user core Sep 11 00:30:04.156165 systemd-logind[1706]: Session 25 logged out. Waiting for processes to exit. Sep 11 00:30:04.156525 systemd[1]: sshd@22-10.200.8.4:22-10.200.16.10:35468.service: Deactivated successfully. Sep 11 00:30:04.158896 systemd[1]: session-25.scope: Deactivated successfully. Sep 11 00:30:04.161734 systemd-logind[1706]: Removed session 25. 
Sep 11 00:30:08.386287 containerd[1747]: time="2025-09-11T00:30:08.386246406Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f42c10df8b9ccce1e4116ee4af2906b4977caee3bb769ff96ebc6c7ed58b8a51\" id:\"60388935b33dbb42b8f1d698fde6a6cfd2116e5c61a5ead1ed64701e5bac55b3\" pid:6562 exited_at:{seconds:1757550608 nanos:386022052}" Sep 11 00:30:09.270122 systemd[1]: Started sshd@23-10.200.8.4:22-10.200.16.10:35484.service - OpenSSH per-connection server daemon (10.200.16.10:35484). Sep 11 00:30:09.917779 sshd[6573]: Accepted publickey for core from 10.200.16.10 port 35484 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:30:09.919285 sshd-session[6573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:30:09.925336 systemd-logind[1706]: New session 26 of user core. Sep 11 00:30:09.932771 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 11 00:30:10.411325 sshd[6575]: Connection closed by 10.200.16.10 port 35484 Sep 11 00:30:10.411797 sshd-session[6573]: pam_unix(sshd:session): session closed for user core Sep 11 00:30:10.414547 systemd[1]: sshd@23-10.200.8.4:22-10.200.16.10:35484.service: Deactivated successfully. Sep 11 00:30:10.416215 systemd[1]: session-26.scope: Deactivated successfully. Sep 11 00:30:10.416855 systemd-logind[1706]: Session 26 logged out. Waiting for processes to exit. Sep 11 00:30:10.418060 systemd-logind[1706]: Removed session 26. Sep 11 00:30:12.944638 containerd[1747]: time="2025-09-11T00:30:12.944599966Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1b25a9188689ccf61661536b56ca66d2f88f57af69274c24773c3d2bd6b33069\" id:\"806b439d6b9486840766fb54f979a87746ad4ce97481fbec91d7bc7751a20a39\" pid:6600 exited_at:{seconds:1757550612 nanos:944159673}"