Sep 9 05:40:16.069864 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Sep 9 03:39:34 -00 2025 Sep 9 05:40:16.069894 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=107bc9be805328e5e30844239fa87d36579f371e3de2c34fec43f6ff6d17b104 Sep 9 05:40:16.069905 kernel: BIOS-provided physical RAM map: Sep 9 05:40:16.069912 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 9 05:40:16.069919 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Sep 9 05:40:16.069926 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable Sep 9 05:40:16.069935 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved Sep 9 05:40:16.069944 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable Sep 9 05:40:16.069952 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved Sep 9 05:40:16.069960 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Sep 9 05:40:16.069968 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Sep 9 05:40:16.069976 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Sep 9 05:40:16.069983 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Sep 9 05:40:16.069991 kernel: printk: legacy bootconsole [earlyser0] enabled Sep 9 05:40:16.070002 kernel: NX (Execute Disable) protection: active Sep 9 05:40:16.070010 kernel: APIC: Static calls initialized Sep 9 05:40:16.070016 kernel: efi: EFI v2.7 by Microsoft Sep 9 05:40:16.070024 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 
SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3eab5518 RNG=0x3ffd2018 Sep 9 05:40:16.070032 kernel: random: crng init done Sep 9 05:40:16.070040 kernel: secureboot: Secure boot disabled Sep 9 05:40:16.070047 kernel: SMBIOS 3.1.0 present. Sep 9 05:40:16.070055 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/28/2025 Sep 9 05:40:16.070063 kernel: DMI: Memory slots populated: 2/2 Sep 9 05:40:16.070072 kernel: Hypervisor detected: Microsoft Hyper-V Sep 9 05:40:16.070080 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2 Sep 9 05:40:16.070086 kernel: Hyper-V: Nested features: 0x3e0101 Sep 9 05:40:16.070093 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Sep 9 05:40:16.070100 kernel: Hyper-V: Using hypercall for remote TLB flush Sep 9 05:40:16.070108 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Sep 9 05:40:16.070114 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Sep 9 05:40:16.070121 kernel: tsc: Detected 2300.000 MHz processor Sep 9 05:40:16.070128 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 9 05:40:16.070137 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 9 05:40:16.070145 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000 Sep 9 05:40:16.070154 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Sep 9 05:40:16.070162 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 9 05:40:16.070170 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved Sep 9 05:40:16.070178 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000 Sep 9 05:40:16.070185 kernel: Using GB pages for direct mapping Sep 9 05:40:16.070193 kernel: ACPI: Early table checksum verification disabled Sep 9 05:40:16.070204 kernel: ACPI: RSDP 
0x000000003FFFA014 000024 (v02 VRTUAL) Sep 9 05:40:16.070213 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 9 05:40:16.070221 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 9 05:40:16.070229 kernel: ACPI: DSDT 0x000000003FFD6000 01E27A (v02 MSFTVM DSDT01 00000001 INTL 20230628) Sep 9 05:40:16.070237 kernel: ACPI: FACS 0x000000003FFFE000 000040 Sep 9 05:40:16.070245 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 9 05:40:16.070253 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 9 05:40:16.070263 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 9 05:40:16.070271 kernel: ACPI: APIC 0x000000003FFD5000 000052 (v05 HVLITE HVLITETB 00000000 MSHV 00000000) Sep 9 05:40:16.070279 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000) Sep 9 05:40:16.070287 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 9 05:40:16.070295 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Sep 9 05:40:16.070303 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4279] Sep 9 05:40:16.070311 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Sep 9 05:40:16.070318 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Sep 9 05:40:16.070326 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Sep 9 05:40:16.070335 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Sep 9 05:40:16.070343 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5051] Sep 9 05:40:16.070351 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f] Sep 9 05:40:16.070358 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Sep 9 05:40:16.070366 kernel: 
ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Sep 9 05:40:16.070374 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] Sep 9 05:40:16.070382 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff] Sep 9 05:40:16.070391 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff] Sep 9 05:40:16.070399 kernel: Zone ranges: Sep 9 05:40:16.070408 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 9 05:40:16.070417 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Sep 9 05:40:16.070425 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Sep 9 05:40:16.070432 kernel: Device empty Sep 9 05:40:16.070440 kernel: Movable zone start for each node Sep 9 05:40:16.070448 kernel: Early memory node ranges Sep 9 05:40:16.070456 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Sep 9 05:40:16.070464 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff] Sep 9 05:40:16.070472 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff] Sep 9 05:40:16.070481 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Sep 9 05:40:16.070489 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Sep 9 05:40:16.070497 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Sep 9 05:40:16.070505 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 9 05:40:16.070513 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 9 05:40:16.070521 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Sep 9 05:40:16.070529 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges Sep 9 05:40:16.070537 kernel: ACPI: PM-Timer IO Port: 0x408 Sep 9 05:40:16.070545 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 9 05:40:16.070555 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 9 05:40:16.070563 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 9 05:40:16.070571 kernel: 
ACPI: SPCR: console: uart,io,0x3f8,115200 Sep 9 05:40:16.070579 kernel: TSC deadline timer available Sep 9 05:40:16.070586 kernel: CPU topo: Max. logical packages: 1 Sep 9 05:40:16.070594 kernel: CPU topo: Max. logical dies: 1 Sep 9 05:40:16.070602 kernel: CPU topo: Max. dies per package: 1 Sep 9 05:40:16.070609 kernel: CPU topo: Max. threads per core: 2 Sep 9 05:40:16.070617 kernel: CPU topo: Num. cores per package: 1 Sep 9 05:40:16.070631 kernel: CPU topo: Num. threads per package: 2 Sep 9 05:40:16.070639 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Sep 9 05:40:16.070647 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Sep 9 05:40:16.070655 kernel: Booting paravirtualized kernel on Hyper-V Sep 9 05:40:16.070664 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 9 05:40:16.070671 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Sep 9 05:40:16.070679 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Sep 9 05:40:16.070687 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Sep 9 05:40:16.070695 kernel: pcpu-alloc: [0] 0 1 Sep 9 05:40:16.070704 kernel: Hyper-V: PV spinlocks enabled Sep 9 05:40:16.070713 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 9 05:40:16.070722 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=107bc9be805328e5e30844239fa87d36579f371e3de2c34fec43f6ff6d17b104 Sep 9 05:40:16.070731 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Sep 9 05:40:16.070739 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Sep 9 05:40:16.070747 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 9 05:40:16.070755 kernel: Fallback order for Node 0: 0 Sep 9 05:40:16.070763 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807 Sep 9 05:40:16.070773 kernel: Policy zone: Normal Sep 9 05:40:16.070780 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 9 05:40:16.070788 kernel: software IO TLB: area num 2. Sep 9 05:40:16.070796 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 9 05:40:16.070818 kernel: ftrace: allocating 40102 entries in 157 pages Sep 9 05:40:16.070826 kernel: ftrace: allocated 157 pages with 5 groups Sep 9 05:40:16.070834 kernel: Dynamic Preempt: voluntary Sep 9 05:40:16.070842 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 9 05:40:16.070851 kernel: rcu: RCU event tracing is enabled. Sep 9 05:40:16.070867 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 9 05:40:16.070875 kernel: Trampoline variant of Tasks RCU enabled. Sep 9 05:40:16.070884 kernel: Rude variant of Tasks RCU enabled. Sep 9 05:40:16.070894 kernel: Tracing variant of Tasks RCU enabled. Sep 9 05:40:16.070903 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 9 05:40:16.070912 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 9 05:40:16.070920 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 9 05:40:16.070929 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 9 05:40:16.070938 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Sep 9 05:40:16.070947 kernel: Using NULL legacy PIC Sep 9 05:40:16.070957 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Sep 9 05:40:16.070965 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 9 05:40:16.070974 kernel: Console: colour dummy device 80x25 Sep 9 05:40:16.070982 kernel: printk: legacy console [tty1] enabled Sep 9 05:40:16.070991 kernel: printk: legacy console [ttyS0] enabled Sep 9 05:40:16.070999 kernel: printk: legacy bootconsole [earlyser0] disabled Sep 9 05:40:16.071008 kernel: ACPI: Core revision 20240827 Sep 9 05:40:16.071018 kernel: Failed to register legacy timer interrupt Sep 9 05:40:16.071027 kernel: APIC: Switch to symmetric I/O mode setup Sep 9 05:40:16.071035 kernel: x2apic enabled Sep 9 05:40:16.071044 kernel: APIC: Switched APIC routing to: physical x2apic Sep 9 05:40:16.071052 kernel: Hyper-V: Host Build 10.0.26100.1293-1-0 Sep 9 05:40:16.071061 kernel: Hyper-V: enabling crash_kexec_post_notifiers Sep 9 05:40:16.071069 kernel: Hyper-V: Disabling IBT because of Hyper-V bug Sep 9 05:40:16.071078 kernel: Hyper-V: Using IPI hypercalls Sep 9 05:40:16.071086 kernel: APIC: send_IPI() replaced with hv_send_ipi() Sep 9 05:40:16.071096 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Sep 9 05:40:16.071105 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Sep 9 05:40:16.071113 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Sep 9 05:40:16.071122 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Sep 9 05:40:16.071131 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Sep 9 05:40:16.071139 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns Sep 9 05:40:16.071148 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
4600.00 BogoMIPS (lpj=2300000) Sep 9 05:40:16.071157 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 9 05:40:16.071165 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Sep 9 05:40:16.071175 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Sep 9 05:40:16.071184 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 9 05:40:16.071193 kernel: Spectre V2 : Mitigation: Retpolines Sep 9 05:40:16.071201 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 9 05:40:16.071210 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Sep 9 05:40:16.071219 kernel: RETBleed: Vulnerable Sep 9 05:40:16.071228 kernel: Speculative Store Bypass: Vulnerable Sep 9 05:40:16.071236 kernel: active return thunk: its_return_thunk Sep 9 05:40:16.071244 kernel: ITS: Mitigation: Aligned branch/return thunks Sep 9 05:40:16.071253 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 9 05:40:16.071261 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 9 05:40:16.071271 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 9 05:40:16.071279 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Sep 9 05:40:16.071287 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Sep 9 05:40:16.071296 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Sep 9 05:40:16.071304 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers' Sep 9 05:40:16.071312 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config' Sep 9 05:40:16.071321 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data' Sep 9 05:40:16.071329 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 9 05:40:16.071337 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Sep 9 05:40:16.071346 kernel: x86/fpu: 
xstate_offset[6]: 896, xstate_sizes[6]: 512 Sep 9 05:40:16.071356 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Sep 9 05:40:16.071364 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16 Sep 9 05:40:16.071372 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64 Sep 9 05:40:16.071381 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192 Sep 9 05:40:16.071389 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format. Sep 9 05:40:16.071397 kernel: Freeing SMP alternatives memory: 32K Sep 9 05:40:16.071405 kernel: pid_max: default: 32768 minimum: 301 Sep 9 05:40:16.071414 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 9 05:40:16.071422 kernel: landlock: Up and running. Sep 9 05:40:16.071430 kernel: SELinux: Initializing. Sep 9 05:40:16.071438 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 9 05:40:16.071447 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 9 05:40:16.071457 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2) Sep 9 05:40:16.071466 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only. Sep 9 05:40:16.071475 kernel: signal: max sigframe size: 11952 Sep 9 05:40:16.071484 kernel: rcu: Hierarchical SRCU implementation. Sep 9 05:40:16.071494 kernel: rcu: Max phase no-delay instances is 400. Sep 9 05:40:16.071503 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 9 05:40:16.071512 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 9 05:40:16.071520 kernel: smp: Bringing up secondary CPUs ... Sep 9 05:40:16.071529 kernel: smpboot: x86: Booting SMP configuration: Sep 9 05:40:16.071539 kernel: .... 
node #0, CPUs: #1 Sep 9 05:40:16.071548 kernel: smp: Brought up 1 node, 2 CPUs Sep 9 05:40:16.071559 kernel: smpboot: Total of 2 processors activated (9200.00 BogoMIPS) Sep 9 05:40:16.071570 kernel: Memory: 8077032K/8383228K available (14336K kernel code, 2428K rwdata, 9988K rodata, 54076K init, 2892K bss, 299988K reserved, 0K cma-reserved) Sep 9 05:40:16.071580 kernel: devtmpfs: initialized Sep 9 05:40:16.071590 kernel: x86/mm: Memory block size: 128MB Sep 9 05:40:16.071600 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Sep 9 05:40:16.071610 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 9 05:40:16.071619 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 9 05:40:16.071630 kernel: pinctrl core: initialized pinctrl subsystem Sep 9 05:40:16.071640 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 9 05:40:16.071649 kernel: audit: initializing netlink subsys (disabled) Sep 9 05:40:16.071659 kernel: audit: type=2000 audit(1757396412.077:1): state=initialized audit_enabled=0 res=1 Sep 9 05:40:16.071669 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 9 05:40:16.071679 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 9 05:40:16.071689 kernel: cpuidle: using governor menu Sep 9 05:40:16.071698 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 9 05:40:16.071708 kernel: dca service started, version 1.12.1 Sep 9 05:40:16.071721 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff] Sep 9 05:40:16.071730 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff] Sep 9 05:40:16.071740 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 9 05:40:16.071749 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 9 05:40:16.071758 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 9 05:40:16.071768 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 9 05:40:16.071777 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 9 05:40:16.071786 kernel: ACPI: Added _OSI(Module Device) Sep 9 05:40:16.071795 kernel: ACPI: Added _OSI(Processor Device) Sep 9 05:40:16.071817 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 9 05:40:16.071826 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 9 05:40:16.071835 kernel: ACPI: Interpreter enabled Sep 9 05:40:16.071844 kernel: ACPI: PM: (supports S0 S5) Sep 9 05:40:16.071853 kernel: ACPI: Using IOAPIC for interrupt routing Sep 9 05:40:16.071862 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 9 05:40:16.071872 kernel: PCI: Ignoring E820 reservations for host bridge windows Sep 9 05:40:16.071883 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Sep 9 05:40:16.071891 kernel: iommu: Default domain type: Translated Sep 9 05:40:16.071905 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 9 05:40:16.071915 kernel: efivars: Registered efivars operations Sep 9 05:40:16.071925 kernel: PCI: Using ACPI for IRQ routing Sep 9 05:40:16.071934 kernel: PCI: System does not support PCI Sep 9 05:40:16.071944 kernel: vgaarb: loaded Sep 9 05:40:16.071954 kernel: clocksource: Switched to clocksource tsc-early Sep 9 05:40:16.071963 kernel: VFS: Disk quotas dquot_6.6.0 Sep 9 05:40:16.071973 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 9 05:40:16.071982 kernel: pnp: PnP ACPI init Sep 9 05:40:16.071994 kernel: pnp: PnP ACPI: found 3 devices Sep 9 05:40:16.072003 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 9 05:40:16.072012 kernel: NET: 
Registered PF_INET protocol family Sep 9 05:40:16.072021 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 9 05:40:16.072031 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Sep 9 05:40:16.072039 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 9 05:40:16.072048 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 9 05:40:16.072057 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Sep 9 05:40:16.072066 kernel: TCP: Hash tables configured (established 65536 bind 65536) Sep 9 05:40:16.072078 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Sep 9 05:40:16.072088 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Sep 9 05:40:16.072097 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 9 05:40:16.072105 kernel: NET: Registered PF_XDP protocol family Sep 9 05:40:16.072114 kernel: PCI: CLS 0 bytes, default 64 Sep 9 05:40:16.072398 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Sep 9 05:40:16.072409 kernel: software IO TLB: mapped [mem 0x000000003a9c6000-0x000000003e9c6000] (64MB) Sep 9 05:40:16.072419 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer Sep 9 05:40:16.074833 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules Sep 9 05:40:16.074853 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns Sep 9 05:40:16.074862 kernel: clocksource: Switched to clocksource tsc Sep 9 05:40:16.074869 kernel: Initialise system trusted keyrings Sep 9 05:40:16.074879 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Sep 9 05:40:16.074888 kernel: Key type asymmetric registered Sep 9 05:40:16.074896 kernel: Asymmetric key parser 'x509' registered Sep 9 05:40:16.074905 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 
250) Sep 9 05:40:16.074913 kernel: io scheduler mq-deadline registered Sep 9 05:40:16.074922 kernel: io scheduler kyber registered Sep 9 05:40:16.074933 kernel: io scheduler bfq registered Sep 9 05:40:16.074942 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 9 05:40:16.074951 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 9 05:40:16.074959 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 9 05:40:16.074968 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Sep 9 05:40:16.074977 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A Sep 9 05:40:16.074986 kernel: i8042: PNP: No PS/2 controller found. Sep 9 05:40:16.075128 kernel: rtc_cmos 00:02: registered as rtc0 Sep 9 05:40:16.075208 kernel: rtc_cmos 00:02: setting system clock to 2025-09-09T05:40:15 UTC (1757396415) Sep 9 05:40:16.075279 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Sep 9 05:40:16.075289 kernel: intel_pstate: Intel P-state driver initializing Sep 9 05:40:16.075298 kernel: efifb: probing for efifb Sep 9 05:40:16.075307 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Sep 9 05:40:16.075316 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Sep 9 05:40:16.075325 kernel: efifb: scrolling: redraw Sep 9 05:40:16.075333 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 9 05:40:16.075344 kernel: Console: switching to colour frame buffer device 128x48 Sep 9 05:40:16.075353 kernel: fb0: EFI VGA frame buffer device Sep 9 05:40:16.075362 kernel: pstore: Using crash dump compression: deflate Sep 9 05:40:16.075371 kernel: pstore: Registered efi_pstore as persistent store backend Sep 9 05:40:16.075380 kernel: NET: Registered PF_INET6 protocol family Sep 9 05:40:16.075388 kernel: Segment Routing with IPv6 Sep 9 05:40:16.075408 kernel: In-situ OAM (IOAM) with IPv6 Sep 9 05:40:16.075416 kernel: NET: Registered PF_PACKET protocol family Sep 9 
05:40:16.075425 kernel: Key type dns_resolver registered Sep 9 05:40:16.075434 kernel: IPI shorthand broadcast: enabled Sep 9 05:40:16.075445 kernel: sched_clock: Marking stable (3439004466, 139341207)->(4071780468, -493434795) Sep 9 05:40:16.075454 kernel: registered taskstats version 1 Sep 9 05:40:16.075462 kernel: Loading compiled-in X.509 certificates Sep 9 05:40:16.075472 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: 884b9ad6a330f59ae6e6488b20a5491e41ff24a3' Sep 9 05:40:16.075480 kernel: Demotion targets for Node 0: null Sep 9 05:40:16.075489 kernel: Key type .fscrypt registered Sep 9 05:40:16.075498 kernel: Key type fscrypt-provisioning registered Sep 9 05:40:16.075507 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 9 05:40:16.075517 kernel: ima: Allocated hash algorithm: sha1 Sep 9 05:40:16.075526 kernel: ima: No architecture policies found Sep 9 05:40:16.075535 kernel: clk: Disabling unused clocks Sep 9 05:40:16.075543 kernel: Warning: unable to open an initial console. Sep 9 05:40:16.075552 kernel: Freeing unused kernel image (initmem) memory: 54076K Sep 9 05:40:16.075561 kernel: Write protecting the kernel read-only data: 24576k Sep 9 05:40:16.075570 kernel: Freeing unused kernel image (rodata/data gap) memory: 252K Sep 9 05:40:16.075579 kernel: Run /init as init process Sep 9 05:40:16.075587 kernel: with arguments: Sep 9 05:40:16.075598 kernel: /init Sep 9 05:40:16.075606 kernel: with environment: Sep 9 05:40:16.075639 kernel: HOME=/ Sep 9 05:40:16.075648 kernel: TERM=linux Sep 9 05:40:16.075657 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 9 05:40:16.075667 systemd[1]: Successfully made /usr/ read-only. 
Sep 9 05:40:16.075680 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 05:40:16.075691 systemd[1]: Detected virtualization microsoft. Sep 9 05:40:16.075702 systemd[1]: Detected architecture x86-64. Sep 9 05:40:16.075711 systemd[1]: Running in initrd. Sep 9 05:40:16.075719 systemd[1]: No hostname configured, using default hostname. Sep 9 05:40:16.075728 systemd[1]: Hostname set to . Sep 9 05:40:16.075737 systemd[1]: Initializing machine ID from random generator. Sep 9 05:40:16.075746 systemd[1]: Queued start job for default target initrd.target. Sep 9 05:40:16.075756 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 05:40:16.075765 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 05:40:16.075777 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 9 05:40:16.075787 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 05:40:16.075797 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 9 05:40:16.076106 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 9 05:40:16.076117 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 9 05:40:16.076128 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 9 05:40:16.076137 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Sep 9 05:40:16.076149 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 05:40:16.076158 systemd[1]: Reached target paths.target - Path Units. Sep 9 05:40:16.076167 systemd[1]: Reached target slices.target - Slice Units. Sep 9 05:40:16.076177 systemd[1]: Reached target swap.target - Swaps. Sep 9 05:40:16.076186 systemd[1]: Reached target timers.target - Timer Units. Sep 9 05:40:16.076195 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 05:40:16.076204 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 05:40:16.076214 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 9 05:40:16.076225 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 9 05:40:16.076234 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 05:40:16.076244 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 05:40:16.076253 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 05:40:16.076262 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 05:40:16.076270 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 9 05:40:16.076279 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 05:40:16.076288 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 9 05:40:16.076298 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 9 05:40:16.076310 systemd[1]: Starting systemd-fsck-usr.service... Sep 9 05:40:16.076319 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 05:40:16.076340 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Sep 9 05:40:16.076351 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 05:40:16.076379 systemd-journald[205]: Collecting audit messages is disabled. Sep 9 05:40:16.076404 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 9 05:40:16.076415 systemd-journald[205]: Journal started Sep 9 05:40:16.076441 systemd-journald[205]: Runtime Journal (/run/log/journal/abd5ca3be0324f03aedee4ab7144fc8f) is 8M, max 158.9M, 150.9M free. Sep 9 05:40:16.079020 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 05:40:16.079081 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 05:40:16.079668 systemd[1]: Finished systemd-fsck-usr.service. Sep 9 05:40:16.081932 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 9 05:40:16.084912 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 05:40:16.089546 systemd-modules-load[206]: Inserted module 'overlay' Sep 9 05:40:16.103378 systemd-tmpfiles[215]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 9 05:40:16.105012 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 05:40:16.106203 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 05:40:16.117513 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 05:40:16.126603 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 05:40:16.132058 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 9 05:40:16.132403 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Sep 9 05:40:16.139836 kernel: Bridge firewalling registered
Sep 9 05:40:16.138548 systemd-modules-load[206]: Inserted module 'br_netfilter'
Sep 9 05:40:16.140283 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 05:40:16.142255 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 05:40:16.156177 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 05:40:16.165761 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 05:40:16.167931 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 05:40:16.176927 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 05:40:16.180516 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 9 05:40:16.205316 systemd-resolved[239]: Positive Trust Anchors:
Sep 9 05:40:16.205646 systemd-resolved[239]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 05:40:16.205685 systemd-resolved[239]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 05:40:16.208730 systemd-resolved[239]: Defaulting to hostname 'linux'.
Sep 9 05:40:16.222894 dracut-cmdline[246]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=107bc9be805328e5e30844239fa87d36579f371e3de2c34fec43f6ff6d17b104
Sep 9 05:40:16.210022 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 05:40:16.222841 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 05:40:16.291828 kernel: SCSI subsystem initialized
Sep 9 05:40:16.299819 kernel: Loading iSCSI transport class v2.0-870.
Sep 9 05:40:16.309825 kernel: iscsi: registered transport (tcp)
Sep 9 05:40:16.327823 kernel: iscsi: registered transport (qla4xxx)
Sep 9 05:40:16.327862 kernel: QLogic iSCSI HBA Driver
Sep 9 05:40:16.340962 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 05:40:16.361176 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 05:40:16.369922 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 05:40:16.397162 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 9 05:40:16.399924 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 9 05:40:16.449827 kernel: raid6: avx512x4 gen() 44069 MB/s
Sep 9 05:40:16.466815 kernel: raid6: avx512x2 gen() 43313 MB/s
Sep 9 05:40:16.483814 kernel: raid6: avx512x1 gen() 25072 MB/s
Sep 9 05:40:16.502814 kernel: raid6: avx2x4 gen() 34634 MB/s
Sep 9 05:40:16.519813 kernel: raid6: avx2x2 gen() 36404 MB/s
Sep 9 05:40:16.537297 kernel: raid6: avx2x1 gen() 30833 MB/s
Sep 9 05:40:16.537392 kernel: raid6: using algorithm avx512x4 gen() 44069 MB/s
Sep 9 05:40:16.557009 kernel: raid6: .... xor() 7378 MB/s, rmw enabled
Sep 9 05:40:16.557026 kernel: raid6: using avx512x2 recovery algorithm
Sep 9 05:40:16.574820 kernel: xor: automatically using best checksumming function avx
Sep 9 05:40:16.699831 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 9 05:40:16.704655 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 05:40:16.708171 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 05:40:16.733003 systemd-udevd[454]: Using default interface naming scheme 'v255'.
Sep 9 05:40:16.737093 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 05:40:16.744701 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 9 05:40:16.765354 dracut-pre-trigger[462]: rd.md=0: removing MD RAID activation
Sep 9 05:40:16.784158 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 05:40:16.787438 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 05:40:16.835293 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 05:40:16.841631 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 9 05:40:16.883820 kernel: cryptd: max_cpu_qlen set to 1000
Sep 9 05:40:16.907833 kernel: hv_vmbus: Vmbus version:5.3
Sep 9 05:40:16.917830 kernel: AES CTR mode by8 optimization enabled
Sep 9 05:40:16.920853 kernel: hv_vmbus: registering driver hyperv_keyboard
Sep 9 05:40:16.921154 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 05:40:16.921261 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 05:40:16.924917 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 05:40:16.936704 kernel: hv_vmbus: registering driver hv_pci
Sep 9 05:40:16.936737 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 9 05:40:16.936750 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 9 05:40:16.936028 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 05:40:16.946424 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Sep 9 05:40:16.953387 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004
Sep 9 05:40:16.955146 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 05:40:16.957507 kernel: PTP clock support registered
Sep 9 05:40:16.955682 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 05:40:16.962853 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00
Sep 9 05:40:16.968321 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window]
Sep 9 05:40:16.968494 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff]
Sep 9 05:40:16.972059 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 05:40:16.976656 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint
Sep 9 05:40:16.980183 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]
Sep 9 05:40:17.005566 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00
Sep 9 05:40:17.005757 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned
Sep 9 05:40:17.017827 kernel: hv_vmbus: registering driver hv_storvsc
Sep 9 05:40:17.020052 kernel: hv_utils: Registering HyperV Utility Driver
Sep 9 05:40:17.020097 kernel: hv_vmbus: registering driver hv_utils
Sep 9 05:40:17.023824 kernel: scsi host0: storvsc_host_t
Sep 9 05:40:17.026018 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 9 05:40:17.026049 kernel: hv_utils: Shutdown IC version 3.2
Sep 9 05:40:17.029153 kernel: hv_utils: Heartbeat IC version 3.0
Sep 9 05:40:17.029185 kernel: hv_vmbus: registering driver hv_netvsc
Sep 9 05:40:17.029196 kernel: hv_utils: TimeSync IC version 4.0
Sep 9 05:40:16.799805 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Sep 9 05:40:16.803231 systemd-journald[205]: Time jumped backwards, rotating.
Sep 9 05:40:16.802021 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 05:40:16.802313 systemd-resolved[239]: Clock change detected. Flushing caches.
Sep 9 05:40:16.813797 kernel: hv_netvsc f8615163-0000-1000-2000-000d3ada58e2 (unnamed net_device) (uninitialized): VF slot 1 added
Sep 9 05:40:16.828337 kernel: hv_vmbus: registering driver hid_hyperv
Sep 9 05:40:16.828370 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Sep 9 05:40:16.832848 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Sep 9 05:40:16.842005 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Sep 9 05:40:16.842195 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 9 05:40:16.844820 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Sep 9 05:40:16.852484 kernel: nvme nvme0: pci function c05b:00:00.0
Sep 9 05:40:16.852680 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002)
Sep 9 05:40:16.862848 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#261 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Sep 9 05:40:16.880812 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#283 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Sep 9 05:40:17.010813 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 9 05:40:17.015804 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 9 05:40:17.305806 kernel: nvme nvme0: using unchecked data buffer
Sep 9 05:40:17.527207 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
Sep 9 05:40:17.546180 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT.
Sep 9 05:40:17.598929 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM.
Sep 9 05:40:17.662249 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A.
Sep 9 05:40:17.663318 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A.
Sep 9 05:40:17.663556 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 9 05:40:17.664218 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 05:40:17.680161 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 05:40:17.680288 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 05:40:17.684085 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 9 05:40:17.695892 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 9 05:40:17.713796 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 9 05:40:17.721426 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 05:40:17.834798 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004
Sep 9 05:40:17.840801 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00
Sep 9 05:40:17.844797 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window]
Sep 9 05:40:17.848804 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff]
Sep 9 05:40:17.856792 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint
Sep 9 05:40:17.860856 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]
Sep 9 05:40:17.865827 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]
Sep 9 05:40:17.867950 kernel: pci 7870:00:00.0: enabling Extended Tags
Sep 9 05:40:17.885195 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00
Sep 9 05:40:17.885373 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned
Sep 9 05:40:17.889970 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned
Sep 9 05:40:17.894428 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002)
Sep 9 05:40:17.904806 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1
Sep 9 05:40:17.908192 kernel: hv_netvsc f8615163-0000-1000-2000-000d3ada58e2 eth0: VF registering: eth1
Sep 9 05:40:17.908900 kernel: mana 7870:00:00.0 eth1: joined to eth0
Sep 9 05:40:17.910797 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1
Sep 9 05:40:18.730549 disk-uuid[671]: The operation has completed successfully.
Sep 9 05:40:18.732356 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 9 05:40:18.790194 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 9 05:40:18.790292 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 9 05:40:18.825478 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 9 05:40:18.838989 sh[714]: Success
Sep 9 05:40:18.872802 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 9 05:40:18.872843 kernel: device-mapper: uevent: version 1.0.3
Sep 9 05:40:18.874802 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 9 05:40:18.882798 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Sep 9 05:40:19.172028 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 9 05:40:19.175275 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 9 05:40:19.186890 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 9 05:40:19.199796 kernel: BTRFS: device fsid 9ca60a92-6b53-4529-adc0-1f4392d2ad56 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (727)
Sep 9 05:40:19.202581 kernel: BTRFS info (device dm-0): first mount of filesystem 9ca60a92-6b53-4529-adc0-1f4392d2ad56
Sep 9 05:40:19.202621 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 9 05:40:19.531130 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 9 05:40:19.531224 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 9 05:40:19.532112 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 9 05:40:19.570833 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 9 05:40:19.573145 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 05:40:19.576894 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 9 05:40:19.579360 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 9 05:40:19.584930 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 9 05:40:19.607806 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (757)
Sep 9 05:40:19.609992 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b
Sep 9 05:40:19.612827 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 9 05:40:19.658909 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 05:40:19.660830 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 05:40:19.667736 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 9 05:40:19.668200 kernel: BTRFS info (device nvme0n1p6): turning on async discard
Sep 9 05:40:19.668558 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Sep 9 05:40:19.672816 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b
Sep 9 05:40:19.677902 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 9 05:40:19.683916 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 9 05:40:19.700641 systemd-networkd[890]: lo: Link UP
Sep 9 05:40:19.700651 systemd-networkd[890]: lo: Gained carrier
Sep 9 05:40:19.706523 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16
Sep 9 05:40:19.702134 systemd-networkd[890]: Enumeration completed
Sep 9 05:40:19.710330 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Sep 9 05:40:19.710542 kernel: hv_netvsc f8615163-0000-1000-2000-000d3ada58e2 eth0: Data path switched to VF: enP30832s1
Sep 9 05:40:19.702456 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 05:40:19.702551 systemd-networkd[890]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 05:40:19.702555 systemd-networkd[890]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 05:40:19.704068 systemd[1]: Reached target network.target - Network.
Sep 9 05:40:19.711391 systemd-networkd[890]: enP30832s1: Link UP
Sep 9 05:40:19.711459 systemd-networkd[890]: eth0: Link UP
Sep 9 05:40:19.711543 systemd-networkd[890]: eth0: Gained carrier
Sep 9 05:40:19.711554 systemd-networkd[890]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 05:40:19.715945 systemd-networkd[890]: enP30832s1: Gained carrier
Sep 9 05:40:19.726827 systemd-networkd[890]: eth0: DHCPv4 address 10.200.8.13/24, gateway 10.200.8.1 acquired from 168.63.129.16
Sep 9 05:40:20.745912 systemd-networkd[890]: eth0: Gained IPv6LL
Sep 9 05:40:20.756325 ignition[897]: Ignition 2.22.0
Sep 9 05:40:20.756336 ignition[897]: Stage: fetch-offline
Sep 9 05:40:20.758736 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 05:40:20.756451 ignition[897]: no configs at "/usr/lib/ignition/base.d"
Sep 9 05:40:20.761619 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 9 05:40:20.756460 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 9 05:40:20.756562 ignition[897]: parsed url from cmdline: ""
Sep 9 05:40:20.756565 ignition[897]: no config URL provided
Sep 9 05:40:20.756569 ignition[897]: reading system config file "/usr/lib/ignition/user.ign"
Sep 9 05:40:20.756575 ignition[897]: no config at "/usr/lib/ignition/user.ign"
Sep 9 05:40:20.756580 ignition[897]: failed to fetch config: resource requires networking
Sep 9 05:40:20.756826 ignition[897]: Ignition finished successfully
Sep 9 05:40:20.793291 ignition[905]: Ignition 2.22.0
Sep 9 05:40:20.793301 ignition[905]: Stage: fetch
Sep 9 05:40:20.793523 ignition[905]: no configs at "/usr/lib/ignition/base.d"
Sep 9 05:40:20.793531 ignition[905]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 9 05:40:20.793638 ignition[905]: parsed url from cmdline: ""
Sep 9 05:40:20.793642 ignition[905]: no config URL provided
Sep 9 05:40:20.793647 ignition[905]: reading system config file "/usr/lib/ignition/user.ign"
Sep 9 05:40:20.793654 ignition[905]: no config at "/usr/lib/ignition/user.ign"
Sep 9 05:40:20.793675 ignition[905]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Sep 9 05:40:20.858158 ignition[905]: GET result: OK
Sep 9 05:40:20.858236 ignition[905]: config has been read from IMDS userdata
Sep 9 05:40:20.858266 ignition[905]: parsing config with SHA512: d7edb1c6ce13b4037e020367472716d0c066a5a98bd46368d8584f5255263e2ab5a5c233b4c0b89f947536d54f7d08baf7d63aa5d3ee353e706fbd0359713c72
Sep 9 05:40:20.865002 unknown[905]: fetched base config from "system"
Sep 9 05:40:20.865327 ignition[905]: fetch: fetch complete
Sep 9 05:40:20.865011 unknown[905]: fetched base config from "system"
Sep 9 05:40:20.865331 ignition[905]: fetch: fetch passed
Sep 9 05:40:20.865017 unknown[905]: fetched user config from "azure"
Sep 9 05:40:20.865357 ignition[905]: Ignition finished successfully
Sep 9 05:40:20.868471 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 9 05:40:20.875546 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 9 05:40:20.901673 ignition[911]: Ignition 2.22.0
Sep 9 05:40:20.901684 ignition[911]: Stage: kargs
Sep 9 05:40:20.901933 ignition[911]: no configs at "/usr/lib/ignition/base.d"
Sep 9 05:40:20.901942 ignition[911]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 9 05:40:20.905401 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 9 05:40:20.902757 ignition[911]: kargs: kargs passed
Sep 9 05:40:20.908228 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 9 05:40:20.902806 ignition[911]: Ignition finished successfully
Sep 9 05:40:20.939660 ignition[917]: Ignition 2.22.0
Sep 9 05:40:20.939670 ignition[917]: Stage: disks
Sep 9 05:40:20.939901 ignition[917]: no configs at "/usr/lib/ignition/base.d"
Sep 9 05:40:20.941844 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 9 05:40:20.939909 ignition[917]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 9 05:40:20.945130 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 9 05:40:20.940663 ignition[917]: disks: disks passed
Sep 9 05:40:20.947608 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 9 05:40:20.940697 ignition[917]: Ignition finished successfully
Sep 9 05:40:20.950214 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 05:40:20.954570 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 05:40:20.957937 systemd[1]: Reached target basic.target - Basic System.
Sep 9 05:40:20.963234 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 9 05:40:21.030995 systemd-fsck[925]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
Sep 9 05:40:21.035496 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 9 05:40:21.041794 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 9 05:40:23.111074 kernel: EXT4-fs (nvme0n1p9): mounted filesystem d2d7815e-fa16-4396-ab9d-ac540c1d8856 r/w with ordered data mode. Quota mode: none.
Sep 9 05:40:23.111732 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 9 05:40:23.115335 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 9 05:40:23.147470 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 05:40:23.165172 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 9 05:40:23.170189 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 9 05:40:23.173879 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (934)
Sep 9 05:40:23.177283 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b
Sep 9 05:40:23.177394 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 9 05:40:23.177570 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 9 05:40:23.177608 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 05:40:23.189906 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 9 05:40:23.189928 kernel: BTRFS info (device nvme0n1p6): turning on async discard
Sep 9 05:40:23.189939 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Sep 9 05:40:23.188959 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 05:40:23.191883 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 9 05:40:23.195835 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 9 05:40:23.755894 coreos-metadata[936]: Sep 09 05:40:23.755 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Sep 9 05:40:23.760633 coreos-metadata[936]: Sep 09 05:40:23.760 INFO Fetch successful
Sep 9 05:40:23.762857 coreos-metadata[936]: Sep 09 05:40:23.761 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Sep 9 05:40:23.774603 coreos-metadata[936]: Sep 09 05:40:23.774 INFO Fetch successful
Sep 9 05:40:23.789254 coreos-metadata[936]: Sep 09 05:40:23.789 INFO wrote hostname ci-4452.0.0-n-23b47482b2 to /sysroot/etc/hostname
Sep 9 05:40:23.792527 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 9 05:40:24.014456 initrd-setup-root[965]: cut: /sysroot/etc/passwd: No such file or directory
Sep 9 05:40:24.070710 initrd-setup-root[972]: cut: /sysroot/etc/group: No such file or directory
Sep 9 05:40:24.088743 initrd-setup-root[979]: cut: /sysroot/etc/shadow: No such file or directory
Sep 9 05:40:24.093144 initrd-setup-root[986]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 9 05:40:25.149525 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 9 05:40:25.153021 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 9 05:40:25.165016 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 9 05:40:25.174886 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 9 05:40:25.180004 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b
Sep 9 05:40:25.200275 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 9 05:40:25.202328 ignition[1054]: INFO : Ignition 2.22.0
Sep 9 05:40:25.202328 ignition[1054]: INFO : Stage: mount
Sep 9 05:40:25.202328 ignition[1054]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 05:40:25.202328 ignition[1054]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 9 05:40:25.212885 ignition[1054]: INFO : mount: mount passed
Sep 9 05:40:25.212885 ignition[1054]: INFO : Ignition finished successfully
Sep 9 05:40:25.205517 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 9 05:40:25.211796 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 9 05:40:25.229907 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 05:40:25.251799 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1065)
Sep 9 05:40:25.251832 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b
Sep 9 05:40:25.253855 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 9 05:40:25.259955 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 9 05:40:25.259997 kernel: BTRFS info (device nvme0n1p6): turning on async discard
Sep 9 05:40:25.261040 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Sep 9 05:40:25.262876 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 05:40:25.291401 ignition[1081]: INFO : Ignition 2.22.0
Sep 9 05:40:25.291401 ignition[1081]: INFO : Stage: files
Sep 9 05:40:25.295885 ignition[1081]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 05:40:25.295885 ignition[1081]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 9 05:40:25.295885 ignition[1081]: DEBUG : files: compiled without relabeling support, skipping
Sep 9 05:40:25.338168 ignition[1081]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 9 05:40:25.338168 ignition[1081]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 9 05:40:25.391282 ignition[1081]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 9 05:40:25.394850 ignition[1081]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 9 05:40:25.394850 ignition[1081]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 9 05:40:25.393212 unknown[1081]: wrote ssh authorized keys file for user: core
Sep 9 05:40:25.435695 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 9 05:40:25.439842 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Sep 9 05:40:25.472475 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 9 05:40:25.515147 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 9 05:40:25.519872 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 9 05:40:25.519872 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 9 05:40:25.519872 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 05:40:25.519872 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 05:40:25.519872 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 05:40:25.519872 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 05:40:25.519872 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 05:40:25.519872 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 05:40:25.540815 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 05:40:25.540815 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 05:40:25.540815 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 9 05:40:25.540815 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 9 05:40:25.540815 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 9 05:40:25.540815 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Sep 9 05:40:26.037015 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 9 05:40:26.883823 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 9 05:40:26.883823 ignition[1081]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 9 05:40:26.950117 ignition[1081]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 05:40:26.961264 ignition[1081]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 05:40:26.961264 ignition[1081]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 9 05:40:26.969599 ignition[1081]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 9 05:40:26.969599 ignition[1081]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 9 05:40:26.969599 ignition[1081]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 05:40:26.969599 ignition[1081]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 05:40:26.969599 ignition[1081]: INFO : files: files passed
Sep 9 05:40:26.969599 ignition[1081]: INFO : Ignition finished successfully
Sep 9 05:40:26.964978 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 9 05:40:26.968756 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 9 05:40:26.986089 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 9 05:40:26.991081 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 9 05:40:26.991164 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 9 05:40:27.005693 initrd-setup-root-after-ignition[1112]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 05:40:27.005693 initrd-setup-root-after-ignition[1112]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 05:40:27.011915 initrd-setup-root-after-ignition[1116]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 05:40:27.011002 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 05:40:27.014981 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 9 05:40:27.018857 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 9 05:40:27.056055 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 9 05:40:27.056143 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 9 05:40:27.060170 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 9 05:40:27.062998 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 9 05:40:27.067986 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 9 05:40:27.069586 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 9 05:40:27.092276 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 05:40:27.096522 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 9 05:40:27.117996 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 9 05:40:27.118498 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 05:40:27.123806 systemd[1]: Stopped target timers.target - Timer Units.
Sep 9 05:40:27.126256 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 9 05:40:27.126720 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 05:40:27.132923 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 9 05:40:27.135044 systemd[1]: Stopped target basic.target - Basic System.
Sep 9 05:40:27.139664 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 9 05:40:27.142063 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 05:40:27.142517 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 9 05:40:27.147922 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 05:40:27.148579 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 9 05:40:27.154118 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 05:40:27.157441 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 9 05:40:27.160627 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 9 05:40:27.165969 systemd[1]: Stopped target swap.target - Swaps.
Sep 9 05:40:27.167119 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 9 05:40:27.168676 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 05:40:27.173933 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 9 05:40:27.174685 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 05:40:27.188868 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 9 05:40:27.189595 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 05:40:27.194539 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 9 05:40:27.194689 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 9 05:40:27.201988 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 9 05:40:27.202155 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 05:40:27.205111 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 9 05:40:27.206109 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 9 05:40:27.210214 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 9 05:40:27.210362 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 9 05:40:27.216470 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 9 05:40:27.218528 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 9 05:40:27.218667 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 05:40:27.219544 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 9 05:40:27.219898 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 9 05:40:27.220168 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 05:40:27.236006 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 9 05:40:27.236136 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 05:40:27.248861 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 9 05:40:27.250234 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 9 05:40:27.262284 ignition[1136]: INFO : Ignition 2.22.0
Sep 9 05:40:27.262284 ignition[1136]: INFO : Stage: umount
Sep 9 05:40:27.266648 ignition[1136]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 05:40:27.266648 ignition[1136]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 9 05:40:27.266648 ignition[1136]: INFO : umount: umount passed
Sep 9 05:40:27.266648 ignition[1136]: INFO : Ignition finished successfully
Sep 9 05:40:27.264734 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 9 05:40:27.264840 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 9 05:40:27.266548 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 9 05:40:27.266589 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 9 05:40:27.270386 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 9 05:40:27.271565 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 9 05:40:27.273999 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 9 05:40:27.274039 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 9 05:40:27.276409 systemd[1]: Stopped target network.target - Network.
Sep 9 05:40:27.281842 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 9 05:40:27.281888 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 05:40:27.285854 systemd[1]: Stopped target paths.target - Path Units.
Sep 9 05:40:27.289826 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 9 05:40:27.294827 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 05:40:27.296013 systemd[1]: Stopped target slices.target - Slice Units.
Sep 9 05:40:27.300286 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 9 05:40:27.304314 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 9 05:40:27.304518 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 05:40:27.306487 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 9 05:40:27.306872 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 05:40:27.309539 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 9 05:40:27.310574 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 9 05:40:27.313950 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 9 05:40:27.313980 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 9 05:40:27.317009 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 9 05:40:27.320306 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 9 05:40:27.326123 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 9 05:40:27.326526 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 9 05:40:27.326591 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 9 05:40:27.336057 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 9 05:40:27.336240 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 9 05:40:27.336314 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 9 05:40:27.338850 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 9 05:40:27.339039 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 9 05:40:27.339114 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 9 05:40:27.344589 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 9 05:40:27.345474 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 9 05:40:27.345503 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 05:40:27.350310 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 9 05:40:27.406874 kernel: hv_netvsc f8615163-0000-1000-2000-000d3ada58e2 eth0: Data path switched from VF: enP30832s1
Sep 9 05:40:27.407039 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Sep 9 05:40:27.350857 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 9 05:40:27.356549 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 9 05:40:27.360777 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 9 05:40:27.361401 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 05:40:27.365490 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 9 05:40:27.365761 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 9 05:40:27.368590 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 9 05:40:27.369090 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 9 05:40:27.371205 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 9 05:40:27.372032 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 05:40:27.377216 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 05:40:27.382189 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 9 05:40:27.382235 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 9 05:40:27.396273 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 9 05:40:27.396421 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 05:40:27.404081 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 9 05:40:27.404147 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 9 05:40:27.409889 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 9 05:40:27.409922 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 05:40:27.413858 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 9 05:40:27.413905 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 05:40:27.418825 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 9 05:40:27.418870 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 9 05:40:27.423674 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 9 05:40:27.423719 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 05:40:27.432365 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 9 05:40:27.438063 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 9 05:40:27.438467 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 05:40:27.446109 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 9 05:40:27.446324 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 05:40:27.456367 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 05:40:27.456412 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 05:40:27.460208 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 9 05:40:27.460256 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 9 05:40:27.460291 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 9 05:40:27.460586 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 9 05:40:27.460664 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 9 05:40:27.465922 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 9 05:40:27.465995 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 9 05:40:27.469319 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 9 05:40:27.473995 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 9 05:40:27.514792 systemd[1]: Switching root.
Sep 9 05:40:27.593652 systemd-journald[205]: Journal stopped
Sep 9 05:40:34.701748 systemd-journald[205]: Received SIGTERM from PID 1 (systemd).
Sep 9 05:40:34.703927 kernel: SELinux: policy capability network_peer_controls=1
Sep 9 05:40:34.703954 kernel: SELinux: policy capability open_perms=1
Sep 9 05:40:34.703964 kernel: SELinux: policy capability extended_socket_class=1
Sep 9 05:40:34.703973 kernel: SELinux: policy capability always_check_network=0
Sep 9 05:40:34.703982 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 9 05:40:34.703992 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 9 05:40:34.704006 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 9 05:40:34.704014 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 9 05:40:34.704023 kernel: SELinux: policy capability userspace_initial_context=0
Sep 9 05:40:34.704032 kernel: audit: type=1403 audit(1757396428.791:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 9 05:40:34.704045 systemd[1]: Successfully loaded SELinux policy in 175.326ms.
Sep 9 05:40:34.704057 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.200ms.
Sep 9 05:40:34.704068 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 05:40:34.704082 systemd[1]: Detected virtualization microsoft.
Sep 9 05:40:34.704092 systemd[1]: Detected architecture x86-64.
Sep 9 05:40:34.704101 systemd[1]: Detected first boot.
Sep 9 05:40:34.704111 systemd[1]: Hostname set to .
Sep 9 05:40:34.704123 systemd[1]: Initializing machine ID from random generator.
Sep 9 05:40:34.704134 zram_generator::config[1180]: No configuration found.
Sep 9 05:40:34.704145 kernel: Guest personality initialized and is inactive
Sep 9 05:40:34.704155 kernel: VMCI host device registered (name=vmci, major=10, minor=124)
Sep 9 05:40:34.704164 kernel: Initialized host personality
Sep 9 05:40:34.704173 kernel: NET: Registered PF_VSOCK protocol family
Sep 9 05:40:34.704183 systemd[1]: Populated /etc with preset unit settings.
Sep 9 05:40:34.704197 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 9 05:40:34.704208 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 9 05:40:34.704219 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 9 05:40:34.704230 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 9 05:40:34.704240 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 9 05:40:34.704252 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 9 05:40:34.704262 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 9 05:40:34.704273 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 9 05:40:34.704286 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 9 05:40:34.704298 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 9 05:40:34.704308 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 9 05:40:34.704318 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 9 05:40:34.704328 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 05:40:34.704338 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 05:40:34.704348 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 9 05:40:34.704362 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 9 05:40:34.704375 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 9 05:40:34.704386 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 05:40:34.704396 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 9 05:40:34.704406 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 05:40:34.704416 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 05:40:34.704427 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 9 05:40:34.704437 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 9 05:40:34.704449 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 9 05:40:34.704460 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 9 05:40:34.704470 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 05:40:34.704480 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 05:40:34.704490 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 05:40:34.704500 systemd[1]: Reached target swap.target - Swaps.
Sep 9 05:40:34.704510 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 9 05:40:34.704522 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 9 05:40:34.704536 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 9 05:40:34.704547 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 05:40:34.704557 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 05:40:34.704568 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 05:40:34.704579 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 9 05:40:34.704592 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 9 05:40:34.704603 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 9 05:40:34.704613 systemd[1]: Mounting media.mount - External Media Directory...
Sep 9 05:40:34.704625 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 05:40:34.704636 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 9 05:40:34.704647 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 9 05:40:34.704658 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 9 05:40:34.704669 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 9 05:40:34.704681 systemd[1]: Reached target machines.target - Containers.
Sep 9 05:40:34.704694 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 9 05:40:34.704705 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 05:40:34.704716 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 05:40:34.704727 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 9 05:40:34.704738 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 05:40:34.704749 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 05:40:34.704761 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 05:40:34.704772 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 9 05:40:34.704815 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 05:40:34.704828 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 9 05:40:34.704838 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 9 05:40:34.704850 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 9 05:40:34.704861 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 9 05:40:34.704871 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 9 05:40:34.704883 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 05:40:34.704894 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 05:40:34.704907 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 05:40:34.704918 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 05:40:34.704929 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 9 05:40:34.704940 kernel: fuse: init (API version 7.41)
Sep 9 05:40:34.704951 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 9 05:40:34.704962 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 05:40:34.704973 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 9 05:40:34.704983 systemd[1]: Stopped verity-setup.service.
Sep 9 05:40:34.704994 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 05:40:34.705007 kernel: loop: module loaded
Sep 9 05:40:34.705018 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 9 05:40:34.705030 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 9 05:40:34.705041 systemd[1]: Mounted media.mount - External Media Directory.
Sep 9 05:40:34.705079 systemd-journald[1263]: Collecting audit messages is disabled.
Sep 9 05:40:34.705107 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 9 05:40:34.705118 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 9 05:40:34.705130 systemd-journald[1263]: Journal started
Sep 9 05:40:34.705156 systemd-journald[1263]: Runtime Journal (/run/log/journal/1668b152223b42cf94a9674a113c2279) is 8M, max 158.9M, 150.9M free.
Sep 9 05:40:34.236763 systemd[1]: Queued start job for default target multi-user.target.
Sep 9 05:40:34.244327 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Sep 9 05:40:34.244647 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 9 05:40:34.710997 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 05:40:34.713232 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 9 05:40:34.716051 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 9 05:40:34.719182 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 05:40:34.722116 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 9 05:40:34.722288 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 9 05:40:34.725161 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 05:40:34.725314 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 05:40:34.728001 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 05:40:34.728141 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 05:40:34.729531 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 9 05:40:34.729670 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 9 05:40:34.733004 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 05:40:34.733142 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 05:40:34.736048 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 05:40:34.739041 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 9 05:40:34.747157 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 05:40:34.751884 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 9 05:40:34.757859 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 9 05:40:34.759343 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 9 05:40:34.759374 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 05:40:34.764507 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 9 05:40:34.768922 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 9 05:40:34.770839 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 05:40:34.773899 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 9 05:40:34.779898 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 9 05:40:34.782893 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 05:40:34.785107 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 9 05:40:34.787025 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 05:40:34.788892 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 9 05:40:34.792957 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 9 05:40:34.797846 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 05:40:34.799942 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 9 05:40:34.802763 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 9 05:40:34.805372 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 9 05:40:34.815905 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 05:40:34.826974 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 05:40:34.841479 systemd-journald[1263]: Time spent on flushing to /var/log/journal/1668b152223b42cf94a9674a113c2279 is 35.712ms for 989 entries.
Sep 9 05:40:34.841479 systemd-journald[1263]: System Journal (/var/log/journal/1668b152223b42cf94a9674a113c2279) is 11.9M, max 2.6G, 2.6G free.
Sep 9 05:40:34.934297 systemd-journald[1263]: Received client request to flush runtime journal.
Sep 9 05:40:34.934342 kernel: ACPI: bus type drm_connector registered
Sep 9 05:40:34.934361 systemd-journald[1263]: /var/log/journal/1668b152223b42cf94a9674a113c2279/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
Sep 9 05:40:34.934384 systemd-journald[1263]: Rotating system journal.
Sep 9 05:40:34.934406 kernel: loop0: detected capacity change from 0 to 110984
Sep 9 05:40:34.866464 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 05:40:34.866609 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 05:40:34.868113 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 9 05:40:34.870342 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 9 05:40:34.873921 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 9 05:40:34.935854 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 9 05:40:34.947993 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 05:40:34.978710 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 9 05:40:35.245920 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 9 05:40:35.446813 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 9 05:40:35.476484 kernel: loop1: detected capacity change from 0 to 229808
Sep 9 05:40:35.528807 kernel: loop2: detected capacity change from 0 to 27936
Sep 9 05:40:35.537097 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 9 05:40:35.540945 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 05:40:35.686446 systemd-tmpfiles[1340]: ACLs are not supported, ignoring.
Sep 9 05:40:35.686464 systemd-tmpfiles[1340]: ACLs are not supported, ignoring.
Sep 9 05:40:35.690184 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 05:40:36.103807 kernel: loop3: detected capacity change from 0 to 128016
Sep 9 05:40:36.515375 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 9 05:40:36.520091 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 05:40:36.548646 systemd-udevd[1346]: Using default interface naming scheme 'v255'.
Sep 9 05:40:36.583803 kernel: loop4: detected capacity change from 0 to 110984
Sep 9 05:40:36.606814 kernel: loop5: detected capacity change from 0 to 229808
Sep 9 05:40:36.623834 kernel: loop6: detected capacity change from 0 to 27936
Sep 9 05:40:36.634812 kernel: loop7: detected capacity change from 0 to 128016
Sep 9 05:40:36.644229 (sd-merge)[1348]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Sep 9 05:40:36.644598 (sd-merge)[1348]: Merged extensions into '/usr'.
Sep 9 05:40:36.648173 systemd[1]: Reload requested from client PID 1317 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 9 05:40:36.648188 systemd[1]: Reloading...
Sep 9 05:40:36.709279 zram_generator::config[1376]: No configuration found.
Sep 9 05:40:36.886234 systemd[1]: Reloading finished in 237 ms.
Sep 9 05:40:36.917812 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 9 05:40:36.927572 systemd[1]: Starting ensure-sysext.service... Sep 9 05:40:36.930933 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 05:40:36.971135 systemd[1]: Reload requested from client PID 1432 ('systemctl') (unit ensure-sysext.service)... Sep 9 05:40:36.971148 systemd[1]: Reloading... Sep 9 05:40:36.992348 systemd-tmpfiles[1433]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 9 05:40:36.992389 systemd-tmpfiles[1433]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 9 05:40:36.992664 systemd-tmpfiles[1433]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 9 05:40:36.992880 systemd-tmpfiles[1433]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 9 05:40:36.993375 systemd-tmpfiles[1433]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 9 05:40:36.993561 systemd-tmpfiles[1433]: ACLs are not supported, ignoring. Sep 9 05:40:36.993626 systemd-tmpfiles[1433]: ACLs are not supported, ignoring. Sep 9 05:40:37.016805 zram_generator::config[1457]: No configuration found. Sep 9 05:40:37.050726 systemd-tmpfiles[1433]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 05:40:37.050736 systemd-tmpfiles[1433]: Skipping /boot Sep 9 05:40:37.055929 systemd-tmpfiles[1433]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 05:40:37.055938 systemd-tmpfiles[1433]: Skipping /boot Sep 9 05:40:37.199040 systemd[1]: Reloading finished in 227 ms. Sep 9 05:40:37.225943 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Sep 9 05:40:37.233577 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 05:40:37.264183 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 9 05:40:37.268714 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 9 05:40:37.279996 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 05:40:37.283554 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 9 05:40:37.288922 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 05:40:37.289089 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 05:40:37.293478 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 05:40:37.298988 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 05:40:37.302840 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 05:40:37.304413 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 05:40:37.304547 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 05:40:37.304651 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 05:40:37.305730 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 05:40:37.308110 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 05:40:37.312523 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Sep 9 05:40:37.312712 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 05:40:37.314548 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 05:40:37.314733 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 05:40:37.319085 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 05:40:37.319336 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 05:40:37.326770 systemd[1]: Expecting device dev-ptp_hyperv.device - /dev/ptp_hyperv... Sep 9 05:40:37.329939 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 05:40:37.330196 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 05:40:37.331109 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 05:40:37.334832 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 05:40:37.339282 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 05:40:37.344005 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 05:40:37.344462 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 05:40:37.345140 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 05:40:37.345310 systemd[1]: Reached target time-set.target - System Time Set. 
Sep 9 05:40:37.345411 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 05:40:37.346922 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 05:40:37.347815 systemd[1]: Finished ensure-sysext.service. Sep 9 05:40:37.352324 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 9 05:40:37.362022 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 05:40:37.368257 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 05:40:37.368411 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 05:40:37.370528 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 05:40:37.370672 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 05:40:37.374104 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 05:40:37.378978 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 05:40:37.382260 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 05:40:37.382422 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 05:40:37.393126 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 05:40:37.393166 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 05:40:37.396931 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 9 05:40:37.490532 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 9 05:40:37.500868 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Sep 9 05:40:37.556616 augenrules[1608]: No rules Sep 9 05:40:37.559303 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 05:40:37.559513 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 05:40:37.589485 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 9 05:40:37.607802 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#120 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Sep 9 05:40:37.632841 kernel: mousedev: PS/2 mouse device common for all mice Sep 9 05:40:37.663803 kernel: hv_vmbus: registering driver hv_balloon Sep 9 05:40:37.675811 kernel: hv_vmbus: registering driver hyperv_fb Sep 9 05:40:37.692842 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Sep 9 05:40:37.698527 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Sep 9 05:40:37.698598 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Sep 9 05:40:37.701805 kernel: Console: switching to colour dummy device 80x25 Sep 9 05:40:37.704801 kernel: Console: switching to colour frame buffer device 128x48 Sep 9 05:40:37.711060 systemd-resolved[1524]: Positive Trust Anchors: Sep 9 05:40:37.711074 systemd-resolved[1524]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 05:40:37.711107 systemd-resolved[1524]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 05:40:37.729265 systemd[1]: Condition check resulted in dev-ptp_hyperv.device - /dev/ptp_hyperv being skipped. Sep 9 05:40:37.742467 systemd-networkd[1551]: lo: Link UP Sep 9 05:40:37.742474 systemd-networkd[1551]: lo: Gained carrier Sep 9 05:40:37.743819 systemd-networkd[1551]: Enumeration completed Sep 9 05:40:37.743899 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 05:40:37.744413 systemd-networkd[1551]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 05:40:37.744417 systemd-networkd[1551]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 9 05:40:37.746427 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Sep 9 05:40:37.748800 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Sep 9 05:40:37.750803 kernel: hv_netvsc f8615163-0000-1000-2000-000d3ada58e2 eth0: Data path switched to VF: enP30832s1 Sep 9 05:40:37.754439 systemd-networkd[1551]: enP30832s1: Link UP Sep 9 05:40:37.754526 systemd-networkd[1551]: eth0: Link UP Sep 9 05:40:37.754529 systemd-networkd[1551]: eth0: Gained carrier Sep 9 05:40:37.754547 systemd-networkd[1551]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 05:40:37.755060 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 9 05:40:37.758018 systemd-networkd[1551]: enP30832s1: Gained carrier Sep 9 05:40:37.759145 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 9 05:40:37.765871 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 05:40:37.766838 systemd-networkd[1551]: eth0: DHCPv4 address 10.200.8.13/24, gateway 10.200.8.1 acquired from 168.63.129.16 Sep 9 05:40:37.773643 systemd-resolved[1524]: Using system hostname 'ci-4452.0.0-n-23b47482b2'. Sep 9 05:40:37.791455 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 05:40:37.793176 systemd[1]: Reached target network.target - Network. Sep 9 05:40:37.794848 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 05:40:37.815012 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 05:40:37.816599 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 05:40:37.830978 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Sep 9 05:40:37.833613 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 05:40:37.844947 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 9 05:40:37.887086 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 05:40:37.887274 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 05:40:37.891038 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 9 05:40:37.904034 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 05:40:37.997992 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Sep 9 05:40:38.006936 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 9 05:40:38.016950 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Sep 9 05:40:38.106360 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 9 05:40:38.857919 systemd-networkd[1551]: eth0: Gained IPv6LL Sep 9 05:40:38.860182 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 9 05:40:38.861043 systemd[1]: Reached target network-online.target - Network is Online. Sep 9 05:40:39.213006 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 05:40:39.900877 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 9 05:40:39.904038 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 05:40:43.263387 ldconfig[1313]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Sep 9 05:40:43.273874 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 9 05:40:43.278025 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 9 05:40:43.311420 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 9 05:40:43.313353 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 05:40:43.315928 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 9 05:40:43.317441 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 9 05:40:43.320842 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 9 05:40:43.322461 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 9 05:40:43.323966 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 9 05:40:43.326843 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 9 05:40:43.328340 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 9 05:40:43.328371 systemd[1]: Reached target paths.target - Path Units. Sep 9 05:40:43.329566 systemd[1]: Reached target timers.target - Timer Units. Sep 9 05:40:43.346905 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 9 05:40:43.350803 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 9 05:40:43.356318 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 9 05:40:43.358431 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 9 05:40:43.360169 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 9 05:40:43.363183 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Sep 9 05:40:43.367057 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 9 05:40:43.369137 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 9 05:40:43.371121 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 05:40:43.373842 systemd[1]: Reached target basic.target - Basic System. Sep 9 05:40:43.376862 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 9 05:40:43.376886 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 9 05:40:43.392008 systemd[1]: Starting chronyd.service - NTP client/server... Sep 9 05:40:43.395738 systemd[1]: Starting containerd.service - containerd container runtime... Sep 9 05:40:43.402967 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 9 05:40:43.406940 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 9 05:40:43.410610 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 9 05:40:43.414883 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 9 05:40:43.418410 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 9 05:40:43.420942 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 9 05:40:43.427015 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 9 05:40:43.429138 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). Sep 9 05:40:43.430364 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. 
Sep 9 05:40:43.432686 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Sep 9 05:40:43.435901 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:40:43.440760 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 9 05:40:43.446862 jq[1688]: false Sep 9 05:40:43.451724 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 9 05:40:43.455637 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 9 05:40:43.462076 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 9 05:40:43.468022 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 9 05:40:43.477496 KVP[1691]: KVP starting; pid is:1691 Sep 9 05:40:43.478911 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 9 05:40:43.480814 kernel: hv_utils: KVP IC version 4.0 Sep 9 05:40:43.480866 KVP[1691]: KVP LIC Version: 3.1 Sep 9 05:40:43.482233 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 9 05:40:43.487870 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 9 05:40:43.489149 google_oslogin_nss_cache[1690]: oslogin_cache_refresh[1690]: Refreshing passwd entry cache Sep 9 05:40:43.489007 systemd[1]: Starting update-engine.service - Update Engine... Sep 9 05:40:43.488626 oslogin_cache_refresh[1690]: Refreshing passwd entry cache Sep 9 05:40:43.497872 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 9 05:40:43.501146 extend-filesystems[1689]: Found /dev/nvme0n1p6 Sep 9 05:40:43.504211 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Sep 9 05:40:43.508132 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 9 05:40:43.511939 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 9 05:40:43.512760 google_oslogin_nss_cache[1690]: oslogin_cache_refresh[1690]: Failure getting users, quitting Sep 9 05:40:43.512760 google_oslogin_nss_cache[1690]: oslogin_cache_refresh[1690]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 9 05:40:43.512724 oslogin_cache_refresh[1690]: Failure getting users, quitting Sep 9 05:40:43.512740 oslogin_cache_refresh[1690]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 9 05:40:43.516637 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 9 05:40:43.517646 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 9 05:40:43.531769 google_oslogin_nss_cache[1690]: oslogin_cache_refresh[1690]: Refreshing group entry cache Sep 9 05:40:43.531769 google_oslogin_nss_cache[1690]: oslogin_cache_refresh[1690]: Failure getting groups, quitting Sep 9 05:40:43.531769 google_oslogin_nss_cache[1690]: oslogin_cache_refresh[1690]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 9 05:40:43.522838 oslogin_cache_refresh[1690]: Refreshing group entry cache Sep 9 05:40:43.529871 oslogin_cache_refresh[1690]: Failure getting groups, quitting Sep 9 05:40:43.529881 oslogin_cache_refresh[1690]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 9 05:40:43.532367 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 9 05:40:43.537049 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 9 05:40:43.542937 extend-filesystems[1689]: Found /dev/nvme0n1p9 Sep 9 05:40:43.544558 systemd[1]: motdgen.service: Deactivated successfully. 
Sep 9 05:40:43.547537 jq[1705]: true Sep 9 05:40:43.544736 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 9 05:40:43.551109 chronyd[1680]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Sep 9 05:40:43.555821 extend-filesystems[1689]: Checking size of /dev/nvme0n1p9 Sep 9 05:40:43.564023 (ntainerd)[1723]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 9 05:40:43.572293 jq[1725]: true Sep 9 05:40:43.584171 extend-filesystems[1689]: Old size kept for /dev/nvme0n1p9 Sep 9 05:40:43.589476 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 9 05:40:43.589712 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 9 05:40:43.608354 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 9 05:40:43.611856 chronyd[1680]: Timezone right/UTC failed leap second check, ignoring Sep 9 05:40:43.612009 chronyd[1680]: Loaded seccomp filter (level 2) Sep 9 05:40:43.612892 systemd[1]: Started chronyd.service - NTP client/server. Sep 9 05:40:43.622671 update_engine[1704]: I20250909 05:40:43.622344 1704 main.cc:92] Flatcar Update Engine starting Sep 9 05:40:43.659517 tar[1711]: linux-amd64/LICENSE Sep 9 05:40:43.660928 tar[1711]: linux-amd64/helm Sep 9 05:40:43.674481 systemd-logind[1701]: New seat seat0. Sep 9 05:40:43.679951 systemd-logind[1701]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 9 05:40:43.680092 systemd[1]: Started systemd-logind.service - User Login Management. Sep 9 05:40:43.724535 bash[1767]: Updated "/home/core/.ssh/authorized_keys" Sep 9 05:40:43.726985 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 9 05:40:43.733323 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Sep 9 05:40:43.894159 dbus-daemon[1683]: [system] SELinux support is enabled Sep 9 05:40:43.894340 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 9 05:40:43.901366 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 9 05:40:43.901832 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 9 05:40:43.903774 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 9 05:40:43.903887 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 9 05:40:43.911880 dbus-daemon[1683]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 9 05:40:43.914552 systemd[1]: Started update-engine.service - Update Engine. Sep 9 05:40:43.915049 update_engine[1704]: I20250909 05:40:43.914989 1704 update_check_scheduler.cc:74] Next update check in 4m41s Sep 9 05:40:43.924942 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Sep 9 05:40:44.021809 coreos-metadata[1682]: Sep 09 05:40:44.021 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 9 05:40:44.027160 coreos-metadata[1682]: Sep 09 05:40:44.027 INFO Fetch successful Sep 9 05:40:44.027373 coreos-metadata[1682]: Sep 09 05:40:44.027 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Sep 9 05:40:44.031945 coreos-metadata[1682]: Sep 09 05:40:44.031 INFO Fetch successful Sep 9 05:40:44.032407 coreos-metadata[1682]: Sep 09 05:40:44.032 INFO Fetching http://168.63.129.16/machine/90797a26-4a02-4bd6-82ec-db391395ac50/3ece8ff9%2Db1b3%2D4649%2D9a29%2D1e0ec978e1a2.%5Fci%2D4452.0.0%2Dn%2D23b47482b2?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Sep 9 05:40:44.036244 coreos-metadata[1682]: Sep 09 05:40:44.036 INFO Fetch successful Sep 9 05:40:44.036466 coreos-metadata[1682]: Sep 09 05:40:44.036 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Sep 9 05:40:44.047991 coreos-metadata[1682]: Sep 09 05:40:44.047 INFO Fetch successful Sep 9 05:40:44.095169 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 9 05:40:44.097539 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 9 05:40:44.163925 sshd_keygen[1745]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 9 05:40:44.192959 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 9 05:40:44.201209 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 9 05:40:44.204933 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Sep 9 05:40:44.242957 systemd[1]: issuegen.service: Deactivated successfully. Sep 9 05:40:44.243138 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 9 05:40:44.252058 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Sep 9 05:40:44.258356 tar[1711]: linux-amd64/README.md Sep 9 05:40:44.269924 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 9 05:40:44.275902 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Sep 9 05:40:44.288412 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 9 05:40:44.294179 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 9 05:40:44.299003 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 9 05:40:44.301678 systemd[1]: Reached target getty.target - Login Prompts. Sep 9 05:40:44.311602 locksmithd[1787]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 9 05:40:44.472853 containerd[1723]: time="2025-09-09T05:40:44Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 9 05:40:44.472853 containerd[1723]: time="2025-09-09T05:40:44.470119226Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 9 05:40:44.479544 containerd[1723]: time="2025-09-09T05:40:44.479260108Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.856µs" Sep 9 05:40:44.479544 containerd[1723]: time="2025-09-09T05:40:44.479292598Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 9 05:40:44.479544 containerd[1723]: time="2025-09-09T05:40:44.479310934Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 9 05:40:44.479544 containerd[1723]: time="2025-09-09T05:40:44.479435449Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 9 05:40:44.479544 containerd[1723]: time="2025-09-09T05:40:44.479447181Z" level=info msg="loading plugin" 
id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 9 05:40:44.479544 containerd[1723]: time="2025-09-09T05:40:44.479467430Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 05:40:44.479544 containerd[1723]: time="2025-09-09T05:40:44.479518655Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 05:40:44.479544 containerd[1723]: time="2025-09-09T05:40:44.479528869Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 9 05:40:44.479810 containerd[1723]: time="2025-09-09T05:40:44.479757638Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 9 05:40:44.479810 containerd[1723]: time="2025-09-09T05:40:44.479775982Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 05:40:44.479860 containerd[1723]: time="2025-09-09T05:40:44.479808581Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 05:40:44.479860 containerd[1723]: time="2025-09-09T05:40:44.479817273Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 9 05:40:44.479902 containerd[1723]: time="2025-09-09T05:40:44.479884729Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 9 05:40:44.480056 containerd[1723]: time="2025-09-09T05:40:44.480028988Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 9 
05:40:44.480085 containerd[1723]: time="2025-09-09T05:40:44.480055896Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 9 05:40:44.480085 containerd[1723]: time="2025-09-09T05:40:44.480065129Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 9 05:40:44.480135 containerd[1723]: time="2025-09-09T05:40:44.480094572Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 9 05:40:44.480447 containerd[1723]: time="2025-09-09T05:40:44.480413176Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 9 05:40:44.480513 containerd[1723]: time="2025-09-09T05:40:44.480500436Z" level=info msg="metadata content store policy set" policy=shared Sep 9 05:40:44.495543 containerd[1723]: time="2025-09-09T05:40:44.495445556Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 9 05:40:44.495543 containerd[1723]: time="2025-09-09T05:40:44.495515127Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 9 05:40:44.495543 containerd[1723]: time="2025-09-09T05:40:44.495533881Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 9 05:40:44.495543 containerd[1723]: time="2025-09-09T05:40:44.495546920Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 9 05:40:44.495675 containerd[1723]: time="2025-09-09T05:40:44.495559297Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 9 05:40:44.495675 containerd[1723]: time="2025-09-09T05:40:44.495569512Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service 
type=io.containerd.service.v1 Sep 9 05:40:44.495675 containerd[1723]: time="2025-09-09T05:40:44.495583947Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 9 05:40:44.495675 containerd[1723]: time="2025-09-09T05:40:44.495595553Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 9 05:40:44.495675 containerd[1723]: time="2025-09-09T05:40:44.495606892Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 9 05:40:44.495675 containerd[1723]: time="2025-09-09T05:40:44.495616993Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 9 05:40:44.495675 containerd[1723]: time="2025-09-09T05:40:44.495626547Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 9 05:40:44.495675 containerd[1723]: time="2025-09-09T05:40:44.495640832Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 9 05:40:44.495838 containerd[1723]: time="2025-09-09T05:40:44.495743923Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 9 05:40:44.495838 containerd[1723]: time="2025-09-09T05:40:44.495761742Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 9 05:40:44.495838 containerd[1723]: time="2025-09-09T05:40:44.495777279Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 9 05:40:44.495838 containerd[1723]: time="2025-09-09T05:40:44.495820345Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 9 05:40:44.495916 containerd[1723]: time="2025-09-09T05:40:44.495837472Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 9 
05:40:44.495916 containerd[1723]: time="2025-09-09T05:40:44.495853491Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 9 05:40:44.495916 containerd[1723]: time="2025-09-09T05:40:44.495894884Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 9 05:40:44.495916 containerd[1723]: time="2025-09-09T05:40:44.495906366Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 9 05:40:44.495999 containerd[1723]: time="2025-09-09T05:40:44.495917736Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 9 05:40:44.495999 containerd[1723]: time="2025-09-09T05:40:44.495928839Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 9 05:40:44.495999 containerd[1723]: time="2025-09-09T05:40:44.495939163Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 9 05:40:44.496054 containerd[1723]: time="2025-09-09T05:40:44.496000166Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 9 05:40:44.496054 containerd[1723]: time="2025-09-09T05:40:44.496013331Z" level=info msg="Start snapshots syncer" Sep 9 05:40:44.496054 containerd[1723]: time="2025-09-09T05:40:44.496042040Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 9 05:40:44.496317 containerd[1723]: time="2025-09-09T05:40:44.496277667Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 9 05:40:44.496431 containerd[1723]: time="2025-09-09T05:40:44.496323763Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 9 05:40:44.496431 containerd[1723]: time="2025-09-09T05:40:44.496398829Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 9 05:40:44.496548 containerd[1723]: time="2025-09-09T05:40:44.496525842Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 9 05:40:44.496575 containerd[1723]: time="2025-09-09T05:40:44.496547714Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 9 05:40:44.496575 containerd[1723]: time="2025-09-09T05:40:44.496568061Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 9 05:40:44.496611 containerd[1723]: time="2025-09-09T05:40:44.496580975Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 9 05:40:44.496611 containerd[1723]: time="2025-09-09T05:40:44.496593507Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 9 05:40:44.496611 containerd[1723]: time="2025-09-09T05:40:44.496604606Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 9 05:40:44.498132 containerd[1723]: time="2025-09-09T05:40:44.496616308Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 9 05:40:44.498132 containerd[1723]: time="2025-09-09T05:40:44.496655809Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 9 05:40:44.498132 containerd[1723]: time="2025-09-09T05:40:44.496666550Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 9 05:40:44.498132 containerd[1723]: time="2025-09-09T05:40:44.496676945Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 9 05:40:44.498132 containerd[1723]: time="2025-09-09T05:40:44.496713945Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 05:40:44.498132 containerd[1723]: time="2025-09-09T05:40:44.496727481Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 05:40:44.498132 containerd[1723]: time="2025-09-09T05:40:44.496736472Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 05:40:44.498132 containerd[1723]: time="2025-09-09T05:40:44.496745720Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 05:40:44.498132 containerd[1723]: time="2025-09-09T05:40:44.496754493Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 9 05:40:44.498132 containerd[1723]: time="2025-09-09T05:40:44.496812196Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 9 05:40:44.498132 containerd[1723]: time="2025-09-09T05:40:44.496824620Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 9 05:40:44.498132 containerd[1723]: time="2025-09-09T05:40:44.496844615Z" level=info msg="runtime interface created" Sep 9 05:40:44.498132 containerd[1723]: time="2025-09-09T05:40:44.496850927Z" level=info msg="created NRI interface" Sep 9 05:40:44.498132 containerd[1723]: time="2025-09-09T05:40:44.496863198Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 9 05:40:44.498132 containerd[1723]: time="2025-09-09T05:40:44.496875421Z" level=info msg="Connect containerd service" Sep 9 05:40:44.498335 containerd[1723]: time="2025-09-09T05:40:44.496898100Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 9 05:40:44.498335 containerd[1723]: 
time="2025-09-09T05:40:44.497573994Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 05:40:44.861853 containerd[1723]: time="2025-09-09T05:40:44.861732665Z" level=info msg="Start subscribing containerd event" Sep 9 05:40:44.862843 containerd[1723]: time="2025-09-09T05:40:44.861966294Z" level=info msg="Start recovering state" Sep 9 05:40:44.862843 containerd[1723]: time="2025-09-09T05:40:44.861913440Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 9 05:40:44.862843 containerd[1723]: time="2025-09-09T05:40:44.862123468Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 9 05:40:44.864547 containerd[1723]: time="2025-09-09T05:40:44.863845716Z" level=info msg="Start event monitor" Sep 9 05:40:44.864547 containerd[1723]: time="2025-09-09T05:40:44.863876434Z" level=info msg="Start cni network conf syncer for default" Sep 9 05:40:44.864547 containerd[1723]: time="2025-09-09T05:40:44.863886286Z" level=info msg="Start streaming server" Sep 9 05:40:44.864547 containerd[1723]: time="2025-09-09T05:40:44.863913922Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 9 05:40:44.864547 containerd[1723]: time="2025-09-09T05:40:44.863923068Z" level=info msg="runtime interface starting up..." Sep 9 05:40:44.864547 containerd[1723]: time="2025-09-09T05:40:44.863929765Z" level=info msg="starting plugins..." Sep 9 05:40:44.864547 containerd[1723]: time="2025-09-09T05:40:44.863948027Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 9 05:40:44.864171 systemd[1]: Started containerd.service - containerd container runtime. 
Sep 9 05:40:44.866047 containerd[1723]: time="2025-09-09T05:40:44.866014415Z" level=info msg="containerd successfully booted in 0.396852s" Sep 9 05:40:44.891806 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:40:44.894255 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 9 05:40:44.898175 systemd[1]: Startup finished in 3.601s (kernel) + 13.072s (initrd) + 16.280s (userspace) = 32.954s. Sep 9 05:40:44.985914 (kubelet)[1848]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 05:40:45.516035 login[1824]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 9 05:40:45.518272 login[1825]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 9 05:40:45.528542 systemd-logind[1701]: New session 2 of user core. Sep 9 05:40:45.529140 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 9 05:40:45.530894 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 9 05:40:45.533986 systemd-logind[1701]: New session 1 of user core. Sep 9 05:40:45.566146 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 9 05:40:45.568190 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 9 05:40:45.591726 (systemd)[1860]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 05:40:45.594634 systemd-logind[1701]: New session c1 of user core. 
Sep 9 05:40:45.628635 kubelet[1848]: E0909 05:40:45.628605 1848 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 05:40:45.630865 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 05:40:45.630986 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 05:40:45.631271 systemd[1]: kubelet.service: Consumed 963ms CPU time, 267.4M memory peak. Sep 9 05:40:45.947709 systemd[1860]: Queued start job for default target default.target. Sep 9 05:40:45.953600 systemd[1860]: Created slice app.slice - User Application Slice. Sep 9 05:40:45.953629 systemd[1860]: Reached target paths.target - Paths. Sep 9 05:40:45.953662 systemd[1860]: Reached target timers.target - Timers. Sep 9 05:40:45.954627 systemd[1860]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 9 05:40:45.963389 systemd[1860]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 9 05:40:45.963439 systemd[1860]: Reached target sockets.target - Sockets. Sep 9 05:40:45.963474 systemd[1860]: Reached target basic.target - Basic System. Sep 9 05:40:45.963914 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 9 05:40:45.963917 systemd[1860]: Reached target default.target - Main User Target. Sep 9 05:40:45.963943 systemd[1860]: Startup finished in 363ms. Sep 9 05:40:45.970959 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 9 05:40:45.971674 systemd[1]: Started session-2.scope - Session 2 of User core. 
Sep 9 05:40:46.299736 waagent[1821]: 2025-09-09T05:40:46.299597Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Sep 9 05:40:46.301442 waagent[1821]: 2025-09-09T05:40:46.301349Z INFO Daemon Daemon OS: flatcar 4452.0.0 Sep 9 05:40:46.302102 waagent[1821]: 2025-09-09T05:40:46.302073Z INFO Daemon Daemon Python: 3.11.13 Sep 9 05:40:46.303652 waagent[1821]: 2025-09-09T05:40:46.303558Z INFO Daemon Daemon Run daemon Sep 9 05:40:46.304673 waagent[1821]: 2025-09-09T05:40:46.304643Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4452.0.0' Sep 9 05:40:46.306801 waagent[1821]: 2025-09-09T05:40:46.305484Z INFO Daemon Daemon Using waagent for provisioning Sep 9 05:40:46.306801 waagent[1821]: 2025-09-09T05:40:46.305955Z INFO Daemon Daemon Activate resource disk Sep 9 05:40:46.306801 waagent[1821]: 2025-09-09T05:40:46.306156Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Sep 9 05:40:46.308145 waagent[1821]: 2025-09-09T05:40:46.308116Z INFO Daemon Daemon Found device: None Sep 9 05:40:46.308430 waagent[1821]: 2025-09-09T05:40:46.308410Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Sep 9 05:40:46.308756 waagent[1821]: 2025-09-09T05:40:46.308737Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Sep 9 05:40:46.309765 waagent[1821]: 2025-09-09T05:40:46.309735Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 9 05:40:46.310008 waagent[1821]: 2025-09-09T05:40:46.309985Z INFO Daemon Daemon Running default provisioning handler Sep 9 05:40:46.316915 waagent[1821]: 2025-09-09T05:40:46.316870Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Sep 9 05:40:46.318587 waagent[1821]: 2025-09-09T05:40:46.318545Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 9 05:40:46.319128 waagent[1821]: 2025-09-09T05:40:46.319103Z INFO Daemon Daemon cloud-init is enabled: False Sep 9 05:40:46.319446 waagent[1821]: 2025-09-09T05:40:46.319428Z INFO Daemon Daemon Copying ovf-env.xml Sep 9 05:40:46.441471 waagent[1821]: 2025-09-09T05:40:46.441408Z INFO Daemon Daemon Successfully mounted dvd Sep 9 05:40:46.466234 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Sep 9 05:40:46.468395 waagent[1821]: 2025-09-09T05:40:46.468346Z INFO Daemon Daemon Detect protocol endpoint Sep 9 05:40:46.469593 waagent[1821]: 2025-09-09T05:40:46.469556Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 9 05:40:46.471095 waagent[1821]: 2025-09-09T05:40:46.471066Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Sep 9 05:40:46.472483 waagent[1821]: 2025-09-09T05:40:46.472014Z INFO Daemon Daemon Test for route to 168.63.129.16 Sep 9 05:40:46.474120 waagent[1821]: 2025-09-09T05:40:46.474088Z INFO Daemon Daemon Route to 168.63.129.16 exists Sep 9 05:40:46.475141 waagent[1821]: 2025-09-09T05:40:46.474750Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Sep 9 05:40:46.527543 waagent[1821]: 2025-09-09T05:40:46.527503Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Sep 9 05:40:46.528152 waagent[1821]: 2025-09-09T05:40:46.527970Z INFO Daemon Daemon Wire protocol version:2012-11-30 Sep 9 05:40:46.528742 waagent[1821]: 2025-09-09T05:40:46.528154Z INFO Daemon Daemon Server preferred version:2015-04-05 Sep 9 05:40:46.665638 waagent[1821]: 2025-09-09T05:40:46.665505Z INFO Daemon Daemon Initializing goal state during protocol detection Sep 9 05:40:46.667273 waagent[1821]: 2025-09-09T05:40:46.667185Z INFO Daemon Daemon Forcing an update of the goal state. 
Sep 9 05:40:46.671531 waagent[1821]: 2025-09-09T05:40:46.671490Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 9 05:40:46.682680 waagent[1821]: 2025-09-09T05:40:46.682647Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Sep 9 05:40:46.684352 waagent[1821]: 2025-09-09T05:40:46.684316Z INFO Daemon Sep 9 05:40:46.685022 waagent[1821]: 2025-09-09T05:40:46.684945Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 667093cb-18cc-464e-ae21-8b100a7c56b1 eTag: 4291904088767370897 source: Fabric] Sep 9 05:40:46.687606 waagent[1821]: 2025-09-09T05:40:46.687573Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Sep 9 05:40:46.689277 waagent[1821]: 2025-09-09T05:40:46.689246Z INFO Daemon Sep 9 05:40:46.690028 waagent[1821]: 2025-09-09T05:40:46.689955Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Sep 9 05:40:46.700348 waagent[1821]: 2025-09-09T05:40:46.700320Z INFO Daemon Daemon Downloading artifacts profile blob Sep 9 05:40:46.755053 waagent[1821]: 2025-09-09T05:40:46.755005Z INFO Daemon Downloaded certificate {'thumbprint': 'FE6B27217C0B0EB4AEF7464D8E653F61FADE7709', 'hasPrivateKey': True} Sep 9 05:40:46.757304 waagent[1821]: 2025-09-09T05:40:46.755845Z INFO Daemon Fetch goal state completed Sep 9 05:40:46.762654 waagent[1821]: 2025-09-09T05:40:46.762587Z INFO Daemon Daemon Starting provisioning Sep 9 05:40:46.764583 waagent[1821]: 2025-09-09T05:40:46.763128Z INFO Daemon Daemon Handle ovf-env.xml. 
Sep 9 05:40:46.764583 waagent[1821]: 2025-09-09T05:40:46.763204Z INFO Daemon Daemon Set hostname [ci-4452.0.0-n-23b47482b2] Sep 9 05:40:46.778273 waagent[1821]: 2025-09-09T05:40:46.778230Z INFO Daemon Daemon Publish hostname [ci-4452.0.0-n-23b47482b2] Sep 9 05:40:46.781777 waagent[1821]: 2025-09-09T05:40:46.778939Z INFO Daemon Daemon Examine /proc/net/route for primary interface Sep 9 05:40:46.781777 waagent[1821]: 2025-09-09T05:40:46.779583Z INFO Daemon Daemon Primary interface is [eth0] Sep 9 05:40:46.787150 systemd-networkd[1551]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 05:40:46.787157 systemd-networkd[1551]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 05:40:46.787178 systemd-networkd[1551]: eth0: DHCP lease lost Sep 9 05:40:46.788083 waagent[1821]: 2025-09-09T05:40:46.788037Z INFO Daemon Daemon Create user account if not exists Sep 9 05:40:46.789375 waagent[1821]: 2025-09-09T05:40:46.788606Z INFO Daemon Daemon User core already exists, skip useradd Sep 9 05:40:46.790450 waagent[1821]: 2025-09-09T05:40:46.789380Z INFO Daemon Daemon Configure sudoer Sep 9 05:40:46.794768 waagent[1821]: 2025-09-09T05:40:46.794719Z INFO Daemon Daemon Configure sshd Sep 9 05:40:46.799590 waagent[1821]: 2025-09-09T05:40:46.799549Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Sep 9 05:40:46.802548 waagent[1821]: 2025-09-09T05:40:46.802490Z INFO Daemon Daemon Deploy ssh public key. 
Sep 9 05:40:46.804260 systemd-networkd[1551]: eth0: DHCPv4 address 10.200.8.13/24, gateway 10.200.8.1 acquired from 168.63.129.16 Sep 9 05:40:47.928037 waagent[1821]: 2025-09-09T05:40:47.927997Z INFO Daemon Daemon Provisioning complete Sep 9 05:40:47.937562 waagent[1821]: 2025-09-09T05:40:47.937531Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Sep 9 05:40:47.940550 waagent[1821]: 2025-09-09T05:40:47.938139Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Sep 9 05:40:47.940550 waagent[1821]: 2025-09-09T05:40:47.938472Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Sep 9 05:40:48.044581 waagent[1916]: 2025-09-09T05:40:48.044515Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Sep 9 05:40:48.044851 waagent[1916]: 2025-09-09T05:40:48.044619Z INFO ExtHandler ExtHandler OS: flatcar 4452.0.0 Sep 9 05:40:48.044851 waagent[1916]: 2025-09-09T05:40:48.044663Z INFO ExtHandler ExtHandler Python: 3.11.13 Sep 9 05:40:48.044851 waagent[1916]: 2025-09-09T05:40:48.044703Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Sep 9 05:40:48.098560 waagent[1916]: 2025-09-09T05:40:48.098510Z INFO ExtHandler ExtHandler Distro: flatcar-4452.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Sep 9 05:40:48.098697 waagent[1916]: 2025-09-09T05:40:48.098673Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 9 05:40:48.098747 waagent[1916]: 2025-09-09T05:40:48.098726Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 9 05:40:48.118696 waagent[1916]: 2025-09-09T05:40:48.118641Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 9 05:40:48.134804 waagent[1916]: 2025-09-09T05:40:48.134766Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Sep 9 05:40:48.135139 
waagent[1916]: 2025-09-09T05:40:48.135112Z INFO ExtHandler Sep 9 05:40:48.135180 waagent[1916]: 2025-09-09T05:40:48.135166Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: c49fb970-b00e-4fe4-86db-79617f5accc9 eTag: 4291904088767370897 source: Fabric] Sep 9 05:40:48.135384 waagent[1916]: 2025-09-09T05:40:48.135363Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Sep 9 05:40:48.135707 waagent[1916]: 2025-09-09T05:40:48.135682Z INFO ExtHandler Sep 9 05:40:48.135742 waagent[1916]: 2025-09-09T05:40:48.135725Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Sep 9 05:40:48.139650 waagent[1916]: 2025-09-09T05:40:48.139620Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Sep 9 05:40:48.206280 waagent[1916]: 2025-09-09T05:40:48.206198Z INFO ExtHandler Downloaded certificate {'thumbprint': 'FE6B27217C0B0EB4AEF7464D8E653F61FADE7709', 'hasPrivateKey': True} Sep 9 05:40:48.206598 waagent[1916]: 2025-09-09T05:40:48.206567Z INFO ExtHandler Fetch goal state completed Sep 9 05:40:48.220646 waagent[1916]: 2025-09-09T05:40:48.220597Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025) Sep 9 05:40:48.224699 waagent[1916]: 2025-09-09T05:40:48.224645Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1916 Sep 9 05:40:48.224814 waagent[1916]: 2025-09-09T05:40:48.224772Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Sep 9 05:40:48.225064 waagent[1916]: 2025-09-09T05:40:48.225040Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Sep 9 05:40:48.226140 waagent[1916]: 2025-09-09T05:40:48.226106Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4452.0.0', '', 'Flatcar Container Linux by Kinvolk'] Sep 9 05:40:48.226431 waagent[1916]: 2025-09-09T05:40:48.226404Z INFO 
ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4452.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Sep 9 05:40:48.226532 waagent[1916]: 2025-09-09T05:40:48.226509Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Sep 9 05:40:48.226957 waagent[1916]: 2025-09-09T05:40:48.226934Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Sep 9 05:40:48.285970 waagent[1916]: 2025-09-09T05:40:48.285938Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 9 05:40:48.286118 waagent[1916]: 2025-09-09T05:40:48.286095Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 9 05:40:48.291705 waagent[1916]: 2025-09-09T05:40:48.291331Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Sep 9 05:40:48.296435 systemd[1]: Reload requested from client PID 1931 ('systemctl') (unit waagent.service)... Sep 9 05:40:48.296448 systemd[1]: Reloading... Sep 9 05:40:48.384850 zram_generator::config[1976]: No configuration found. Sep 9 05:40:48.526803 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Sep 9 05:40:48.562336 systemd[1]: Reloading finished in 265 ms. Sep 9 05:40:48.581263 waagent[1916]: 2025-09-09T05:40:48.580986Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Sep 9 05:40:48.581263 waagent[1916]: 2025-09-09T05:40:48.581134Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Sep 9 05:40:49.008267 waagent[1916]: 2025-09-09T05:40:49.008143Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
Sep 9 05:40:49.008487 waagent[1916]: 2025-09-09T05:40:49.008459Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Sep 9 05:40:49.009233 waagent[1916]: 2025-09-09T05:40:49.009198Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 9 05:40:49.009403 waagent[1916]: 2025-09-09T05:40:49.009376Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 9 05:40:49.009460 waagent[1916]: 2025-09-09T05:40:49.009439Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 9 05:40:49.009637 waagent[1916]: 2025-09-09T05:40:49.009612Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Sep 9 05:40:49.009943 waagent[1916]: 2025-09-09T05:40:49.009902Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Sep 9 05:40:49.010079 waagent[1916]: 2025-09-09T05:40:49.010056Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 9 05:40:49.010079 waagent[1916]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 9 05:40:49.010079 waagent[1916]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Sep 9 05:40:49.010079 waagent[1916]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 9 05:40:49.010079 waagent[1916]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 9 05:40:49.010079 waagent[1916]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 9 05:40:49.010079 waagent[1916]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 9 05:40:49.010337 waagent[1916]: 2025-09-09T05:40:49.010301Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 9 05:40:49.010394 waagent[1916]: 2025-09-09T05:40:49.010372Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 9 05:40:49.010771 waagent[1916]: 2025-09-09T05:40:49.010593Z INFO EnvHandler ExtHandler Configure routes Sep 9 05:40:49.010870 waagent[1916]: 2025-09-09T05:40:49.010840Z INFO EnvHandler ExtHandler Gateway:None Sep 9 05:40:49.010917 waagent[1916]: 2025-09-09T05:40:49.010894Z INFO EnvHandler ExtHandler Routes:None Sep 9 05:40:49.011177 waagent[1916]: 2025-09-09T05:40:49.011151Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 9 05:40:49.011295 waagent[1916]: 2025-09-09T05:40:49.011264Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Sep 9 05:40:49.011530 waagent[1916]: 2025-09-09T05:40:49.011495Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 9 05:40:49.011569 waagent[1916]: 2025-09-09T05:40:49.011538Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Sep 9 05:40:49.011808 waagent[1916]: 2025-09-09T05:40:49.011697Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 9 05:40:49.031849 waagent[1916]: 2025-09-09T05:40:49.031808Z INFO ExtHandler ExtHandler Sep 9 05:40:49.031922 waagent[1916]: 2025-09-09T05:40:49.031870Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 7ab31de4-1076-453e-a32c-36d22ad57f1d correlation e3d8e2b4-6670-4c39-b29f-783940059e9c created: 2025-09-09T05:39:31.498672Z] Sep 9 05:40:49.032186 waagent[1916]: 2025-09-09T05:40:49.032155Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Sep 9 05:40:49.032567 waagent[1916]: 2025-09-09T05:40:49.032538Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Sep 9 05:40:49.071098 waagent[1916]: 2025-09-09T05:40:49.071052Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Sep 9 05:40:49.071098 waagent[1916]: Try `iptables -h' or 'iptables --help' for more information.) 
Sep 9 05:40:49.071451 waagent[1916]: 2025-09-09T05:40:49.071418Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: C6FC6D79-75E7-4C2D-B99C-B16D40B49719;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Sep 9 05:40:49.098261 waagent[1916]: 2025-09-09T05:40:49.098209Z INFO MonitorHandler ExtHandler Network interfaces: Sep 9 05:40:49.098261 waagent[1916]: Executing ['ip', '-a', '-o', 'link']: Sep 9 05:40:49.098261 waagent[1916]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 9 05:40:49.098261 waagent[1916]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:da:58:e2 brd ff:ff:ff:ff:ff:ff\ alias Network Device Sep 9 05:40:49.098261 waagent[1916]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:da:58:e2 brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Sep 9 05:40:49.098261 waagent[1916]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 9 05:40:49.098261 waagent[1916]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 9 05:40:49.098261 waagent[1916]: 2: eth0 inet 10.200.8.13/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 9 05:40:49.098261 waagent[1916]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 9 05:40:49.098261 waagent[1916]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Sep 9 05:40:49.098261 waagent[1916]: 2: eth0 inet6 fe80::20d:3aff:feda:58e2/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Sep 9 05:40:49.154162 waagent[1916]: 2025-09-09T05:40:49.154111Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Sep 9 05:40:49.154162 waagent[1916]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) 
Sep 9 05:40:49.154162 waagent[1916]: pkts bytes target prot opt in out source destination
Sep 9 05:40:49.154162 waagent[1916]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Sep 9 05:40:49.154162 waagent[1916]: pkts bytes target prot opt in out source destination
Sep 9 05:40:49.154162 waagent[1916]: Chain OUTPUT (policy ACCEPT 3 packets, 349 bytes)
Sep 9 05:40:49.154162 waagent[1916]: pkts bytes target prot opt in out source destination
Sep 9 05:40:49.154162 waagent[1916]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Sep 9 05:40:49.154162 waagent[1916]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Sep 9 05:40:49.154162 waagent[1916]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Sep 9 05:40:49.156755 waagent[1916]: 2025-09-09T05:40:49.156706Z INFO EnvHandler ExtHandler Current Firewall rules:
Sep 9 05:40:49.156755 waagent[1916]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Sep 9 05:40:49.156755 waagent[1916]: pkts bytes target prot opt in out source destination
Sep 9 05:40:49.156755 waagent[1916]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Sep 9 05:40:49.156755 waagent[1916]: pkts bytes target prot opt in out source destination
Sep 9 05:40:49.156755 waagent[1916]: Chain OUTPUT (policy ACCEPT 3 packets, 349 bytes)
Sep 9 05:40:49.156755 waagent[1916]: pkts bytes target prot opt in out source destination
Sep 9 05:40:49.156755 waagent[1916]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Sep 9 05:40:49.156755 waagent[1916]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Sep 9 05:40:49.156755 waagent[1916]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Sep 9 05:40:55.646650 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 9 05:40:55.648078 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 05:40:56.150520 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 05:40:56.161069 (kubelet)[2068]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 05:40:56.216452 kubelet[2068]: E0909 05:40:56.216415 2068 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 05:40:56.219550 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 05:40:56.219683 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 05:40:56.220015 systemd[1]: kubelet.service: Consumed 138ms CPU time, 111.5M memory peak.
Sep 9 05:41:06.396760 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 9 05:41:06.398225 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 05:41:06.947938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 05:41:06.953990 (kubelet)[2083]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 05:41:06.988446 kubelet[2083]: E0909 05:41:06.988408 2083 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 05:41:06.990247 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 05:41:06.990354 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 05:41:06.990622 systemd[1]: kubelet.service: Consumed 127ms CPU time, 110.6M memory peak.
Sep 9 05:41:07.394891 chronyd[1680]: Selected source PHC0
Sep 9 05:41:17.146712 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 9 05:41:17.148193 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 05:41:17.646816 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 05:41:17.649880 (kubelet)[2098]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 05:41:17.680390 kubelet[2098]: E0909 05:41:17.680353 2098 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 05:41:17.682061 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 05:41:17.682164 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 05:41:17.682467 systemd[1]: kubelet.service: Consumed 121ms CPU time, 109.7M memory peak.
Sep 9 05:41:18.507386 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 9 05:41:18.508447 systemd[1]: Started sshd@0-10.200.8.13:22-10.200.16.10:41200.service - OpenSSH per-connection server daemon (10.200.16.10:41200).
Sep 9 05:41:19.254862 sshd[2106]: Accepted publickey for core from 10.200.16.10 port 41200 ssh2: RSA SHA256:sZgtQeKACOXr8pJrh4GrBINrKp/VHU8dqG1ZDK7f3fs
Sep 9 05:41:19.255970 sshd-session[2106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:41:19.260310 systemd-logind[1701]: New session 3 of user core.
Sep 9 05:41:19.262915 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 9 05:41:19.805679 systemd[1]: Started sshd@1-10.200.8.13:22-10.200.16.10:41204.service - OpenSSH per-connection server daemon (10.200.16.10:41204).
Sep 9 05:41:20.433853 sshd[2112]: Accepted publickey for core from 10.200.16.10 port 41204 ssh2: RSA SHA256:sZgtQeKACOXr8pJrh4GrBINrKp/VHU8dqG1ZDK7f3fs
Sep 9 05:41:20.434995 sshd-session[2112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:41:20.439425 systemd-logind[1701]: New session 4 of user core.
Sep 9 05:41:20.444952 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 9 05:41:20.876207 sshd[2115]: Connection closed by 10.200.16.10 port 41204
Sep 9 05:41:20.876990 sshd-session[2112]: pam_unix(sshd:session): session closed for user core
Sep 9 05:41:20.880200 systemd[1]: sshd@1-10.200.8.13:22-10.200.16.10:41204.service: Deactivated successfully.
Sep 9 05:41:20.881681 systemd[1]: session-4.scope: Deactivated successfully.
Sep 9 05:41:20.882392 systemd-logind[1701]: Session 4 logged out. Waiting for processes to exit.
Sep 9 05:41:20.883463 systemd-logind[1701]: Removed session 4.
Sep 9 05:41:20.990362 systemd[1]: Started sshd@2-10.200.8.13:22-10.200.16.10:51026.service - OpenSSH per-connection server daemon (10.200.16.10:51026).
Sep 9 05:41:21.616158 sshd[2121]: Accepted publickey for core from 10.200.16.10 port 51026 ssh2: RSA SHA256:sZgtQeKACOXr8pJrh4GrBINrKp/VHU8dqG1ZDK7f3fs
Sep 9 05:41:21.617261 sshd-session[2121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:41:21.621484 systemd-logind[1701]: New session 5 of user core.
Sep 9 05:41:21.627920 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 9 05:41:22.054887 sshd[2124]: Connection closed by 10.200.16.10 port 51026
Sep 9 05:41:22.055467 sshd-session[2121]: pam_unix(sshd:session): session closed for user core
Sep 9 05:41:22.058899 systemd[1]: sshd@2-10.200.8.13:22-10.200.16.10:51026.service: Deactivated successfully.
Sep 9 05:41:22.060482 systemd[1]: session-5.scope: Deactivated successfully.
Sep 9 05:41:22.061465 systemd-logind[1701]: Session 5 logged out. Waiting for processes to exit.
Sep 9 05:41:22.062327 systemd-logind[1701]: Removed session 5.
Sep 9 05:41:22.171600 systemd[1]: Started sshd@3-10.200.8.13:22-10.200.16.10:51030.service - OpenSSH per-connection server daemon (10.200.16.10:51030).
Sep 9 05:41:22.800892 sshd[2130]: Accepted publickey for core from 10.200.16.10 port 51030 ssh2: RSA SHA256:sZgtQeKACOXr8pJrh4GrBINrKp/VHU8dqG1ZDK7f3fs
Sep 9 05:41:22.802007 sshd-session[2130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:41:22.806632 systemd-logind[1701]: New session 6 of user core.
Sep 9 05:41:22.811969 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 9 05:41:23.244741 sshd[2133]: Connection closed by 10.200.16.10 port 51030
Sep 9 05:41:23.245290 sshd-session[2130]: pam_unix(sshd:session): session closed for user core
Sep 9 05:41:23.248753 systemd[1]: sshd@3-10.200.8.13:22-10.200.16.10:51030.service: Deactivated successfully.
Sep 9 05:41:23.250291 systemd[1]: session-6.scope: Deactivated successfully.
Sep 9 05:41:23.251205 systemd-logind[1701]: Session 6 logged out. Waiting for processes to exit.
Sep 9 05:41:23.252304 systemd-logind[1701]: Removed session 6.
Sep 9 05:41:23.375273 systemd[1]: Started sshd@4-10.200.8.13:22-10.200.16.10:51036.service - OpenSSH per-connection server daemon (10.200.16.10:51036).
Sep 9 05:41:23.998299 sshd[2139]: Accepted publickey for core from 10.200.16.10 port 51036 ssh2: RSA SHA256:sZgtQeKACOXr8pJrh4GrBINrKp/VHU8dqG1ZDK7f3fs
Sep 9 05:41:23.999399 sshd-session[2139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:41:24.003923 systemd-logind[1701]: New session 7 of user core.
Sep 9 05:41:24.010935 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 9 05:41:24.491939 sudo[2143]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 9 05:41:24.492166 sudo[2143]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 05:41:24.516588 sudo[2143]: pam_unix(sudo:session): session closed for user root
Sep 9 05:41:24.618650 sshd[2142]: Connection closed by 10.200.16.10 port 51036
Sep 9 05:41:24.619329 sshd-session[2139]: pam_unix(sshd:session): session closed for user core
Sep 9 05:41:24.623029 systemd[1]: sshd@4-10.200.8.13:22-10.200.16.10:51036.service: Deactivated successfully.
Sep 9 05:41:24.624521 systemd[1]: session-7.scope: Deactivated successfully.
Sep 9 05:41:24.625221 systemd-logind[1701]: Session 7 logged out. Waiting for processes to exit.
Sep 9 05:41:24.626512 systemd-logind[1701]: Removed session 7.
Sep 9 05:41:24.730495 systemd[1]: Started sshd@5-10.200.8.13:22-10.200.16.10:51040.service - OpenSSH per-connection server daemon (10.200.16.10:51040).
Sep 9 05:41:25.355754 sshd[2149]: Accepted publickey for core from 10.200.16.10 port 51040 ssh2: RSA SHA256:sZgtQeKACOXr8pJrh4GrBINrKp/VHU8dqG1ZDK7f3fs
Sep 9 05:41:25.356939 sshd-session[2149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:41:25.361092 systemd-logind[1701]: New session 8 of user core.
Sep 9 05:41:25.374917 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 9 05:41:25.698071 sudo[2154]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 9 05:41:25.698296 sudo[2154]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 05:41:25.705202 sudo[2154]: pam_unix(sudo:session): session closed for user root
Sep 9 05:41:25.709227 sudo[2153]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 9 05:41:25.709446 sudo[2153]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 05:41:25.717273 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 05:41:25.750100 augenrules[2176]: No rules
Sep 9 05:41:25.751002 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 05:41:25.751160 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 05:41:25.752295 sudo[2153]: pam_unix(sudo:session): session closed for user root
Sep 9 05:41:25.823672 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Sep 9 05:41:25.853771 sshd[2152]: Connection closed by 10.200.16.10 port 51040
Sep 9 05:41:25.854233 sshd-session[2149]: pam_unix(sshd:session): session closed for user core
Sep 9 05:41:25.857710 systemd[1]: sshd@5-10.200.8.13:22-10.200.16.10:51040.service: Deactivated successfully.
Sep 9 05:41:25.859167 systemd[1]: session-8.scope: Deactivated successfully.
Sep 9 05:41:25.860022 systemd-logind[1701]: Session 8 logged out. Waiting for processes to exit.
Sep 9 05:41:25.861002 systemd-logind[1701]: Removed session 8.
Sep 9 05:41:25.963273 systemd[1]: Started sshd@6-10.200.8.13:22-10.200.16.10:51046.service - OpenSSH per-connection server daemon (10.200.16.10:51046).
Sep 9 05:41:26.591176 sshd[2185]: Accepted publickey for core from 10.200.16.10 port 51046 ssh2: RSA SHA256:sZgtQeKACOXr8pJrh4GrBINrKp/VHU8dqG1ZDK7f3fs
Sep 9 05:41:26.592291 sshd-session[2185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:41:26.596807 systemd-logind[1701]: New session 9 of user core.
Sep 9 05:41:26.600936 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 9 05:41:26.932709 sudo[2189]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 9 05:41:26.932952 sudo[2189]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 05:41:27.896632 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Sep 9 05:41:27.898046 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 05:41:28.614916 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 05:41:28.624102 (kubelet)[2214]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 05:41:28.657404 kubelet[2214]: E0909 05:41:28.657351 2214 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 05:41:28.658958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 05:41:28.659082 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 05:41:28.659392 systemd[1]: kubelet.service: Consumed 127ms CPU time, 110.4M memory peak.
Sep 9 05:41:28.945900 update_engine[1704]: I20250909 05:41:28.945834 1704 update_attempter.cc:509] Updating boot flags...
Sep 9 05:41:28.955183 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 9 05:41:28.965091 (dockerd)[2223]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 9 05:41:30.183064 dockerd[2223]: time="2025-09-09T05:41:30.182806010Z" level=info msg="Starting up"
Sep 9 05:41:30.186070 dockerd[2223]: time="2025-09-09T05:41:30.185909683Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 9 05:41:30.194814 dockerd[2223]: time="2025-09-09T05:41:30.194739203Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Sep 9 05:41:30.223283 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1128908809-merged.mount: Deactivated successfully.
Sep 9 05:41:30.249927 systemd[1]: var-lib-docker-metacopy\x2dcheck2401552082-merged.mount: Deactivated successfully.
Sep 9 05:41:30.272037 dockerd[2223]: time="2025-09-09T05:41:30.272007315Z" level=info msg="Loading containers: start."
Sep 9 05:41:30.340802 kernel: Initializing XFRM netlink socket
Sep 9 05:41:30.772443 systemd-networkd[1551]: docker0: Link UP
Sep 9 05:41:30.784236 dockerd[2223]: time="2025-09-09T05:41:30.784202357Z" level=info msg="Loading containers: done."
Sep 9 05:41:30.857861 dockerd[2223]: time="2025-09-09T05:41:30.857819980Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 9 05:41:30.857994 dockerd[2223]: time="2025-09-09T05:41:30.857904351Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Sep 9 05:41:30.857994 dockerd[2223]: time="2025-09-09T05:41:30.857989709Z" level=info msg="Initializing buildkit"
Sep 9 05:41:30.899077 dockerd[2223]: time="2025-09-09T05:41:30.899029638Z" level=info msg="Completed buildkit initialization"
Sep 9 05:41:30.905587 dockerd[2223]: time="2025-09-09T05:41:30.905554276Z" level=info msg="Daemon has completed initialization"
Sep 9 05:41:30.905843 dockerd[2223]: time="2025-09-09T05:41:30.905669712Z" level=info msg="API listen on /run/docker.sock"
Sep 9 05:41:30.905772 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 9 05:41:32.092618 containerd[1723]: time="2025-09-09T05:41:32.092301295Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\""
Sep 9 05:41:32.845875 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2187075702.mount: Deactivated successfully.
Sep 9 05:41:34.022401 containerd[1723]: time="2025-09-09T05:41:34.022348383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:41:34.024721 containerd[1723]: time="2025-09-09T05:41:34.024686970Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=30078672"
Sep 9 05:41:34.027313 containerd[1723]: time="2025-09-09T05:41:34.027265581Z" level=info msg="ImageCreate event name:\"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:41:34.030748 containerd[1723]: time="2025-09-09T05:41:34.030704420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:41:34.031598 containerd[1723]: time="2025-09-09T05:41:34.031361847Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"30075464\" in 1.939022527s"
Sep 9 05:41:34.031598 containerd[1723]: time="2025-09-09T05:41:34.031396878Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\""
Sep 9 05:41:34.032153 containerd[1723]: time="2025-09-09T05:41:34.032130280Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\""
Sep 9 05:41:35.355142 containerd[1723]: time="2025-09-09T05:41:35.355092995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:41:35.357624 containerd[1723]: time="2025-09-09T05:41:35.357597750Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=26018074"
Sep 9 05:41:35.361820 containerd[1723]: time="2025-09-09T05:41:35.361799638Z" level=info msg="ImageCreate event name:\"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:41:35.365980 containerd[1723]: time="2025-09-09T05:41:35.365939513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:41:35.366794 containerd[1723]: time="2025-09-09T05:41:35.366688673Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"27646961\" in 1.334532209s"
Sep 9 05:41:35.366794 containerd[1723]: time="2025-09-09T05:41:35.366719771Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\""
Sep 9 05:41:35.367150 containerd[1723]: time="2025-09-09T05:41:35.367124983Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\""
Sep 9 05:41:36.475565 containerd[1723]: time="2025-09-09T05:41:36.475516565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:41:36.477988 containerd[1723]: time="2025-09-09T05:41:36.477953034Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=20153919"
Sep 9 05:41:36.482357 containerd[1723]: time="2025-09-09T05:41:36.482308646Z" level=info msg="ImageCreate event name:\"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:41:36.486027 containerd[1723]: time="2025-09-09T05:41:36.485982179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:41:36.486832 containerd[1723]: time="2025-09-09T05:41:36.486595609Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"21782824\" in 1.119443896s"
Sep 9 05:41:36.486832 containerd[1723]: time="2025-09-09T05:41:36.486628236Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\""
Sep 9 05:41:36.487277 containerd[1723]: time="2025-09-09T05:41:36.487183433Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\""
Sep 9 05:41:37.493338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount154573210.mount: Deactivated successfully.
Sep 9 05:41:37.865944 containerd[1723]: time="2025-09-09T05:41:37.865827685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:41:37.868195 containerd[1723]: time="2025-09-09T05:41:37.868156198Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=31899634"
Sep 9 05:41:37.870833 containerd[1723]: time="2025-09-09T05:41:37.870774287Z" level=info msg="ImageCreate event name:\"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:41:37.874024 containerd[1723]: time="2025-09-09T05:41:37.873992983Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:41:37.874379 containerd[1723]: time="2025-09-09T05:41:37.874359133Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"31898645\" in 1.38701565s"
Sep 9 05:41:37.874452 containerd[1723]: time="2025-09-09T05:41:37.874441777Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\""
Sep 9 05:41:37.875068 containerd[1723]: time="2025-09-09T05:41:37.875041492Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Sep 9 05:41:38.503104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3782446336.mount: Deactivated successfully.
Sep 9 05:41:38.896640 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Sep 9 05:41:38.898084 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 05:41:39.461718 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 05:41:39.473039 (kubelet)[2578]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 05:41:39.506635 kubelet[2578]: E0909 05:41:39.506572 2578 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 05:41:39.508204 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 05:41:39.508336 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 05:41:39.508745 systemd[1]: kubelet.service: Consumed 132ms CPU time, 110.2M memory peak.
Sep 9 05:41:40.050844 containerd[1723]: time="2025-09-09T05:41:40.050797518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:41:40.054016 containerd[1723]: time="2025-09-09T05:41:40.053763531Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246"
Sep 9 05:41:40.057997 containerd[1723]: time="2025-09-09T05:41:40.057974493Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:41:40.062614 containerd[1723]: time="2025-09-09T05:41:40.062584299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:41:40.063253 containerd[1723]: time="2025-09-09T05:41:40.063227610Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.188160363s"
Sep 9 05:41:40.063298 containerd[1723]: time="2025-09-09T05:41:40.063262351Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Sep 9 05:41:40.064056 containerd[1723]: time="2025-09-09T05:41:40.063724905Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 9 05:41:40.633169 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3214014971.mount: Deactivated successfully.
Sep 9 05:41:40.650322 containerd[1723]: time="2025-09-09T05:41:40.650277714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 05:41:40.652805 containerd[1723]: time="2025-09-09T05:41:40.652726923Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Sep 9 05:41:40.655679 containerd[1723]: time="2025-09-09T05:41:40.655640020Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 05:41:40.659424 containerd[1723]: time="2025-09-09T05:41:40.659384133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 05:41:40.660157 containerd[1723]: time="2025-09-09T05:41:40.659828433Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 596.077977ms"
Sep 9 05:41:40.660157 containerd[1723]: time="2025-09-09T05:41:40.659855496Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 9 05:41:40.660343 containerd[1723]: time="2025-09-09T05:41:40.660324523Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Sep 9 05:41:41.309236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount306875040.mount: Deactivated successfully.
Sep 9 05:41:42.880085 containerd[1723]: time="2025-09-09T05:41:42.880030287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:41:42.882445 containerd[1723]: time="2025-09-09T05:41:42.882407565Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58377879"
Sep 9 05:41:42.885279 containerd[1723]: time="2025-09-09T05:41:42.885225759Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:41:42.889075 containerd[1723]: time="2025-09-09T05:41:42.889041781Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:41:42.889916 containerd[1723]: time="2025-09-09T05:41:42.889796358Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.229431717s"
Sep 9 05:41:42.889916 containerd[1723]: time="2025-09-09T05:41:42.889826703Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Sep 9 05:41:46.776083 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 05:41:46.776238 systemd[1]: kubelet.service: Consumed 132ms CPU time, 110.2M memory peak.
Sep 9 05:41:46.778261 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 05:41:46.805022 systemd[1]: Reload requested from client PID 2684 ('systemctl') (unit session-9.scope)...
Sep 9 05:41:46.805039 systemd[1]: Reloading...
Sep 9 05:41:46.900814 zram_generator::config[2727]: No configuration found.
Sep 9 05:41:47.113403 systemd[1]: Reloading finished in 308 ms.
Sep 9 05:41:47.167142 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 9 05:41:47.167214 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 9 05:41:47.167455 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 05:41:47.167503 systemd[1]: kubelet.service: Consumed 102ms CPU time, 98.3M memory peak.
Sep 9 05:41:47.170043 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 05:41:47.797899 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 05:41:47.801664 (kubelet)[2801]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 9 05:41:47.835775 kubelet[2801]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 05:41:47.835775 kubelet[2801]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 9 05:41:47.835775 kubelet[2801]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 05:41:47.836079 kubelet[2801]: I0909 05:41:47.835835 2801 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 05:41:48.120491 kubelet[2801]: I0909 05:41:48.120379 2801 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 9 05:41:48.120491 kubelet[2801]: I0909 05:41:48.120408 2801 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 05:41:48.120872 kubelet[2801]: I0909 05:41:48.120637 2801 server.go:956] "Client rotation is on, will bootstrap in background" Sep 9 05:41:48.150651 kubelet[2801]: E0909 05:41:48.150606 2801 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.13:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 9 05:41:48.152004 kubelet[2801]: I0909 05:41:48.151741 2801 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 05:41:48.163180 kubelet[2801]: I0909 05:41:48.163156 2801 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 05:41:48.165607 kubelet[2801]: I0909 05:41:48.165588 2801 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 05:41:48.165839 kubelet[2801]: I0909 05:41:48.165814 2801 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 05:41:48.165998 kubelet[2801]: I0909 05:41:48.165835 2801 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4452.0.0-n-23b47482b2","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 05:41:48.166106 kubelet[2801]: I0909 05:41:48.166003 2801 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 
05:41:48.166106 kubelet[2801]: I0909 05:41:48.166012 2801 container_manager_linux.go:303] "Creating device plugin manager" Sep 9 05:41:48.166151 kubelet[2801]: I0909 05:41:48.166125 2801 state_mem.go:36] "Initialized new in-memory state store" Sep 9 05:41:48.169375 kubelet[2801]: I0909 05:41:48.169349 2801 kubelet.go:480] "Attempting to sync node with API server" Sep 9 05:41:48.169375 kubelet[2801]: I0909 05:41:48.169377 2801 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 05:41:48.170079 kubelet[2801]: I0909 05:41:48.169401 2801 kubelet.go:386] "Adding apiserver pod source" Sep 9 05:41:48.171856 kubelet[2801]: I0909 05:41:48.171818 2801 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 05:41:48.176916 kubelet[2801]: E0909 05:41:48.176888 2801 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4452.0.0-n-23b47482b2&limit=500&resourceVersion=0\": dial tcp 10.200.8.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 9 05:41:48.177363 kubelet[2801]: E0909 05:41:48.177345 2801 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 9 05:41:48.177754 kubelet[2801]: I0909 05:41:48.177742 2801 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 05:41:48.178359 kubelet[2801]: I0909 05:41:48.178346 2801 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 9 05:41:48.179228 
kubelet[2801]: W0909 05:41:48.179215 2801 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 9 05:41:48.182053 kubelet[2801]: I0909 05:41:48.182031 2801 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 05:41:48.182119 kubelet[2801]: I0909 05:41:48.182079 2801 server.go:1289] "Started kubelet" Sep 9 05:41:48.214181 kubelet[2801]: I0909 05:41:48.214016 2801 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 05:41:48.220899 kubelet[2801]: I0909 05:41:48.220754 2801 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 05:41:48.221465 kubelet[2801]: I0909 05:41:48.221439 2801 server.go:317] "Adding debug handlers to kubelet server" Sep 9 05:41:48.224495 kubelet[2801]: I0909 05:41:48.224445 2801 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 05:41:48.224661 kubelet[2801]: I0909 05:41:48.224645 2801 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 05:41:48.224851 kubelet[2801]: I0909 05:41:48.224838 2801 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 05:41:48.226003 kubelet[2801]: I0909 05:41:48.225988 2801 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 05:41:48.226179 kubelet[2801]: E0909 05:41:48.226165 2801 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4452.0.0-n-23b47482b2\" not found" Sep 9 05:41:48.228248 kubelet[2801]: I0909 05:41:48.228223 2801 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 05:41:48.228311 kubelet[2801]: I0909 05:41:48.228268 2801 reconciler.go:26] "Reconciler: start to sync state" Sep 9 05:41:48.248826 kubelet[2801]: E0909 05:41:48.248714 2801 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://10.200.8.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4452.0.0-n-23b47482b2?timeout=10s\": dial tcp 10.200.8.13:6443: connect: connection refused" interval="200ms" Sep 9 05:41:48.253493 kubelet[2801]: E0909 05:41:48.250127 2801 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.13:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.13:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4452.0.0-n-23b47482b2.186386d027b3f0e0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4452.0.0-n-23b47482b2,UID:ci-4452.0.0-n-23b47482b2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4452.0.0-n-23b47482b2,},FirstTimestamp:2025-09-09 05:41:48.182048992 +0000 UTC m=+0.376915444,LastTimestamp:2025-09-09 05:41:48.182048992 +0000 UTC m=+0.376915444,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4452.0.0-n-23b47482b2,}" Sep 9 05:41:48.253493 kubelet[2801]: E0909 05:41:48.251496 2801 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 9 05:41:48.253493 kubelet[2801]: I0909 05:41:48.252920 2801 factory.go:223] Registration of the systemd container factory successfully Sep 9 05:41:48.253493 kubelet[2801]: I0909 05:41:48.253009 2801 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 05:41:48.254404 
kubelet[2801]: E0909 05:41:48.254389 2801 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 05:41:48.254489 kubelet[2801]: I0909 05:41:48.254431 2801 factory.go:223] Registration of the containerd container factory successfully Sep 9 05:41:48.271537 kubelet[2801]: I0909 05:41:48.271517 2801 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 05:41:48.271537 kubelet[2801]: I0909 05:41:48.271530 2801 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 05:41:48.271633 kubelet[2801]: I0909 05:41:48.271548 2801 state_mem.go:36] "Initialized new in-memory state store" Sep 9 05:41:48.278612 kubelet[2801]: I0909 05:41:48.278589 2801 policy_none.go:49] "None policy: Start" Sep 9 05:41:48.278612 kubelet[2801]: I0909 05:41:48.278609 2801 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 05:41:48.278612 kubelet[2801]: I0909 05:41:48.278619 2801 state_mem.go:35] "Initializing new in-memory state store" Sep 9 05:41:48.320429 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 9 05:41:48.326915 kubelet[2801]: E0909 05:41:48.326895 2801 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4452.0.0-n-23b47482b2\" not found" Sep 9 05:41:48.332726 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 9 05:41:48.341537 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Sep 9 05:41:48.343403 kubelet[2801]: E0909 05:41:48.343381 2801 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 9 05:41:48.343581 kubelet[2801]: I0909 05:41:48.343555 2801 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 05:41:48.343619 kubelet[2801]: I0909 05:41:48.343582 2801 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 05:41:48.343885 kubelet[2801]: I0909 05:41:48.343854 2801 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 9 05:41:48.344327 kubelet[2801]: I0909 05:41:48.344310 2801 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 05:41:48.345712 kubelet[2801]: I0909 05:41:48.345691 2801 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 9 05:41:48.345712 kubelet[2801]: I0909 05:41:48.345708 2801 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 9 05:41:48.345814 kubelet[2801]: I0909 05:41:48.345725 2801 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 9 05:41:48.345814 kubelet[2801]: I0909 05:41:48.345732 2801 kubelet.go:2436] "Starting kubelet main sync loop" Sep 9 05:41:48.345814 kubelet[2801]: E0909 05:41:48.345764 2801 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Sep 9 05:41:48.348578 kubelet[2801]: E0909 05:41:48.348492 2801 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 9 05:41:48.349180 kubelet[2801]: E0909 05:41:48.349158 2801 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 9 05:41:48.349237 kubelet[2801]: E0909 05:41:48.349202 2801 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4452.0.0-n-23b47482b2\" not found" Sep 9 05:41:48.446238 kubelet[2801]: I0909 05:41:48.446141 2801 kubelet_node_status.go:75] "Attempting to register node" node="ci-4452.0.0-n-23b47482b2" Sep 9 05:41:48.446881 kubelet[2801]: E0909 05:41:48.446838 2801 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.13:6443/api/v1/nodes\": dial tcp 10.200.8.13:6443: connect: connection refused" node="ci-4452.0.0-n-23b47482b2" Sep 9 05:41:48.449558 kubelet[2801]: E0909 05:41:48.449532 2801 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4452.0.0-n-23b47482b2?timeout=10s\": dial tcp 10.200.8.13:6443: connect: connection refused" interval="400ms" Sep 9 05:41:48.488876 systemd[1]: Created slice kubepods-burstable-podbc4693d30c430fbbda44aadbcc62f9eb.slice - 
libcontainer container kubepods-burstable-podbc4693d30c430fbbda44aadbcc62f9eb.slice. Sep 9 05:41:48.496658 kubelet[2801]: E0909 05:41:48.496496 2801 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4452.0.0-n-23b47482b2\" not found" node="ci-4452.0.0-n-23b47482b2" Sep 9 05:41:48.500129 systemd[1]: Created slice kubepods-burstable-pod4658b18973eb01d581fe15b0036b27ff.slice - libcontainer container kubepods-burstable-pod4658b18973eb01d581fe15b0036b27ff.slice. Sep 9 05:41:48.501666 kubelet[2801]: E0909 05:41:48.501645 2801 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4452.0.0-n-23b47482b2\" not found" node="ci-4452.0.0-n-23b47482b2" Sep 9 05:41:48.529584 kubelet[2801]: I0909 05:41:48.529562 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4658b18973eb01d581fe15b0036b27ff-ca-certs\") pod \"kube-controller-manager-ci-4452.0.0-n-23b47482b2\" (UID: \"4658b18973eb01d581fe15b0036b27ff\") " pod="kube-system/kube-controller-manager-ci-4452.0.0-n-23b47482b2" Sep 9 05:41:48.529664 kubelet[2801]: I0909 05:41:48.529593 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4658b18973eb01d581fe15b0036b27ff-flexvolume-dir\") pod \"kube-controller-manager-ci-4452.0.0-n-23b47482b2\" (UID: \"4658b18973eb01d581fe15b0036b27ff\") " pod="kube-system/kube-controller-manager-ci-4452.0.0-n-23b47482b2" Sep 9 05:41:48.529664 kubelet[2801]: I0909 05:41:48.529613 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4658b18973eb01d581fe15b0036b27ff-k8s-certs\") pod \"kube-controller-manager-ci-4452.0.0-n-23b47482b2\" (UID: \"4658b18973eb01d581fe15b0036b27ff\") " 
pod="kube-system/kube-controller-manager-ci-4452.0.0-n-23b47482b2" Sep 9 05:41:48.529664 kubelet[2801]: I0909 05:41:48.529631 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd8c78e4b01d262214efd0845f130ff2-kubeconfig\") pod \"kube-scheduler-ci-4452.0.0-n-23b47482b2\" (UID: \"bd8c78e4b01d262214efd0845f130ff2\") " pod="kube-system/kube-scheduler-ci-4452.0.0-n-23b47482b2" Sep 9 05:41:48.529664 kubelet[2801]: I0909 05:41:48.529649 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bc4693d30c430fbbda44aadbcc62f9eb-k8s-certs\") pod \"kube-apiserver-ci-4452.0.0-n-23b47482b2\" (UID: \"bc4693d30c430fbbda44aadbcc62f9eb\") " pod="kube-system/kube-apiserver-ci-4452.0.0-n-23b47482b2" Sep 9 05:41:48.529764 kubelet[2801]: I0909 05:41:48.529667 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bc4693d30c430fbbda44aadbcc62f9eb-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4452.0.0-n-23b47482b2\" (UID: \"bc4693d30c430fbbda44aadbcc62f9eb\") " pod="kube-system/kube-apiserver-ci-4452.0.0-n-23b47482b2" Sep 9 05:41:48.529764 kubelet[2801]: I0909 05:41:48.529686 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4658b18973eb01d581fe15b0036b27ff-kubeconfig\") pod \"kube-controller-manager-ci-4452.0.0-n-23b47482b2\" (UID: \"4658b18973eb01d581fe15b0036b27ff\") " pod="kube-system/kube-controller-manager-ci-4452.0.0-n-23b47482b2" Sep 9 05:41:48.529764 kubelet[2801]: I0909 05:41:48.529705 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/4658b18973eb01d581fe15b0036b27ff-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4452.0.0-n-23b47482b2\" (UID: \"4658b18973eb01d581fe15b0036b27ff\") " pod="kube-system/kube-controller-manager-ci-4452.0.0-n-23b47482b2" Sep 9 05:41:48.529764 kubelet[2801]: I0909 05:41:48.529722 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bc4693d30c430fbbda44aadbcc62f9eb-ca-certs\") pod \"kube-apiserver-ci-4452.0.0-n-23b47482b2\" (UID: \"bc4693d30c430fbbda44aadbcc62f9eb\") " pod="kube-system/kube-apiserver-ci-4452.0.0-n-23b47482b2" Sep 9 05:41:48.531875 systemd[1]: Created slice kubepods-burstable-podbd8c78e4b01d262214efd0845f130ff2.slice - libcontainer container kubepods-burstable-podbd8c78e4b01d262214efd0845f130ff2.slice. Sep 9 05:41:48.533255 kubelet[2801]: E0909 05:41:48.533236 2801 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4452.0.0-n-23b47482b2\" not found" node="ci-4452.0.0-n-23b47482b2" Sep 9 05:41:48.649067 kubelet[2801]: I0909 05:41:48.649033 2801 kubelet_node_status.go:75] "Attempting to register node" node="ci-4452.0.0-n-23b47482b2" Sep 9 05:41:48.649369 kubelet[2801]: E0909 05:41:48.649347 2801 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.13:6443/api/v1/nodes\": dial tcp 10.200.8.13:6443: connect: connection refused" node="ci-4452.0.0-n-23b47482b2" Sep 9 05:41:48.798412 containerd[1723]: time="2025-09-09T05:41:48.798094609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4452.0.0-n-23b47482b2,Uid:bc4693d30c430fbbda44aadbcc62f9eb,Namespace:kube-system,Attempt:0,}" Sep 9 05:41:48.802685 containerd[1723]: time="2025-09-09T05:41:48.802636931Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4452.0.0-n-23b47482b2,Uid:4658b18973eb01d581fe15b0036b27ff,Namespace:kube-system,Attempt:0,}" Sep 9 05:41:48.834637 containerd[1723]: time="2025-09-09T05:41:48.834603139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4452.0.0-n-23b47482b2,Uid:bd8c78e4b01d262214efd0845f130ff2,Namespace:kube-system,Attempt:0,}" Sep 9 05:41:48.850300 kubelet[2801]: E0909 05:41:48.850258 2801 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4452.0.0-n-23b47482b2?timeout=10s\": dial tcp 10.200.8.13:6443: connect: connection refused" interval="800ms" Sep 9 05:41:49.014599 containerd[1723]: time="2025-09-09T05:41:49.014312334Z" level=info msg="connecting to shim 89a815951bc363a33a8cda565a2c5f41fac428fdeb53d548fc09c48ae76630ea" address="unix:///run/containerd/s/596819e8ea9eddc24f075ea460f1b6d5fa878097ce1984cea6100c180e5f30b4" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:41:49.038297 containerd[1723]: time="2025-09-09T05:41:49.038263410Z" level=info msg="connecting to shim 71c136371b2fefb73a299314ab511ab8f0045766aee5c47ccb3e2e07fdc4a934" address="unix:///run/containerd/s/656064e47f01d9af9158cc53a98bd1fa790f492516de393c270e4347de7a5b34" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:41:49.044952 systemd[1]: Started cri-containerd-89a815951bc363a33a8cda565a2c5f41fac428fdeb53d548fc09c48ae76630ea.scope - libcontainer container 89a815951bc363a33a8cda565a2c5f41fac428fdeb53d548fc09c48ae76630ea. 
Sep 9 05:41:49.059716 kubelet[2801]: I0909 05:41:49.059413 2801 kubelet_node_status.go:75] "Attempting to register node" node="ci-4452.0.0-n-23b47482b2" Sep 9 05:41:49.060405 kubelet[2801]: E0909 05:41:49.059888 2801 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.13:6443/api/v1/nodes\": dial tcp 10.200.8.13:6443: connect: connection refused" node="ci-4452.0.0-n-23b47482b2" Sep 9 05:41:49.067316 containerd[1723]: time="2025-09-09T05:41:49.066881985Z" level=info msg="connecting to shim 1b393146e7a8cad920e2b58230579f04b48004bf7a022c6896db42092d59d874" address="unix:///run/containerd/s/518a91dc1a2cff8b530d910fdee49b6b4e7cd6fea70cefc90273b4b99e9d5670" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:41:49.086067 systemd[1]: Started cri-containerd-71c136371b2fefb73a299314ab511ab8f0045766aee5c47ccb3e2e07fdc4a934.scope - libcontainer container 71c136371b2fefb73a299314ab511ab8f0045766aee5c47ccb3e2e07fdc4a934. Sep 9 05:41:49.095099 systemd[1]: Started cri-containerd-1b393146e7a8cad920e2b58230579f04b48004bf7a022c6896db42092d59d874.scope - libcontainer container 1b393146e7a8cad920e2b58230579f04b48004bf7a022c6896db42092d59d874. 
Sep 9 05:41:49.119732 containerd[1723]: time="2025-09-09T05:41:49.119700227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4452.0.0-n-23b47482b2,Uid:4658b18973eb01d581fe15b0036b27ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"89a815951bc363a33a8cda565a2c5f41fac428fdeb53d548fc09c48ae76630ea\"" Sep 9 05:41:49.129798 containerd[1723]: time="2025-09-09T05:41:49.129471717Z" level=info msg="CreateContainer within sandbox \"89a815951bc363a33a8cda565a2c5f41fac428fdeb53d548fc09c48ae76630ea\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 05:41:49.152841 containerd[1723]: time="2025-09-09T05:41:49.152813727Z" level=info msg="Container d219bdfd7cca22f6a55a215f139187b849f85c237f91014c8140891dadb53365: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:41:49.168104 containerd[1723]: time="2025-09-09T05:41:49.168083914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4452.0.0-n-23b47482b2,Uid:bc4693d30c430fbbda44aadbcc62f9eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"71c136371b2fefb73a299314ab511ab8f0045766aee5c47ccb3e2e07fdc4a934\"" Sep 9 05:41:49.169070 containerd[1723]: time="2025-09-09T05:41:49.168988568Z" level=info msg="CreateContainer within sandbox \"89a815951bc363a33a8cda565a2c5f41fac428fdeb53d548fc09c48ae76630ea\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d219bdfd7cca22f6a55a215f139187b849f85c237f91014c8140891dadb53365\"" Sep 9 05:41:49.170226 containerd[1723]: time="2025-09-09T05:41:49.169624885Z" level=info msg="StartContainer for \"d219bdfd7cca22f6a55a215f139187b849f85c237f91014c8140891dadb53365\"" Sep 9 05:41:49.170581 containerd[1723]: time="2025-09-09T05:41:49.170552957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4452.0.0-n-23b47482b2,Uid:bd8c78e4b01d262214efd0845f130ff2,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"1b393146e7a8cad920e2b58230579f04b48004bf7a022c6896db42092d59d874\"" Sep 9 05:41:49.171282 containerd[1723]: time="2025-09-09T05:41:49.171263852Z" level=info msg="connecting to shim d219bdfd7cca22f6a55a215f139187b849f85c237f91014c8140891dadb53365" address="unix:///run/containerd/s/596819e8ea9eddc24f075ea460f1b6d5fa878097ce1984cea6100c180e5f30b4" protocol=ttrpc version=3 Sep 9 05:41:49.177292 containerd[1723]: time="2025-09-09T05:41:49.177257284Z" level=info msg="CreateContainer within sandbox \"71c136371b2fefb73a299314ab511ab8f0045766aee5c47ccb3e2e07fdc4a934\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 05:41:49.182105 containerd[1723]: time="2025-09-09T05:41:49.182077995Z" level=info msg="CreateContainer within sandbox \"1b393146e7a8cad920e2b58230579f04b48004bf7a022c6896db42092d59d874\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 05:41:49.191933 systemd[1]: Started cri-containerd-d219bdfd7cca22f6a55a215f139187b849f85c237f91014c8140891dadb53365.scope - libcontainer container d219bdfd7cca22f6a55a215f139187b849f85c237f91014c8140891dadb53365. 
Sep 9 05:41:49.207499 containerd[1723]: time="2025-09-09T05:41:49.207473479Z" level=info msg="Container 8414b49b8dcfdd218a6b92e38a89ab7a7d57be725e0221d33e77ecb9b1df015d: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:41:49.212656 containerd[1723]: time="2025-09-09T05:41:49.212069843Z" level=info msg="Container 336ae1db71e08d4563582c3530bd487638a2ee31504dba8a18b62ac610556593: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:41:49.230212 containerd[1723]: time="2025-09-09T05:41:49.230184271Z" level=info msg="CreateContainer within sandbox \"71c136371b2fefb73a299314ab511ab8f0045766aee5c47ccb3e2e07fdc4a934\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8414b49b8dcfdd218a6b92e38a89ab7a7d57be725e0221d33e77ecb9b1df015d\"" Sep 9 05:41:49.231079 containerd[1723]: time="2025-09-09T05:41:49.231052063Z" level=info msg="StartContainer for \"8414b49b8dcfdd218a6b92e38a89ab7a7d57be725e0221d33e77ecb9b1df015d\"" Sep 9 05:41:49.233460 containerd[1723]: time="2025-09-09T05:41:49.233434462Z" level=info msg="connecting to shim 8414b49b8dcfdd218a6b92e38a89ab7a7d57be725e0221d33e77ecb9b1df015d" address="unix:///run/containerd/s/656064e47f01d9af9158cc53a98bd1fa790f492516de393c270e4347de7a5b34" protocol=ttrpc version=3 Sep 9 05:41:49.245705 containerd[1723]: time="2025-09-09T05:41:49.245668506Z" level=info msg="CreateContainer within sandbox \"1b393146e7a8cad920e2b58230579f04b48004bf7a022c6896db42092d59d874\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"336ae1db71e08d4563582c3530bd487638a2ee31504dba8a18b62ac610556593\"" Sep 9 05:41:49.246913 containerd[1723]: time="2025-09-09T05:41:49.246887572Z" level=info msg="StartContainer for \"336ae1db71e08d4563582c3530bd487638a2ee31504dba8a18b62ac610556593\"" Sep 9 05:41:49.250022 containerd[1723]: time="2025-09-09T05:41:49.249980573Z" level=info msg="connecting to shim 336ae1db71e08d4563582c3530bd487638a2ee31504dba8a18b62ac610556593" 
address="unix:///run/containerd/s/518a91dc1a2cff8b530d910fdee49b6b4e7cd6fea70cefc90273b4b99e9d5670" protocol=ttrpc version=3 Sep 9 05:41:49.258757 containerd[1723]: time="2025-09-09T05:41:49.258523501Z" level=info msg="StartContainer for \"d219bdfd7cca22f6a55a215f139187b849f85c237f91014c8140891dadb53365\" returns successfully" Sep 9 05:41:49.274079 systemd[1]: Started cri-containerd-8414b49b8dcfdd218a6b92e38a89ab7a7d57be725e0221d33e77ecb9b1df015d.scope - libcontainer container 8414b49b8dcfdd218a6b92e38a89ab7a7d57be725e0221d33e77ecb9b1df015d. Sep 9 05:41:49.281934 systemd[1]: Started cri-containerd-336ae1db71e08d4563582c3530bd487638a2ee31504dba8a18b62ac610556593.scope - libcontainer container 336ae1db71e08d4563582c3530bd487638a2ee31504dba8a18b62ac610556593. Sep 9 05:41:49.337814 containerd[1723]: time="2025-09-09T05:41:49.337610007Z" level=info msg="StartContainer for \"336ae1db71e08d4563582c3530bd487638a2ee31504dba8a18b62ac610556593\" returns successfully" Sep 9 05:41:49.365689 kubelet[2801]: E0909 05:41:49.365534 2801 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4452.0.0-n-23b47482b2\" not found" node="ci-4452.0.0-n-23b47482b2" Sep 9 05:41:49.380428 kubelet[2801]: E0909 05:41:49.380407 2801 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4452.0.0-n-23b47482b2\" not found" node="ci-4452.0.0-n-23b47482b2" Sep 9 05:41:49.381979 containerd[1723]: time="2025-09-09T05:41:49.381950856Z" level=info msg="StartContainer for \"8414b49b8dcfdd218a6b92e38a89ab7a7d57be725e0221d33e77ecb9b1df015d\" returns successfully" Sep 9 05:41:49.862650 kubelet[2801]: I0909 05:41:49.862031 2801 kubelet_node_status.go:75] "Attempting to register node" node="ci-4452.0.0-n-23b47482b2" Sep 9 05:41:50.385167 kubelet[2801]: E0909 05:41:50.385136 2801 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ci-4452.0.0-n-23b47482b2\" not found" node="ci-4452.0.0-n-23b47482b2" Sep 9 05:41:50.385730 kubelet[2801]: E0909 05:41:50.385715 2801 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4452.0.0-n-23b47482b2\" not found" node="ci-4452.0.0-n-23b47482b2" Sep 9 05:41:51.359151 kubelet[2801]: E0909 05:41:51.359104 2801 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4452.0.0-n-23b47482b2\" not found" node="ci-4452.0.0-n-23b47482b2" Sep 9 05:41:51.385641 kubelet[2801]: E0909 05:41:51.385509 2801 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4452.0.0-n-23b47482b2\" not found" node="ci-4452.0.0-n-23b47482b2" Sep 9 05:41:51.443519 kubelet[2801]: I0909 05:41:51.443479 2801 kubelet_node_status.go:78] "Successfully registered node" node="ci-4452.0.0-n-23b47482b2" Sep 9 05:41:51.527152 kubelet[2801]: I0909 05:41:51.526826 2801 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4452.0.0-n-23b47482b2" Sep 9 05:41:51.563010 kubelet[2801]: E0909 05:41:51.562966 2801 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4452.0.0-n-23b47482b2\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4452.0.0-n-23b47482b2" Sep 9 05:41:51.563010 kubelet[2801]: I0909 05:41:51.563010 2801 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4452.0.0-n-23b47482b2" Sep 9 05:41:51.564266 kubelet[2801]: E0909 05:41:51.564240 2801 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4452.0.0-n-23b47482b2\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4452.0.0-n-23b47482b2" Sep 9 05:41:51.564266 kubelet[2801]: I0909 05:41:51.564262 2801 
kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4452.0.0-n-23b47482b2" Sep 9 05:41:51.565544 kubelet[2801]: E0909 05:41:51.565504 2801 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4452.0.0-n-23b47482b2\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4452.0.0-n-23b47482b2" Sep 9 05:41:52.179600 kubelet[2801]: I0909 05:41:52.179574 2801 apiserver.go:52] "Watching apiserver" Sep 9 05:41:52.228401 kubelet[2801]: I0909 05:41:52.228358 2801 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 05:41:52.385381 kubelet[2801]: I0909 05:41:52.385350 2801 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4452.0.0-n-23b47482b2" Sep 9 05:41:52.392098 kubelet[2801]: I0909 05:41:52.392068 2801 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 9 05:41:53.124820 kubelet[2801]: I0909 05:41:53.124776 2801 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4452.0.0-n-23b47482b2" Sep 9 05:41:53.131884 kubelet[2801]: I0909 05:41:53.131844 2801 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 9 05:41:53.266254 systemd[1]: Reload requested from client PID 3081 ('systemctl') (unit session-9.scope)... Sep 9 05:41:53.266269 systemd[1]: Reloading... Sep 9 05:41:53.342803 zram_generator::config[3128]: No configuration found. Sep 9 05:41:53.543811 systemd[1]: Reloading finished in 277 ms. Sep 9 05:41:53.566528 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:41:53.581644 systemd[1]: kubelet.service: Deactivated successfully. 
Sep 9 05:41:53.581882 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:41:53.581940 systemd[1]: kubelet.service: Consumed 692ms CPU time, 132.1M memory peak. Sep 9 05:41:53.583407 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:41:54.002955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:41:54.013130 (kubelet)[3195]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 05:41:54.047825 kubelet[3195]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 05:41:54.047825 kubelet[3195]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 05:41:54.047825 kubelet[3195]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 9 05:41:54.047825 kubelet[3195]: I0909 05:41:54.047262 3195 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 05:41:54.051447 kubelet[3195]: I0909 05:41:54.051422 3195 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 9 05:41:54.051447 kubelet[3195]: I0909 05:41:54.051440 3195 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 05:41:54.051622 kubelet[3195]: I0909 05:41:54.051610 3195 server.go:956] "Client rotation is on, will bootstrap in background" Sep 9 05:41:54.052534 kubelet[3195]: I0909 05:41:54.052518 3195 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 9 05:41:54.056596 kubelet[3195]: I0909 05:41:54.056578 3195 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 05:41:54.061242 kubelet[3195]: I0909 05:41:54.061230 3195 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 05:41:54.063800 kubelet[3195]: I0909 05:41:54.063191 3195 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 05:41:54.063800 kubelet[3195]: I0909 05:41:54.063392 3195 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 05:41:54.063800 kubelet[3195]: I0909 05:41:54.063406 3195 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4452.0.0-n-23b47482b2","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 05:41:54.063800 kubelet[3195]: I0909 05:41:54.063516 3195 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 
05:41:54.064020 kubelet[3195]: I0909 05:41:54.063523 3195 container_manager_linux.go:303] "Creating device plugin manager" Sep 9 05:41:54.064020 kubelet[3195]: I0909 05:41:54.063553 3195 state_mem.go:36] "Initialized new in-memory state store" Sep 9 05:41:54.064020 kubelet[3195]: I0909 05:41:54.063657 3195 kubelet.go:480] "Attempting to sync node with API server" Sep 9 05:41:54.064020 kubelet[3195]: I0909 05:41:54.063671 3195 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 05:41:54.064020 kubelet[3195]: I0909 05:41:54.063685 3195 kubelet.go:386] "Adding apiserver pod source" Sep 9 05:41:54.064020 kubelet[3195]: I0909 05:41:54.063695 3195 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 05:41:54.068794 kubelet[3195]: I0909 05:41:54.067393 3195 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 05:41:54.069399 kubelet[3195]: I0909 05:41:54.069386 3195 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 9 05:41:54.072501 kubelet[3195]: I0909 05:41:54.072483 3195 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 05:41:54.072686 kubelet[3195]: I0909 05:41:54.072679 3195 server.go:1289] "Started kubelet" Sep 9 05:41:54.074589 kubelet[3195]: I0909 05:41:54.074574 3195 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 05:41:54.085188 kubelet[3195]: I0909 05:41:54.085162 3195 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 05:41:54.088178 kubelet[3195]: I0909 05:41:54.088162 3195 server.go:317] "Adding debug handlers to kubelet server" Sep 9 05:41:54.089837 kubelet[3195]: I0909 05:41:54.089809 3195 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Sep 9 05:41:54.096702 kubelet[3195]: I0909 05:41:54.096651 3195 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 05:41:54.097126 kubelet[3195]: I0909 05:41:54.097114 3195 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 05:41:54.097399 kubelet[3195]: I0909 05:41:54.097387 3195 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 05:41:54.101143 kubelet[3195]: I0909 05:41:54.101116 3195 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 05:41:54.107271 kubelet[3195]: I0909 05:41:54.107247 3195 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 05:41:54.107448 kubelet[3195]: I0909 05:41:54.107440 3195 reconciler.go:26] "Reconciler: start to sync state" Sep 9 05:41:54.110361 kubelet[3195]: I0909 05:41:54.110345 3195 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 9 05:41:54.110531 kubelet[3195]: I0909 05:41:54.110432 3195 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 9 05:41:54.110531 kubelet[3195]: I0909 05:41:54.110447 3195 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 9 05:41:54.110531 kubelet[3195]: I0909 05:41:54.110454 3195 kubelet.go:2436] "Starting kubelet main sync loop" Sep 9 05:41:54.110714 kubelet[3195]: E0909 05:41:54.110617 3195 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 05:41:54.115508 kubelet[3195]: E0909 05:41:54.115227 3195 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 05:41:54.118043 kubelet[3195]: I0909 05:41:54.118030 3195 factory.go:223] Registration of the containerd container factory successfully Sep 9 05:41:54.118204 kubelet[3195]: I0909 05:41:54.118129 3195 factory.go:223] Registration of the systemd container factory successfully Sep 9 05:41:54.118318 kubelet[3195]: I0909 05:41:54.118293 3195 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 05:41:54.155881 kubelet[3195]: I0909 05:41:54.155865 3195 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 05:41:54.155988 kubelet[3195]: I0909 05:41:54.155960 3195 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 05:41:54.156046 kubelet[3195]: I0909 05:41:54.156042 3195 state_mem.go:36] "Initialized new in-memory state store" Sep 9 05:41:54.156181 kubelet[3195]: I0909 05:41:54.156164 3195 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 05:41:54.156217 kubelet[3195]: I0909 05:41:54.156171 3195 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 05:41:54.156284 kubelet[3195]: I0909 05:41:54.156240 3195 policy_none.go:49] "None policy: Start" Sep 9 05:41:54.156284 kubelet[3195]: I0909 05:41:54.156248 3195 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 05:41:54.156284 kubelet[3195]: I0909 05:41:54.156254 3195 state_mem.go:35] "Initializing new in-memory state store" Sep 9 05:41:54.156651 kubelet[3195]: I0909 05:41:54.156580 3195 state_mem.go:75] "Updated machine memory state" Sep 9 05:41:54.161816 kubelet[3195]: E0909 05:41:54.161798 3195 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 9 05:41:54.162018 kubelet[3195]: I0909 05:41:54.161943 3195 eviction_manager.go:189] "Eviction manager: starting 
control loop" Sep 9 05:41:54.162018 kubelet[3195]: I0909 05:41:54.161952 3195 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 05:41:54.163131 kubelet[3195]: I0909 05:41:54.163040 3195 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 05:41:54.165049 kubelet[3195]: E0909 05:41:54.164877 3195 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 9 05:41:54.211865 kubelet[3195]: I0909 05:41:54.211848 3195 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4452.0.0-n-23b47482b2" Sep 9 05:41:54.211990 kubelet[3195]: I0909 05:41:54.211978 3195 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4452.0.0-n-23b47482b2" Sep 9 05:41:54.212052 kubelet[3195]: I0909 05:41:54.211898 3195 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4452.0.0-n-23b47482b2" Sep 9 05:41:54.227165 kubelet[3195]: I0909 05:41:54.227142 3195 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 9 05:41:54.227665 kubelet[3195]: I0909 05:41:54.227588 3195 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 9 05:41:54.227665 kubelet[3195]: I0909 05:41:54.227609 3195 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 9 05:41:54.227665 kubelet[3195]: E0909 05:41:54.227628 3195 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4452.0.0-n-23b47482b2\" already exists" 
pod="kube-system/kube-apiserver-ci-4452.0.0-n-23b47482b2" Sep 9 05:41:54.227665 kubelet[3195]: E0909 05:41:54.227633 3195 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4452.0.0-n-23b47482b2\" already exists" pod="kube-system/kube-controller-manager-ci-4452.0.0-n-23b47482b2" Sep 9 05:41:54.268941 kubelet[3195]: I0909 05:41:54.268877 3195 kubelet_node_status.go:75] "Attempting to register node" node="ci-4452.0.0-n-23b47482b2" Sep 9 05:41:54.278878 kubelet[3195]: I0909 05:41:54.278822 3195 kubelet_node_status.go:124] "Node was previously registered" node="ci-4452.0.0-n-23b47482b2" Sep 9 05:41:54.278878 kubelet[3195]: I0909 05:41:54.278881 3195 kubelet_node_status.go:78] "Successfully registered node" node="ci-4452.0.0-n-23b47482b2" Sep 9 05:41:54.409078 kubelet[3195]: I0909 05:41:54.409051 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4658b18973eb01d581fe15b0036b27ff-flexvolume-dir\") pod \"kube-controller-manager-ci-4452.0.0-n-23b47482b2\" (UID: \"4658b18973eb01d581fe15b0036b27ff\") " pod="kube-system/kube-controller-manager-ci-4452.0.0-n-23b47482b2" Sep 9 05:41:54.409382 kubelet[3195]: I0909 05:41:54.409085 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4658b18973eb01d581fe15b0036b27ff-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4452.0.0-n-23b47482b2\" (UID: \"4658b18973eb01d581fe15b0036b27ff\") " pod="kube-system/kube-controller-manager-ci-4452.0.0-n-23b47482b2" Sep 9 05:41:54.409382 kubelet[3195]: I0909 05:41:54.409105 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bc4693d30c430fbbda44aadbcc62f9eb-ca-certs\") pod \"kube-apiserver-ci-4452.0.0-n-23b47482b2\" (UID: 
\"bc4693d30c430fbbda44aadbcc62f9eb\") " pod="kube-system/kube-apiserver-ci-4452.0.0-n-23b47482b2" Sep 9 05:41:54.409382 kubelet[3195]: I0909 05:41:54.409149 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4658b18973eb01d581fe15b0036b27ff-ca-certs\") pod \"kube-controller-manager-ci-4452.0.0-n-23b47482b2\" (UID: \"4658b18973eb01d581fe15b0036b27ff\") " pod="kube-system/kube-controller-manager-ci-4452.0.0-n-23b47482b2" Sep 9 05:41:54.409382 kubelet[3195]: I0909 05:41:54.409178 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4658b18973eb01d581fe15b0036b27ff-k8s-certs\") pod \"kube-controller-manager-ci-4452.0.0-n-23b47482b2\" (UID: \"4658b18973eb01d581fe15b0036b27ff\") " pod="kube-system/kube-controller-manager-ci-4452.0.0-n-23b47482b2" Sep 9 05:41:54.409382 kubelet[3195]: I0909 05:41:54.409200 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4658b18973eb01d581fe15b0036b27ff-kubeconfig\") pod \"kube-controller-manager-ci-4452.0.0-n-23b47482b2\" (UID: \"4658b18973eb01d581fe15b0036b27ff\") " pod="kube-system/kube-controller-manager-ci-4452.0.0-n-23b47482b2" Sep 9 05:41:54.409523 kubelet[3195]: I0909 05:41:54.409225 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd8c78e4b01d262214efd0845f130ff2-kubeconfig\") pod \"kube-scheduler-ci-4452.0.0-n-23b47482b2\" (UID: \"bd8c78e4b01d262214efd0845f130ff2\") " pod="kube-system/kube-scheduler-ci-4452.0.0-n-23b47482b2" Sep 9 05:41:54.409523 kubelet[3195]: I0909 05:41:54.409243 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/bc4693d30c430fbbda44aadbcc62f9eb-k8s-certs\") pod \"kube-apiserver-ci-4452.0.0-n-23b47482b2\" (UID: \"bc4693d30c430fbbda44aadbcc62f9eb\") " pod="kube-system/kube-apiserver-ci-4452.0.0-n-23b47482b2" Sep 9 05:41:54.409523 kubelet[3195]: I0909 05:41:54.409268 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bc4693d30c430fbbda44aadbcc62f9eb-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4452.0.0-n-23b47482b2\" (UID: \"bc4693d30c430fbbda44aadbcc62f9eb\") " pod="kube-system/kube-apiserver-ci-4452.0.0-n-23b47482b2" Sep 9 05:41:55.065826 kubelet[3195]: I0909 05:41:55.065255 3195 apiserver.go:52] "Watching apiserver" Sep 9 05:41:55.107956 kubelet[3195]: I0909 05:41:55.107916 3195 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 05:41:55.141228 kubelet[3195]: I0909 05:41:55.141196 3195 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4452.0.0-n-23b47482b2" Sep 9 05:41:55.141429 kubelet[3195]: I0909 05:41:55.141393 3195 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4452.0.0-n-23b47482b2" Sep 9 05:41:55.148858 kubelet[3195]: I0909 05:41:55.148836 3195 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 9 05:41:55.149124 kubelet[3195]: E0909 05:41:55.148887 3195 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4452.0.0-n-23b47482b2\" already exists" pod="kube-system/kube-apiserver-ci-4452.0.0-n-23b47482b2" Sep 9 05:41:55.149246 kubelet[3195]: I0909 05:41:55.149157 3195 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 9 
05:41:55.149246 kubelet[3195]: E0909 05:41:55.149195 3195 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4452.0.0-n-23b47482b2\" already exists" pod="kube-system/kube-scheduler-ci-4452.0.0-n-23b47482b2" Sep 9 05:41:55.171207 kubelet[3195]: I0909 05:41:55.171137 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4452.0.0-n-23b47482b2" podStartSLOduration=1.17112288 podStartE2EDuration="1.17112288s" podCreationTimestamp="2025-09-09 05:41:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:41:55.170679684 +0000 UTC m=+1.153632147" watchObservedRunningTime="2025-09-09 05:41:55.17112288 +0000 UTC m=+1.154075343" Sep 9 05:41:55.194024 kubelet[3195]: I0909 05:41:55.193903 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4452.0.0-n-23b47482b2" podStartSLOduration=2.193890837 podStartE2EDuration="2.193890837s" podCreationTimestamp="2025-09-09 05:41:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:41:55.180823832 +0000 UTC m=+1.163776314" watchObservedRunningTime="2025-09-09 05:41:55.193890837 +0000 UTC m=+1.176843301" Sep 9 05:42:00.019876 kubelet[3195]: I0909 05:42:00.019836 3195 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 05:42:00.020410 containerd[1723]: time="2025-09-09T05:42:00.020377714Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 9 05:42:00.020652 kubelet[3195]: I0909 05:42:00.020634 3195 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 05:42:00.738067 kubelet[3195]: I0909 05:42:00.738013 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4452.0.0-n-23b47482b2" podStartSLOduration=8.737979788 podStartE2EDuration="8.737979788s" podCreationTimestamp="2025-09-09 05:41:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:41:55.194776499 +0000 UTC m=+1.177728961" watchObservedRunningTime="2025-09-09 05:42:00.737979788 +0000 UTC m=+6.720932245" Sep 9 05:42:00.754046 systemd[1]: Created slice kubepods-besteffort-poddca49f8e_e97a_4793_af72_46089ccb86ae.slice - libcontainer container kubepods-besteffort-poddca49f8e_e97a_4793_af72_46089ccb86ae.slice. Sep 9 05:42:00.754637 kubelet[3195]: I0909 05:42:00.754192 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dca49f8e-e97a-4793-af72-46089ccb86ae-xtables-lock\") pod \"kube-proxy-n8cxq\" (UID: \"dca49f8e-e97a-4793-af72-46089ccb86ae\") " pod="kube-system/kube-proxy-n8cxq" Sep 9 05:42:00.754637 kubelet[3195]: I0909 05:42:00.754224 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dca49f8e-e97a-4793-af72-46089ccb86ae-lib-modules\") pod \"kube-proxy-n8cxq\" (UID: \"dca49f8e-e97a-4793-af72-46089ccb86ae\") " pod="kube-system/kube-proxy-n8cxq" Sep 9 05:42:00.754637 kubelet[3195]: I0909 05:42:00.754244 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dca49f8e-e97a-4793-af72-46089ccb86ae-kube-proxy\") pod \"kube-proxy-n8cxq\" (UID: 
\"dca49f8e-e97a-4793-af72-46089ccb86ae\") " pod="kube-system/kube-proxy-n8cxq" Sep 9 05:42:00.756815 kubelet[3195]: I0909 05:42:00.754262 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4kw4\" (UniqueName: \"kubernetes.io/projected/dca49f8e-e97a-4793-af72-46089ccb86ae-kube-api-access-c4kw4\") pod \"kube-proxy-n8cxq\" (UID: \"dca49f8e-e97a-4793-af72-46089ccb86ae\") " pod="kube-system/kube-proxy-n8cxq" Sep 9 05:42:00.861438 kubelet[3195]: E0909 05:42:00.861412 3195 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 9 05:42:00.861438 kubelet[3195]: E0909 05:42:00.861439 3195 projected.go:194] Error preparing data for projected volume kube-api-access-c4kw4 for pod kube-system/kube-proxy-n8cxq: configmap "kube-root-ca.crt" not found Sep 9 05:42:00.861561 kubelet[3195]: E0909 05:42:00.861508 3195 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dca49f8e-e97a-4793-af72-46089ccb86ae-kube-api-access-c4kw4 podName:dca49f8e-e97a-4793-af72-46089ccb86ae nodeName:}" failed. No retries permitted until 2025-09-09 05:42:01.361485322 +0000 UTC m=+7.344437783 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-c4kw4" (UniqueName: "kubernetes.io/projected/dca49f8e-e97a-4793-af72-46089ccb86ae-kube-api-access-c4kw4") pod "kube-proxy-n8cxq" (UID: "dca49f8e-e97a-4793-af72-46089ccb86ae") : configmap "kube-root-ca.crt" not found Sep 9 05:42:01.334365 systemd[1]: Created slice kubepods-besteffort-pod2b7f4df4_da8d_4047_8ce7_4fb709775684.slice - libcontainer container kubepods-besteffort-pod2b7f4df4_da8d_4047_8ce7_4fb709775684.slice. 
Sep 9 05:42:01.358356 kubelet[3195]: I0909 05:42:01.358324 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2b7f4df4-da8d-4047-8ce7-4fb709775684-var-lib-calico\") pod \"tigera-operator-755d956888-x4cwf\" (UID: \"2b7f4df4-da8d-4047-8ce7-4fb709775684\") " pod="tigera-operator/tigera-operator-755d956888-x4cwf" Sep 9 05:42:01.358684 kubelet[3195]: I0909 05:42:01.358431 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xccd\" (UniqueName: \"kubernetes.io/projected/2b7f4df4-da8d-4047-8ce7-4fb709775684-kube-api-access-4xccd\") pod \"tigera-operator-755d956888-x4cwf\" (UID: \"2b7f4df4-da8d-4047-8ce7-4fb709775684\") " pod="tigera-operator/tigera-operator-755d956888-x4cwf" Sep 9 05:42:01.640882 containerd[1723]: time="2025-09-09T05:42:01.640553699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-x4cwf,Uid:2b7f4df4-da8d-4047-8ce7-4fb709775684,Namespace:tigera-operator,Attempt:0,}" Sep 9 05:42:01.663897 containerd[1723]: time="2025-09-09T05:42:01.663864268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n8cxq,Uid:dca49f8e-e97a-4793-af72-46089ccb86ae,Namespace:kube-system,Attempt:0,}" Sep 9 05:42:01.716110 containerd[1723]: time="2025-09-09T05:42:01.716065399Z" level=info msg="connecting to shim f17a15ad348b462b7f910099d884e57e023ff22f352119a4c530eed4f694fb3b" address="unix:///run/containerd/s/7fe5632189351733c2f997d9935fb8ffa6d29986284293893a2a0c9dea835db6" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:42:01.734300 containerd[1723]: time="2025-09-09T05:42:01.734265303Z" level=info msg="connecting to shim 336525bdf923513e84988493409c8fdfbcb792c82d2e88bef79d57f9bb546b89" address="unix:///run/containerd/s/960e90930557c32b852ce606266b2be60969cc38e4106e0c6c97cc6ec58ffa3e" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:42:01.742966 
systemd[1]: Started cri-containerd-f17a15ad348b462b7f910099d884e57e023ff22f352119a4c530eed4f694fb3b.scope - libcontainer container f17a15ad348b462b7f910099d884e57e023ff22f352119a4c530eed4f694fb3b. Sep 9 05:42:01.760936 systemd[1]: Started cri-containerd-336525bdf923513e84988493409c8fdfbcb792c82d2e88bef79d57f9bb546b89.scope - libcontainer container 336525bdf923513e84988493409c8fdfbcb792c82d2e88bef79d57f9bb546b89. Sep 9 05:42:01.786102 containerd[1723]: time="2025-09-09T05:42:01.786022535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n8cxq,Uid:dca49f8e-e97a-4793-af72-46089ccb86ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"336525bdf923513e84988493409c8fdfbcb792c82d2e88bef79d57f9bb546b89\"" Sep 9 05:42:01.794295 containerd[1723]: time="2025-09-09T05:42:01.794257399Z" level=info msg="CreateContainer within sandbox \"336525bdf923513e84988493409c8fdfbcb792c82d2e88bef79d57f9bb546b89\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 05:42:01.812216 containerd[1723]: time="2025-09-09T05:42:01.812156142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-x4cwf,Uid:2b7f4df4-da8d-4047-8ce7-4fb709775684,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f17a15ad348b462b7f910099d884e57e023ff22f352119a4c530eed4f694fb3b\"" Sep 9 05:42:01.813096 containerd[1723]: time="2025-09-09T05:42:01.812960238Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 9 05:42:01.830086 containerd[1723]: time="2025-09-09T05:42:01.830060619Z" level=info msg="Container dfe6b97b77e7cad29827644c37720f03f69a2d81a3a21eddc683893db7b90353: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:42:01.844607 containerd[1723]: time="2025-09-09T05:42:01.844581303Z" level=info msg="CreateContainer within sandbox \"336525bdf923513e84988493409c8fdfbcb792c82d2e88bef79d57f9bb546b89\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"dfe6b97b77e7cad29827644c37720f03f69a2d81a3a21eddc683893db7b90353\"" Sep 9 05:42:01.845831 containerd[1723]: time="2025-09-09T05:42:01.845161166Z" level=info msg="StartContainer for \"dfe6b97b77e7cad29827644c37720f03f69a2d81a3a21eddc683893db7b90353\"" Sep 9 05:42:01.846526 containerd[1723]: time="2025-09-09T05:42:01.846501574Z" level=info msg="connecting to shim dfe6b97b77e7cad29827644c37720f03f69a2d81a3a21eddc683893db7b90353" address="unix:///run/containerd/s/960e90930557c32b852ce606266b2be60969cc38e4106e0c6c97cc6ec58ffa3e" protocol=ttrpc version=3 Sep 9 05:42:01.861932 systemd[1]: Started cri-containerd-dfe6b97b77e7cad29827644c37720f03f69a2d81a3a21eddc683893db7b90353.scope - libcontainer container dfe6b97b77e7cad29827644c37720f03f69a2d81a3a21eddc683893db7b90353. Sep 9 05:42:01.901017 containerd[1723]: time="2025-09-09T05:42:01.900993601Z" level=info msg="StartContainer for \"dfe6b97b77e7cad29827644c37720f03f69a2d81a3a21eddc683893db7b90353\" returns successfully" Sep 9 05:42:02.168554 kubelet[3195]: I0909 05:42:02.168175 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-n8cxq" podStartSLOduration=2.168158401 podStartE2EDuration="2.168158401s" podCreationTimestamp="2025-09-09 05:42:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:42:02.167845239 +0000 UTC m=+8.150797723" watchObservedRunningTime="2025-09-09 05:42:02.168158401 +0000 UTC m=+8.151110862" Sep 9 05:42:03.274296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount499594071.mount: Deactivated successfully. 
Sep 9 05:42:03.754614 containerd[1723]: time="2025-09-09T05:42:03.754567609Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:42:03.758456 containerd[1723]: time="2025-09-09T05:42:03.758414503Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609"
Sep 9 05:42:03.763347 containerd[1723]: time="2025-09-09T05:42:03.763294510Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:42:03.767922 containerd[1723]: time="2025-09-09T05:42:03.767396746Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:42:03.767922 containerd[1723]: time="2025-09-09T05:42:03.767838816Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 1.954854098s"
Sep 9 05:42:03.767922 containerd[1723]: time="2025-09-09T05:42:03.767863789Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\""
Sep 9 05:42:03.774992 containerd[1723]: time="2025-09-09T05:42:03.774956657Z" level=info msg="CreateContainer within sandbox \"f17a15ad348b462b7f910099d884e57e023ff22f352119a4c530eed4f694fb3b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 9 05:42:03.793248 containerd[1723]: time="2025-09-09T05:42:03.793225112Z" level=info msg="Container 37e37ec5f4568c0004fe2668645dd9b68745c965012f7b642641e3558cfcf37c: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:42:03.805168 containerd[1723]: time="2025-09-09T05:42:03.805142826Z" level=info msg="CreateContainer within sandbox \"f17a15ad348b462b7f910099d884e57e023ff22f352119a4c530eed4f694fb3b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"37e37ec5f4568c0004fe2668645dd9b68745c965012f7b642641e3558cfcf37c\""
Sep 9 05:42:03.805668 containerd[1723]: time="2025-09-09T05:42:03.805644283Z" level=info msg="StartContainer for \"37e37ec5f4568c0004fe2668645dd9b68745c965012f7b642641e3558cfcf37c\""
Sep 9 05:42:03.806594 containerd[1723]: time="2025-09-09T05:42:03.806565502Z" level=info msg="connecting to shim 37e37ec5f4568c0004fe2668645dd9b68745c965012f7b642641e3558cfcf37c" address="unix:///run/containerd/s/7fe5632189351733c2f997d9935fb8ffa6d29986284293893a2a0c9dea835db6" protocol=ttrpc version=3
Sep 9 05:42:03.829986 systemd[1]: Started cri-containerd-37e37ec5f4568c0004fe2668645dd9b68745c965012f7b642641e3558cfcf37c.scope - libcontainer container 37e37ec5f4568c0004fe2668645dd9b68745c965012f7b642641e3558cfcf37c.
Sep 9 05:42:03.855369 containerd[1723]: time="2025-09-09T05:42:03.855343976Z" level=info msg="StartContainer for \"37e37ec5f4568c0004fe2668645dd9b68745c965012f7b642641e3558cfcf37c\" returns successfully"
Sep 9 05:42:04.173425 kubelet[3195]: I0909 05:42:04.173304 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-x4cwf" podStartSLOduration=1.217475992 podStartE2EDuration="3.173286334s" podCreationTimestamp="2025-09-09 05:42:01 +0000 UTC" firstStartedPulling="2025-09-09 05:42:01.812749182 +0000 UTC m=+7.795701639" lastFinishedPulling="2025-09-09 05:42:03.768559524 +0000 UTC m=+9.751511981" observedRunningTime="2025-09-09 05:42:04.173146258 +0000 UTC m=+10.156098720" watchObservedRunningTime="2025-09-09 05:42:04.173286334 +0000 UTC m=+10.156238797"
Sep 9 05:42:09.639766 sudo[2189]: pam_unix(sudo:session): session closed for user root
Sep 9 05:42:09.741564 sshd[2188]: Connection closed by 10.200.16.10 port 51046
Sep 9 05:42:09.742223 sshd-session[2185]: pam_unix(sshd:session): session closed for user core
Sep 9 05:42:09.747013 systemd[1]: sshd@6-10.200.8.13:22-10.200.16.10:51046.service: Deactivated successfully.
Sep 9 05:42:09.749603 systemd[1]: session-9.scope: Deactivated successfully.
Sep 9 05:42:09.749901 systemd[1]: session-9.scope: Consumed 5.110s CPU time, 228.8M memory peak.
Sep 9 05:42:09.751566 systemd-logind[1701]: Session 9 logged out. Waiting for processes to exit.
Sep 9 05:42:09.754743 systemd-logind[1701]: Removed session 9.
Sep 9 05:42:13.102323 systemd[1]: Created slice kubepods-besteffort-pod9ac4f536_e7ad_4af3_85b4_8b18982b783a.slice - libcontainer container kubepods-besteffort-pod9ac4f536_e7ad_4af3_85b4_8b18982b783a.slice.
Sep 9 05:42:13.135829 kubelet[3195]: I0909 05:42:13.135792 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9ac4f536-e7ad-4af3-85b4-8b18982b783a-typha-certs\") pod \"calico-typha-66bd6488bf-pfdl7\" (UID: \"9ac4f536-e7ad-4af3-85b4-8b18982b783a\") " pod="calico-system/calico-typha-66bd6488bf-pfdl7"
Sep 9 05:42:13.136130 kubelet[3195]: I0909 05:42:13.135837 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp92s\" (UniqueName: \"kubernetes.io/projected/9ac4f536-e7ad-4af3-85b4-8b18982b783a-kube-api-access-bp92s\") pod \"calico-typha-66bd6488bf-pfdl7\" (UID: \"9ac4f536-e7ad-4af3-85b4-8b18982b783a\") " pod="calico-system/calico-typha-66bd6488bf-pfdl7"
Sep 9 05:42:13.136130 kubelet[3195]: I0909 05:42:13.135859 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ac4f536-e7ad-4af3-85b4-8b18982b783a-tigera-ca-bundle\") pod \"calico-typha-66bd6488bf-pfdl7\" (UID: \"9ac4f536-e7ad-4af3-85b4-8b18982b783a\") " pod="calico-system/calico-typha-66bd6488bf-pfdl7"
Sep 9 05:42:13.412057 containerd[1723]: time="2025-09-09T05:42:13.412012293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66bd6488bf-pfdl7,Uid:9ac4f536-e7ad-4af3-85b4-8b18982b783a,Namespace:calico-system,Attempt:0,}"
Sep 9 05:42:13.420831 systemd[1]: Created slice kubepods-besteffort-poda7863778_9120_434a_ac30_9ec95b62fa9d.slice - libcontainer container kubepods-besteffort-poda7863778_9120_434a_ac30_9ec95b62fa9d.slice.
Sep 9 05:42:13.438074 kubelet[3195]: I0909 05:42:13.438044 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a7863778-9120-434a-ac30-9ec95b62fa9d-cni-bin-dir\") pod \"calico-node-8vnsq\" (UID: \"a7863778-9120-434a-ac30-9ec95b62fa9d\") " pod="calico-system/calico-node-8vnsq"
Sep 9 05:42:13.438177 kubelet[3195]: I0909 05:42:13.438080 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a7863778-9120-434a-ac30-9ec95b62fa9d-node-certs\") pod \"calico-node-8vnsq\" (UID: \"a7863778-9120-434a-ac30-9ec95b62fa9d\") " pod="calico-system/calico-node-8vnsq"
Sep 9 05:42:13.438177 kubelet[3195]: I0909 05:42:13.438099 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a7863778-9120-434a-ac30-9ec95b62fa9d-policysync\") pod \"calico-node-8vnsq\" (UID: \"a7863778-9120-434a-ac30-9ec95b62fa9d\") " pod="calico-system/calico-node-8vnsq"
Sep 9 05:42:13.438177 kubelet[3195]: I0909 05:42:13.438117 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g6ng\" (UniqueName: \"kubernetes.io/projected/a7863778-9120-434a-ac30-9ec95b62fa9d-kube-api-access-9g6ng\") pod \"calico-node-8vnsq\" (UID: \"a7863778-9120-434a-ac30-9ec95b62fa9d\") " pod="calico-system/calico-node-8vnsq"
Sep 9 05:42:13.438177 kubelet[3195]: I0909 05:42:13.438141 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a7863778-9120-434a-ac30-9ec95b62fa9d-cni-log-dir\") pod \"calico-node-8vnsq\" (UID: \"a7863778-9120-434a-ac30-9ec95b62fa9d\") " pod="calico-system/calico-node-8vnsq"
Sep 9 05:42:13.438177 kubelet[3195]: I0909 05:42:13.438156 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a7863778-9120-434a-ac30-9ec95b62fa9d-cni-net-dir\") pod \"calico-node-8vnsq\" (UID: \"a7863778-9120-434a-ac30-9ec95b62fa9d\") " pod="calico-system/calico-node-8vnsq"
Sep 9 05:42:13.438520 kubelet[3195]: I0909 05:42:13.438177 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a7863778-9120-434a-ac30-9ec95b62fa9d-flexvol-driver-host\") pod \"calico-node-8vnsq\" (UID: \"a7863778-9120-434a-ac30-9ec95b62fa9d\") " pod="calico-system/calico-node-8vnsq"
Sep 9 05:42:13.438520 kubelet[3195]: I0909 05:42:13.438194 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7863778-9120-434a-ac30-9ec95b62fa9d-tigera-ca-bundle\") pod \"calico-node-8vnsq\" (UID: \"a7863778-9120-434a-ac30-9ec95b62fa9d\") " pod="calico-system/calico-node-8vnsq"
Sep 9 05:42:13.438520 kubelet[3195]: I0909 05:42:13.438213 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a7863778-9120-434a-ac30-9ec95b62fa9d-lib-modules\") pod \"calico-node-8vnsq\" (UID: \"a7863778-9120-434a-ac30-9ec95b62fa9d\") " pod="calico-system/calico-node-8vnsq"
Sep 9 05:42:13.438520 kubelet[3195]: I0909 05:42:13.438230 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a7863778-9120-434a-ac30-9ec95b62fa9d-var-lib-calico\") pod \"calico-node-8vnsq\" (UID: \"a7863778-9120-434a-ac30-9ec95b62fa9d\") " pod="calico-system/calico-node-8vnsq"
Sep 9 05:42:13.438520 kubelet[3195]: I0909 05:42:13.438246 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a7863778-9120-434a-ac30-9ec95b62fa9d-xtables-lock\") pod \"calico-node-8vnsq\" (UID: \"a7863778-9120-434a-ac30-9ec95b62fa9d\") " pod="calico-system/calico-node-8vnsq"
Sep 9 05:42:13.438648 kubelet[3195]: I0909 05:42:13.438265 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a7863778-9120-434a-ac30-9ec95b62fa9d-var-run-calico\") pod \"calico-node-8vnsq\" (UID: \"a7863778-9120-434a-ac30-9ec95b62fa9d\") " pod="calico-system/calico-node-8vnsq"
Sep 9 05:42:13.480401 containerd[1723]: time="2025-09-09T05:42:13.480356158Z" level=info msg="connecting to shim d01b72e15f3e17aa4fcc96c7e22380fec0fbaf1a390bb60c58d60e5cfe43a407" address="unix:///run/containerd/s/0c51bbe79e2411d240835cb9505ef35d876d8710a2c490709c65d4b5bac3991f" namespace=k8s.io protocol=ttrpc version=3
Sep 9 05:42:13.505948 systemd[1]: Started cri-containerd-d01b72e15f3e17aa4fcc96c7e22380fec0fbaf1a390bb60c58d60e5cfe43a407.scope - libcontainer container d01b72e15f3e17aa4fcc96c7e22380fec0fbaf1a390bb60c58d60e5cfe43a407.
Sep 9 05:42:13.544911 kubelet[3195]: E0909 05:42:13.544891 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:42:13.545017 kubelet[3195]: W0909 05:42:13.545005 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:42:13.545095 kubelet[3195]: E0909 05:42:13.545084 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:42:13.552055 kubelet[3195]: E0909 05:42:13.552040 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:42:13.552146 kubelet[3195]: W0909 05:42:13.552138 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:42:13.552199 kubelet[3195]: E0909 05:42:13.552192 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:42:13.570601 containerd[1723]: time="2025-09-09T05:42:13.570576176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66bd6488bf-pfdl7,Uid:9ac4f536-e7ad-4af3-85b4-8b18982b783a,Namespace:calico-system,Attempt:0,} returns sandbox id \"d01b72e15f3e17aa4fcc96c7e22380fec0fbaf1a390bb60c58d60e5cfe43a407\""
Sep 9 05:42:13.572275 containerd[1723]: time="2025-09-09T05:42:13.572253000Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 9 05:42:13.709083 kubelet[3195]: E0909 05:42:13.708986 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kpfkf" podUID="6ce6737c-5cbb-4d00-b926-8ff7cccb578b"
Sep 9 05:42:13.723505 kubelet[3195]: E0909 05:42:13.723482 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:42:13.723505 kubelet[3195]: W0909 05:42:13.723499 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:42:13.723664 kubelet[3195]: E0909 05:42:13.723515 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:42:13.723664 kubelet[3195]: E0909 05:42:13.723623 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:42:13.723664 kubelet[3195]: W0909 05:42:13.723629 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:42:13.723664 kubelet[3195]: E0909 05:42:13.723636 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:42:13.723806 kubelet[3195]: E0909 05:42:13.723723 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:42:13.723806 kubelet[3195]: W0909 05:42:13.723728 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:42:13.723806 kubelet[3195]: E0909 05:42:13.723734 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:42:13.723919 kubelet[3195]: E0909 05:42:13.723907 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:42:13.723919 kubelet[3195]: W0909 05:42:13.723917 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:42:13.723993 kubelet[3195]: E0909 05:42:13.723925 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:42:13.724048 kubelet[3195]: E0909 05:42:13.724027 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:42:13.724048 kubelet[3195]: W0909 05:42:13.724036 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:42:13.724048 kubelet[3195]: E0909 05:42:13.724042 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:42:13.724141 kubelet[3195]: E0909 05:42:13.724125 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:42:13.724141 kubelet[3195]: W0909 05:42:13.724130 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:42:13.724210 kubelet[3195]: E0909 05:42:13.724142 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:42:13.724240 kubelet[3195]: E0909 05:42:13.724226 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:42:13.724240 kubelet[3195]: W0909 05:42:13.724231 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:42:13.724240 kubelet[3195]: E0909 05:42:13.724238 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:42:13.724333 kubelet[3195]: E0909 05:42:13.724321 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:42:13.724333 kubelet[3195]: W0909 05:42:13.724326 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:42:13.724398 kubelet[3195]: E0909 05:42:13.724332 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:42:13.724451 kubelet[3195]: E0909 05:42:13.724424 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:42:13.724451 kubelet[3195]: W0909 05:42:13.724429 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:42:13.724451 kubelet[3195]: E0909 05:42:13.724436 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:42:13.724550 kubelet[3195]: E0909 05:42:13.724514 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:42:13.724550 kubelet[3195]: W0909 05:42:13.724518 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:42:13.724550 kubelet[3195]: E0909 05:42:13.724524 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:42:13.724641 kubelet[3195]: E0909 05:42:13.724601 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:42:13.724641 kubelet[3195]: W0909 05:42:13.724605 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:42:13.724641 kubelet[3195]: E0909 05:42:13.724611 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:42:13.724722 kubelet[3195]: E0909 05:42:13.724690 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:42:13.724722 kubelet[3195]: W0909 05:42:13.724695 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:42:13.724722 kubelet[3195]: E0909 05:42:13.724702 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:42:13.724836 kubelet[3195]: E0909 05:42:13.724809 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:42:13.724836 kubelet[3195]: W0909 05:42:13.724814 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:42:13.724836 kubelet[3195]: E0909 05:42:13.724820 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:42:13.724928 kubelet[3195]: E0909 05:42:13.724905 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:42:13.724928 kubelet[3195]: W0909 05:42:13.724910 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:42:13.724928 kubelet[3195]: E0909 05:42:13.724916 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:42:13.725029 kubelet[3195]: E0909 05:42:13.724993 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:42:13.725029 kubelet[3195]: W0909 05:42:13.724998 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:42:13.725029 kubelet[3195]: E0909 05:42:13.725003 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:42:13.725110 kubelet[3195]: E0909 05:42:13.725081 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:42:13.725110 kubelet[3195]: W0909 05:42:13.725086 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:42:13.725110 kubelet[3195]: E0909 05:42:13.725092 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:42:13.725226 kubelet[3195]: E0909 05:42:13.725173 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:42:13.725226 kubelet[3195]: W0909 05:42:13.725178 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:42:13.725226 kubelet[3195]: E0909 05:42:13.725183 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:42:13.725322 kubelet[3195]: E0909 05:42:13.725262 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:42:13.725322 kubelet[3195]: W0909 05:42:13.725266 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:42:13.725322 kubelet[3195]: E0909 05:42:13.725272 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:42:13.725422 kubelet[3195]: E0909 05:42:13.725347 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:42:13.725422 kubelet[3195]: W0909 05:42:13.725352 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:42:13.725422 kubelet[3195]: E0909 05:42:13.725357 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:42:13.725525 kubelet[3195]: E0909 05:42:13.725434 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:42:13.725525 kubelet[3195]: W0909 05:42:13.725439 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:42:13.725525 kubelet[3195]: E0909 05:42:13.725445 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:42:13.726950 containerd[1723]: time="2025-09-09T05:42:13.726924294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8vnsq,Uid:a7863778-9120-434a-ac30-9ec95b62fa9d,Namespace:calico-system,Attempt:0,}"
Sep 9 05:42:13.742066 kubelet[3195]: E0909 05:42:13.741905 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:42:13.742066 kubelet[3195]: W0909 05:42:13.741922 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:42:13.742066 kubelet[3195]: E0909 05:42:13.741939 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:42:13.742066 kubelet[3195]: I0909 05:42:13.741969 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6m5x\" (UniqueName: \"kubernetes.io/projected/6ce6737c-5cbb-4d00-b926-8ff7cccb578b-kube-api-access-q6m5x\") pod \"csi-node-driver-kpfkf\" (UID: \"6ce6737c-5cbb-4d00-b926-8ff7cccb578b\") " pod="calico-system/csi-node-driver-kpfkf"
Sep 9 05:42:13.743702 kubelet[3195]: E0909 05:42:13.743556 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:42:13.743702 kubelet[3195]: W0909 05:42:13.743574 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:42:13.743702 kubelet[3195]: E0909 05:42:13.743589 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:42:13.743702 kubelet[3195]: I0909 05:42:13.743610 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6ce6737c-5cbb-4d00-b926-8ff7cccb578b-varrun\") pod \"csi-node-driver-kpfkf\" (UID: \"6ce6737c-5cbb-4d00-b926-8ff7cccb578b\") " pod="calico-system/csi-node-driver-kpfkf"
Sep 9 05:42:13.744138 kubelet[3195]: E0909 05:42:13.744115 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:42:13.744138 kubelet[3195]: W0909 05:42:13.744132 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:42:13.744210 kubelet[3195]: E0909 05:42:13.744146 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:42:13.744210 kubelet[3195]: I0909 05:42:13.744171 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6ce6737c-5cbb-4d00-b926-8ff7cccb578b-kubelet-dir\") pod \"csi-node-driver-kpfkf\" (UID: \"6ce6737c-5cbb-4d00-b926-8ff7cccb578b\") " pod="calico-system/csi-node-driver-kpfkf"
Sep 9 05:42:13.744411 kubelet[3195]: E0909 05:42:13.744399 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:42:13.744444 kubelet[3195]: W0909 05:42:13.744422 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:42:13.744444 kubelet[3195]: E0909 05:42:13.744433 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:42:13.744511 kubelet[3195]: I0909 05:42:13.744453 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6ce6737c-5cbb-4d00-b926-8ff7cccb578b-registration-dir\") pod \"csi-node-driver-kpfkf\" (UID: \"6ce6737c-5cbb-4d00-b926-8ff7cccb578b\") " pod="calico-system/csi-node-driver-kpfkf"
Sep 9 05:42:13.744633 kubelet[3195]: E0909 05:42:13.744623 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:42:13.744658 kubelet[3195]: W0909 05:42:13.744634 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:42:13.744658 kubelet[3195]: E0909 05:42:13.744643 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:42:13.744828 kubelet[3195]: I0909 05:42:13.744707 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6ce6737c-5cbb-4d00-b926-8ff7cccb578b-socket-dir\") pod \"csi-node-driver-kpfkf\" (UID: \"6ce6737c-5cbb-4d00-b926-8ff7cccb578b\") " pod="calico-system/csi-node-driver-kpfkf"
Sep 9 05:42:13.745425 kubelet[3195]: E0909 05:42:13.745406 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:42:13.745425 kubelet[3195]: W0909 05:42:13.745421 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:42:13.745528 kubelet[3195]: E0909 05:42:13.745445 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:42:13.745873 kubelet[3195]: E0909 05:42:13.745817 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:42:13.745873 kubelet[3195]: W0909 05:42:13.745831 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:42:13.745873 kubelet[3195]: E0909 05:42:13.745843 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:42:13.746774 kubelet[3195]: E0909 05:42:13.746754 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:42:13.746774 kubelet[3195]: W0909 05:42:13.746773 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:42:13.746774 kubelet[3195]: E0909 05:42:13.746804 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:42:13.747067 kubelet[3195]: E0909 05:42:13.746997 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:42:13.747067 kubelet[3195]: W0909 05:42:13.747005 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:42:13.747067 kubelet[3195]: E0909 05:42:13.747014 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:42:13.747233 kubelet[3195]: E0909 05:42:13.747174 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:42:13.747233 kubelet[3195]: W0909 05:42:13.747180 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:42:13.747233 kubelet[3195]: E0909 05:42:13.747188 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:42:13.747432 kubelet[3195]: E0909 05:42:13.747420 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:42:13.747432 kubelet[3195]: W0909 05:42:13.747431 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:42:13.747484 kubelet[3195]: E0909 05:42:13.747441 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:42:13.747941 kubelet[3195]: E0909 05:42:13.747923 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:42:13.747941 kubelet[3195]: W0909 05:42:13.747936 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:42:13.748080 kubelet[3195]: E0909 05:42:13.747948 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:42:13.748286 kubelet[3195]: E0909 05:42:13.748269 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:42:13.748286 kubelet[3195]: W0909 05:42:13.748283 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:42:13.748350 kubelet[3195]: E0909 05:42:13.748294 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:13.748768 kubelet[3195]: E0909 05:42:13.748750 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:13.748768 kubelet[3195]: W0909 05:42:13.748767 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:13.749582 kubelet[3195]: E0909 05:42:13.748817 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:13.749582 kubelet[3195]: E0909 05:42:13.749000 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:13.749582 kubelet[3195]: W0909 05:42:13.749007 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:13.749582 kubelet[3195]: E0909 05:42:13.749016 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:13.768164 containerd[1723]: time="2025-09-09T05:42:13.767982090Z" level=info msg="connecting to shim 5a6e319cabf8526b3048c9e3d18021502b557ae8a571d56d01fccf03b6b2b0d8" address="unix:///run/containerd/s/ffd15538824b6325b20933b1e6e6a66abef0acb1dd62e57fc12da6967bf56507" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:42:13.789953 systemd[1]: Started cri-containerd-5a6e319cabf8526b3048c9e3d18021502b557ae8a571d56d01fccf03b6b2b0d8.scope - libcontainer container 5a6e319cabf8526b3048c9e3d18021502b557ae8a571d56d01fccf03b6b2b0d8. 
Sep 9 05:42:13.812409 containerd[1723]: time="2025-09-09T05:42:13.812375994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8vnsq,Uid:a7863778-9120-434a-ac30-9ec95b62fa9d,Namespace:calico-system,Attempt:0,} returns sandbox id \"5a6e319cabf8526b3048c9e3d18021502b557ae8a571d56d01fccf03b6b2b0d8\""
Sep 9 05:42:14.853727 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1517375793.mount: Deactivated successfully.
Sep 9 05:42:15.111489 kubelet[3195]: E0909 05:42:15.111387 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kpfkf" podUID="6ce6737c-5cbb-4d00-b926-8ff7cccb578b"
Sep 9 05:42:15.465091 containerd[1723]: time="2025-09-09T05:42:15.465050301Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:42:15.470538 containerd[1723]: time="2025-09-09T05:42:15.470501924Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389"
Sep 9 05:42:15.478320 containerd[1723]: time="2025-09-09T05:42:15.478272419Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:42:15.485820 containerd[1723]: time="2025-09-09T05:42:15.485545511Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:42:15.486017 containerd[1723]: time="2025-09-09T05:42:15.485997164Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 1.913564593s"
Sep 9 05:42:15.486088 containerd[1723]: time="2025-09-09T05:42:15.486077397Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\""
Sep 9 05:42:15.486917 containerd[1723]: time="2025-09-09T05:42:15.486896554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 9 05:42:15.504274 containerd[1723]: time="2025-09-09T05:42:15.504235927Z" level=info msg="CreateContainer within sandbox \"d01b72e15f3e17aa4fcc96c7e22380fec0fbaf1a390bb60c58d60e5cfe43a407\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 9 05:42:15.525818 containerd[1723]: time="2025-09-09T05:42:15.522723936Z" level=info msg="Container 11672efd028e8d0d4cf94598c26040aa0853687c4a597a2163cc26f090c61d69: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:42:15.539167 containerd[1723]: time="2025-09-09T05:42:15.539139251Z" level=info msg="CreateContainer within sandbox \"d01b72e15f3e17aa4fcc96c7e22380fec0fbaf1a390bb60c58d60e5cfe43a407\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"11672efd028e8d0d4cf94598c26040aa0853687c4a597a2163cc26f090c61d69\""
Sep 9 05:42:15.539768 containerd[1723]: time="2025-09-09T05:42:15.539745637Z" level=info msg="StartContainer for \"11672efd028e8d0d4cf94598c26040aa0853687c4a597a2163cc26f090c61d69\""
Sep 9 05:42:15.540974 containerd[1723]: time="2025-09-09T05:42:15.540936040Z" level=info msg="connecting to shim 11672efd028e8d0d4cf94598c26040aa0853687c4a597a2163cc26f090c61d69" address="unix:///run/containerd/s/0c51bbe79e2411d240835cb9505ef35d876d8710a2c490709c65d4b5bac3991f" protocol=ttrpc version=3
Sep 9 05:42:15.563994 systemd[1]: Started cri-containerd-11672efd028e8d0d4cf94598c26040aa0853687c4a597a2163cc26f090c61d69.scope - libcontainer container 11672efd028e8d0d4cf94598c26040aa0853687c4a597a2163cc26f090c61d69.
Sep 9 05:42:15.609093 containerd[1723]: time="2025-09-09T05:42:15.608976782Z" level=info msg="StartContainer for \"11672efd028e8d0d4cf94598c26040aa0853687c4a597a2163cc26f090c61d69\" returns successfully" Sep 9 05:42:16.242364 kubelet[3195]: E0909 05:42:16.242329 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:16.242364 kubelet[3195]: W0909 05:42:16.242349 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:16.242879 kubelet[3195]: E0909 05:42:16.242373 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:16.242879 kubelet[3195]: E0909 05:42:16.242491 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:16.242879 kubelet[3195]: W0909 05:42:16.242497 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:16.242879 kubelet[3195]: E0909 05:42:16.242516 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:16.242879 kubelet[3195]: E0909 05:42:16.242685 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:16.242879 kubelet[3195]: W0909 05:42:16.242692 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:16.242879 kubelet[3195]: E0909 05:42:16.242701 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:16.242879 kubelet[3195]: E0909 05:42:16.242845 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:16.242879 kubelet[3195]: W0909 05:42:16.242858 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:16.242879 kubelet[3195]: E0909 05:42:16.242866 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:16.243191 kubelet[3195]: E0909 05:42:16.243001 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:16.243191 kubelet[3195]: W0909 05:42:16.243007 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:16.243191 kubelet[3195]: E0909 05:42:16.243014 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:16.243191 kubelet[3195]: E0909 05:42:16.243124 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:16.243191 kubelet[3195]: W0909 05:42:16.243129 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:16.243191 kubelet[3195]: E0909 05:42:16.243134 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:16.243371 kubelet[3195]: E0909 05:42:16.243236 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:16.243371 kubelet[3195]: W0909 05:42:16.243242 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:16.243371 kubelet[3195]: E0909 05:42:16.243248 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:16.243371 kubelet[3195]: E0909 05:42:16.243350 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:16.243371 kubelet[3195]: W0909 05:42:16.243356 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:16.243371 kubelet[3195]: E0909 05:42:16.243362 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:16.273394 kubelet[3195]: E0909 05:42:16.273379 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:16.273394 kubelet[3195]: W0909 05:42:16.273390 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:16.273447 kubelet[3195]: E0909 05:42:16.273400 3195 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:16.857065 containerd[1723]: time="2025-09-09T05:42:16.856985537Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:16.860040 containerd[1723]: time="2025-09-09T05:42:16.859867521Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 9 05:42:16.863106 containerd[1723]: time="2025-09-09T05:42:16.863046906Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:16.867058 containerd[1723]: time="2025-09-09T05:42:16.866995148Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:16.868152 containerd[1723]: time="2025-09-09T05:42:16.868122519Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.381096132s" Sep 9 05:42:16.868323 containerd[1723]: time="2025-09-09T05:42:16.868243290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 9 05:42:16.876218 containerd[1723]: time="2025-09-09T05:42:16.876195395Z" level=info msg="CreateContainer within sandbox \"5a6e319cabf8526b3048c9e3d18021502b557ae8a571d56d01fccf03b6b2b0d8\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 9 05:42:16.893804 containerd[1723]: time="2025-09-09T05:42:16.893636371Z" level=info msg="Container 4eacc42013f47ce2997d930f6317a13e21f603d0bb7d984e0a797a2252eb6a93: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:42:16.912045 containerd[1723]: time="2025-09-09T05:42:16.912014163Z" level=info msg="CreateContainer within sandbox \"5a6e319cabf8526b3048c9e3d18021502b557ae8a571d56d01fccf03b6b2b0d8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4eacc42013f47ce2997d930f6317a13e21f603d0bb7d984e0a797a2252eb6a93\"" Sep 9 05:42:16.912819 containerd[1723]: time="2025-09-09T05:42:16.912416478Z" level=info msg="StartContainer for \"4eacc42013f47ce2997d930f6317a13e21f603d0bb7d984e0a797a2252eb6a93\"" Sep 9 05:42:16.913989 containerd[1723]: time="2025-09-09T05:42:16.913960632Z" level=info msg="connecting to shim 4eacc42013f47ce2997d930f6317a13e21f603d0bb7d984e0a797a2252eb6a93" address="unix:///run/containerd/s/ffd15538824b6325b20933b1e6e6a66abef0acb1dd62e57fc12da6967bf56507" protocol=ttrpc version=3 Sep 9 05:42:16.932925 systemd[1]: Started cri-containerd-4eacc42013f47ce2997d930f6317a13e21f603d0bb7d984e0a797a2252eb6a93.scope - libcontainer container 4eacc42013f47ce2997d930f6317a13e21f603d0bb7d984e0a797a2252eb6a93. 
Sep 9 05:42:16.969678 containerd[1723]: time="2025-09-09T05:42:16.969636790Z" level=info msg="StartContainer for \"4eacc42013f47ce2997d930f6317a13e21f603d0bb7d984e0a797a2252eb6a93\" returns successfully" Sep 9 05:42:16.975601 systemd[1]: cri-containerd-4eacc42013f47ce2997d930f6317a13e21f603d0bb7d984e0a797a2252eb6a93.scope: Deactivated successfully. Sep 9 05:42:16.980026 containerd[1723]: time="2025-09-09T05:42:16.979940955Z" level=info msg="received exit event container_id:\"4eacc42013f47ce2997d930f6317a13e21f603d0bb7d984e0a797a2252eb6a93\" id:\"4eacc42013f47ce2997d930f6317a13e21f603d0bb7d984e0a797a2252eb6a93\" pid:3868 exited_at:{seconds:1757396536 nanos:979593448}" Sep 9 05:42:16.980026 containerd[1723]: time="2025-09-09T05:42:16.980005251Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4eacc42013f47ce2997d930f6317a13e21f603d0bb7d984e0a797a2252eb6a93\" id:\"4eacc42013f47ce2997d930f6317a13e21f603d0bb7d984e0a797a2252eb6a93\" pid:3868 exited_at:{seconds:1757396536 nanos:979593448}" Sep 9 05:42:16.998912 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4eacc42013f47ce2997d930f6317a13e21f603d0bb7d984e0a797a2252eb6a93-rootfs.mount: Deactivated successfully. 
Sep 9 05:42:17.111604 kubelet[3195]: E0909 05:42:17.111508 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kpfkf" podUID="6ce6737c-5cbb-4d00-b926-8ff7cccb578b" Sep 9 05:42:17.187838 kubelet[3195]: I0909 05:42:17.187497 3195 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 05:42:17.202091 kubelet[3195]: I0909 05:42:17.201960 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-66bd6488bf-pfdl7" podStartSLOduration=2.286785954 podStartE2EDuration="4.201945207s" podCreationTimestamp="2025-09-09 05:42:13 +0000 UTC" firstStartedPulling="2025-09-09 05:42:13.571587311 +0000 UTC m=+19.554539777" lastFinishedPulling="2025-09-09 05:42:15.486746566 +0000 UTC m=+21.469699030" observedRunningTime="2025-09-09 05:42:16.196110011 +0000 UTC m=+22.179062496" watchObservedRunningTime="2025-09-09 05:42:17.201945207 +0000 UTC m=+23.184897676" Sep 9 05:42:19.110936 kubelet[3195]: E0909 05:42:19.110896 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kpfkf" podUID="6ce6737c-5cbb-4d00-b926-8ff7cccb578b" Sep 9 05:42:19.194899 containerd[1723]: time="2025-09-09T05:42:19.194774596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 9 05:42:21.111220 kubelet[3195]: E0909 05:42:21.111169 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kpfkf" 
podUID="6ce6737c-5cbb-4d00-b926-8ff7cccb578b" Sep 9 05:42:21.826995 containerd[1723]: time="2025-09-09T05:42:21.826952729Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:21.829192 containerd[1723]: time="2025-09-09T05:42:21.829102015Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 9 05:42:21.831706 containerd[1723]: time="2025-09-09T05:42:21.831680365Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:21.835917 containerd[1723]: time="2025-09-09T05:42:21.835482899Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:21.835917 containerd[1723]: time="2025-09-09T05:42:21.835824509Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 2.640982479s" Sep 9 05:42:21.835917 containerd[1723]: time="2025-09-09T05:42:21.835849629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 9 05:42:21.842067 containerd[1723]: time="2025-09-09T05:42:21.842024488Z" level=info msg="CreateContainer within sandbox \"5a6e319cabf8526b3048c9e3d18021502b557ae8a571d56d01fccf03b6b2b0d8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 9 05:42:21.858812 containerd[1723]: time="2025-09-09T05:42:21.858500624Z" 
level=info msg="Container e72456b7ddff852b8bc46478e98993d790953bcdf6155b3f02e2eed81d83f164: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:42:21.896489 containerd[1723]: time="2025-09-09T05:42:21.896460082Z" level=info msg="CreateContainer within sandbox \"5a6e319cabf8526b3048c9e3d18021502b557ae8a571d56d01fccf03b6b2b0d8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e72456b7ddff852b8bc46478e98993d790953bcdf6155b3f02e2eed81d83f164\"" Sep 9 05:42:21.897814 containerd[1723]: time="2025-09-09T05:42:21.896918930Z" level=info msg="StartContainer for \"e72456b7ddff852b8bc46478e98993d790953bcdf6155b3f02e2eed81d83f164\"" Sep 9 05:42:21.898523 containerd[1723]: time="2025-09-09T05:42:21.898497134Z" level=info msg="connecting to shim e72456b7ddff852b8bc46478e98993d790953bcdf6155b3f02e2eed81d83f164" address="unix:///run/containerd/s/ffd15538824b6325b20933b1e6e6a66abef0acb1dd62e57fc12da6967bf56507" protocol=ttrpc version=3 Sep 9 05:42:21.921925 systemd[1]: Started cri-containerd-e72456b7ddff852b8bc46478e98993d790953bcdf6155b3f02e2eed81d83f164.scope - libcontainer container e72456b7ddff852b8bc46478e98993d790953bcdf6155b3f02e2eed81d83f164. Sep 9 05:42:21.953945 containerd[1723]: time="2025-09-09T05:42:21.953916596Z" level=info msg="StartContainer for \"e72456b7ddff852b8bc46478e98993d790953bcdf6155b3f02e2eed81d83f164\" returns successfully" Sep 9 05:42:23.070274 containerd[1723]: time="2025-09-09T05:42:23.070221113Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 05:42:23.072180 systemd[1]: cri-containerd-e72456b7ddff852b8bc46478e98993d790953bcdf6155b3f02e2eed81d83f164.scope: Deactivated successfully. 
Sep 9 05:42:23.072619 systemd[1]: cri-containerd-e72456b7ddff852b8bc46478e98993d790953bcdf6155b3f02e2eed81d83f164.scope: Consumed 388ms CPU time, 191.3M memory peak, 171.3M written to disk. Sep 9 05:42:23.074952 containerd[1723]: time="2025-09-09T05:42:23.074878085Z" level=info msg="received exit event container_id:\"e72456b7ddff852b8bc46478e98993d790953bcdf6155b3f02e2eed81d83f164\" id:\"e72456b7ddff852b8bc46478e98993d790953bcdf6155b3f02e2eed81d83f164\" pid:3927 exited_at:{seconds:1757396543 nanos:74443430}" Sep 9 05:42:23.075278 containerd[1723]: time="2025-09-09T05:42:23.075251724Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e72456b7ddff852b8bc46478e98993d790953bcdf6155b3f02e2eed81d83f164\" id:\"e72456b7ddff852b8bc46478e98993d790953bcdf6155b3f02e2eed81d83f164\" pid:3927 exited_at:{seconds:1757396543 nanos:74443430}" Sep 9 05:42:23.091716 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e72456b7ddff852b8bc46478e98993d790953bcdf6155b3f02e2eed81d83f164-rootfs.mount: Deactivated successfully. Sep 9 05:42:23.111046 kubelet[3195]: E0909 05:42:23.111010 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kpfkf" podUID="6ce6737c-5cbb-4d00-b926-8ff7cccb578b" Sep 9 05:42:23.139976 kubelet[3195]: I0909 05:42:23.139954 3195 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 9 05:42:23.371427 systemd[1]: Created slice kubepods-burstable-pode844f59d_88d0_4ca4_bf84_35ef219a5745.slice - libcontainer container kubepods-burstable-pode844f59d_88d0_4ca4_bf84_35ef219a5745.slice. 
Sep 9 05:42:23.416556 kubelet[3195]: I0909 05:42:23.416509 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgrdp\" (UniqueName: \"kubernetes.io/projected/e844f59d-88d0-4ca4-bf84-35ef219a5745-kube-api-access-jgrdp\") pod \"coredns-674b8bbfcf-x75ls\" (UID: \"e844f59d-88d0-4ca4-bf84-35ef219a5745\") " pod="kube-system/coredns-674b8bbfcf-x75ls" Sep 9 05:42:23.416556 kubelet[3195]: I0909 05:42:23.416552 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e844f59d-88d0-4ca4-bf84-35ef219a5745-config-volume\") pod \"coredns-674b8bbfcf-x75ls\" (UID: \"e844f59d-88d0-4ca4-bf84-35ef219a5745\") " pod="kube-system/coredns-674b8bbfcf-x75ls" Sep 9 05:42:23.595475 systemd[1]: Created slice kubepods-besteffort-pod2327e27d_1902_4105_a58c_732597795f2d.slice - libcontainer container kubepods-besteffort-pod2327e27d_1902_4105_a58c_732597795f2d.slice. 
Sep 9 05:42:23.617152 kubelet[3195]: I0909 05:42:23.617116 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2327e27d-1902-4105-a58c-732597795f2d-calico-apiserver-certs\") pod \"calico-apiserver-7d4c6f5df5-5c5bv\" (UID: \"2327e27d-1902-4105-a58c-732597795f2d\") " pod="calico-apiserver/calico-apiserver-7d4c6f5df5-5c5bv" Sep 9 05:42:23.617152 kubelet[3195]: I0909 05:42:23.617149 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xn9t\" (UniqueName: \"kubernetes.io/projected/2327e27d-1902-4105-a58c-732597795f2d-kube-api-access-5xn9t\") pod \"calico-apiserver-7d4c6f5df5-5c5bv\" (UID: \"2327e27d-1902-4105-a58c-732597795f2d\") " pod="calico-apiserver/calico-apiserver-7d4c6f5df5-5c5bv" Sep 9 05:42:23.674852 containerd[1723]: time="2025-09-09T05:42:23.674817663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x75ls,Uid:e844f59d-88d0-4ca4-bf84-35ef219a5745,Namespace:kube-system,Attempt:0,}" Sep 9 05:42:23.832718 systemd[1]: Created slice kubepods-besteffort-pod86f9250a_df33_4f1c_a5e5_c1195c97f00c.slice - libcontainer container kubepods-besteffort-pod86f9250a_df33_4f1c_a5e5_c1195c97f00c.slice. 
Sep 9 05:42:23.898585 containerd[1723]: time="2025-09-09T05:42:23.898542705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4c6f5df5-5c5bv,Uid:2327e27d-1902-4105-a58c-732597795f2d,Namespace:calico-apiserver,Attempt:0,}"
Sep 9 05:42:23.919013 kubelet[3195]: I0909 05:42:23.918985 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86f9250a-df33-4f1c-a5e5-c1195c97f00c-tigera-ca-bundle\") pod \"calico-kube-controllers-6cfdf88cf4-n2xbw\" (UID: \"86f9250a-df33-4f1c-a5e5-c1195c97f00c\") " pod="calico-system/calico-kube-controllers-6cfdf88cf4-n2xbw"
Sep 9 05:42:23.919105 kubelet[3195]: I0909 05:42:23.919024 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf52l\" (UniqueName: \"kubernetes.io/projected/86f9250a-df33-4f1c-a5e5-c1195c97f00c-kube-api-access-pf52l\") pod \"calico-kube-controllers-6cfdf88cf4-n2xbw\" (UID: \"86f9250a-df33-4f1c-a5e5-c1195c97f00c\") " pod="calico-system/calico-kube-controllers-6cfdf88cf4-n2xbw"
Sep 9 05:42:24.061021 systemd[1]: Created slice kubepods-besteffort-pod59a7decc_b070_4036_86c2_61eb36a53dee.slice - libcontainer container kubepods-besteffort-pod59a7decc_b070_4036_86c2_61eb36a53dee.slice.
Sep 9 05:42:24.071459 systemd[1]: Created slice kubepods-burstable-podd4ed2dc7_4551_4678_8f20_c675198c7103.slice - libcontainer container kubepods-burstable-podd4ed2dc7_4551_4678_8f20_c675198c7103.slice.
Sep 9 05:42:24.085321 systemd[1]: Created slice kubepods-besteffort-pod0bafd0a8_d202_4cdc_b16c_862a0665c3a9.slice - libcontainer container kubepods-besteffort-pod0bafd0a8_d202_4cdc_b16c_862a0665c3a9.slice.
Sep 9 05:42:24.100568 systemd[1]: Created slice kubepods-besteffort-pod5fe8f76a_afbe_4d04_b31e_ea5198ff1e9f.slice - libcontainer container kubepods-besteffort-pod5fe8f76a_afbe_4d04_b31e_ea5198ff1e9f.slice.
Sep 9 05:42:24.120128 kubelet[3195]: I0909 05:42:24.120059 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59a7decc-b070-4036-86c2-61eb36a53dee-config\") pod \"goldmane-54d579b49d-smrln\" (UID: \"59a7decc-b070-4036-86c2-61eb36a53dee\") " pod="calico-system/goldmane-54d579b49d-smrln"
Sep 9 05:42:24.120902 kubelet[3195]: I0909 05:42:24.120105 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/59a7decc-b070-4036-86c2-61eb36a53dee-goldmane-key-pair\") pod \"goldmane-54d579b49d-smrln\" (UID: \"59a7decc-b070-4036-86c2-61eb36a53dee\") " pod="calico-system/goldmane-54d579b49d-smrln"
Sep 9 05:42:24.120902 kubelet[3195]: I0909 05:42:24.120577 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4ed2dc7-4551-4678-8f20-c675198c7103-config-volume\") pod \"coredns-674b8bbfcf-t2sgm\" (UID: \"d4ed2dc7-4551-4678-8f20-c675198c7103\") " pod="kube-system/coredns-674b8bbfcf-t2sgm"
Sep 9 05:42:24.120902 kubelet[3195]: I0909 05:42:24.120599 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59a7decc-b070-4036-86c2-61eb36a53dee-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-smrln\" (UID: \"59a7decc-b070-4036-86c2-61eb36a53dee\") " pod="calico-system/goldmane-54d579b49d-smrln"
Sep 9 05:42:24.120902 kubelet[3195]: I0909 05:42:24.120621 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz7wh\" (UniqueName: \"kubernetes.io/projected/59a7decc-b070-4036-86c2-61eb36a53dee-kube-api-access-sz7wh\") pod \"goldmane-54d579b49d-smrln\" (UID: \"59a7decc-b070-4036-86c2-61eb36a53dee\") " pod="calico-system/goldmane-54d579b49d-smrln"
Sep 9 05:42:24.120902 kubelet[3195]: I0909 05:42:24.120645 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvrnp\" (UniqueName: \"kubernetes.io/projected/0bafd0a8-d202-4cdc-b16c-862a0665c3a9-kube-api-access-xvrnp\") pod \"calico-apiserver-7d4c6f5df5-vfjwk\" (UID: \"0bafd0a8-d202-4cdc-b16c-862a0665c3a9\") " pod="calico-apiserver/calico-apiserver-7d4c6f5df5-vfjwk"
Sep 9 05:42:24.121082 kubelet[3195]: I0909 05:42:24.120662 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fe8f76a-afbe-4d04-b31e-ea5198ff1e9f-whisker-ca-bundle\") pod \"whisker-5d4fd48569-mfwkq\" (UID: \"5fe8f76a-afbe-4d04-b31e-ea5198ff1e9f\") " pod="calico-system/whisker-5d4fd48569-mfwkq"
Sep 9 05:42:24.121082 kubelet[3195]: I0909 05:42:24.120692 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sbpr\" (UniqueName: \"kubernetes.io/projected/d4ed2dc7-4551-4678-8f20-c675198c7103-kube-api-access-2sbpr\") pod \"coredns-674b8bbfcf-t2sgm\" (UID: \"d4ed2dc7-4551-4678-8f20-c675198c7103\") " pod="kube-system/coredns-674b8bbfcf-t2sgm"
Sep 9 05:42:24.121082 kubelet[3195]: I0909 05:42:24.120711 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0bafd0a8-d202-4cdc-b16c-862a0665c3a9-calico-apiserver-certs\") pod \"calico-apiserver-7d4c6f5df5-vfjwk\" (UID: \"0bafd0a8-d202-4cdc-b16c-862a0665c3a9\") " pod="calico-apiserver/calico-apiserver-7d4c6f5df5-vfjwk"
Sep 9 05:42:24.121082 kubelet[3195]: I0909 05:42:24.120728 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5fe8f76a-afbe-4d04-b31e-ea5198ff1e9f-whisker-backend-key-pair\") pod \"whisker-5d4fd48569-mfwkq\" (UID: \"5fe8f76a-afbe-4d04-b31e-ea5198ff1e9f\") " pod="calico-system/whisker-5d4fd48569-mfwkq"
Sep 9 05:42:24.121082 kubelet[3195]: I0909 05:42:24.120746 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl7ns\" (UniqueName: \"kubernetes.io/projected/5fe8f76a-afbe-4d04-b31e-ea5198ff1e9f-kube-api-access-xl7ns\") pod \"whisker-5d4fd48569-mfwkq\" (UID: \"5fe8f76a-afbe-4d04-b31e-ea5198ff1e9f\") " pod="calico-system/whisker-5d4fd48569-mfwkq"
Sep 9 05:42:24.132684 containerd[1723]: time="2025-09-09T05:42:24.132637143Z" level=error msg="Failed to destroy network for sandbox \"3a770c5eec07bd39087560f7b862dcace106c5261487db8c67e3d863ec22c7cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 05:42:24.136069 containerd[1723]: time="2025-09-09T05:42:24.135294333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cfdf88cf4-n2xbw,Uid:86f9250a-df33-4f1c-a5e5-c1195c97f00c,Namespace:calico-system,Attempt:0,}"
Sep 9 05:42:24.135932 systemd[1]: run-netns-cni\x2de8e58408\x2d8d9f\x2d7b54\x2d55bd\x2daea89c0f077e.mount: Deactivated successfully.
Sep 9 05:42:24.149013 containerd[1723]: time="2025-09-09T05:42:24.147555958Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x75ls,Uid:e844f59d-88d0-4ca4-bf84-35ef219a5745,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a770c5eec07bd39087560f7b862dcace106c5261487db8c67e3d863ec22c7cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 05:42:24.149132 kubelet[3195]: E0909 05:42:24.147833 3195 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a770c5eec07bd39087560f7b862dcace106c5261487db8c67e3d863ec22c7cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 05:42:24.149132 kubelet[3195]: E0909 05:42:24.147890 3195 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a770c5eec07bd39087560f7b862dcace106c5261487db8c67e3d863ec22c7cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-x75ls"
Sep 9 05:42:24.149132 kubelet[3195]: E0909 05:42:24.147913 3195 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a770c5eec07bd39087560f7b862dcace106c5261487db8c67e3d863ec22c7cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-x75ls"
Sep 9 05:42:24.149229 kubelet[3195]: E0909 05:42:24.147959 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-x75ls_kube-system(e844f59d-88d0-4ca4-bf84-35ef219a5745)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-x75ls_kube-system(e844f59d-88d0-4ca4-bf84-35ef219a5745)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a770c5eec07bd39087560f7b862dcace106c5261487db8c67e3d863ec22c7cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-x75ls" podUID="e844f59d-88d0-4ca4-bf84-35ef219a5745"
Sep 9 05:42:24.158800 containerd[1723]: time="2025-09-09T05:42:24.157043726Z" level=error msg="Failed to destroy network for sandbox \"608430dd40f0aee6159ac1ae3a306c7b33e32ff785ef629c79f180896600c4f9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 05:42:24.160120 systemd[1]: run-netns-cni\x2d28934585\x2d870d\x2db70a\x2d2f4a\x2d61e136d612c7.mount: Deactivated successfully.
Sep 9 05:42:24.165411 containerd[1723]: time="2025-09-09T05:42:24.165077361Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4c6f5df5-5c5bv,Uid:2327e27d-1902-4105-a58c-732597795f2d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"608430dd40f0aee6159ac1ae3a306c7b33e32ff785ef629c79f180896600c4f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 05:42:24.166374 kubelet[3195]: E0909 05:42:24.166350 3195 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"608430dd40f0aee6159ac1ae3a306c7b33e32ff785ef629c79f180896600c4f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 05:42:24.166768 kubelet[3195]: E0909 05:42:24.166448 3195 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"608430dd40f0aee6159ac1ae3a306c7b33e32ff785ef629c79f180896600c4f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d4c6f5df5-5c5bv"
Sep 9 05:42:24.166768 kubelet[3195]: E0909 05:42:24.166471 3195 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"608430dd40f0aee6159ac1ae3a306c7b33e32ff785ef629c79f180896600c4f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d4c6f5df5-5c5bv"
Sep 9 05:42:24.167294 kubelet[3195]: E0909 05:42:24.167112 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d4c6f5df5-5c5bv_calico-apiserver(2327e27d-1902-4105-a58c-732597795f2d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d4c6f5df5-5c5bv_calico-apiserver(2327e27d-1902-4105-a58c-732597795f2d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"608430dd40f0aee6159ac1ae3a306c7b33e32ff785ef629c79f180896600c4f9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d4c6f5df5-5c5bv" podUID="2327e27d-1902-4105-a58c-732597795f2d"
Sep 9 05:42:24.202285 containerd[1723]: time="2025-09-09T05:42:24.202254580Z" level=error msg="Failed to destroy network for sandbox \"013b8544861c4748789e6343feca60095a0b89f397eb3dd83d8f82c20c9bcd06\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 05:42:24.204062 systemd[1]: run-netns-cni\x2d72dc69a8\x2d61a6\x2d1083\x2dc22e\x2dc33ea69149c8.mount: Deactivated successfully.
Sep 9 05:42:24.208514 containerd[1723]: time="2025-09-09T05:42:24.208408480Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cfdf88cf4-n2xbw,Uid:86f9250a-df33-4f1c-a5e5-c1195c97f00c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"013b8544861c4748789e6343feca60095a0b89f397eb3dd83d8f82c20c9bcd06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 05:42:24.209253 kubelet[3195]: E0909 05:42:24.208893 3195 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"013b8544861c4748789e6343feca60095a0b89f397eb3dd83d8f82c20c9bcd06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 05:42:24.209253 kubelet[3195]: E0909 05:42:24.208929 3195 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"013b8544861c4748789e6343feca60095a0b89f397eb3dd83d8f82c20c9bcd06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cfdf88cf4-n2xbw"
Sep 9 05:42:24.209253 kubelet[3195]: E0909 05:42:24.208963 3195 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"013b8544861c4748789e6343feca60095a0b89f397eb3dd83d8f82c20c9bcd06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cfdf88cf4-n2xbw"
Sep 9 05:42:24.209649 kubelet[3195]: E0909 05:42:24.209003 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6cfdf88cf4-n2xbw_calico-system(86f9250a-df33-4f1c-a5e5-c1195c97f00c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6cfdf88cf4-n2xbw_calico-system(86f9250a-df33-4f1c-a5e5-c1195c97f00c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"013b8544861c4748789e6343feca60095a0b89f397eb3dd83d8f82c20c9bcd06\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6cfdf88cf4-n2xbw" podUID="86f9250a-df33-4f1c-a5e5-c1195c97f00c"
Sep 9 05:42:24.213407 containerd[1723]: time="2025-09-09T05:42:24.213373472Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\""
Sep 9 05:42:24.368495 containerd[1723]: time="2025-09-09T05:42:24.368264996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-smrln,Uid:59a7decc-b070-4036-86c2-61eb36a53dee,Namespace:calico-system,Attempt:0,}"
Sep 9 05:42:24.380435 containerd[1723]: time="2025-09-09T05:42:24.380253469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t2sgm,Uid:d4ed2dc7-4551-4678-8f20-c675198c7103,Namespace:kube-system,Attempt:0,}"
Sep 9 05:42:24.390215 containerd[1723]: time="2025-09-09T05:42:24.390182015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4c6f5df5-vfjwk,Uid:0bafd0a8-d202-4cdc-b16c-862a0665c3a9,Namespace:calico-apiserver,Attempt:0,}"
Sep 9 05:42:24.407288 containerd[1723]: time="2025-09-09T05:42:24.407259604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d4fd48569-mfwkq,Uid:5fe8f76a-afbe-4d04-b31e-ea5198ff1e9f,Namespace:calico-system,Attempt:0,}"
Sep 9 05:42:24.413923 containerd[1723]: time="2025-09-09T05:42:24.413884507Z" level=error msg="Failed to destroy network for sandbox \"0bb46086eb80e19a814adbe299807dbc372ebfc168c7e1daf4a2c242a84c4d1a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 05:42:24.435955 containerd[1723]: time="2025-09-09T05:42:24.435868362Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-smrln,Uid:59a7decc-b070-4036-86c2-61eb36a53dee,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bb46086eb80e19a814adbe299807dbc372ebfc168c7e1daf4a2c242a84c4d1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 05:42:24.436233 kubelet[3195]: E0909 05:42:24.436180 3195 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bb46086eb80e19a814adbe299807dbc372ebfc168c7e1daf4a2c242a84c4d1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 05:42:24.436298 kubelet[3195]: E0909 05:42:24.436254 3195 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bb46086eb80e19a814adbe299807dbc372ebfc168c7e1daf4a2c242a84c4d1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-smrln"
Sep 9 05:42:24.436298 kubelet[3195]: E0909 05:42:24.436277 3195 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bb46086eb80e19a814adbe299807dbc372ebfc168c7e1daf4a2c242a84c4d1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-smrln"
Sep 9 05:42:24.436655 kubelet[3195]: E0909 05:42:24.436340 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-smrln_calico-system(59a7decc-b070-4036-86c2-61eb36a53dee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-smrln_calico-system(59a7decc-b070-4036-86c2-61eb36a53dee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0bb46086eb80e19a814adbe299807dbc372ebfc168c7e1daf4a2c242a84c4d1a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-smrln" podUID="59a7decc-b070-4036-86c2-61eb36a53dee"
Sep 9 05:42:24.488664 containerd[1723]: time="2025-09-09T05:42:24.488621195Z" level=error msg="Failed to destroy network for sandbox \"e1bd188477a4fda35ee123d2b1b26fd9f24fc45f1b7b0ec092963481bef6368a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 05:42:24.494558 containerd[1723]: time="2025-09-09T05:42:24.494501465Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t2sgm,Uid:d4ed2dc7-4551-4678-8f20-c675198c7103,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1bd188477a4fda35ee123d2b1b26fd9f24fc45f1b7b0ec092963481bef6368a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 05:42:24.494843 kubelet[3195]: E0909 05:42:24.494815 3195 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1bd188477a4fda35ee123d2b1b26fd9f24fc45f1b7b0ec092963481bef6368a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 05:42:24.494901 kubelet[3195]: E0909 05:42:24.494865 3195 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1bd188477a4fda35ee123d2b1b26fd9f24fc45f1b7b0ec092963481bef6368a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-t2sgm"
Sep 9 05:42:24.494901 kubelet[3195]: E0909 05:42:24.494885 3195 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1bd188477a4fda35ee123d2b1b26fd9f24fc45f1b7b0ec092963481bef6368a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-t2sgm"
Sep 9 05:42:24.494952 kubelet[3195]: E0909 05:42:24.494930 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-t2sgm_kube-system(d4ed2dc7-4551-4678-8f20-c675198c7103)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-t2sgm_kube-system(d4ed2dc7-4551-4678-8f20-c675198c7103)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e1bd188477a4fda35ee123d2b1b26fd9f24fc45f1b7b0ec092963481bef6368a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-t2sgm" podUID="d4ed2dc7-4551-4678-8f20-c675198c7103"
Sep 9 05:42:24.510800 containerd[1723]: time="2025-09-09T05:42:24.510714027Z" level=error msg="Failed to destroy network for sandbox \"b557674cac83b437c96549e7b3b6a472d32bf6a453c01c6123bfdf55f25eb054\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 05:42:24.514858 containerd[1723]: time="2025-09-09T05:42:24.514820774Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4c6f5df5-vfjwk,Uid:0bafd0a8-d202-4cdc-b16c-862a0665c3a9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b557674cac83b437c96549e7b3b6a472d32bf6a453c01c6123bfdf55f25eb054\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 05:42:24.515026 kubelet[3195]: E0909 05:42:24.514990 3195 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b557674cac83b437c96549e7b3b6a472d32bf6a453c01c6123bfdf55f25eb054\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 05:42:24.515073 kubelet[3195]: E0909 05:42:24.515046 3195 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b557674cac83b437c96549e7b3b6a472d32bf6a453c01c6123bfdf55f25eb054\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d4c6f5df5-vfjwk"
Sep 9 05:42:24.515073 kubelet[3195]: E0909 05:42:24.515067 3195 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b557674cac83b437c96549e7b3b6a472d32bf6a453c01c6123bfdf55f25eb054\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d4c6f5df5-vfjwk"
Sep 9 05:42:24.515130 kubelet[3195]: E0909 05:42:24.515109 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d4c6f5df5-vfjwk_calico-apiserver(0bafd0a8-d202-4cdc-b16c-862a0665c3a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d4c6f5df5-vfjwk_calico-apiserver(0bafd0a8-d202-4cdc-b16c-862a0665c3a9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b557674cac83b437c96549e7b3b6a472d32bf6a453c01c6123bfdf55f25eb054\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d4c6f5df5-vfjwk" podUID="0bafd0a8-d202-4cdc-b16c-862a0665c3a9"
Sep 9 05:42:24.516568 containerd[1723]: time="2025-09-09T05:42:24.516538523Z" level=error msg="Failed to destroy network for sandbox \"f5a57601b2c27b27bfb61081c3cae86e0628ac5a04bb771ae5a011e1b69a85c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 05:42:24.521968 containerd[1723]: time="2025-09-09T05:42:24.521914573Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d4fd48569-mfwkq,Uid:5fe8f76a-afbe-4d04-b31e-ea5198ff1e9f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5a57601b2c27b27bfb61081c3cae86e0628ac5a04bb771ae5a011e1b69a85c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 05:42:24.522207 kubelet[3195]: E0909 05:42:24.522181 3195 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5a57601b2c27b27bfb61081c3cae86e0628ac5a04bb771ae5a011e1b69a85c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 05:42:24.522265 kubelet[3195]: E0909 05:42:24.522241 3195 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5a57601b2c27b27bfb61081c3cae86e0628ac5a04bb771ae5a011e1b69a85c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d4fd48569-mfwkq"
Sep 9 05:42:24.522306 kubelet[3195]: E0909 05:42:24.522262 3195 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5a57601b2c27b27bfb61081c3cae86e0628ac5a04bb771ae5a011e1b69a85c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d4fd48569-mfwkq"
Sep 9 05:42:24.522332 kubelet[3195]: E0909 05:42:24.522312 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5d4fd48569-mfwkq_calico-system(5fe8f76a-afbe-4d04-b31e-ea5198ff1e9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5d4fd48569-mfwkq_calico-system(5fe8f76a-afbe-4d04-b31e-ea5198ff1e9f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f5a57601b2c27b27bfb61081c3cae86e0628ac5a04bb771ae5a011e1b69a85c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5d4fd48569-mfwkq" podUID="5fe8f76a-afbe-4d04-b31e-ea5198ff1e9f"
Sep 9 05:42:25.116176 systemd[1]: Created slice kubepods-besteffort-pod6ce6737c_5cbb_4d00_b926_8ff7cccb578b.slice - libcontainer container kubepods-besteffort-pod6ce6737c_5cbb_4d00_b926_8ff7cccb578b.slice.
Sep 9 05:42:25.118377 containerd[1723]: time="2025-09-09T05:42:25.118344529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kpfkf,Uid:6ce6737c-5cbb-4d00-b926-8ff7cccb578b,Namespace:calico-system,Attempt:0,}"
Sep 9 05:42:25.167763 containerd[1723]: time="2025-09-09T05:42:25.167723993Z" level=error msg="Failed to destroy network for sandbox \"7e065e3c1c6807d668e4a5ad747e9ba12a145120f2404cf64459c58ded9ebe50\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 05:42:25.169759 systemd[1]: run-netns-cni\x2dc4e58649\x2d49da\x2d7c00\x2d27d0\x2defe5181cc005.mount: Deactivated successfully.
Sep 9 05:42:25.182609 containerd[1723]: time="2025-09-09T05:42:25.182566850Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kpfkf,Uid:6ce6737c-5cbb-4d00-b926-8ff7cccb578b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e065e3c1c6807d668e4a5ad747e9ba12a145120f2404cf64459c58ded9ebe50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 05:42:25.182833 kubelet[3195]: E0909 05:42:25.182794 3195 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e065e3c1c6807d668e4a5ad747e9ba12a145120f2404cf64459c58ded9ebe50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 05:42:25.183093 kubelet[3195]: E0909 05:42:25.182853 3195 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e065e3c1c6807d668e4a5ad747e9ba12a145120f2404cf64459c58ded9ebe50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kpfkf"
Sep 9 05:42:25.183093 kubelet[3195]: E0909 05:42:25.182876 3195 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e065e3c1c6807d668e4a5ad747e9ba12a145120f2404cf64459c58ded9ebe50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kpfkf"
Sep 9 05:42:25.183093 kubelet[3195]: E0909 05:42:25.182929 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kpfkf_calico-system(6ce6737c-5cbb-4d00-b926-8ff7cccb578b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kpfkf_calico-system(6ce6737c-5cbb-4d00-b926-8ff7cccb578b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e065e3c1c6807d668e4a5ad747e9ba12a145120f2404cf64459c58ded9ebe50\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kpfkf" podUID="6ce6737c-5cbb-4d00-b926-8ff7cccb578b"
Sep 9 05:42:28.994555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3893413204.mount: Deactivated successfully.
Sep 9 05:42:29.022325 containerd[1723]: time="2025-09-09T05:42:29.022283399Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:42:29.024580 containerd[1723]: time="2025-09-09T05:42:29.024504410Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339"
Sep 9 05:42:29.027460 containerd[1723]: time="2025-09-09T05:42:29.027418724Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:42:29.031053 containerd[1723]: time="2025-09-09T05:42:29.030671701Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:42:29.031053 containerd[1723]: time="2025-09-09T05:42:29.030938161Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 4.817522032s"
Sep 9 05:42:29.031053 containerd[1723]: time="2025-09-09T05:42:29.030965914Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\""
Sep 9 05:42:29.046673 containerd[1723]: time="2025-09-09T05:42:29.046639087Z" level=info msg="CreateContainer within sandbox \"5a6e319cabf8526b3048c9e3d18021502b557ae8a571d56d01fccf03b6b2b0d8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Sep 9 05:42:29.083474 containerd[1723]: time="2025-09-09T05:42:29.083444024Z" level=info msg="Container c0b1b11fc7333500d56a0ebe20b25e452eb3544495f69bc9014479027ea8776f: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:42:29.101841 containerd[1723]: time="2025-09-09T05:42:29.101808754Z" level=info msg="CreateContainer within sandbox \"5a6e319cabf8526b3048c9e3d18021502b557ae8a571d56d01fccf03b6b2b0d8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c0b1b11fc7333500d56a0ebe20b25e452eb3544495f69bc9014479027ea8776f\""
Sep 9 05:42:29.106328 containerd[1723]: time="2025-09-09T05:42:29.106152440Z" level=info msg="StartContainer for \"c0b1b11fc7333500d56a0ebe20b25e452eb3544495f69bc9014479027ea8776f\""
Sep 9 05:42:29.108286 containerd[1723]: time="2025-09-09T05:42:29.107946824Z" level=info msg="connecting to shim c0b1b11fc7333500d56a0ebe20b25e452eb3544495f69bc9014479027ea8776f" address="unix:///run/containerd/s/ffd15538824b6325b20933b1e6e6a66abef0acb1dd62e57fc12da6967bf56507" protocol=ttrpc version=3
Sep 9 05:42:29.129927 systemd[1]: Started cri-containerd-c0b1b11fc7333500d56a0ebe20b25e452eb3544495f69bc9014479027ea8776f.scope - libcontainer container c0b1b11fc7333500d56a0ebe20b25e452eb3544495f69bc9014479027ea8776f.
Sep 9 05:42:29.163957 containerd[1723]: time="2025-09-09T05:42:29.163934195Z" level=info msg="StartContainer for \"c0b1b11fc7333500d56a0ebe20b25e452eb3544495f69bc9014479027ea8776f\" returns successfully"
Sep 9 05:42:29.558810 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Sep 9 05:42:29.558911 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Sep 9 05:42:29.562711 containerd[1723]: time="2025-09-09T05:42:29.562670623Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c0b1b11fc7333500d56a0ebe20b25e452eb3544495f69bc9014479027ea8776f\" id:\"d808199e857106624197aef1d30efa495d9568922f454279e9a59bc8408084a8\" pid:4230 exit_status:1 exited_at:{seconds:1757396549 nanos:562199011}"
Sep 9 05:42:29.681807 kubelet[3195]: I0909 05:42:29.681488 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8vnsq" podStartSLOduration=1.463469066 podStartE2EDuration="16.681468943s" podCreationTimestamp="2025-09-09 05:42:13 +0000 UTC" firstStartedPulling="2025-09-09 05:42:13.813656606 +0000 UTC m=+19.796609056" lastFinishedPulling="2025-09-09 05:42:29.031656467 +0000 UTC m=+35.014608933" observedRunningTime="2025-09-09 05:42:29.245011195 +0000 UTC m=+35.227963678" watchObservedRunningTime="2025-09-09 05:42:29.681468943 +0000 UTC m=+35.664421405"
Sep 9 05:42:29.753825 kubelet[3195]: I0909 05:42:29.752057 3195 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xl7ns\" (UniqueName: \"kubernetes.io/projected/5fe8f76a-afbe-4d04-b31e-ea5198ff1e9f-kube-api-access-xl7ns\") pod \"5fe8f76a-afbe-4d04-b31e-ea5198ff1e9f\" (UID: \"5fe8f76a-afbe-4d04-b31e-ea5198ff1e9f\") "
Sep 9 05:42:29.753825 kubelet[3195]: I0909 05:42:29.752115 3195 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5fe8f76a-afbe-4d04-b31e-ea5198ff1e9f-whisker-backend-key-pair\") pod \"5fe8f76a-afbe-4d04-b31e-ea5198ff1e9f\" (UID: \"5fe8f76a-afbe-4d04-b31e-ea5198ff1e9f\") "
Sep 9 05:42:29.753825 kubelet[3195]: I0909 05:42:29.752143 3195 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fe8f76a-afbe-4d04-b31e-ea5198ff1e9f-whisker-ca-bundle\") pod
\"5fe8f76a-afbe-4d04-b31e-ea5198ff1e9f\" (UID: \"5fe8f76a-afbe-4d04-b31e-ea5198ff1e9f\") " Sep 9 05:42:29.753825 kubelet[3195]: I0909 05:42:29.752496 3195 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fe8f76a-afbe-4d04-b31e-ea5198ff1e9f-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "5fe8f76a-afbe-4d04-b31e-ea5198ff1e9f" (UID: "5fe8f76a-afbe-4d04-b31e-ea5198ff1e9f"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 05:42:29.758929 kubelet[3195]: I0909 05:42:29.758897 3195 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe8f76a-afbe-4d04-b31e-ea5198ff1e9f-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "5fe8f76a-afbe-4d04-b31e-ea5198ff1e9f" (UID: "5fe8f76a-afbe-4d04-b31e-ea5198ff1e9f"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 05:42:29.759147 kubelet[3195]: I0909 05:42:29.759096 3195 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe8f76a-afbe-4d04-b31e-ea5198ff1e9f-kube-api-access-xl7ns" (OuterVolumeSpecName: "kube-api-access-xl7ns") pod "5fe8f76a-afbe-4d04-b31e-ea5198ff1e9f" (UID: "5fe8f76a-afbe-4d04-b31e-ea5198ff1e9f"). InnerVolumeSpecName "kube-api-access-xl7ns". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 05:42:29.852879 kubelet[3195]: I0909 05:42:29.852744 3195 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5fe8f76a-afbe-4d04-b31e-ea5198ff1e9f-whisker-backend-key-pair\") on node \"ci-4452.0.0-n-23b47482b2\" DevicePath \"\"" Sep 9 05:42:29.852879 kubelet[3195]: I0909 05:42:29.852854 3195 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fe8f76a-afbe-4d04-b31e-ea5198ff1e9f-whisker-ca-bundle\") on node \"ci-4452.0.0-n-23b47482b2\" DevicePath \"\"" Sep 9 05:42:29.853212 kubelet[3195]: I0909 05:42:29.853198 3195 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xl7ns\" (UniqueName: \"kubernetes.io/projected/5fe8f76a-afbe-4d04-b31e-ea5198ff1e9f-kube-api-access-xl7ns\") on node \"ci-4452.0.0-n-23b47482b2\" DevicePath \"\"" Sep 9 05:42:29.993503 systemd[1]: var-lib-kubelet-pods-5fe8f76a\x2dafbe\x2d4d04\x2db31e\x2dea5198ff1e9f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxl7ns.mount: Deactivated successfully. Sep 9 05:42:29.993643 systemd[1]: var-lib-kubelet-pods-5fe8f76a\x2dafbe\x2d4d04\x2db31e\x2dea5198ff1e9f-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 9 05:42:30.116985 systemd[1]: Removed slice kubepods-besteffort-pod5fe8f76a_afbe_4d04_b31e_ea5198ff1e9f.slice - libcontainer container kubepods-besteffort-pod5fe8f76a_afbe_4d04_b31e_ea5198ff1e9f.slice. 
Sep 9 05:42:30.330691 containerd[1723]: time="2025-09-09T05:42:30.330640070Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c0b1b11fc7333500d56a0ebe20b25e452eb3544495f69bc9014479027ea8776f\" id:\"737787b2e5701fa962bb2210b7e3dacb4e64e98c5c13581de467aa12c2154473\" pid:4281 exit_status:1 exited_at:{seconds:1757396550 nanos:330415958}" Sep 9 05:42:30.342462 systemd[1]: Created slice kubepods-besteffort-pod67c96f58_9e5a_4795_bcd0_fef8f3262710.slice - libcontainer container kubepods-besteffort-pod67c96f58_9e5a_4795_bcd0_fef8f3262710.slice. Sep 9 05:42:30.358979 kubelet[3195]: I0909 05:42:30.358945 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/67c96f58-9e5a-4795-bcd0-fef8f3262710-whisker-backend-key-pair\") pod \"whisker-645fd57f64-k28zf\" (UID: \"67c96f58-9e5a-4795-bcd0-fef8f3262710\") " pod="calico-system/whisker-645fd57f64-k28zf" Sep 9 05:42:30.359148 kubelet[3195]: I0909 05:42:30.358989 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkzgb\" (UniqueName: \"kubernetes.io/projected/67c96f58-9e5a-4795-bcd0-fef8f3262710-kube-api-access-jkzgb\") pod \"whisker-645fd57f64-k28zf\" (UID: \"67c96f58-9e5a-4795-bcd0-fef8f3262710\") " pod="calico-system/whisker-645fd57f64-k28zf" Sep 9 05:42:30.359148 kubelet[3195]: I0909 05:42:30.359009 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67c96f58-9e5a-4795-bcd0-fef8f3262710-whisker-ca-bundle\") pod \"whisker-645fd57f64-k28zf\" (UID: \"67c96f58-9e5a-4795-bcd0-fef8f3262710\") " pod="calico-system/whisker-645fd57f64-k28zf" Sep 9 05:42:30.645678 containerd[1723]: time="2025-09-09T05:42:30.645638198Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-645fd57f64-k28zf,Uid:67c96f58-9e5a-4795-bcd0-fef8f3262710,Namespace:calico-system,Attempt:0,}" Sep 9 05:42:30.749545 systemd-networkd[1551]: cali05a0a3c02d9: Link UP Sep 9 05:42:30.750280 systemd-networkd[1551]: cali05a0a3c02d9: Gained carrier Sep 9 05:42:30.768887 containerd[1723]: 2025-09-09 05:42:30.670 [INFO][4297] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 05:42:30.768887 containerd[1723]: 2025-09-09 05:42:30.678 [INFO][4297] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4452.0.0--n--23b47482b2-k8s-whisker--645fd57f64--k28zf-eth0 whisker-645fd57f64- calico-system 67c96f58-9e5a-4795-bcd0-fef8f3262710 875 0 2025-09-09 05:42:30 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:645fd57f64 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4452.0.0-n-23b47482b2 whisker-645fd57f64-k28zf eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali05a0a3c02d9 [] [] }} ContainerID="df673eb9df6497d92547a9c56cf8493d45b8c5dff0b921a9a76e5c0e46749df7" Namespace="calico-system" Pod="whisker-645fd57f64-k28zf" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-whisker--645fd57f64--k28zf-" Sep 9 05:42:30.768887 containerd[1723]: 2025-09-09 05:42:30.678 [INFO][4297] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="df673eb9df6497d92547a9c56cf8493d45b8c5dff0b921a9a76e5c0e46749df7" Namespace="calico-system" Pod="whisker-645fd57f64-k28zf" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-whisker--645fd57f64--k28zf-eth0" Sep 9 05:42:30.768887 containerd[1723]: 2025-09-09 05:42:30.699 [INFO][4309] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="df673eb9df6497d92547a9c56cf8493d45b8c5dff0b921a9a76e5c0e46749df7" 
HandleID="k8s-pod-network.df673eb9df6497d92547a9c56cf8493d45b8c5dff0b921a9a76e5c0e46749df7" Workload="ci--4452.0.0--n--23b47482b2-k8s-whisker--645fd57f64--k28zf-eth0" Sep 9 05:42:30.769107 containerd[1723]: 2025-09-09 05:42:30.699 [INFO][4309] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="df673eb9df6497d92547a9c56cf8493d45b8c5dff0b921a9a76e5c0e46749df7" HandleID="k8s-pod-network.df673eb9df6497d92547a9c56cf8493d45b8c5dff0b921a9a76e5c0e46749df7" Workload="ci--4452.0.0--n--23b47482b2-k8s-whisker--645fd57f64--k28zf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5250), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4452.0.0-n-23b47482b2", "pod":"whisker-645fd57f64-k28zf", "timestamp":"2025-09-09 05:42:30.699744943 +0000 UTC"}, Hostname:"ci-4452.0.0-n-23b47482b2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:42:30.769107 containerd[1723]: 2025-09-09 05:42:30.699 [INFO][4309] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:42:30.769107 containerd[1723]: 2025-09-09 05:42:30.700 [INFO][4309] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 05:42:30.769107 containerd[1723]: 2025-09-09 05:42:30.700 [INFO][4309] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452.0.0-n-23b47482b2' Sep 9 05:42:30.769107 containerd[1723]: 2025-09-09 05:42:30.706 [INFO][4309] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.df673eb9df6497d92547a9c56cf8493d45b8c5dff0b921a9a76e5c0e46749df7" host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:30.769107 containerd[1723]: 2025-09-09 05:42:30.709 [INFO][4309] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:30.769107 containerd[1723]: 2025-09-09 05:42:30.712 [INFO][4309] ipam/ipam.go 511: Trying affinity for 192.168.49.128/26 host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:30.769107 containerd[1723]: 2025-09-09 05:42:30.713 [INFO][4309] ipam/ipam.go 158: Attempting to load block cidr=192.168.49.128/26 host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:30.769107 containerd[1723]: 2025-09-09 05:42:30.715 [INFO][4309] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.49.128/26 host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:30.769328 containerd[1723]: 2025-09-09 05:42:30.715 [INFO][4309] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.49.128/26 handle="k8s-pod-network.df673eb9df6497d92547a9c56cf8493d45b8c5dff0b921a9a76e5c0e46749df7" host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:30.769328 containerd[1723]: 2025-09-09 05:42:30.716 [INFO][4309] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.df673eb9df6497d92547a9c56cf8493d45b8c5dff0b921a9a76e5c0e46749df7 Sep 9 05:42:30.769328 containerd[1723]: 2025-09-09 05:42:30.722 [INFO][4309] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.49.128/26 handle="k8s-pod-network.df673eb9df6497d92547a9c56cf8493d45b8c5dff0b921a9a76e5c0e46749df7" host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:30.769328 containerd[1723]: 2025-09-09 05:42:30.726 [INFO][4309] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.49.129/26] block=192.168.49.128/26 handle="k8s-pod-network.df673eb9df6497d92547a9c56cf8493d45b8c5dff0b921a9a76e5c0e46749df7" host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:30.769328 containerd[1723]: 2025-09-09 05:42:30.726 [INFO][4309] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.49.129/26] handle="k8s-pod-network.df673eb9df6497d92547a9c56cf8493d45b8c5dff0b921a9a76e5c0e46749df7" host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:30.769328 containerd[1723]: 2025-09-09 05:42:30.727 [INFO][4309] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 05:42:30.769328 containerd[1723]: 2025-09-09 05:42:30.727 [INFO][4309] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.49.129/26] IPv6=[] ContainerID="df673eb9df6497d92547a9c56cf8493d45b8c5dff0b921a9a76e5c0e46749df7" HandleID="k8s-pod-network.df673eb9df6497d92547a9c56cf8493d45b8c5dff0b921a9a76e5c0e46749df7" Workload="ci--4452.0.0--n--23b47482b2-k8s-whisker--645fd57f64--k28zf-eth0" Sep 9 05:42:30.769483 containerd[1723]: 2025-09-09 05:42:30.730 [INFO][4297] cni-plugin/k8s.go 418: Populated endpoint ContainerID="df673eb9df6497d92547a9c56cf8493d45b8c5dff0b921a9a76e5c0e46749df7" Namespace="calico-system" Pod="whisker-645fd57f64-k28zf" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-whisker--645fd57f64--k28zf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--23b47482b2-k8s-whisker--645fd57f64--k28zf-eth0", GenerateName:"whisker-645fd57f64-", Namespace:"calico-system", SelfLink:"", UID:"67c96f58-9e5a-4795-bcd0-fef8f3262710", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 42, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"645fd57f64", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-23b47482b2", ContainerID:"", Pod:"whisker-645fd57f64-k28zf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.49.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali05a0a3c02d9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:42:30.769483 containerd[1723]: 2025-09-09 05:42:30.730 [INFO][4297] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.49.129/32] ContainerID="df673eb9df6497d92547a9c56cf8493d45b8c5dff0b921a9a76e5c0e46749df7" Namespace="calico-system" Pod="whisker-645fd57f64-k28zf" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-whisker--645fd57f64--k28zf-eth0" Sep 9 05:42:30.769580 containerd[1723]: 2025-09-09 05:42:30.730 [INFO][4297] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali05a0a3c02d9 ContainerID="df673eb9df6497d92547a9c56cf8493d45b8c5dff0b921a9a76e5c0e46749df7" Namespace="calico-system" Pod="whisker-645fd57f64-k28zf" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-whisker--645fd57f64--k28zf-eth0" Sep 9 05:42:30.769580 containerd[1723]: 2025-09-09 05:42:30.750 [INFO][4297] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="df673eb9df6497d92547a9c56cf8493d45b8c5dff0b921a9a76e5c0e46749df7" Namespace="calico-system" Pod="whisker-645fd57f64-k28zf" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-whisker--645fd57f64--k28zf-eth0" Sep 9 05:42:30.769632 containerd[1723]: 2025-09-09 05:42:30.751 [INFO][4297] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="df673eb9df6497d92547a9c56cf8493d45b8c5dff0b921a9a76e5c0e46749df7" Namespace="calico-system" Pod="whisker-645fd57f64-k28zf" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-whisker--645fd57f64--k28zf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--23b47482b2-k8s-whisker--645fd57f64--k28zf-eth0", GenerateName:"whisker-645fd57f64-", Namespace:"calico-system", SelfLink:"", UID:"67c96f58-9e5a-4795-bcd0-fef8f3262710", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 42, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"645fd57f64", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-23b47482b2", ContainerID:"df673eb9df6497d92547a9c56cf8493d45b8c5dff0b921a9a76e5c0e46749df7", Pod:"whisker-645fd57f64-k28zf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.49.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali05a0a3c02d9", MAC:"8a:dd:e8:72:34:e9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:42:30.769695 containerd[1723]: 2025-09-09 05:42:30.765 [INFO][4297] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="df673eb9df6497d92547a9c56cf8493d45b8c5dff0b921a9a76e5c0e46749df7" 
Namespace="calico-system" Pod="whisker-645fd57f64-k28zf" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-whisker--645fd57f64--k28zf-eth0" Sep 9 05:42:30.812337 containerd[1723]: time="2025-09-09T05:42:30.812235705Z" level=info msg="connecting to shim df673eb9df6497d92547a9c56cf8493d45b8c5dff0b921a9a76e5c0e46749df7" address="unix:///run/containerd/s/8590dd62999523d6a5166315145ab9302f8355e542ad6cb67849784ec8506033" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:42:30.833930 systemd[1]: Started cri-containerd-df673eb9df6497d92547a9c56cf8493d45b8c5dff0b921a9a76e5c0e46749df7.scope - libcontainer container df673eb9df6497d92547a9c56cf8493d45b8c5dff0b921a9a76e5c0e46749df7. Sep 9 05:42:30.874095 containerd[1723]: time="2025-09-09T05:42:30.874066203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-645fd57f64-k28zf,Uid:67c96f58-9e5a-4795-bcd0-fef8f3262710,Namespace:calico-system,Attempt:0,} returns sandbox id \"df673eb9df6497d92547a9c56cf8493d45b8c5dff0b921a9a76e5c0e46749df7\"" Sep 9 05:42:30.875601 containerd[1723]: time="2025-09-09T05:42:30.875577878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 9 05:42:32.115317 kubelet[3195]: I0909 05:42:32.115272 3195 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe8f76a-afbe-4d04-b31e-ea5198ff1e9f" path="/var/lib/kubelet/pods/5fe8f76a-afbe-4d04-b31e-ea5198ff1e9f/volumes" Sep 9 05:42:32.158632 containerd[1723]: time="2025-09-09T05:42:32.158588085Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:32.161074 containerd[1723]: time="2025-09-09T05:42:32.161038811Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 9 05:42:32.163753 containerd[1723]: time="2025-09-09T05:42:32.163717708Z" level=info msg="ImageCreate event 
name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:32.172577 containerd[1723]: time="2025-09-09T05:42:32.172537232Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:32.175185 containerd[1723]: time="2025-09-09T05:42:32.175019303Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.299405145s" Sep 9 05:42:32.175185 containerd[1723]: time="2025-09-09T05:42:32.175065534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 9 05:42:32.182206 containerd[1723]: time="2025-09-09T05:42:32.182179917Z" level=info msg="CreateContainer within sandbox \"df673eb9df6497d92547a9c56cf8493d45b8c5dff0b921a9a76e5c0e46749df7\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 9 05:42:32.211801 containerd[1723]: time="2025-09-09T05:42:32.208068098Z" level=info msg="Container bb1698a1ec81e9b507aabd329047c9d78db9ff798c8e21169241050488f3d8f0: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:42:32.229385 containerd[1723]: time="2025-09-09T05:42:32.229352880Z" level=info msg="CreateContainer within sandbox \"df673eb9df6497d92547a9c56cf8493d45b8c5dff0b921a9a76e5c0e46749df7\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"bb1698a1ec81e9b507aabd329047c9d78db9ff798c8e21169241050488f3d8f0\"" Sep 9 05:42:32.230746 containerd[1723]: time="2025-09-09T05:42:32.229865088Z" 
level=info msg="StartContainer for \"bb1698a1ec81e9b507aabd329047c9d78db9ff798c8e21169241050488f3d8f0\"" Sep 9 05:42:32.231093 containerd[1723]: time="2025-09-09T05:42:32.230950581Z" level=info msg="connecting to shim bb1698a1ec81e9b507aabd329047c9d78db9ff798c8e21169241050488f3d8f0" address="unix:///run/containerd/s/8590dd62999523d6a5166315145ab9302f8355e542ad6cb67849784ec8506033" protocol=ttrpc version=3 Sep 9 05:42:32.255958 systemd[1]: Started cri-containerd-bb1698a1ec81e9b507aabd329047c9d78db9ff798c8e21169241050488f3d8f0.scope - libcontainer container bb1698a1ec81e9b507aabd329047c9d78db9ff798c8e21169241050488f3d8f0. Sep 9 05:42:32.312739 containerd[1723]: time="2025-09-09T05:42:32.312713374Z" level=info msg="StartContainer for \"bb1698a1ec81e9b507aabd329047c9d78db9ff798c8e21169241050488f3d8f0\" returns successfully" Sep 9 05:42:32.315022 containerd[1723]: time="2025-09-09T05:42:32.315001485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 9 05:42:32.713905 systemd-networkd[1551]: cali05a0a3c02d9: Gained IPv6LL Sep 9 05:42:33.207468 kubelet[3195]: I0909 05:42:33.207118 3195 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 05:42:34.050520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3262850481.mount: Deactivated successfully. 
Sep 9 05:42:34.105491 containerd[1723]: time="2025-09-09T05:42:34.105431007Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:34.108028 containerd[1723]: time="2025-09-09T05:42:34.107883647Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 9 05:42:34.110586 containerd[1723]: time="2025-09-09T05:42:34.110562543Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:34.115150 containerd[1723]: time="2025-09-09T05:42:34.114334797Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:34.115150 containerd[1723]: time="2025-09-09T05:42:34.115039272Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 1.800011651s" Sep 9 05:42:34.115150 containerd[1723]: time="2025-09-09T05:42:34.115066586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 9 05:42:34.121530 containerd[1723]: time="2025-09-09T05:42:34.121505053Z" level=info msg="CreateContainer within sandbox \"df673eb9df6497d92547a9c56cf8493d45b8c5dff0b921a9a76e5c0e46749df7\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 9 05:42:34.146175 
containerd[1723]: time="2025-09-09T05:42:34.142987918Z" level=info msg="Container f4d010aa82bd472f8350d1e0da0111064acbeeeeadf4f053300c4ea5b8027222: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:42:34.160269 containerd[1723]: time="2025-09-09T05:42:34.160243866Z" level=info msg="CreateContainer within sandbox \"df673eb9df6497d92547a9c56cf8493d45b8c5dff0b921a9a76e5c0e46749df7\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"f4d010aa82bd472f8350d1e0da0111064acbeeeeadf4f053300c4ea5b8027222\"" Sep 9 05:42:34.161676 containerd[1723]: time="2025-09-09T05:42:34.160810302Z" level=info msg="StartContainer for \"f4d010aa82bd472f8350d1e0da0111064acbeeeeadf4f053300c4ea5b8027222\"" Sep 9 05:42:34.162166 containerd[1723]: time="2025-09-09T05:42:34.162137961Z" level=info msg="connecting to shim f4d010aa82bd472f8350d1e0da0111064acbeeeeadf4f053300c4ea5b8027222" address="unix:///run/containerd/s/8590dd62999523d6a5166315145ab9302f8355e542ad6cb67849784ec8506033" protocol=ttrpc version=3 Sep 9 05:42:34.188984 systemd[1]: Started cri-containerd-f4d010aa82bd472f8350d1e0da0111064acbeeeeadf4f053300c4ea5b8027222.scope - libcontainer container f4d010aa82bd472f8350d1e0da0111064acbeeeeadf4f053300c4ea5b8027222. 
Sep 9 05:42:34.240095 containerd[1723]: time="2025-09-09T05:42:34.239998025Z" level=info msg="StartContainer for \"f4d010aa82bd472f8350d1e0da0111064acbeeeeadf4f053300c4ea5b8027222\" returns successfully" Sep 9 05:42:34.284228 kubelet[3195]: I0909 05:42:34.284094 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-645fd57f64-k28zf" podStartSLOduration=1.0435590829999999 podStartE2EDuration="4.284078807s" podCreationTimestamp="2025-09-09 05:42:30 +0000 UTC" firstStartedPulling="2025-09-09 05:42:30.875187631 +0000 UTC m=+36.858140087" lastFinishedPulling="2025-09-09 05:42:34.115707353 +0000 UTC m=+40.098659811" observedRunningTime="2025-09-09 05:42:34.276271079 +0000 UTC m=+40.259223542" watchObservedRunningTime="2025-09-09 05:42:34.284078807 +0000 UTC m=+40.267031271" Sep 9 05:42:34.695754 systemd-networkd[1551]: vxlan.calico: Link UP Sep 9 05:42:34.695763 systemd-networkd[1551]: vxlan.calico: Gained carrier Sep 9 05:42:35.111737 containerd[1723]: time="2025-09-09T05:42:35.111463506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cfdf88cf4-n2xbw,Uid:86f9250a-df33-4f1c-a5e5-c1195c97f00c,Namespace:calico-system,Attempt:0,}" Sep 9 05:42:35.200872 systemd-networkd[1551]: cali674298d3b66: Link UP Sep 9 05:42:35.203871 systemd-networkd[1551]: cali674298d3b66: Gained carrier Sep 9 05:42:35.220877 containerd[1723]: 2025-09-09 05:42:35.146 [INFO][4704] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4452.0.0--n--23b47482b2-k8s-calico--kube--controllers--6cfdf88cf4--n2xbw-eth0 calico-kube-controllers-6cfdf88cf4- calico-system 86f9250a-df33-4f1c-a5e5-c1195c97f00c 807 0 2025-09-09 05:42:13 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6cfdf88cf4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4452.0.0-n-23b47482b2 calico-kube-controllers-6cfdf88cf4-n2xbw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali674298d3b66 [] [] }} ContainerID="b595daf385f78a6e29d5aa0f009b824f1e99ac2783cedb31ca0f5b73d36905ea" Namespace="calico-system" Pod="calico-kube-controllers-6cfdf88cf4-n2xbw" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-calico--kube--controllers--6cfdf88cf4--n2xbw-" Sep 9 05:42:35.220877 containerd[1723]: 2025-09-09 05:42:35.147 [INFO][4704] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b595daf385f78a6e29d5aa0f009b824f1e99ac2783cedb31ca0f5b73d36905ea" Namespace="calico-system" Pod="calico-kube-controllers-6cfdf88cf4-n2xbw" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-calico--kube--controllers--6cfdf88cf4--n2xbw-eth0" Sep 9 05:42:35.220877 containerd[1723]: 2025-09-09 05:42:35.166 [INFO][4715] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b595daf385f78a6e29d5aa0f009b824f1e99ac2783cedb31ca0f5b73d36905ea" HandleID="k8s-pod-network.b595daf385f78a6e29d5aa0f009b824f1e99ac2783cedb31ca0f5b73d36905ea" Workload="ci--4452.0.0--n--23b47482b2-k8s-calico--kube--controllers--6cfdf88cf4--n2xbw-eth0" Sep 9 05:42:35.221077 containerd[1723]: 2025-09-09 05:42:35.166 [INFO][4715] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b595daf385f78a6e29d5aa0f009b824f1e99ac2783cedb31ca0f5b73d36905ea" HandleID="k8s-pod-network.b595daf385f78a6e29d5aa0f009b824f1e99ac2783cedb31ca0f5b73d36905ea" Workload="ci--4452.0.0--n--23b47482b2-k8s-calico--kube--controllers--6cfdf88cf4--n2xbw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f610), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4452.0.0-n-23b47482b2", "pod":"calico-kube-controllers-6cfdf88cf4-n2xbw", "timestamp":"2025-09-09 05:42:35.166721474 +0000 UTC"}, 
Hostname:"ci-4452.0.0-n-23b47482b2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:42:35.221077 containerd[1723]: 2025-09-09 05:42:35.166 [INFO][4715] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:42:35.221077 containerd[1723]: 2025-09-09 05:42:35.166 [INFO][4715] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 05:42:35.221077 containerd[1723]: 2025-09-09 05:42:35.166 [INFO][4715] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452.0.0-n-23b47482b2' Sep 9 05:42:35.221077 containerd[1723]: 2025-09-09 05:42:35.171 [INFO][4715] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b595daf385f78a6e29d5aa0f009b824f1e99ac2783cedb31ca0f5b73d36905ea" host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:35.221077 containerd[1723]: 2025-09-09 05:42:35.174 [INFO][4715] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:35.221077 containerd[1723]: 2025-09-09 05:42:35.178 [INFO][4715] ipam/ipam.go 511: Trying affinity for 192.168.49.128/26 host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:35.221077 containerd[1723]: 2025-09-09 05:42:35.179 [INFO][4715] ipam/ipam.go 158: Attempting to load block cidr=192.168.49.128/26 host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:35.221077 containerd[1723]: 2025-09-09 05:42:35.181 [INFO][4715] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.49.128/26 host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:35.221329 containerd[1723]: 2025-09-09 05:42:35.181 [INFO][4715] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.49.128/26 handle="k8s-pod-network.b595daf385f78a6e29d5aa0f009b824f1e99ac2783cedb31ca0f5b73d36905ea" host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:35.221329 containerd[1723]: 2025-09-09 05:42:35.182 [INFO][4715] 
ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b595daf385f78a6e29d5aa0f009b824f1e99ac2783cedb31ca0f5b73d36905ea Sep 9 05:42:35.221329 containerd[1723]: 2025-09-09 05:42:35.186 [INFO][4715] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.49.128/26 handle="k8s-pod-network.b595daf385f78a6e29d5aa0f009b824f1e99ac2783cedb31ca0f5b73d36905ea" host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:35.221329 containerd[1723]: 2025-09-09 05:42:35.194 [INFO][4715] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.49.130/26] block=192.168.49.128/26 handle="k8s-pod-network.b595daf385f78a6e29d5aa0f009b824f1e99ac2783cedb31ca0f5b73d36905ea" host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:35.221329 containerd[1723]: 2025-09-09 05:42:35.194 [INFO][4715] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.49.130/26] handle="k8s-pod-network.b595daf385f78a6e29d5aa0f009b824f1e99ac2783cedb31ca0f5b73d36905ea" host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:35.221329 containerd[1723]: 2025-09-09 05:42:35.194 [INFO][4715] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 05:42:35.221329 containerd[1723]: 2025-09-09 05:42:35.194 [INFO][4715] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.49.130/26] IPv6=[] ContainerID="b595daf385f78a6e29d5aa0f009b824f1e99ac2783cedb31ca0f5b73d36905ea" HandleID="k8s-pod-network.b595daf385f78a6e29d5aa0f009b824f1e99ac2783cedb31ca0f5b73d36905ea" Workload="ci--4452.0.0--n--23b47482b2-k8s-calico--kube--controllers--6cfdf88cf4--n2xbw-eth0" Sep 9 05:42:35.221487 containerd[1723]: 2025-09-09 05:42:35.196 [INFO][4704] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b595daf385f78a6e29d5aa0f009b824f1e99ac2783cedb31ca0f5b73d36905ea" Namespace="calico-system" Pod="calico-kube-controllers-6cfdf88cf4-n2xbw" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-calico--kube--controllers--6cfdf88cf4--n2xbw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--23b47482b2-k8s-calico--kube--controllers--6cfdf88cf4--n2xbw-eth0", GenerateName:"calico-kube-controllers-6cfdf88cf4-", Namespace:"calico-system", SelfLink:"", UID:"86f9250a-df33-4f1c-a5e5-c1195c97f00c", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 42, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6cfdf88cf4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-23b47482b2", ContainerID:"", Pod:"calico-kube-controllers-6cfdf88cf4-n2xbw", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.49.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali674298d3b66", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:42:35.221567 containerd[1723]: 2025-09-09 05:42:35.196 [INFO][4704] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.49.130/32] ContainerID="b595daf385f78a6e29d5aa0f009b824f1e99ac2783cedb31ca0f5b73d36905ea" Namespace="calico-system" Pod="calico-kube-controllers-6cfdf88cf4-n2xbw" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-calico--kube--controllers--6cfdf88cf4--n2xbw-eth0" Sep 9 05:42:35.221567 containerd[1723]: 2025-09-09 05:42:35.196 [INFO][4704] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali674298d3b66 ContainerID="b595daf385f78a6e29d5aa0f009b824f1e99ac2783cedb31ca0f5b73d36905ea" Namespace="calico-system" Pod="calico-kube-controllers-6cfdf88cf4-n2xbw" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-calico--kube--controllers--6cfdf88cf4--n2xbw-eth0" Sep 9 05:42:35.221567 containerd[1723]: 2025-09-09 05:42:35.201 [INFO][4704] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b595daf385f78a6e29d5aa0f009b824f1e99ac2783cedb31ca0f5b73d36905ea" Namespace="calico-system" Pod="calico-kube-controllers-6cfdf88cf4-n2xbw" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-calico--kube--controllers--6cfdf88cf4--n2xbw-eth0" Sep 9 05:42:35.221645 containerd[1723]: 2025-09-09 05:42:35.201 [INFO][4704] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b595daf385f78a6e29d5aa0f009b824f1e99ac2783cedb31ca0f5b73d36905ea" Namespace="calico-system" Pod="calico-kube-controllers-6cfdf88cf4-n2xbw" 
WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-calico--kube--controllers--6cfdf88cf4--n2xbw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--23b47482b2-k8s-calico--kube--controllers--6cfdf88cf4--n2xbw-eth0", GenerateName:"calico-kube-controllers-6cfdf88cf4-", Namespace:"calico-system", SelfLink:"", UID:"86f9250a-df33-4f1c-a5e5-c1195c97f00c", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 42, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6cfdf88cf4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-23b47482b2", ContainerID:"b595daf385f78a6e29d5aa0f009b824f1e99ac2783cedb31ca0f5b73d36905ea", Pod:"calico-kube-controllers-6cfdf88cf4-n2xbw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.49.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali674298d3b66", MAC:"46:70:4e:1e:5a:08", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:42:35.221714 containerd[1723]: 2025-09-09 05:42:35.218 [INFO][4704] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b595daf385f78a6e29d5aa0f009b824f1e99ac2783cedb31ca0f5b73d36905ea" Namespace="calico-system" 
Pod="calico-kube-controllers-6cfdf88cf4-n2xbw" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-calico--kube--controllers--6cfdf88cf4--n2xbw-eth0" Sep 9 05:42:35.263166 containerd[1723]: time="2025-09-09T05:42:35.263107779Z" level=info msg="connecting to shim b595daf385f78a6e29d5aa0f009b824f1e99ac2783cedb31ca0f5b73d36905ea" address="unix:///run/containerd/s/d8a554946856297ad5ead128b06f6ca7345ea58539d76956d215d1bf9df9ae3d" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:42:35.287912 systemd[1]: Started cri-containerd-b595daf385f78a6e29d5aa0f009b824f1e99ac2783cedb31ca0f5b73d36905ea.scope - libcontainer container b595daf385f78a6e29d5aa0f009b824f1e99ac2783cedb31ca0f5b73d36905ea. Sep 9 05:42:35.326978 containerd[1723]: time="2025-09-09T05:42:35.326953222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cfdf88cf4-n2xbw,Uid:86f9250a-df33-4f1c-a5e5-c1195c97f00c,Namespace:calico-system,Attempt:0,} returns sandbox id \"b595daf385f78a6e29d5aa0f009b824f1e99ac2783cedb31ca0f5b73d36905ea\"" Sep 9 05:42:35.328235 containerd[1723]: time="2025-09-09T05:42:35.328211838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 9 05:42:35.913946 systemd-networkd[1551]: vxlan.calico: Gained IPv6LL Sep 9 05:42:36.617938 systemd-networkd[1551]: cali674298d3b66: Gained IPv6LL Sep 9 05:42:37.112282 containerd[1723]: time="2025-09-09T05:42:37.112233282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4c6f5df5-5c5bv,Uid:2327e27d-1902-4105-a58c-732597795f2d,Namespace:calico-apiserver,Attempt:0,}" Sep 9 05:42:37.238967 systemd-networkd[1551]: cali9a0974919da: Link UP Sep 9 05:42:37.241234 systemd-networkd[1551]: cali9a0974919da: Gained carrier Sep 9 05:42:37.265358 containerd[1723]: 2025-09-09 05:42:37.159 [INFO][4780] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4452.0.0--n--23b47482b2-k8s-calico--apiserver--7d4c6f5df5--5c5bv-eth0 
calico-apiserver-7d4c6f5df5- calico-apiserver 2327e27d-1902-4105-a58c-732597795f2d 806 0 2025-09-09 05:42:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d4c6f5df5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4452.0.0-n-23b47482b2 calico-apiserver-7d4c6f5df5-5c5bv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9a0974919da [] [] }} ContainerID="4a22729dfa39ae16518134c009e409687b056e571841a71233f99535164850ca" Namespace="calico-apiserver" Pod="calico-apiserver-7d4c6f5df5-5c5bv" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-calico--apiserver--7d4c6f5df5--5c5bv-" Sep 9 05:42:37.265358 containerd[1723]: 2025-09-09 05:42:37.159 [INFO][4780] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4a22729dfa39ae16518134c009e409687b056e571841a71233f99535164850ca" Namespace="calico-apiserver" Pod="calico-apiserver-7d4c6f5df5-5c5bv" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-calico--apiserver--7d4c6f5df5--5c5bv-eth0" Sep 9 05:42:37.265358 containerd[1723]: 2025-09-09 05:42:37.187 [INFO][4795] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4a22729dfa39ae16518134c009e409687b056e571841a71233f99535164850ca" HandleID="k8s-pod-network.4a22729dfa39ae16518134c009e409687b056e571841a71233f99535164850ca" Workload="ci--4452.0.0--n--23b47482b2-k8s-calico--apiserver--7d4c6f5df5--5c5bv-eth0" Sep 9 05:42:37.265573 containerd[1723]: 2025-09-09 05:42:37.187 [INFO][4795] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4a22729dfa39ae16518134c009e409687b056e571841a71233f99535164850ca" HandleID="k8s-pod-network.4a22729dfa39ae16518134c009e409687b056e571841a71233f99535164850ca" Workload="ci--4452.0.0--n--23b47482b2-k8s-calico--apiserver--7d4c6f5df5--5c5bv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc0002ad4a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4452.0.0-n-23b47482b2", "pod":"calico-apiserver-7d4c6f5df5-5c5bv", "timestamp":"2025-09-09 05:42:37.187193603 +0000 UTC"}, Hostname:"ci-4452.0.0-n-23b47482b2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:42:37.265573 containerd[1723]: 2025-09-09 05:42:37.187 [INFO][4795] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:42:37.265573 containerd[1723]: 2025-09-09 05:42:37.187 [INFO][4795] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 05:42:37.265573 containerd[1723]: 2025-09-09 05:42:37.187 [INFO][4795] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452.0.0-n-23b47482b2' Sep 9 05:42:37.265573 containerd[1723]: 2025-09-09 05:42:37.194 [INFO][4795] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4a22729dfa39ae16518134c009e409687b056e571841a71233f99535164850ca" host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:37.265573 containerd[1723]: 2025-09-09 05:42:37.198 [INFO][4795] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:37.265573 containerd[1723]: 2025-09-09 05:42:37.204 [INFO][4795] ipam/ipam.go 511: Trying affinity for 192.168.49.128/26 host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:37.265573 containerd[1723]: 2025-09-09 05:42:37.205 [INFO][4795] ipam/ipam.go 158: Attempting to load block cidr=192.168.49.128/26 host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:37.265573 containerd[1723]: 2025-09-09 05:42:37.208 [INFO][4795] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.49.128/26 host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:37.266097 containerd[1723]: 2025-09-09 05:42:37.208 [INFO][4795] ipam/ipam.go 1220: Attempting to assign 1 addresses from block 
block=192.168.49.128/26 handle="k8s-pod-network.4a22729dfa39ae16518134c009e409687b056e571841a71233f99535164850ca" host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:37.266097 containerd[1723]: 2025-09-09 05:42:37.209 [INFO][4795] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4a22729dfa39ae16518134c009e409687b056e571841a71233f99535164850ca Sep 9 05:42:37.266097 containerd[1723]: 2025-09-09 05:42:37.216 [INFO][4795] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.49.128/26 handle="k8s-pod-network.4a22729dfa39ae16518134c009e409687b056e571841a71233f99535164850ca" host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:37.266097 containerd[1723]: 2025-09-09 05:42:37.228 [INFO][4795] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.49.131/26] block=192.168.49.128/26 handle="k8s-pod-network.4a22729dfa39ae16518134c009e409687b056e571841a71233f99535164850ca" host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:37.266097 containerd[1723]: 2025-09-09 05:42:37.228 [INFO][4795] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.49.131/26] handle="k8s-pod-network.4a22729dfa39ae16518134c009e409687b056e571841a71233f99535164850ca" host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:37.266097 containerd[1723]: 2025-09-09 05:42:37.228 [INFO][4795] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 05:42:37.266097 containerd[1723]: 2025-09-09 05:42:37.228 [INFO][4795] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.49.131/26] IPv6=[] ContainerID="4a22729dfa39ae16518134c009e409687b056e571841a71233f99535164850ca" HandleID="k8s-pod-network.4a22729dfa39ae16518134c009e409687b056e571841a71233f99535164850ca" Workload="ci--4452.0.0--n--23b47482b2-k8s-calico--apiserver--7d4c6f5df5--5c5bv-eth0" Sep 9 05:42:37.266544 containerd[1723]: 2025-09-09 05:42:37.230 [INFO][4780] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4a22729dfa39ae16518134c009e409687b056e571841a71233f99535164850ca" Namespace="calico-apiserver" Pod="calico-apiserver-7d4c6f5df5-5c5bv" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-calico--apiserver--7d4c6f5df5--5c5bv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--23b47482b2-k8s-calico--apiserver--7d4c6f5df5--5c5bv-eth0", GenerateName:"calico-apiserver-7d4c6f5df5-", Namespace:"calico-apiserver", SelfLink:"", UID:"2327e27d-1902-4105-a58c-732597795f2d", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 42, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4c6f5df5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-23b47482b2", ContainerID:"", Pod:"calico-apiserver-7d4c6f5df5-5c5bv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.49.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9a0974919da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:42:37.266748 containerd[1723]: 2025-09-09 05:42:37.231 [INFO][4780] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.49.131/32] ContainerID="4a22729dfa39ae16518134c009e409687b056e571841a71233f99535164850ca" Namespace="calico-apiserver" Pod="calico-apiserver-7d4c6f5df5-5c5bv" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-calico--apiserver--7d4c6f5df5--5c5bv-eth0" Sep 9 05:42:37.266748 containerd[1723]: 2025-09-09 05:42:37.231 [INFO][4780] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9a0974919da ContainerID="4a22729dfa39ae16518134c009e409687b056e571841a71233f99535164850ca" Namespace="calico-apiserver" Pod="calico-apiserver-7d4c6f5df5-5c5bv" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-calico--apiserver--7d4c6f5df5--5c5bv-eth0" Sep 9 05:42:37.266748 containerd[1723]: 2025-09-09 05:42:37.242 [INFO][4780] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4a22729dfa39ae16518134c009e409687b056e571841a71233f99535164850ca" Namespace="calico-apiserver" Pod="calico-apiserver-7d4c6f5df5-5c5bv" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-calico--apiserver--7d4c6f5df5--5c5bv-eth0" Sep 9 05:42:37.267325 containerd[1723]: 2025-09-09 05:42:37.243 [INFO][4780] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4a22729dfa39ae16518134c009e409687b056e571841a71233f99535164850ca" Namespace="calico-apiserver" Pod="calico-apiserver-7d4c6f5df5-5c5bv" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-calico--apiserver--7d4c6f5df5--5c5bv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--23b47482b2-k8s-calico--apiserver--7d4c6f5df5--5c5bv-eth0", GenerateName:"calico-apiserver-7d4c6f5df5-", Namespace:"calico-apiserver", SelfLink:"", UID:"2327e27d-1902-4105-a58c-732597795f2d", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 42, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4c6f5df5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-23b47482b2", ContainerID:"4a22729dfa39ae16518134c009e409687b056e571841a71233f99535164850ca", Pod:"calico-apiserver-7d4c6f5df5-5c5bv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.49.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9a0974919da", MAC:"ae:d7:23:5e:29:00", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:42:37.267409 containerd[1723]: 2025-09-09 05:42:37.260 [INFO][4780] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4a22729dfa39ae16518134c009e409687b056e571841a71233f99535164850ca" Namespace="calico-apiserver" Pod="calico-apiserver-7d4c6f5df5-5c5bv" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-calico--apiserver--7d4c6f5df5--5c5bv-eth0" Sep 9 05:42:37.313116 containerd[1723]: time="2025-09-09T05:42:37.312970606Z" level=info 
msg="connecting to shim 4a22729dfa39ae16518134c009e409687b056e571841a71233f99535164850ca" address="unix:///run/containerd/s/902825bb6640164aa9bad4d0744b2d2be0038d3849076aac68e1c5c4bc53e430" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:42:37.351594 systemd[1]: Started cri-containerd-4a22729dfa39ae16518134c009e409687b056e571841a71233f99535164850ca.scope - libcontainer container 4a22729dfa39ae16518134c009e409687b056e571841a71233f99535164850ca. Sep 9 05:42:37.410495 containerd[1723]: time="2025-09-09T05:42:37.410403107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4c6f5df5-5c5bv,Uid:2327e27d-1902-4105-a58c-732597795f2d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4a22729dfa39ae16518134c009e409687b056e571841a71233f99535164850ca\"" Sep 9 05:42:37.719289 containerd[1723]: time="2025-09-09T05:42:37.719165766Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:37.722155 containerd[1723]: time="2025-09-09T05:42:37.722122288Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 9 05:42:37.725187 containerd[1723]: time="2025-09-09T05:42:37.725144657Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:37.729629 containerd[1723]: time="2025-09-09T05:42:37.729210270Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:37.729629 containerd[1723]: time="2025-09-09T05:42:37.729514694Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id 
\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 2.401271393s" Sep 9 05:42:37.729629 containerd[1723]: time="2025-09-09T05:42:37.729540663Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 9 05:42:37.730960 containerd[1723]: time="2025-09-09T05:42:37.730936108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 05:42:37.748938 containerd[1723]: time="2025-09-09T05:42:37.748913661Z" level=info msg="CreateContainer within sandbox \"b595daf385f78a6e29d5aa0f009b824f1e99ac2783cedb31ca0f5b73d36905ea\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 9 05:42:37.766059 containerd[1723]: time="2025-09-09T05:42:37.766033319Z" level=info msg="Container 84e05ce642707feec06a2c468b3d440ec565c729fb177c7a2a581d38cf1107c8: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:42:37.782673 containerd[1723]: time="2025-09-09T05:42:37.782645326Z" level=info msg="CreateContainer within sandbox \"b595daf385f78a6e29d5aa0f009b824f1e99ac2783cedb31ca0f5b73d36905ea\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"84e05ce642707feec06a2c468b3d440ec565c729fb177c7a2a581d38cf1107c8\"" Sep 9 05:42:37.783279 containerd[1723]: time="2025-09-09T05:42:37.783067112Z" level=info msg="StartContainer for \"84e05ce642707feec06a2c468b3d440ec565c729fb177c7a2a581d38cf1107c8\"" Sep 9 05:42:37.784427 containerd[1723]: time="2025-09-09T05:42:37.784378164Z" level=info msg="connecting to shim 84e05ce642707feec06a2c468b3d440ec565c729fb177c7a2a581d38cf1107c8" 
address="unix:///run/containerd/s/d8a554946856297ad5ead128b06f6ca7345ea58539d76956d215d1bf9df9ae3d" protocol=ttrpc version=3 Sep 9 05:42:37.798921 systemd[1]: Started cri-containerd-84e05ce642707feec06a2c468b3d440ec565c729fb177c7a2a581d38cf1107c8.scope - libcontainer container 84e05ce642707feec06a2c468b3d440ec565c729fb177c7a2a581d38cf1107c8. Sep 9 05:42:37.848311 containerd[1723]: time="2025-09-09T05:42:37.848286646Z" level=info msg="StartContainer for \"84e05ce642707feec06a2c468b3d440ec565c729fb177c7a2a581d38cf1107c8\" returns successfully" Sep 9 05:42:38.111773 containerd[1723]: time="2025-09-09T05:42:38.111680196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t2sgm,Uid:d4ed2dc7-4551-4678-8f20-c675198c7103,Namespace:kube-system,Attempt:0,}" Sep 9 05:42:38.112528 containerd[1723]: time="2025-09-09T05:42:38.112030483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kpfkf,Uid:6ce6737c-5cbb-4d00-b926-8ff7cccb578b,Namespace:calico-system,Attempt:0,}" Sep 9 05:42:38.113392 containerd[1723]: time="2025-09-09T05:42:38.113349387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-smrln,Uid:59a7decc-b070-4036-86c2-61eb36a53dee,Namespace:calico-system,Attempt:0,}" Sep 9 05:42:38.277143 systemd-networkd[1551]: cali80d84b08612: Link UP Sep 9 05:42:38.278103 systemd-networkd[1551]: cali80d84b08612: Gained carrier Sep 9 05:42:38.299039 kubelet[3195]: I0909 05:42:38.298291 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6cfdf88cf4-n2xbw" podStartSLOduration=22.89582768 podStartE2EDuration="25.298273661s" podCreationTimestamp="2025-09-09 05:42:13 +0000 UTC" firstStartedPulling="2025-09-09 05:42:35.327872101 +0000 UTC m=+41.310824557" lastFinishedPulling="2025-09-09 05:42:37.73031808 +0000 UTC m=+43.713270538" observedRunningTime="2025-09-09 05:42:38.29529071 +0000 UTC m=+44.278243195" watchObservedRunningTime="2025-09-09 
05:42:38.298273661 +0000 UTC m=+44.281226113" Sep 9 05:42:38.299483 containerd[1723]: 2025-09-09 05:42:38.195 [INFO][4912] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4452.0.0--n--23b47482b2-k8s-coredns--674b8bbfcf--t2sgm-eth0 coredns-674b8bbfcf- kube-system d4ed2dc7-4551-4678-8f20-c675198c7103 813 0 2025-09-09 05:42:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4452.0.0-n-23b47482b2 coredns-674b8bbfcf-t2sgm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali80d84b08612 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="84084ebe1fb5be3ec4fa3fbbb7624ed6694ae248a3801c565cbf41b5191055ad" Namespace="kube-system" Pod="coredns-674b8bbfcf-t2sgm" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-coredns--674b8bbfcf--t2sgm-" Sep 9 05:42:38.299483 containerd[1723]: 2025-09-09 05:42:38.196 [INFO][4912] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="84084ebe1fb5be3ec4fa3fbbb7624ed6694ae248a3801c565cbf41b5191055ad" Namespace="kube-system" Pod="coredns-674b8bbfcf-t2sgm" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-coredns--674b8bbfcf--t2sgm-eth0" Sep 9 05:42:38.299483 containerd[1723]: 2025-09-09 05:42:38.240 [INFO][4942] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="84084ebe1fb5be3ec4fa3fbbb7624ed6694ae248a3801c565cbf41b5191055ad" HandleID="k8s-pod-network.84084ebe1fb5be3ec4fa3fbbb7624ed6694ae248a3801c565cbf41b5191055ad" Workload="ci--4452.0.0--n--23b47482b2-k8s-coredns--674b8bbfcf--t2sgm-eth0" Sep 9 05:42:38.299628 containerd[1723]: 2025-09-09 05:42:38.240 [INFO][4942] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="84084ebe1fb5be3ec4fa3fbbb7624ed6694ae248a3801c565cbf41b5191055ad" 
HandleID="k8s-pod-network.84084ebe1fb5be3ec4fa3fbbb7624ed6694ae248a3801c565cbf41b5191055ad" Workload="ci--4452.0.0--n--23b47482b2-k8s-coredns--674b8bbfcf--t2sgm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5010), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4452.0.0-n-23b47482b2", "pod":"coredns-674b8bbfcf-t2sgm", "timestamp":"2025-09-09 05:42:38.240393523 +0000 UTC"}, Hostname:"ci-4452.0.0-n-23b47482b2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:42:38.299628 containerd[1723]: 2025-09-09 05:42:38.240 [INFO][4942] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:42:38.299628 containerd[1723]: 2025-09-09 05:42:38.240 [INFO][4942] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 05:42:38.299628 containerd[1723]: 2025-09-09 05:42:38.240 [INFO][4942] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452.0.0-n-23b47482b2' Sep 9 05:42:38.299628 containerd[1723]: 2025-09-09 05:42:38.246 [INFO][4942] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.84084ebe1fb5be3ec4fa3fbbb7624ed6694ae248a3801c565cbf41b5191055ad" host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:38.299628 containerd[1723]: 2025-09-09 05:42:38.250 [INFO][4942] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:38.299628 containerd[1723]: 2025-09-09 05:42:38.253 [INFO][4942] ipam/ipam.go 511: Trying affinity for 192.168.49.128/26 host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:38.299628 containerd[1723]: 2025-09-09 05:42:38.254 [INFO][4942] ipam/ipam.go 158: Attempting to load block cidr=192.168.49.128/26 host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:38.299628 containerd[1723]: 2025-09-09 05:42:38.255 [INFO][4942] ipam/ipam.go 235: Affinity is confirmed and block has been 
loaded cidr=192.168.49.128/26 host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:38.299868 containerd[1723]: 2025-09-09 05:42:38.255 [INFO][4942] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.49.128/26 handle="k8s-pod-network.84084ebe1fb5be3ec4fa3fbbb7624ed6694ae248a3801c565cbf41b5191055ad" host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:38.299868 containerd[1723]: 2025-09-09 05:42:38.256 [INFO][4942] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.84084ebe1fb5be3ec4fa3fbbb7624ed6694ae248a3801c565cbf41b5191055ad Sep 9 05:42:38.299868 containerd[1723]: 2025-09-09 05:42:38.260 [INFO][4942] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.49.128/26 handle="k8s-pod-network.84084ebe1fb5be3ec4fa3fbbb7624ed6694ae248a3801c565cbf41b5191055ad" host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:38.299868 containerd[1723]: 2025-09-09 05:42:38.271 [INFO][4942] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.49.132/26] block=192.168.49.128/26 handle="k8s-pod-network.84084ebe1fb5be3ec4fa3fbbb7624ed6694ae248a3801c565cbf41b5191055ad" host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:38.299868 containerd[1723]: 2025-09-09 05:42:38.271 [INFO][4942] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.49.132/26] handle="k8s-pod-network.84084ebe1fb5be3ec4fa3fbbb7624ed6694ae248a3801c565cbf41b5191055ad" host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:38.299868 containerd[1723]: 2025-09-09 05:42:38.271 [INFO][4942] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 05:42:38.299868 containerd[1723]: 2025-09-09 05:42:38.271 [INFO][4942] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.49.132/26] IPv6=[] ContainerID="84084ebe1fb5be3ec4fa3fbbb7624ed6694ae248a3801c565cbf41b5191055ad" HandleID="k8s-pod-network.84084ebe1fb5be3ec4fa3fbbb7624ed6694ae248a3801c565cbf41b5191055ad" Workload="ci--4452.0.0--n--23b47482b2-k8s-coredns--674b8bbfcf--t2sgm-eth0" Sep 9 05:42:38.300021 containerd[1723]: 2025-09-09 05:42:38.274 [INFO][4912] cni-plugin/k8s.go 418: Populated endpoint ContainerID="84084ebe1fb5be3ec4fa3fbbb7624ed6694ae248a3801c565cbf41b5191055ad" Namespace="kube-system" Pod="coredns-674b8bbfcf-t2sgm" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-coredns--674b8bbfcf--t2sgm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--23b47482b2-k8s-coredns--674b8bbfcf--t2sgm-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d4ed2dc7-4551-4678-8f20-c675198c7103", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 42, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-23b47482b2", ContainerID:"", Pod:"coredns-674b8bbfcf-t2sgm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.49.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali80d84b08612", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:42:38.300021 containerd[1723]: 2025-09-09 05:42:38.274 [INFO][4912] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.49.132/32] ContainerID="84084ebe1fb5be3ec4fa3fbbb7624ed6694ae248a3801c565cbf41b5191055ad" Namespace="kube-system" Pod="coredns-674b8bbfcf-t2sgm" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-coredns--674b8bbfcf--t2sgm-eth0" Sep 9 05:42:38.300021 containerd[1723]: 2025-09-09 05:42:38.274 [INFO][4912] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali80d84b08612 ContainerID="84084ebe1fb5be3ec4fa3fbbb7624ed6694ae248a3801c565cbf41b5191055ad" Namespace="kube-system" Pod="coredns-674b8bbfcf-t2sgm" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-coredns--674b8bbfcf--t2sgm-eth0" Sep 9 05:42:38.300021 containerd[1723]: 2025-09-09 05:42:38.278 [INFO][4912] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="84084ebe1fb5be3ec4fa3fbbb7624ed6694ae248a3801c565cbf41b5191055ad" Namespace="kube-system" Pod="coredns-674b8bbfcf-t2sgm" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-coredns--674b8bbfcf--t2sgm-eth0" Sep 9 05:42:38.300021 containerd[1723]: 2025-09-09 05:42:38.278 [INFO][4912] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="84084ebe1fb5be3ec4fa3fbbb7624ed6694ae248a3801c565cbf41b5191055ad" Namespace="kube-system" Pod="coredns-674b8bbfcf-t2sgm" 
WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-coredns--674b8bbfcf--t2sgm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--23b47482b2-k8s-coredns--674b8bbfcf--t2sgm-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d4ed2dc7-4551-4678-8f20-c675198c7103", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 42, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-23b47482b2", ContainerID:"84084ebe1fb5be3ec4fa3fbbb7624ed6694ae248a3801c565cbf41b5191055ad", Pod:"coredns-674b8bbfcf-t2sgm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.49.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali80d84b08612", MAC:"16:70:1f:d6:52:ed", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:42:38.300021 
containerd[1723]: 2025-09-09 05:42:38.296 [INFO][4912] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="84084ebe1fb5be3ec4fa3fbbb7624ed6694ae248a3801c565cbf41b5191055ad" Namespace="kube-system" Pod="coredns-674b8bbfcf-t2sgm" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-coredns--674b8bbfcf--t2sgm-eth0" Sep 9 05:42:38.345637 containerd[1723]: time="2025-09-09T05:42:38.345586588Z" level=info msg="TaskExit event in podsandbox handler container_id:\"84e05ce642707feec06a2c468b3d440ec565c729fb177c7a2a581d38cf1107c8\" id:\"f7db4f6313a37421b1a235760a95cfa2f230d468197d458a29774c254e9c9cb3\" pid:4983 exited_at:{seconds:1757396558 nanos:345198113}" Sep 9 05:42:38.400775 systemd-networkd[1551]: cali1ee3adf2f5e: Link UP Sep 9 05:42:38.401862 systemd-networkd[1551]: cali1ee3adf2f5e: Gained carrier Sep 9 05:42:38.414510 containerd[1723]: time="2025-09-09T05:42:38.414472417Z" level=info msg="connecting to shim 84084ebe1fb5be3ec4fa3fbbb7624ed6694ae248a3801c565cbf41b5191055ad" address="unix:///run/containerd/s/94aedaf6f4a1e03ac82c1b75a257e2f3acdcf76ed6dcfa3e353c50649c3d08c8" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:42:38.418507 containerd[1723]: 2025-09-09 05:42:38.200 [INFO][4923] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4452.0.0--n--23b47482b2-k8s-goldmane--54d579b49d--smrln-eth0 goldmane-54d579b49d- calico-system 59a7decc-b070-4036-86c2-61eb36a53dee 808 0 2025-09-09 05:42:13 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4452.0.0-n-23b47482b2 goldmane-54d579b49d-smrln eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1ee3adf2f5e [] [] }} ContainerID="9ea70ef30cae069562532ca9bec82c02e9387af641a9bb3c524fb4f454112995" Namespace="calico-system" 
Pod="goldmane-54d579b49d-smrln" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-goldmane--54d579b49d--smrln-" Sep 9 05:42:38.418507 containerd[1723]: 2025-09-09 05:42:38.200 [INFO][4923] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9ea70ef30cae069562532ca9bec82c02e9387af641a9bb3c524fb4f454112995" Namespace="calico-system" Pod="goldmane-54d579b49d-smrln" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-goldmane--54d579b49d--smrln-eth0" Sep 9 05:42:38.418507 containerd[1723]: 2025-09-09 05:42:38.243 [INFO][4944] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9ea70ef30cae069562532ca9bec82c02e9387af641a9bb3c524fb4f454112995" HandleID="k8s-pod-network.9ea70ef30cae069562532ca9bec82c02e9387af641a9bb3c524fb4f454112995" Workload="ci--4452.0.0--n--23b47482b2-k8s-goldmane--54d579b49d--smrln-eth0" Sep 9 05:42:38.418507 containerd[1723]: 2025-09-09 05:42:38.243 [INFO][4944] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9ea70ef30cae069562532ca9bec82c02e9387af641a9bb3c524fb4f454112995" HandleID="k8s-pod-network.9ea70ef30cae069562532ca9bec82c02e9387af641a9bb3c524fb4f454112995" Workload="ci--4452.0.0--n--23b47482b2-k8s-goldmane--54d579b49d--smrln-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ccff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4452.0.0-n-23b47482b2", "pod":"goldmane-54d579b49d-smrln", "timestamp":"2025-09-09 05:42:38.243369683 +0000 UTC"}, Hostname:"ci-4452.0.0-n-23b47482b2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:42:38.418507 containerd[1723]: 2025-09-09 05:42:38.243 [INFO][4944] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:42:38.418507 containerd[1723]: 2025-09-09 05:42:38.271 [INFO][4944] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 05:42:38.418507 containerd[1723]: 2025-09-09 05:42:38.271 [INFO][4944] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452.0.0-n-23b47482b2' Sep 9 05:42:38.418507 containerd[1723]: 2025-09-09 05:42:38.348 [INFO][4944] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9ea70ef30cae069562532ca9bec82c02e9387af641a9bb3c524fb4f454112995" host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:38.418507 containerd[1723]: 2025-09-09 05:42:38.353 [INFO][4944] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:38.418507 containerd[1723]: 2025-09-09 05:42:38.361 [INFO][4944] ipam/ipam.go 511: Trying affinity for 192.168.49.128/26 host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:38.418507 containerd[1723]: 2025-09-09 05:42:38.364 [INFO][4944] ipam/ipam.go 158: Attempting to load block cidr=192.168.49.128/26 host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:38.418507 containerd[1723]: 2025-09-09 05:42:38.367 [INFO][4944] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.49.128/26 host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:38.418507 containerd[1723]: 2025-09-09 05:42:38.367 [INFO][4944] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.49.128/26 handle="k8s-pod-network.9ea70ef30cae069562532ca9bec82c02e9387af641a9bb3c524fb4f454112995" host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:38.418507 containerd[1723]: 2025-09-09 05:42:38.371 [INFO][4944] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9ea70ef30cae069562532ca9bec82c02e9387af641a9bb3c524fb4f454112995 Sep 9 05:42:38.418507 containerd[1723]: 2025-09-09 05:42:38.377 [INFO][4944] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.49.128/26 handle="k8s-pod-network.9ea70ef30cae069562532ca9bec82c02e9387af641a9bb3c524fb4f454112995" host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:38.418507 containerd[1723]: 2025-09-09 05:42:38.387 [INFO][4944] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.49.133/26] block=192.168.49.128/26 handle="k8s-pod-network.9ea70ef30cae069562532ca9bec82c02e9387af641a9bb3c524fb4f454112995" host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:38.418507 containerd[1723]: 2025-09-09 05:42:38.387 [INFO][4944] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.49.133/26] handle="k8s-pod-network.9ea70ef30cae069562532ca9bec82c02e9387af641a9bb3c524fb4f454112995" host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:38.418507 containerd[1723]: 2025-09-09 05:42:38.387 [INFO][4944] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 05:42:38.418507 containerd[1723]: 2025-09-09 05:42:38.387 [INFO][4944] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.49.133/26] IPv6=[] ContainerID="9ea70ef30cae069562532ca9bec82c02e9387af641a9bb3c524fb4f454112995" HandleID="k8s-pod-network.9ea70ef30cae069562532ca9bec82c02e9387af641a9bb3c524fb4f454112995" Workload="ci--4452.0.0--n--23b47482b2-k8s-goldmane--54d579b49d--smrln-eth0" Sep 9 05:42:38.419262 containerd[1723]: 2025-09-09 05:42:38.397 [INFO][4923] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9ea70ef30cae069562532ca9bec82c02e9387af641a9bb3c524fb4f454112995" Namespace="calico-system" Pod="goldmane-54d579b49d-smrln" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-goldmane--54d579b49d--smrln-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--23b47482b2-k8s-goldmane--54d579b49d--smrln-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"59a7decc-b070-4036-86c2-61eb36a53dee", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 42, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-23b47482b2", ContainerID:"", Pod:"goldmane-54d579b49d-smrln", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.49.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1ee3adf2f5e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:42:38.419262 containerd[1723]: 2025-09-09 05:42:38.398 [INFO][4923] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.49.133/32] ContainerID="9ea70ef30cae069562532ca9bec82c02e9387af641a9bb3c524fb4f454112995" Namespace="calico-system" Pod="goldmane-54d579b49d-smrln" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-goldmane--54d579b49d--smrln-eth0" Sep 9 05:42:38.419262 containerd[1723]: 2025-09-09 05:42:38.398 [INFO][4923] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1ee3adf2f5e ContainerID="9ea70ef30cae069562532ca9bec82c02e9387af641a9bb3c524fb4f454112995" Namespace="calico-system" Pod="goldmane-54d579b49d-smrln" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-goldmane--54d579b49d--smrln-eth0" Sep 9 05:42:38.419262 containerd[1723]: 2025-09-09 05:42:38.402 [INFO][4923] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9ea70ef30cae069562532ca9bec82c02e9387af641a9bb3c524fb4f454112995" Namespace="calico-system" Pod="goldmane-54d579b49d-smrln" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-goldmane--54d579b49d--smrln-eth0" Sep 9 05:42:38.419262 containerd[1723]: 2025-09-09 05:42:38.402 [INFO][4923] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9ea70ef30cae069562532ca9bec82c02e9387af641a9bb3c524fb4f454112995" Namespace="calico-system" Pod="goldmane-54d579b49d-smrln" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-goldmane--54d579b49d--smrln-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--23b47482b2-k8s-goldmane--54d579b49d--smrln-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"59a7decc-b070-4036-86c2-61eb36a53dee", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 42, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-23b47482b2", ContainerID:"9ea70ef30cae069562532ca9bec82c02e9387af641a9bb3c524fb4f454112995", Pod:"goldmane-54d579b49d-smrln", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.49.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1ee3adf2f5e", MAC:"ce:45:df:72:e0:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:42:38.419262 containerd[1723]: 2025-09-09 05:42:38.414 [INFO][4923] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="9ea70ef30cae069562532ca9bec82c02e9387af641a9bb3c524fb4f454112995" Namespace="calico-system" Pod="goldmane-54d579b49d-smrln" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-goldmane--54d579b49d--smrln-eth0" Sep 9 05:42:38.440941 systemd[1]: Started cri-containerd-84084ebe1fb5be3ec4fa3fbbb7624ed6694ae248a3801c565cbf41b5191055ad.scope - libcontainer container 84084ebe1fb5be3ec4fa3fbbb7624ed6694ae248a3801c565cbf41b5191055ad. Sep 9 05:42:38.468881 containerd[1723]: time="2025-09-09T05:42:38.468614736Z" level=info msg="connecting to shim 9ea70ef30cae069562532ca9bec82c02e9387af641a9bb3c524fb4f454112995" address="unix:///run/containerd/s/cc85f85ae13035c5a1cd7e2b2806fdaf215954343b36af08eda731e211b3e031" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:42:38.496079 systemd[1]: Started cri-containerd-9ea70ef30cae069562532ca9bec82c02e9387af641a9bb3c524fb4f454112995.scope - libcontainer container 9ea70ef30cae069562532ca9bec82c02e9387af641a9bb3c524fb4f454112995. Sep 9 05:42:38.509991 systemd-networkd[1551]: calicfb19b368a7: Link UP Sep 9 05:42:38.511381 systemd-networkd[1551]: calicfb19b368a7: Gained carrier Sep 9 05:42:38.539972 containerd[1723]: 2025-09-09 05:42:38.194 [INFO][4902] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4452.0.0--n--23b47482b2-k8s-csi--node--driver--kpfkf-eth0 csi-node-driver- calico-system 6ce6737c-5cbb-4d00-b926-8ff7cccb578b 702 0 2025-09-09 05:42:13 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4452.0.0-n-23b47482b2 csi-node-driver-kpfkf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calicfb19b368a7 [] [] }} 
ContainerID="43b19111f45fbee383506a84541d815166b4bad7f19af9a12c8b0606079db9b7" Namespace="calico-system" Pod="csi-node-driver-kpfkf" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-csi--node--driver--kpfkf-" Sep 9 05:42:38.539972 containerd[1723]: 2025-09-09 05:42:38.194 [INFO][4902] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="43b19111f45fbee383506a84541d815166b4bad7f19af9a12c8b0606079db9b7" Namespace="calico-system" Pod="csi-node-driver-kpfkf" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-csi--node--driver--kpfkf-eth0" Sep 9 05:42:38.539972 containerd[1723]: 2025-09-09 05:42:38.249 [INFO][4940] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="43b19111f45fbee383506a84541d815166b4bad7f19af9a12c8b0606079db9b7" HandleID="k8s-pod-network.43b19111f45fbee383506a84541d815166b4bad7f19af9a12c8b0606079db9b7" Workload="ci--4452.0.0--n--23b47482b2-k8s-csi--node--driver--kpfkf-eth0" Sep 9 05:42:38.539972 containerd[1723]: 2025-09-09 05:42:38.249 [INFO][4940] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="43b19111f45fbee383506a84541d815166b4bad7f19af9a12c8b0606079db9b7" HandleID="k8s-pod-network.43b19111f45fbee383506a84541d815166b4bad7f19af9a12c8b0606079db9b7" Workload="ci--4452.0.0--n--23b47482b2-k8s-csi--node--driver--kpfkf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5180), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4452.0.0-n-23b47482b2", "pod":"csi-node-driver-kpfkf", "timestamp":"2025-09-09 05:42:38.249297574 +0000 UTC"}, Hostname:"ci-4452.0.0-n-23b47482b2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:42:38.539972 containerd[1723]: 2025-09-09 05:42:38.249 [INFO][4940] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 9 05:42:38.539972 containerd[1723]: 2025-09-09 05:42:38.387 [INFO][4940] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 05:42:38.539972 containerd[1723]: 2025-09-09 05:42:38.387 [INFO][4940] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452.0.0-n-23b47482b2' Sep 9 05:42:38.539972 containerd[1723]: 2025-09-09 05:42:38.448 [INFO][4940] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.43b19111f45fbee383506a84541d815166b4bad7f19af9a12c8b0606079db9b7" host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:38.539972 containerd[1723]: 2025-09-09 05:42:38.453 [INFO][4940] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:38.539972 containerd[1723]: 2025-09-09 05:42:38.458 [INFO][4940] ipam/ipam.go 511: Trying affinity for 192.168.49.128/26 host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:38.539972 containerd[1723]: 2025-09-09 05:42:38.461 [INFO][4940] ipam/ipam.go 158: Attempting to load block cidr=192.168.49.128/26 host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:38.539972 containerd[1723]: 2025-09-09 05:42:38.464 [INFO][4940] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.49.128/26 host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:38.539972 containerd[1723]: 2025-09-09 05:42:38.464 [INFO][4940] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.49.128/26 handle="k8s-pod-network.43b19111f45fbee383506a84541d815166b4bad7f19af9a12c8b0606079db9b7" host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:38.539972 containerd[1723]: 2025-09-09 05:42:38.466 [INFO][4940] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.43b19111f45fbee383506a84541d815166b4bad7f19af9a12c8b0606079db9b7 Sep 9 05:42:38.539972 containerd[1723]: 2025-09-09 05:42:38.476 [INFO][4940] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.49.128/26 handle="k8s-pod-network.43b19111f45fbee383506a84541d815166b4bad7f19af9a12c8b0606079db9b7" 
host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:38.539972 containerd[1723]: 2025-09-09 05:42:38.488 [INFO][4940] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.49.134/26] block=192.168.49.128/26 handle="k8s-pod-network.43b19111f45fbee383506a84541d815166b4bad7f19af9a12c8b0606079db9b7" host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:38.539972 containerd[1723]: 2025-09-09 05:42:38.488 [INFO][4940] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.49.134/26] handle="k8s-pod-network.43b19111f45fbee383506a84541d815166b4bad7f19af9a12c8b0606079db9b7" host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:38.539972 containerd[1723]: 2025-09-09 05:42:38.488 [INFO][4940] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 05:42:38.539972 containerd[1723]: 2025-09-09 05:42:38.488 [INFO][4940] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.49.134/26] IPv6=[] ContainerID="43b19111f45fbee383506a84541d815166b4bad7f19af9a12c8b0606079db9b7" HandleID="k8s-pod-network.43b19111f45fbee383506a84541d815166b4bad7f19af9a12c8b0606079db9b7" Workload="ci--4452.0.0--n--23b47482b2-k8s-csi--node--driver--kpfkf-eth0" Sep 9 05:42:38.540478 containerd[1723]: 2025-09-09 05:42:38.494 [INFO][4902] cni-plugin/k8s.go 418: Populated endpoint ContainerID="43b19111f45fbee383506a84541d815166b4bad7f19af9a12c8b0606079db9b7" Namespace="calico-system" Pod="csi-node-driver-kpfkf" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-csi--node--driver--kpfkf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--23b47482b2-k8s-csi--node--driver--kpfkf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6ce6737c-5cbb-4d00-b926-8ff7cccb578b", ResourceVersion:"702", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 42, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-23b47482b2", ContainerID:"", Pod:"csi-node-driver-kpfkf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.49.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicfb19b368a7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:42:38.540478 containerd[1723]: 2025-09-09 05:42:38.494 [INFO][4902] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.49.134/32] ContainerID="43b19111f45fbee383506a84541d815166b4bad7f19af9a12c8b0606079db9b7" Namespace="calico-system" Pod="csi-node-driver-kpfkf" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-csi--node--driver--kpfkf-eth0" Sep 9 05:42:38.540478 containerd[1723]: 2025-09-09 05:42:38.495 [INFO][4902] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicfb19b368a7 ContainerID="43b19111f45fbee383506a84541d815166b4bad7f19af9a12c8b0606079db9b7" Namespace="calico-system" Pod="csi-node-driver-kpfkf" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-csi--node--driver--kpfkf-eth0" Sep 9 05:42:38.540478 containerd[1723]: 2025-09-09 05:42:38.512 [INFO][4902] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="43b19111f45fbee383506a84541d815166b4bad7f19af9a12c8b0606079db9b7" Namespace="calico-system" 
Pod="csi-node-driver-kpfkf" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-csi--node--driver--kpfkf-eth0" Sep 9 05:42:38.540478 containerd[1723]: 2025-09-09 05:42:38.515 [INFO][4902] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="43b19111f45fbee383506a84541d815166b4bad7f19af9a12c8b0606079db9b7" Namespace="calico-system" Pod="csi-node-driver-kpfkf" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-csi--node--driver--kpfkf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--23b47482b2-k8s-csi--node--driver--kpfkf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6ce6737c-5cbb-4d00-b926-8ff7cccb578b", ResourceVersion:"702", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 42, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-23b47482b2", ContainerID:"43b19111f45fbee383506a84541d815166b4bad7f19af9a12c8b0606079db9b7", Pod:"csi-node-driver-kpfkf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.49.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicfb19b368a7", MAC:"aa:fc:2e:16:f2:39", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:42:38.540478 containerd[1723]: 2025-09-09 05:42:38.536 [INFO][4902] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="43b19111f45fbee383506a84541d815166b4bad7f19af9a12c8b0606079db9b7" Namespace="calico-system" Pod="csi-node-driver-kpfkf" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-csi--node--driver--kpfkf-eth0" Sep 9 05:42:38.544685 containerd[1723]: time="2025-09-09T05:42:38.544282906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t2sgm,Uid:d4ed2dc7-4551-4678-8f20-c675198c7103,Namespace:kube-system,Attempt:0,} returns sandbox id \"84084ebe1fb5be3ec4fa3fbbb7624ed6694ae248a3801c565cbf41b5191055ad\"" Sep 9 05:42:38.554459 containerd[1723]: time="2025-09-09T05:42:38.554056570Z" level=info msg="CreateContainer within sandbox \"84084ebe1fb5be3ec4fa3fbbb7624ed6694ae248a3801c565cbf41b5191055ad\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 05:42:38.580133 containerd[1723]: time="2025-09-09T05:42:38.580096604Z" level=info msg="Container fa0caa63aedf782e32709c1913bb6afd27f6e0e1712e5bfc30e46c5069b3d33d: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:42:38.585037 containerd[1723]: time="2025-09-09T05:42:38.585014504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-smrln,Uid:59a7decc-b070-4036-86c2-61eb36a53dee,Namespace:calico-system,Attempt:0,} returns sandbox id \"9ea70ef30cae069562532ca9bec82c02e9387af641a9bb3c524fb4f454112995\"" Sep 9 05:42:38.600362 containerd[1723]: time="2025-09-09T05:42:38.600250491Z" level=info msg="CreateContainer within sandbox \"84084ebe1fb5be3ec4fa3fbbb7624ed6694ae248a3801c565cbf41b5191055ad\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fa0caa63aedf782e32709c1913bb6afd27f6e0e1712e5bfc30e46c5069b3d33d\"" Sep 9 05:42:38.601144 containerd[1723]: time="2025-09-09T05:42:38.601113371Z" 
level=info msg="connecting to shim 43b19111f45fbee383506a84541d815166b4bad7f19af9a12c8b0606079db9b7" address="unix:///run/containerd/s/623df805c5553a3b2cadf2d2560a268596615dd96de0fe05c7673c9c2613fd90" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:42:38.601805 containerd[1723]: time="2025-09-09T05:42:38.601723246Z" level=info msg="StartContainer for \"fa0caa63aedf782e32709c1913bb6afd27f6e0e1712e5bfc30e46c5069b3d33d\"" Sep 9 05:42:38.603407 containerd[1723]: time="2025-09-09T05:42:38.603331939Z" level=info msg="connecting to shim fa0caa63aedf782e32709c1913bb6afd27f6e0e1712e5bfc30e46c5069b3d33d" address="unix:///run/containerd/s/94aedaf6f4a1e03ac82c1b75a257e2f3acdcf76ed6dcfa3e353c50649c3d08c8" protocol=ttrpc version=3 Sep 9 05:42:38.624938 systemd[1]: Started cri-containerd-43b19111f45fbee383506a84541d815166b4bad7f19af9a12c8b0606079db9b7.scope - libcontainer container 43b19111f45fbee383506a84541d815166b4bad7f19af9a12c8b0606079db9b7. Sep 9 05:42:38.626178 systemd[1]: Started cri-containerd-fa0caa63aedf782e32709c1913bb6afd27f6e0e1712e5bfc30e46c5069b3d33d.scope - libcontainer container fa0caa63aedf782e32709c1913bb6afd27f6e0e1712e5bfc30e46c5069b3d33d. 
Sep 9 05:42:38.670628 containerd[1723]: time="2025-09-09T05:42:38.670453475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kpfkf,Uid:6ce6737c-5cbb-4d00-b926-8ff7cccb578b,Namespace:calico-system,Attempt:0,} returns sandbox id \"43b19111f45fbee383506a84541d815166b4bad7f19af9a12c8b0606079db9b7\"" Sep 9 05:42:38.673801 containerd[1723]: time="2025-09-09T05:42:38.672701231Z" level=info msg="StartContainer for \"fa0caa63aedf782e32709c1913bb6afd27f6e0e1712e5bfc30e46c5069b3d33d\" returns successfully" Sep 9 05:42:39.113016 containerd[1723]: time="2025-09-09T05:42:39.112419436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x75ls,Uid:e844f59d-88d0-4ca4-bf84-35ef219a5745,Namespace:kube-system,Attempt:0,}" Sep 9 05:42:39.113016 containerd[1723]: time="2025-09-09T05:42:39.112432733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4c6f5df5-vfjwk,Uid:0bafd0a8-d202-4cdc-b16c-862a0665c3a9,Namespace:calico-apiserver,Attempt:0,}" Sep 9 05:42:39.242121 systemd-networkd[1551]: cali9a0974919da: Gained IPv6LL Sep 9 05:42:39.291458 kubelet[3195]: I0909 05:42:39.291309 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-t2sgm" podStartSLOduration=38.291291675 podStartE2EDuration="38.291291675s" podCreationTimestamp="2025-09-09 05:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:42:39.290488861 +0000 UTC m=+45.273441323" watchObservedRunningTime="2025-09-09 05:42:39.291291675 +0000 UTC m=+45.274244133" Sep 9 05:42:39.496589 systemd-networkd[1551]: cali0ec43970f48: Link UP Sep 9 05:42:39.498125 systemd-networkd[1551]: cali0ec43970f48: Gained carrier Sep 9 05:42:39.518208 containerd[1723]: 2025-09-09 05:42:39.396 [INFO][5190] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4452.0.0--n--23b47482b2-k8s-calico--apiserver--7d4c6f5df5--vfjwk-eth0 calico-apiserver-7d4c6f5df5- calico-apiserver 0bafd0a8-d202-4cdc-b16c-862a0665c3a9 811 0 2025-09-09 05:42:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d4c6f5df5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4452.0.0-n-23b47482b2 calico-apiserver-7d4c6f5df5-vfjwk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0ec43970f48 [] [] }} ContainerID="f3711f03a95c2bc388d73c7d66bfd405cd39fdcc4d88197cfcd25b828d019a44" Namespace="calico-apiserver" Pod="calico-apiserver-7d4c6f5df5-vfjwk" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-calico--apiserver--7d4c6f5df5--vfjwk-" Sep 9 05:42:39.518208 containerd[1723]: 2025-09-09 05:42:39.397 [INFO][5190] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f3711f03a95c2bc388d73c7d66bfd405cd39fdcc4d88197cfcd25b828d019a44" Namespace="calico-apiserver" Pod="calico-apiserver-7d4c6f5df5-vfjwk" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-calico--apiserver--7d4c6f5df5--vfjwk-eth0" Sep 9 05:42:39.518208 containerd[1723]: 2025-09-09 05:42:39.440 [INFO][5214] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f3711f03a95c2bc388d73c7d66bfd405cd39fdcc4d88197cfcd25b828d019a44" HandleID="k8s-pod-network.f3711f03a95c2bc388d73c7d66bfd405cd39fdcc4d88197cfcd25b828d019a44" Workload="ci--4452.0.0--n--23b47482b2-k8s-calico--apiserver--7d4c6f5df5--vfjwk-eth0" Sep 9 05:42:39.518208 containerd[1723]: 2025-09-09 05:42:39.440 [INFO][5214] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f3711f03a95c2bc388d73c7d66bfd405cd39fdcc4d88197cfcd25b828d019a44" HandleID="k8s-pod-network.f3711f03a95c2bc388d73c7d66bfd405cd39fdcc4d88197cfcd25b828d019a44" 
Workload="ci--4452.0.0--n--23b47482b2-k8s-calico--apiserver--7d4c6f5df5--vfjwk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4310), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4452.0.0-n-23b47482b2", "pod":"calico-apiserver-7d4c6f5df5-vfjwk", "timestamp":"2025-09-09 05:42:39.440035826 +0000 UTC"}, Hostname:"ci-4452.0.0-n-23b47482b2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:42:39.518208 containerd[1723]: 2025-09-09 05:42:39.440 [INFO][5214] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:42:39.518208 containerd[1723]: 2025-09-09 05:42:39.440 [INFO][5214] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 05:42:39.518208 containerd[1723]: 2025-09-09 05:42:39.440 [INFO][5214] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452.0.0-n-23b47482b2' Sep 9 05:42:39.518208 containerd[1723]: 2025-09-09 05:42:39.449 [INFO][5214] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f3711f03a95c2bc388d73c7d66bfd405cd39fdcc4d88197cfcd25b828d019a44" host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:39.518208 containerd[1723]: 2025-09-09 05:42:39.454 [INFO][5214] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:39.518208 containerd[1723]: 2025-09-09 05:42:39.459 [INFO][5214] ipam/ipam.go 511: Trying affinity for 192.168.49.128/26 host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:39.518208 containerd[1723]: 2025-09-09 05:42:39.461 [INFO][5214] ipam/ipam.go 158: Attempting to load block cidr=192.168.49.128/26 host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:39.518208 containerd[1723]: 2025-09-09 05:42:39.463 [INFO][5214] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.49.128/26 host="ci-4452.0.0-n-23b47482b2" Sep 9 
05:42:39.518208 containerd[1723]: 2025-09-09 05:42:39.463 [INFO][5214] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.49.128/26 handle="k8s-pod-network.f3711f03a95c2bc388d73c7d66bfd405cd39fdcc4d88197cfcd25b828d019a44" host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:39.518208 containerd[1723]: 2025-09-09 05:42:39.465 [INFO][5214] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f3711f03a95c2bc388d73c7d66bfd405cd39fdcc4d88197cfcd25b828d019a44 Sep 9 05:42:39.518208 containerd[1723]: 2025-09-09 05:42:39.473 [INFO][5214] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.49.128/26 handle="k8s-pod-network.f3711f03a95c2bc388d73c7d66bfd405cd39fdcc4d88197cfcd25b828d019a44" host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:39.518208 containerd[1723]: 2025-09-09 05:42:39.483 [INFO][5214] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.49.135/26] block=192.168.49.128/26 handle="k8s-pod-network.f3711f03a95c2bc388d73c7d66bfd405cd39fdcc4d88197cfcd25b828d019a44" host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:39.518208 containerd[1723]: 2025-09-09 05:42:39.483 [INFO][5214] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.49.135/26] handle="k8s-pod-network.f3711f03a95c2bc388d73c7d66bfd405cd39fdcc4d88197cfcd25b828d019a44" host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:39.518208 containerd[1723]: 2025-09-09 05:42:39.483 [INFO][5214] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 05:42:39.518208 containerd[1723]: 2025-09-09 05:42:39.483 [INFO][5214] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.49.135/26] IPv6=[] ContainerID="f3711f03a95c2bc388d73c7d66bfd405cd39fdcc4d88197cfcd25b828d019a44" HandleID="k8s-pod-network.f3711f03a95c2bc388d73c7d66bfd405cd39fdcc4d88197cfcd25b828d019a44" Workload="ci--4452.0.0--n--23b47482b2-k8s-calico--apiserver--7d4c6f5df5--vfjwk-eth0" Sep 9 05:42:39.519236 containerd[1723]: 2025-09-09 05:42:39.488 [INFO][5190] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f3711f03a95c2bc388d73c7d66bfd405cd39fdcc4d88197cfcd25b828d019a44" Namespace="calico-apiserver" Pod="calico-apiserver-7d4c6f5df5-vfjwk" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-calico--apiserver--7d4c6f5df5--vfjwk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--23b47482b2-k8s-calico--apiserver--7d4c6f5df5--vfjwk-eth0", GenerateName:"calico-apiserver-7d4c6f5df5-", Namespace:"calico-apiserver", SelfLink:"", UID:"0bafd0a8-d202-4cdc-b16c-862a0665c3a9", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 42, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4c6f5df5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-23b47482b2", ContainerID:"", Pod:"calico-apiserver-7d4c6f5df5-vfjwk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.49.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0ec43970f48", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:42:39.519236 containerd[1723]: 2025-09-09 05:42:39.488 [INFO][5190] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.49.135/32] ContainerID="f3711f03a95c2bc388d73c7d66bfd405cd39fdcc4d88197cfcd25b828d019a44" Namespace="calico-apiserver" Pod="calico-apiserver-7d4c6f5df5-vfjwk" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-calico--apiserver--7d4c6f5df5--vfjwk-eth0" Sep 9 05:42:39.519236 containerd[1723]: 2025-09-09 05:42:39.488 [INFO][5190] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0ec43970f48 ContainerID="f3711f03a95c2bc388d73c7d66bfd405cd39fdcc4d88197cfcd25b828d019a44" Namespace="calico-apiserver" Pod="calico-apiserver-7d4c6f5df5-vfjwk" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-calico--apiserver--7d4c6f5df5--vfjwk-eth0" Sep 9 05:42:39.519236 containerd[1723]: 2025-09-09 05:42:39.498 [INFO][5190] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f3711f03a95c2bc388d73c7d66bfd405cd39fdcc4d88197cfcd25b828d019a44" Namespace="calico-apiserver" Pod="calico-apiserver-7d4c6f5df5-vfjwk" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-calico--apiserver--7d4c6f5df5--vfjwk-eth0" Sep 9 05:42:39.519236 containerd[1723]: 2025-09-09 05:42:39.499 [INFO][5190] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f3711f03a95c2bc388d73c7d66bfd405cd39fdcc4d88197cfcd25b828d019a44" Namespace="calico-apiserver" Pod="calico-apiserver-7d4c6f5df5-vfjwk" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-calico--apiserver--7d4c6f5df5--vfjwk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--23b47482b2-k8s-calico--apiserver--7d4c6f5df5--vfjwk-eth0", GenerateName:"calico-apiserver-7d4c6f5df5-", Namespace:"calico-apiserver", SelfLink:"", UID:"0bafd0a8-d202-4cdc-b16c-862a0665c3a9", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 42, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4c6f5df5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-23b47482b2", ContainerID:"f3711f03a95c2bc388d73c7d66bfd405cd39fdcc4d88197cfcd25b828d019a44", Pod:"calico-apiserver-7d4c6f5df5-vfjwk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.49.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0ec43970f48", MAC:"4e:5e:30:04:3b:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:42:39.519236 containerd[1723]: 2025-09-09 05:42:39.514 [INFO][5190] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f3711f03a95c2bc388d73c7d66bfd405cd39fdcc4d88197cfcd25b828d019a44" Namespace="calico-apiserver" Pod="calico-apiserver-7d4c6f5df5-vfjwk" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-calico--apiserver--7d4c6f5df5--vfjwk-eth0" Sep 9 05:42:39.585954 containerd[1723]: time="2025-09-09T05:42:39.585853988Z" level=info 
msg="connecting to shim f3711f03a95c2bc388d73c7d66bfd405cd39fdcc4d88197cfcd25b828d019a44" address="unix:///run/containerd/s/e88a930b0afea50c673d9e9251858a88f8739e7a7003835cb30c7883192d16aa" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:42:39.612952 systemd-networkd[1551]: cali86e2a1e089f: Link UP Sep 9 05:42:39.618131 systemd-networkd[1551]: cali86e2a1e089f: Gained carrier Sep 9 05:42:39.640816 containerd[1723]: 2025-09-09 05:42:39.398 [INFO][5183] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4452.0.0--n--23b47482b2-k8s-coredns--674b8bbfcf--x75ls-eth0 coredns-674b8bbfcf- kube-system e844f59d-88d0-4ca4-bf84-35ef219a5745 803 0 2025-09-09 05:42:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4452.0.0-n-23b47482b2 coredns-674b8bbfcf-x75ls eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali86e2a1e089f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8279ce1d82eab4732d7e0c43d977594c5446970a9c03342e76ccaf0cbfd5c551" Namespace="kube-system" Pod="coredns-674b8bbfcf-x75ls" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-coredns--674b8bbfcf--x75ls-" Sep 9 05:42:39.640816 containerd[1723]: 2025-09-09 05:42:39.399 [INFO][5183] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8279ce1d82eab4732d7e0c43d977594c5446970a9c03342e76ccaf0cbfd5c551" Namespace="kube-system" Pod="coredns-674b8bbfcf-x75ls" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-coredns--674b8bbfcf--x75ls-eth0" Sep 9 05:42:39.640816 containerd[1723]: 2025-09-09 05:42:39.442 [INFO][5216] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8279ce1d82eab4732d7e0c43d977594c5446970a9c03342e76ccaf0cbfd5c551" HandleID="k8s-pod-network.8279ce1d82eab4732d7e0c43d977594c5446970a9c03342e76ccaf0cbfd5c551" 
Workload="ci--4452.0.0--n--23b47482b2-k8s-coredns--674b8bbfcf--x75ls-eth0" Sep 9 05:42:39.640816 containerd[1723]: 2025-09-09 05:42:39.442 [INFO][5216] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8279ce1d82eab4732d7e0c43d977594c5446970a9c03342e76ccaf0cbfd5c551" HandleID="k8s-pod-network.8279ce1d82eab4732d7e0c43d977594c5446970a9c03342e76ccaf0cbfd5c551" Workload="ci--4452.0.0--n--23b47482b2-k8s-coredns--674b8bbfcf--x75ls-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4452.0.0-n-23b47482b2", "pod":"coredns-674b8bbfcf-x75ls", "timestamp":"2025-09-09 05:42:39.442541455 +0000 UTC"}, Hostname:"ci-4452.0.0-n-23b47482b2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:42:39.640816 containerd[1723]: 2025-09-09 05:42:39.442 [INFO][5216] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:42:39.640816 containerd[1723]: 2025-09-09 05:42:39.483 [INFO][5216] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 05:42:39.640816 containerd[1723]: 2025-09-09 05:42:39.483 [INFO][5216] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452.0.0-n-23b47482b2' Sep 9 05:42:39.640816 containerd[1723]: 2025-09-09 05:42:39.550 [INFO][5216] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8279ce1d82eab4732d7e0c43d977594c5446970a9c03342e76ccaf0cbfd5c551" host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:39.640816 containerd[1723]: 2025-09-09 05:42:39.561 [INFO][5216] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:39.640816 containerd[1723]: 2025-09-09 05:42:39.570 [INFO][5216] ipam/ipam.go 511: Trying affinity for 192.168.49.128/26 host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:39.640816 containerd[1723]: 2025-09-09 05:42:39.573 [INFO][5216] ipam/ipam.go 158: Attempting to load block cidr=192.168.49.128/26 host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:39.640816 containerd[1723]: 2025-09-09 05:42:39.577 [INFO][5216] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.49.128/26 host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:39.640816 containerd[1723]: 2025-09-09 05:42:39.577 [INFO][5216] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.49.128/26 handle="k8s-pod-network.8279ce1d82eab4732d7e0c43d977594c5446970a9c03342e76ccaf0cbfd5c551" host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:39.640816 containerd[1723]: 2025-09-09 05:42:39.578 [INFO][5216] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8279ce1d82eab4732d7e0c43d977594c5446970a9c03342e76ccaf0cbfd5c551 Sep 9 05:42:39.640816 containerd[1723]: 2025-09-09 05:42:39.587 [INFO][5216] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.49.128/26 handle="k8s-pod-network.8279ce1d82eab4732d7e0c43d977594c5446970a9c03342e76ccaf0cbfd5c551" host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:39.640816 containerd[1723]: 2025-09-09 05:42:39.599 [INFO][5216] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.49.136/26] block=192.168.49.128/26 handle="k8s-pod-network.8279ce1d82eab4732d7e0c43d977594c5446970a9c03342e76ccaf0cbfd5c551" host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:39.640816 containerd[1723]: 2025-09-09 05:42:39.600 [INFO][5216] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.49.136/26] handle="k8s-pod-network.8279ce1d82eab4732d7e0c43d977594c5446970a9c03342e76ccaf0cbfd5c551" host="ci-4452.0.0-n-23b47482b2" Sep 9 05:42:39.640816 containerd[1723]: 2025-09-09 05:42:39.600 [INFO][5216] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 05:42:39.640816 containerd[1723]: 2025-09-09 05:42:39.601 [INFO][5216] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.49.136/26] IPv6=[] ContainerID="8279ce1d82eab4732d7e0c43d977594c5446970a9c03342e76ccaf0cbfd5c551" HandleID="k8s-pod-network.8279ce1d82eab4732d7e0c43d977594c5446970a9c03342e76ccaf0cbfd5c551" Workload="ci--4452.0.0--n--23b47482b2-k8s-coredns--674b8bbfcf--x75ls-eth0" Sep 9 05:42:39.641424 containerd[1723]: 2025-09-09 05:42:39.605 [INFO][5183] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8279ce1d82eab4732d7e0c43d977594c5446970a9c03342e76ccaf0cbfd5c551" Namespace="kube-system" Pod="coredns-674b8bbfcf-x75ls" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-coredns--674b8bbfcf--x75ls-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--23b47482b2-k8s-coredns--674b8bbfcf--x75ls-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e844f59d-88d0-4ca4-bf84-35ef219a5745", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 42, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-23b47482b2", ContainerID:"", Pod:"coredns-674b8bbfcf-x75ls", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.49.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86e2a1e089f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:42:39.641424 containerd[1723]: 2025-09-09 05:42:39.605 [INFO][5183] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.49.136/32] ContainerID="8279ce1d82eab4732d7e0c43d977594c5446970a9c03342e76ccaf0cbfd5c551" Namespace="kube-system" Pod="coredns-674b8bbfcf-x75ls" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-coredns--674b8bbfcf--x75ls-eth0" Sep 9 05:42:39.641424 containerd[1723]: 2025-09-09 05:42:39.605 [INFO][5183] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali86e2a1e089f ContainerID="8279ce1d82eab4732d7e0c43d977594c5446970a9c03342e76ccaf0cbfd5c551" Namespace="kube-system" Pod="coredns-674b8bbfcf-x75ls" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-coredns--674b8bbfcf--x75ls-eth0" Sep 9 05:42:39.641424 containerd[1723]: 2025-09-09 05:42:39.619 [INFO][5183] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8279ce1d82eab4732d7e0c43d977594c5446970a9c03342e76ccaf0cbfd5c551" Namespace="kube-system" Pod="coredns-674b8bbfcf-x75ls" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-coredns--674b8bbfcf--x75ls-eth0" Sep 9 05:42:39.641424 containerd[1723]: 2025-09-09 05:42:39.619 [INFO][5183] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8279ce1d82eab4732d7e0c43d977594c5446970a9c03342e76ccaf0cbfd5c551" Namespace="kube-system" Pod="coredns-674b8bbfcf-x75ls" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-coredns--674b8bbfcf--x75ls-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--23b47482b2-k8s-coredns--674b8bbfcf--x75ls-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e844f59d-88d0-4ca4-bf84-35ef219a5745", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 42, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-23b47482b2", ContainerID:"8279ce1d82eab4732d7e0c43d977594c5446970a9c03342e76ccaf0cbfd5c551", Pod:"coredns-674b8bbfcf-x75ls", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.49.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86e2a1e089f", MAC:"b2:52:99:de:57:22", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:42:39.641424 containerd[1723]: 2025-09-09 05:42:39.635 [INFO][5183] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8279ce1d82eab4732d7e0c43d977594c5446970a9c03342e76ccaf0cbfd5c551" Namespace="kube-system" Pod="coredns-674b8bbfcf-x75ls" WorkloadEndpoint="ci--4452.0.0--n--23b47482b2-k8s-coredns--674b8bbfcf--x75ls-eth0" Sep 9 05:42:39.654959 systemd[1]: Started cri-containerd-f3711f03a95c2bc388d73c7d66bfd405cd39fdcc4d88197cfcd25b828d019a44.scope - libcontainer container f3711f03a95c2bc388d73c7d66bfd405cd39fdcc4d88197cfcd25b828d019a44. Sep 9 05:42:39.690024 systemd-networkd[1551]: cali1ee3adf2f5e: Gained IPv6LL Sep 9 05:42:39.708145 containerd[1723]: time="2025-09-09T05:42:39.708110372Z" level=info msg="connecting to shim 8279ce1d82eab4732d7e0c43d977594c5446970a9c03342e76ccaf0cbfd5c551" address="unix:///run/containerd/s/de6d38e1130979af7b0e6f2b999e781b34fbf3825cbb4923c891acdb30ad5b67" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:42:39.746944 systemd[1]: Started cri-containerd-8279ce1d82eab4732d7e0c43d977594c5446970a9c03342e76ccaf0cbfd5c551.scope - libcontainer container 8279ce1d82eab4732d7e0c43d977594c5446970a9c03342e76ccaf0cbfd5c551. 
Sep 9 05:42:39.781529 containerd[1723]: time="2025-09-09T05:42:39.781476188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4c6f5df5-vfjwk,Uid:0bafd0a8-d202-4cdc-b16c-862a0665c3a9,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f3711f03a95c2bc388d73c7d66bfd405cd39fdcc4d88197cfcd25b828d019a44\"" Sep 9 05:42:39.819084 containerd[1723]: time="2025-09-09T05:42:39.819056156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x75ls,Uid:e844f59d-88d0-4ca4-bf84-35ef219a5745,Namespace:kube-system,Attempt:0,} returns sandbox id \"8279ce1d82eab4732d7e0c43d977594c5446970a9c03342e76ccaf0cbfd5c551\"" Sep 9 05:42:39.828151 containerd[1723]: time="2025-09-09T05:42:39.828089961Z" level=info msg="CreateContainer within sandbox \"8279ce1d82eab4732d7e0c43d977594c5446970a9c03342e76ccaf0cbfd5c551\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 05:42:39.855916 containerd[1723]: time="2025-09-09T05:42:39.855891069Z" level=info msg="Container 8a78fa15fc484291ad52c5c1a57140f6ab2543c1aa6b14b001d4ba87a76682ee: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:42:39.869134 containerd[1723]: time="2025-09-09T05:42:39.869054334Z" level=info msg="CreateContainer within sandbox \"8279ce1d82eab4732d7e0c43d977594c5446970a9c03342e76ccaf0cbfd5c551\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8a78fa15fc484291ad52c5c1a57140f6ab2543c1aa6b14b001d4ba87a76682ee\"" Sep 9 05:42:39.869774 containerd[1723]: time="2025-09-09T05:42:39.869744197Z" level=info msg="StartContainer for \"8a78fa15fc484291ad52c5c1a57140f6ab2543c1aa6b14b001d4ba87a76682ee\"" Sep 9 05:42:39.873077 containerd[1723]: time="2025-09-09T05:42:39.873042474Z" level=info msg="connecting to shim 8a78fa15fc484291ad52c5c1a57140f6ab2543c1aa6b14b001d4ba87a76682ee" address="unix:///run/containerd/s/de6d38e1130979af7b0e6f2b999e781b34fbf3825cbb4923c891acdb30ad5b67" protocol=ttrpc version=3 Sep 9 05:42:39.898037 systemd[1]: Started 
cri-containerd-8a78fa15fc484291ad52c5c1a57140f6ab2543c1aa6b14b001d4ba87a76682ee.scope - libcontainer container 8a78fa15fc484291ad52c5c1a57140f6ab2543c1aa6b14b001d4ba87a76682ee.
Sep 9 05:42:39.934793 containerd[1723]: time="2025-09-09T05:42:39.934557755Z" level=info msg="StartContainer for \"8a78fa15fc484291ad52c5c1a57140f6ab2543c1aa6b14b001d4ba87a76682ee\" returns successfully"
Sep 9 05:42:40.138042 systemd-networkd[1551]: cali80d84b08612: Gained IPv6LL
Sep 9 05:42:40.202056 systemd-networkd[1551]: calicfb19b368a7: Gained IPv6LL
Sep 9 05:42:40.301375 kubelet[3195]: I0909 05:42:40.301323 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-x75ls" podStartSLOduration=39.301305729 podStartE2EDuration="39.301305729s" podCreationTimestamp="2025-09-09 05:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:42:40.301003894 +0000 UTC m=+46.283956360" watchObservedRunningTime="2025-09-09 05:42:40.301305729 +0000 UTC m=+46.284258191"
Sep 9 05:42:40.499478 containerd[1723]: time="2025-09-09T05:42:40.499433896Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:42:40.501889 containerd[1723]: time="2025-09-09T05:42:40.501764365Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864"
Sep 9 05:42:40.504532 containerd[1723]: time="2025-09-09T05:42:40.504503913Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:42:40.508818 containerd[1723]: time="2025-09-09T05:42:40.508725455Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:42:40.509317 containerd[1723]: time="2025-09-09T05:42:40.509152623Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 2.778184404s"
Sep 9 05:42:40.509317 containerd[1723]: time="2025-09-09T05:42:40.509184817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\""
Sep 9 05:42:40.510807 containerd[1723]: time="2025-09-09T05:42:40.510593960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\""
Sep 9 05:42:40.515378 containerd[1723]: time="2025-09-09T05:42:40.515354276Z" level=info msg="CreateContainer within sandbox \"4a22729dfa39ae16518134c009e409687b056e571841a71233f99535164850ca\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 9 05:42:40.534493 containerd[1723]: time="2025-09-09T05:42:40.533691794Z" level=info msg="Container 25795cf8e8cc24372f8d711429bc5ae0d388ef6c2b8c3a850effc3571b249e36: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:42:40.555826 containerd[1723]: time="2025-09-09T05:42:40.555793592Z" level=info msg="CreateContainer within sandbox \"4a22729dfa39ae16518134c009e409687b056e571841a71233f99535164850ca\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"25795cf8e8cc24372f8d711429bc5ae0d388ef6c2b8c3a850effc3571b249e36\""
Sep 9 05:42:40.556440 containerd[1723]: time="2025-09-09T05:42:40.556375082Z" level=info msg="StartContainer for \"25795cf8e8cc24372f8d711429bc5ae0d388ef6c2b8c3a850effc3571b249e36\""
Sep 9 05:42:40.557949 containerd[1723]: time="2025-09-09T05:42:40.557895565Z" level=info msg="connecting to shim 25795cf8e8cc24372f8d711429bc5ae0d388ef6c2b8c3a850effc3571b249e36" address="unix:///run/containerd/s/902825bb6640164aa9bad4d0744b2d2be0038d3849076aac68e1c5c4bc53e430" protocol=ttrpc version=3
Sep 9 05:42:40.581915 systemd[1]: Started cri-containerd-25795cf8e8cc24372f8d711429bc5ae0d388ef6c2b8c3a850effc3571b249e36.scope - libcontainer container 25795cf8e8cc24372f8d711429bc5ae0d388ef6c2b8c3a850effc3571b249e36.
Sep 9 05:42:40.638575 containerd[1723]: time="2025-09-09T05:42:40.638544365Z" level=info msg="StartContainer for \"25795cf8e8cc24372f8d711429bc5ae0d388ef6c2b8c3a850effc3571b249e36\" returns successfully"
Sep 9 05:42:40.650976 systemd-networkd[1551]: cali0ec43970f48: Gained IPv6LL
Sep 9 05:42:40.906922 systemd-networkd[1551]: cali86e2a1e089f: Gained IPv6LL
Sep 9 05:42:42.309542 kubelet[3195]: I0909 05:42:42.308974 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7d4c6f5df5-5c5bv" podStartSLOduration=29.211071891 podStartE2EDuration="32.308955618s" podCreationTimestamp="2025-09-09 05:42:10 +0000 UTC" firstStartedPulling="2025-09-09 05:42:37.412014832 +0000 UTC m=+43.394967289" lastFinishedPulling="2025-09-09 05:42:40.50989856 +0000 UTC m=+46.492851016" observedRunningTime="2025-09-09 05:42:41.336913533 +0000 UTC m=+47.319865997" watchObservedRunningTime="2025-09-09 05:42:42.308955618 +0000 UTC m=+48.291908083"
Sep 9 05:42:42.833698 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2404281606.mount: Deactivated successfully.
Sep 9 05:42:43.503210 containerd[1723]: time="2025-09-09T05:42:43.503084960Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:42:43.511316 containerd[1723]: time="2025-09-09T05:42:43.511279368Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526"
Sep 9 05:42:43.511426 containerd[1723]: time="2025-09-09T05:42:43.511370793Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:42:43.515660 containerd[1723]: time="2025-09-09T05:42:43.515606357Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:42:43.516324 containerd[1723]: time="2025-09-09T05:42:43.516215849Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 3.005591486s"
Sep 9 05:42:43.516324 containerd[1723]: time="2025-09-09T05:42:43.516245211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\""
Sep 9 05:42:43.517247 containerd[1723]: time="2025-09-09T05:42:43.517188152Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\""
Sep 9 05:42:43.523546 containerd[1723]: time="2025-09-09T05:42:43.523521193Z" level=info msg="CreateContainer within sandbox \"9ea70ef30cae069562532ca9bec82c02e9387af641a9bb3c524fb4f454112995\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 9 05:42:43.540144 containerd[1723]: time="2025-09-09T05:42:43.540117768Z" level=info msg="Container 05972370f01a334443d54ebce5fb84f95c412020745bf203e62022b8c8918cbf: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:42:43.559615 containerd[1723]: time="2025-09-09T05:42:43.559591545Z" level=info msg="CreateContainer within sandbox \"9ea70ef30cae069562532ca9bec82c02e9387af641a9bb3c524fb4f454112995\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"05972370f01a334443d54ebce5fb84f95c412020745bf203e62022b8c8918cbf\""
Sep 9 05:42:43.560366 containerd[1723]: time="2025-09-09T05:42:43.560339282Z" level=info msg="StartContainer for \"05972370f01a334443d54ebce5fb84f95c412020745bf203e62022b8c8918cbf\""
Sep 9 05:42:43.561663 containerd[1723]: time="2025-09-09T05:42:43.561638899Z" level=info msg="connecting to shim 05972370f01a334443d54ebce5fb84f95c412020745bf203e62022b8c8918cbf" address="unix:///run/containerd/s/cc85f85ae13035c5a1cd7e2b2806fdaf215954343b36af08eda731e211b3e031" protocol=ttrpc version=3
Sep 9 05:42:43.587948 systemd[1]: Started cri-containerd-05972370f01a334443d54ebce5fb84f95c412020745bf203e62022b8c8918cbf.scope - libcontainer container 05972370f01a334443d54ebce5fb84f95c412020745bf203e62022b8c8918cbf.
Sep 9 05:42:43.634583 containerd[1723]: time="2025-09-09T05:42:43.634557961Z" level=info msg="StartContainer for \"05972370f01a334443d54ebce5fb84f95c412020745bf203e62022b8c8918cbf\" returns successfully"
Sep 9 05:42:44.324145 kubelet[3195]: I0909 05:42:44.324085 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-smrln" podStartSLOduration=26.392767408 podStartE2EDuration="31.32406695s" podCreationTimestamp="2025-09-09 05:42:13 +0000 UTC" firstStartedPulling="2025-09-09 05:42:38.585774524 +0000 UTC m=+44.568726980" lastFinishedPulling="2025-09-09 05:42:43.517074058 +0000 UTC m=+49.500026522" observedRunningTime="2025-09-09 05:42:44.321153778 +0000 UTC m=+50.304106243" watchObservedRunningTime="2025-09-09 05:42:44.32406695 +0000 UTC m=+50.307019403"
Sep 9 05:42:44.394798 containerd[1723]: time="2025-09-09T05:42:44.394671096Z" level=info msg="TaskExit event in podsandbox handler container_id:\"05972370f01a334443d54ebce5fb84f95c412020745bf203e62022b8c8918cbf\" id:\"01e043050fb56c6f3a8a76e17520ba2d2702bc5ec8f3762c002ab36d049b1a28\" pid:5482 exit_status:1 exited_at:{seconds:1757396564 nanos:393977508}"
Sep 9 05:42:45.166113 containerd[1723]: time="2025-09-09T05:42:45.166069399Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:42:45.168515 containerd[1723]: time="2025-09-09T05:42:45.168423318Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527"
Sep 9 05:42:45.180254 containerd[1723]: time="2025-09-09T05:42:45.180209399Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:42:45.183879 containerd[1723]: time="2025-09-09T05:42:45.183836319Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:42:45.184225 containerd[1723]: time="2025-09-09T05:42:45.184184555Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.666969133s"
Sep 9 05:42:45.184225 containerd[1723]: time="2025-09-09T05:42:45.184217116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\""
Sep 9 05:42:45.185948 containerd[1723]: time="2025-09-09T05:42:45.185921154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 9 05:42:45.193805 containerd[1723]: time="2025-09-09T05:42:45.193624223Z" level=info msg="CreateContainer within sandbox \"43b19111f45fbee383506a84541d815166b4bad7f19af9a12c8b0606079db9b7\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 9 05:42:45.217893 containerd[1723]: time="2025-09-09T05:42:45.217867936Z" level=info msg="Container 7aa06b72ca3d280fcf07a4caed90a04e8e790f1c5b7a0c35bcf6379b44b12988: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:42:45.241392 containerd[1723]: time="2025-09-09T05:42:45.241365926Z" level=info msg="CreateContainer within sandbox \"43b19111f45fbee383506a84541d815166b4bad7f19af9a12c8b0606079db9b7\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"7aa06b72ca3d280fcf07a4caed90a04e8e790f1c5b7a0c35bcf6379b44b12988\""
Sep 9 05:42:45.241950 containerd[1723]: time="2025-09-09T05:42:45.241919964Z" level=info msg="StartContainer for \"7aa06b72ca3d280fcf07a4caed90a04e8e790f1c5b7a0c35bcf6379b44b12988\""
Sep 9 05:42:45.243999 containerd[1723]: time="2025-09-09T05:42:45.243968525Z" level=info msg="connecting to shim 7aa06b72ca3d280fcf07a4caed90a04e8e790f1c5b7a0c35bcf6379b44b12988" address="unix:///run/containerd/s/623df805c5553a3b2cadf2d2560a268596615dd96de0fe05c7673c9c2613fd90" protocol=ttrpc version=3
Sep 9 05:42:45.264962 systemd[1]: Started cri-containerd-7aa06b72ca3d280fcf07a4caed90a04e8e790f1c5b7a0c35bcf6379b44b12988.scope - libcontainer container 7aa06b72ca3d280fcf07a4caed90a04e8e790f1c5b7a0c35bcf6379b44b12988.
Sep 9 05:42:45.295056 containerd[1723]: time="2025-09-09T05:42:45.295031466Z" level=info msg="StartContainer for \"7aa06b72ca3d280fcf07a4caed90a04e8e790f1c5b7a0c35bcf6379b44b12988\" returns successfully"
Sep 9 05:42:45.380274 containerd[1723]: time="2025-09-09T05:42:45.380240942Z" level=info msg="TaskExit event in podsandbox handler container_id:\"05972370f01a334443d54ebce5fb84f95c412020745bf203e62022b8c8918cbf\" id:\"e4e5e7e300962b5d55e4cf64f90b0d32e93cd3ac3f27b612c8206ac739195b23\" pid:5547 exit_status:1 exited_at:{seconds:1757396565 nanos:379835261}"
Sep 9 05:42:45.506580 containerd[1723]: time="2025-09-09T05:42:45.506494224Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:42:45.516818 containerd[1723]: time="2025-09-09T05:42:45.516775530Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77"
Sep 9 05:42:45.524551 containerd[1723]: time="2025-09-09T05:42:45.524068431Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 338.111248ms"
Sep 9 05:42:45.524551 containerd[1723]: time="2025-09-09T05:42:45.524099935Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\""
Sep 9 05:42:45.526085 containerd[1723]: time="2025-09-09T05:42:45.526064284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 9 05:42:45.532992 containerd[1723]: time="2025-09-09T05:42:45.532965844Z" level=info msg="CreateContainer within sandbox \"f3711f03a95c2bc388d73c7d66bfd405cd39fdcc4d88197cfcd25b828d019a44\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 9 05:42:45.552666 containerd[1723]: time="2025-09-09T05:42:45.551924002Z" level=info msg="Container dd873a7dfa03beb01cd8ed3e7dba71544d920df335df507fabdfe98fbd89a78f: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:42:45.566066 containerd[1723]: time="2025-09-09T05:42:45.566039861Z" level=info msg="CreateContainer within sandbox \"f3711f03a95c2bc388d73c7d66bfd405cd39fdcc4d88197cfcd25b828d019a44\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"dd873a7dfa03beb01cd8ed3e7dba71544d920df335df507fabdfe98fbd89a78f\""
Sep 9 05:42:45.566823 containerd[1723]: time="2025-09-09T05:42:45.566515916Z" level=info msg="StartContainer for \"dd873a7dfa03beb01cd8ed3e7dba71544d920df335df507fabdfe98fbd89a78f\""
Sep 9 05:42:45.567456 containerd[1723]: time="2025-09-09T05:42:45.567425606Z" level=info msg="connecting to shim dd873a7dfa03beb01cd8ed3e7dba71544d920df335df507fabdfe98fbd89a78f" address="unix:///run/containerd/s/e88a930b0afea50c673d9e9251858a88f8739e7a7003835cb30c7883192d16aa" protocol=ttrpc version=3
Sep 9 05:42:45.586923 systemd[1]: Started cri-containerd-dd873a7dfa03beb01cd8ed3e7dba71544d920df335df507fabdfe98fbd89a78f.scope - libcontainer container dd873a7dfa03beb01cd8ed3e7dba71544d920df335df507fabdfe98fbd89a78f.
Sep 9 05:42:45.636401 containerd[1723]: time="2025-09-09T05:42:45.636299939Z" level=info msg="StartContainer for \"dd873a7dfa03beb01cd8ed3e7dba71544d920df335df507fabdfe98fbd89a78f\" returns successfully"
Sep 9 05:42:47.308507 kubelet[3195]: I0909 05:42:47.308476 3195 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 9 05:42:47.593273 containerd[1723]: time="2025-09-09T05:42:47.593158130Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:42:47.596805 containerd[1723]: time="2025-09-09T05:42:47.596003806Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 9 05:42:47.600422 containerd[1723]: time="2025-09-09T05:42:47.599152025Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:42:47.610775 containerd[1723]: time="2025-09-09T05:42:47.610743092Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:42:47.611348 containerd[1723]: time="2025-09-09T05:42:47.611313431Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.0851254s"
Sep 9 05:42:47.611395 containerd[1723]: time="2025-09-09T05:42:47.611348127Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 9 05:42:47.617417 containerd[1723]: time="2025-09-09T05:42:47.617392368Z" level=info msg="CreateContainer within sandbox \"43b19111f45fbee383506a84541d815166b4bad7f19af9a12c8b0606079db9b7\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 9 05:42:47.638678 containerd[1723]: time="2025-09-09T05:42:47.637545404Z" level=info msg="Container 09decc3304e6047e1a7cbcd16131511cee13cf40d97a092fa37c921f9267574e: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:42:47.654430 containerd[1723]: time="2025-09-09T05:42:47.654402924Z" level=info msg="CreateContainer within sandbox \"43b19111f45fbee383506a84541d815166b4bad7f19af9a12c8b0606079db9b7\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"09decc3304e6047e1a7cbcd16131511cee13cf40d97a092fa37c921f9267574e\""
Sep 9 05:42:47.655810 containerd[1723]: time="2025-09-09T05:42:47.655703216Z" level=info msg="StartContainer for \"09decc3304e6047e1a7cbcd16131511cee13cf40d97a092fa37c921f9267574e\""
Sep 9 05:42:47.656968 containerd[1723]: time="2025-09-09T05:42:47.656939775Z" level=info msg="connecting to shim 09decc3304e6047e1a7cbcd16131511cee13cf40d97a092fa37c921f9267574e" address="unix:///run/containerd/s/623df805c5553a3b2cadf2d2560a268596615dd96de0fe05c7673c9c2613fd90" protocol=ttrpc version=3
Sep 9 05:42:47.680026 systemd[1]: Started cri-containerd-09decc3304e6047e1a7cbcd16131511cee13cf40d97a092fa37c921f9267574e.scope - libcontainer container 09decc3304e6047e1a7cbcd16131511cee13cf40d97a092fa37c921f9267574e.
Sep 9 05:42:47.713424 containerd[1723]: time="2025-09-09T05:42:47.713398544Z" level=info msg="StartContainer for \"09decc3304e6047e1a7cbcd16131511cee13cf40d97a092fa37c921f9267574e\" returns successfully"
Sep 9 05:42:48.190107 kubelet[3195]: I0909 05:42:48.190076 3195 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 9 05:42:48.190107 kubelet[3195]: I0909 05:42:48.190116 3195 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 9 05:42:48.324120 kubelet[3195]: I0909 05:42:48.323615 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-kpfkf" podStartSLOduration=26.385760017 podStartE2EDuration="35.323597133s" podCreationTimestamp="2025-09-09 05:42:13 +0000 UTC" firstStartedPulling="2025-09-09 05:42:38.674283263 +0000 UTC m=+44.657235724" lastFinishedPulling="2025-09-09 05:42:47.612120375 +0000 UTC m=+53.595072840" observedRunningTime="2025-09-09 05:42:48.322505809 +0000 UTC m=+54.305458271" watchObservedRunningTime="2025-09-09 05:42:48.323597133 +0000 UTC m=+54.306549597"
Sep 9 05:42:48.324120 kubelet[3195]: I0909 05:42:48.323925 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7d4c6f5df5-vfjwk" podStartSLOduration=32.583146975 podStartE2EDuration="38.323916865s" podCreationTimestamp="2025-09-09 05:42:10 +0000 UTC" firstStartedPulling="2025-09-09 05:42:39.785197987 +0000 UTC m=+45.768150453" lastFinishedPulling="2025-09-09 05:42:45.525967881 +0000 UTC m=+51.508920343" observedRunningTime="2025-09-09 05:42:46.319738453 +0000 UTC m=+52.302690915" watchObservedRunningTime="2025-09-09 05:42:48.323916865 +0000 UTC m=+54.306869328"
Sep 9 05:42:55.138606 containerd[1723]: time="2025-09-09T05:42:55.138488111Z" level=info msg="TaskExit event in podsandbox handler container_id:\"05972370f01a334443d54ebce5fb84f95c412020745bf203e62022b8c8918cbf\" id:\"4374182f3882ecd04c574472805dedc83596377f0557101c95cfc9b1f140e9bf\" pid:5657 exited_at:{seconds:1757396575 nanos:138265195}"
Sep 9 05:43:00.293310 containerd[1723]: time="2025-09-09T05:43:00.293269863Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c0b1b11fc7333500d56a0ebe20b25e452eb3544495f69bc9014479027ea8776f\" id:\"95110c5d9ff11e87aafe7aee2adf6d0a40b2ce83f909fc661409944aa38662cc\" pid:5684 exited_at:{seconds:1757396580 nanos:292846714}"
Sep 9 05:43:07.552998 containerd[1723]: time="2025-09-09T05:43:07.552627692Z" level=info msg="TaskExit event in podsandbox handler container_id:\"84e05ce642707feec06a2c468b3d440ec565c729fb177c7a2a581d38cf1107c8\" id:\"b2e7eb46b7cdac9390c4c0100c4cf722020530acef2d6fdd7a9e6db2809c71f9\" pid:5711 exited_at:{seconds:1757396587 nanos:552304139}"
Sep 9 05:43:08.307908 containerd[1723]: time="2025-09-09T05:43:08.307862751Z" level=info msg="TaskExit event in podsandbox handler container_id:\"84e05ce642707feec06a2c468b3d440ec565c729fb177c7a2a581d38cf1107c8\" id:\"50e494148765fce6c6a384a362d06682d4bdbd78a912ffe76701d66020925191\" pid:5733 exited_at:{seconds:1757396588 nanos:306744104}"
Sep 9 05:43:15.500386 containerd[1723]: time="2025-09-09T05:43:15.500334753Z" level=info msg="TaskExit event in podsandbox handler container_id:\"05972370f01a334443d54ebce5fb84f95c412020745bf203e62022b8c8918cbf\" id:\"a8a259dcf27772aaae087ef98d0a7bf584a83f58d256f23d7605104fd4232b08\" pid:5760 exited_at:{seconds:1757396595 nanos:499868299}"
Sep 9 05:43:23.431322 kubelet[3195]: I0909 05:43:23.431282 3195 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 9 05:43:30.291233 containerd[1723]: time="2025-09-09T05:43:30.291192695Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c0b1b11fc7333500d56a0ebe20b25e452eb3544495f69bc9014479027ea8776f\" id:\"2005db3ef42ad7670c2a1c4efa2b380c3007936d73649d68697c1fe9910fb66a\" pid:5790 exited_at:{seconds:1757396610 nanos:290757556}"
Sep 9 05:43:38.304996 containerd[1723]: time="2025-09-09T05:43:38.304492773Z" level=info msg="TaskExit event in podsandbox handler container_id:\"84e05ce642707feec06a2c468b3d440ec565c729fb177c7a2a581d38cf1107c8\" id:\"0b4ff47e33b16e66c6ad4042c85b1b44520f24e1313501153ebc266aa5571a22\" pid:5816 exited_at:{seconds:1757396618 nanos:304246607}"
Sep 9 05:43:45.442806 containerd[1723]: time="2025-09-09T05:43:45.442223111Z" level=info msg="TaskExit event in podsandbox handler container_id:\"05972370f01a334443d54ebce5fb84f95c412020745bf203e62022b8c8918cbf\" id:\"50b424b2335f3bca7e79ed2460f1e82ca322c1c24b32da8e54e735eeb0631cdc\" pid:5844 exited_at:{seconds:1757396625 nanos:440090578}"
Sep 9 05:43:45.551308 systemd[1]: Started sshd@7-10.200.8.13:22-10.200.16.10:46330.service - OpenSSH per-connection server daemon (10.200.16.10:46330).
Sep 9 05:43:46.187348 sshd[5855]: Accepted publickey for core from 10.200.16.10 port 46330 ssh2: RSA SHA256:sZgtQeKACOXr8pJrh4GrBINrKp/VHU8dqG1ZDK7f3fs
Sep 9 05:43:46.188634 sshd-session[5855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:43:46.193923 systemd-logind[1701]: New session 10 of user core.
Sep 9 05:43:46.197947 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 9 05:43:46.698704 sshd[5858]: Connection closed by 10.200.16.10 port 46330
Sep 9 05:43:46.700215 sshd-session[5855]: pam_unix(sshd:session): session closed for user core
Sep 9 05:43:46.703542 systemd[1]: sshd@7-10.200.8.13:22-10.200.16.10:46330.service: Deactivated successfully.
Sep 9 05:43:46.706656 systemd[1]: session-10.scope: Deactivated successfully.
Sep 9 05:43:46.710698 systemd-logind[1701]: Session 10 logged out. Waiting for processes to exit.
Sep 9 05:43:46.712237 systemd-logind[1701]: Removed session 10.
Sep 9 05:43:51.814757 systemd[1]: Started sshd@8-10.200.8.13:22-10.200.16.10:50148.service - OpenSSH per-connection server daemon (10.200.16.10:50148).
Sep 9 05:43:52.439694 sshd[5872]: Accepted publickey for core from 10.200.16.10 port 50148 ssh2: RSA SHA256:sZgtQeKACOXr8pJrh4GrBINrKp/VHU8dqG1ZDK7f3fs
Sep 9 05:43:52.441342 sshd-session[5872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:43:52.448456 systemd-logind[1701]: New session 11 of user core.
Sep 9 05:43:52.456942 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 9 05:43:52.970194 sshd[5875]: Connection closed by 10.200.16.10 port 50148
Sep 9 05:43:52.972977 sshd-session[5872]: pam_unix(sshd:session): session closed for user core
Sep 9 05:43:52.977408 systemd[1]: sshd@8-10.200.8.13:22-10.200.16.10:50148.service: Deactivated successfully.
Sep 9 05:43:52.982207 systemd[1]: session-11.scope: Deactivated successfully.
Sep 9 05:43:52.983771 systemd-logind[1701]: Session 11 logged out. Waiting for processes to exit.
Sep 9 05:43:52.987267 systemd-logind[1701]: Removed session 11.
Sep 9 05:43:55.128401 containerd[1723]: time="2025-09-09T05:43:55.128347991Z" level=info msg="TaskExit event in podsandbox handler container_id:\"05972370f01a334443d54ebce5fb84f95c412020745bf203e62022b8c8918cbf\" id:\"1867448462610f88c8120df5b01d2b9cd6974cdfc40b595555a80ee918a9af3d\" pid:5908 exited_at:{seconds:1757396635 nanos:128125710}"
Sep 9 05:43:58.081937 systemd[1]: Started sshd@9-10.200.8.13:22-10.200.16.10:50152.service - OpenSSH per-connection server daemon (10.200.16.10:50152).
Sep 9 05:43:58.704217 sshd[5921]: Accepted publickey for core from 10.200.16.10 port 50152 ssh2: RSA SHA256:sZgtQeKACOXr8pJrh4GrBINrKp/VHU8dqG1ZDK7f3fs
Sep 9 05:43:58.705375 sshd-session[5921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:43:58.709386 systemd-logind[1701]: New session 12 of user core.
Sep 9 05:43:58.715923 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 9 05:43:59.194404 sshd[5924]: Connection closed by 10.200.16.10 port 50152
Sep 9 05:43:59.195990 sshd-session[5921]: pam_unix(sshd:session): session closed for user core
Sep 9 05:43:59.198835 systemd[1]: sshd@9-10.200.8.13:22-10.200.16.10:50152.service: Deactivated successfully.
Sep 9 05:43:59.200490 systemd[1]: session-12.scope: Deactivated successfully.
Sep 9 05:43:59.202356 systemd-logind[1701]: Session 12 logged out. Waiting for processes to exit.
Sep 9 05:43:59.203290 systemd-logind[1701]: Removed session 12.
Sep 9 05:43:59.308073 systemd[1]: Started sshd@10-10.200.8.13:22-10.200.16.10:50162.service - OpenSSH per-connection server daemon (10.200.16.10:50162).
Sep 9 05:43:59.932834 sshd[5937]: Accepted publickey for core from 10.200.16.10 port 50162 ssh2: RSA SHA256:sZgtQeKACOXr8pJrh4GrBINrKp/VHU8dqG1ZDK7f3fs
Sep 9 05:43:59.933426 sshd-session[5937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:43:59.937853 systemd-logind[1701]: New session 13 of user core.
Sep 9 05:43:59.941984 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 9 05:44:00.298310 containerd[1723]: time="2025-09-09T05:44:00.298163390Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c0b1b11fc7333500d56a0ebe20b25e452eb3544495f69bc9014479027ea8776f\" id:\"36685baa692b45742cb7779ab8f4daf881e4a61af660c554a63c0698ae0ff6e4\" pid:5955 exited_at:{seconds:1757396640 nanos:297877065}"
Sep 9 05:44:00.459798 sshd[5942]: Connection closed by 10.200.16.10 port 50162
Sep 9 05:44:00.460581 sshd-session[5937]: pam_unix(sshd:session): session closed for user core
Sep 9 05:44:00.464641 systemd[1]: sshd@10-10.200.8.13:22-10.200.16.10:50162.service: Deactivated successfully.
Sep 9 05:44:00.466704 systemd[1]: session-13.scope: Deactivated successfully.
Sep 9 05:44:00.467482 systemd-logind[1701]: Session 13 logged out. Waiting for processes to exit.
Sep 9 05:44:00.469090 systemd-logind[1701]: Removed session 13.
Sep 9 05:44:00.573994 systemd[1]: Started sshd@11-10.200.8.13:22-10.200.16.10:32820.service - OpenSSH per-connection server daemon (10.200.16.10:32820).
Sep 9 05:44:01.206716 sshd[5975]: Accepted publickey for core from 10.200.16.10 port 32820 ssh2: RSA SHA256:sZgtQeKACOXr8pJrh4GrBINrKp/VHU8dqG1ZDK7f3fs
Sep 9 05:44:01.207606 sshd-session[5975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:44:01.212378 systemd-logind[1701]: New session 14 of user core.
Sep 9 05:44:01.218959 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 9 05:44:01.699418 sshd[5979]: Connection closed by 10.200.16.10 port 32820
Sep 9 05:44:01.700897 sshd-session[5975]: pam_unix(sshd:session): session closed for user core
Sep 9 05:44:01.704127 systemd-logind[1701]: Session 14 logged out. Waiting for processes to exit.
Sep 9 05:44:01.704396 systemd[1]: sshd@11-10.200.8.13:22-10.200.16.10:32820.service: Deactivated successfully.
Sep 9 05:44:01.706543 systemd[1]: session-14.scope: Deactivated successfully.
Sep 9 05:44:01.708284 systemd-logind[1701]: Removed session 14.
Sep 9 05:44:06.811305 systemd[1]: Started sshd@12-10.200.8.13:22-10.200.16.10:32822.service - OpenSSH per-connection server daemon (10.200.16.10:32822).
Sep 9 05:44:07.435297 sshd[6004]: Accepted publickey for core from 10.200.16.10 port 32822 ssh2: RSA SHA256:sZgtQeKACOXr8pJrh4GrBINrKp/VHU8dqG1ZDK7f3fs
Sep 9 05:44:07.436406 sshd-session[6004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:44:07.440755 systemd-logind[1701]: New session 15 of user core.
Sep 9 05:44:07.447903 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 9 05:44:07.532532 containerd[1723]: time="2025-09-09T05:44:07.532491471Z" level=info msg="TaskExit event in podsandbox handler container_id:\"84e05ce642707feec06a2c468b3d440ec565c729fb177c7a2a581d38cf1107c8\" id:\"22a8f430e8d56643b56e5741d731c89ec7493408187f025f6b059013276d0808\" pid:6034 exited_at:{seconds:1757396647 nanos:532303436}"
Sep 9 05:44:07.935445 sshd[6021]: Connection closed by 10.200.16.10 port 32822
Sep 9 05:44:07.935976 sshd-session[6004]: pam_unix(sshd:session): session closed for user core
Sep 9 05:44:07.941156 systemd-logind[1701]: Session 15 logged out. Waiting for processes to exit.
Sep 9 05:44:07.941298 systemd[1]: sshd@12-10.200.8.13:22-10.200.16.10:32822.service: Deactivated successfully.
Sep 9 05:44:07.944132 systemd[1]: session-15.scope: Deactivated successfully.
Sep 9 05:44:07.946331 systemd-logind[1701]: Removed session 15.
Sep 9 05:44:08.300618 containerd[1723]: time="2025-09-09T05:44:08.300481028Z" level=info msg="TaskExit event in podsandbox handler container_id:\"84e05ce642707feec06a2c468b3d440ec565c729fb177c7a2a581d38cf1107c8\" id:\"f771c2f427fa6b68098a8e64c5d05c041b65f5d660a3ceecffe0485577eb2462\" pid:6068 exited_at:{seconds:1757396648 nanos:300275652}"
Sep 9 05:44:13.049969 systemd[1]: Started sshd@13-10.200.8.13:22-10.200.16.10:37722.service - OpenSSH per-connection server daemon (10.200.16.10:37722).
Sep 9 05:44:13.673372 sshd[6078]: Accepted publickey for core from 10.200.16.10 port 37722 ssh2: RSA SHA256:sZgtQeKACOXr8pJrh4GrBINrKp/VHU8dqG1ZDK7f3fs
Sep 9 05:44:13.674733 sshd-session[6078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:44:13.679649 systemd-logind[1701]: New session 16 of user core.
Sep 9 05:44:13.686946 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 9 05:44:14.164859 sshd[6081]: Connection closed by 10.200.16.10 port 37722
Sep 9 05:44:14.165376 sshd-session[6078]: pam_unix(sshd:session): session closed for user core
Sep 9 05:44:14.168591 systemd[1]: sshd@13-10.200.8.13:22-10.200.16.10:37722.service: Deactivated successfully.
Sep 9 05:44:14.170444 systemd[1]: session-16.scope: Deactivated successfully.
Sep 9 05:44:14.171305 systemd-logind[1701]: Session 16 logged out. Waiting for processes to exit.
Sep 9 05:44:14.172340 systemd-logind[1701]: Removed session 16.
Sep 9 05:44:15.366159 containerd[1723]: time="2025-09-09T05:44:15.366053130Z" level=info msg="TaskExit event in podsandbox handler container_id:\"05972370f01a334443d54ebce5fb84f95c412020745bf203e62022b8c8918cbf\" id:\"764d6e0ace4c2e33083a01ef470c0f098577d292a39b3f4917c8c82b8d90cada\" pid:6104 exited_at:{seconds:1757396655 nanos:365799940}"
Sep 9 05:44:19.282031 systemd[1]: Started sshd@14-10.200.8.13:22-10.200.16.10:37736.service - OpenSSH per-connection server daemon (10.200.16.10:37736).
Sep 9 05:44:19.914844 sshd[6115]: Accepted publickey for core from 10.200.16.10 port 37736 ssh2: RSA SHA256:sZgtQeKACOXr8pJrh4GrBINrKp/VHU8dqG1ZDK7f3fs
Sep 9 05:44:19.916287 sshd-session[6115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:44:19.921099 systemd-logind[1701]: New session 17 of user core.
Sep 9 05:44:19.926944 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 9 05:44:20.423246 sshd[6118]: Connection closed by 10.200.16.10 port 37736
Sep 9 05:44:20.426970 sshd-session[6115]: pam_unix(sshd:session): session closed for user core
Sep 9 05:44:20.431456 systemd[1]: sshd@14-10.200.8.13:22-10.200.16.10:37736.service: Deactivated successfully.
Sep 9 05:44:20.435653 systemd[1]: session-17.scope: Deactivated successfully.
Sep 9 05:44:20.438210 systemd-logind[1701]: Session 17 logged out. Waiting for processes to exit.
Sep 9 05:44:20.440660 systemd-logind[1701]: Removed session 17.
Sep 9 05:44:20.544554 systemd[1]: Started sshd@15-10.200.8.13:22-10.200.16.10:33012.service - OpenSSH per-connection server daemon (10.200.16.10:33012).
Sep 9 05:44:21.188698 sshd[6130]: Accepted publickey for core from 10.200.16.10 port 33012 ssh2: RSA SHA256:sZgtQeKACOXr8pJrh4GrBINrKp/VHU8dqG1ZDK7f3fs
Sep 9 05:44:21.189911 sshd-session[6130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:44:21.193687 systemd-logind[1701]: New session 18 of user core.
Sep 9 05:44:21.198936 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 9 05:44:21.894325 sshd[6133]: Connection closed by 10.200.16.10 port 33012
Sep 9 05:44:21.894969 sshd-session[6130]: pam_unix(sshd:session): session closed for user core
Sep 9 05:44:21.898600 systemd[1]: sshd@15-10.200.8.13:22-10.200.16.10:33012.service: Deactivated successfully.
Sep 9 05:44:21.901769 systemd[1]: session-18.scope: Deactivated successfully.
Sep 9 05:44:21.903658 systemd-logind[1701]: Session 18 logged out. Waiting for processes to exit.
Sep 9 05:44:21.905387 systemd-logind[1701]: Removed session 18.
Sep 9 05:44:22.017551 systemd[1]: Started sshd@16-10.200.8.13:22-10.200.16.10:33028.service - OpenSSH per-connection server daemon (10.200.16.10:33028).
Sep 9 05:44:22.658398 sshd[6143]: Accepted publickey for core from 10.200.16.10 port 33028 ssh2: RSA SHA256:sZgtQeKACOXr8pJrh4GrBINrKp/VHU8dqG1ZDK7f3fs
Sep 9 05:44:22.659536 sshd-session[6143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:44:22.663841 systemd-logind[1701]: New session 19 of user core.
Sep 9 05:44:22.666916 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 9 05:44:23.625905 sshd[6146]: Connection closed by 10.200.16.10 port 33028
Sep 9 05:44:23.626445 sshd-session[6143]: pam_unix(sshd:session): session closed for user core
Sep 9 05:44:23.629632 systemd[1]: sshd@16-10.200.8.13:22-10.200.16.10:33028.service: Deactivated successfully.
Sep 9 05:44:23.631471 systemd[1]: session-19.scope: Deactivated successfully.
Sep 9 05:44:23.632302 systemd-logind[1701]: Session 19 logged out. Waiting for processes to exit.
Sep 9 05:44:23.633994 systemd-logind[1701]: Removed session 19.
Sep 9 05:44:23.739178 systemd[1]: Started sshd@17-10.200.8.13:22-10.200.16.10:33038.service - OpenSSH per-connection server daemon (10.200.16.10:33038).
Sep 9 05:44:24.370936 sshd[6164]: Accepted publickey for core from 10.200.16.10 port 33038 ssh2: RSA SHA256:sZgtQeKACOXr8pJrh4GrBINrKp/VHU8dqG1ZDK7f3fs
Sep 9 05:44:24.374004 sshd-session[6164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:44:24.378414 systemd-logind[1701]: New session 20 of user core.
Sep 9 05:44:24.384940 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 9 05:44:24.953989 sshd[6167]: Connection closed by 10.200.16.10 port 33038
Sep 9 05:44:24.954573 sshd-session[6164]: pam_unix(sshd:session): session closed for user core
Sep 9 05:44:24.957322 systemd[1]: sshd@17-10.200.8.13:22-10.200.16.10:33038.service: Deactivated successfully.
Sep 9 05:44:24.959272 systemd[1]: session-20.scope: Deactivated successfully.
Sep 9 05:44:24.961082 systemd-logind[1701]: Session 20 logged out. Waiting for processes to exit.
Sep 9 05:44:24.962194 systemd-logind[1701]: Removed session 20.
Sep 9 05:44:25.063772 systemd[1]: Started sshd@18-10.200.8.13:22-10.200.16.10:33050.service - OpenSSH per-connection server daemon (10.200.16.10:33050).
Sep 9 05:44:25.687159 sshd[6177]: Accepted publickey for core from 10.200.16.10 port 33050 ssh2: RSA SHA256:sZgtQeKACOXr8pJrh4GrBINrKp/VHU8dqG1ZDK7f3fs
Sep 9 05:44:25.688378 sshd-session[6177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:44:25.692947 systemd-logind[1701]: New session 21 of user core.
Sep 9 05:44:25.695935 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 9 05:44:26.178991 sshd[6180]: Connection closed by 10.200.16.10 port 33050
Sep 9 05:44:26.179540 sshd-session[6177]: pam_unix(sshd:session): session closed for user core
Sep 9 05:44:26.183279 systemd-logind[1701]: Session 21 logged out. Waiting for processes to exit.
Sep 9 05:44:26.183506 systemd[1]: sshd@18-10.200.8.13:22-10.200.16.10:33050.service: Deactivated successfully.
Sep 9 05:44:26.185466 systemd[1]: session-21.scope: Deactivated successfully.
Sep 9 05:44:26.187573 systemd-logind[1701]: Removed session 21.
Sep 9 05:44:30.290045 containerd[1723]: time="2025-09-09T05:44:30.289979162Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c0b1b11fc7333500d56a0ebe20b25e452eb3544495f69bc9014479027ea8776f\" id:\"7a4598ae37bbd8be9fbc4ce8ce285524a7750bf6b789207d3db53fe54ed248cd\" pid:6205 exited_at:{seconds:1757396670 nanos:288610440}"
Sep 9 05:44:31.295048 systemd[1]: Started sshd@19-10.200.8.13:22-10.200.16.10:44924.service - OpenSSH per-connection server daemon (10.200.16.10:44924).
Sep 9 05:44:31.927478 sshd[6218]: Accepted publickey for core from 10.200.16.10 port 44924 ssh2: RSA SHA256:sZgtQeKACOXr8pJrh4GrBINrKp/VHU8dqG1ZDK7f3fs
Sep 9 05:44:31.928614 sshd-session[6218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:44:31.932871 systemd-logind[1701]: New session 22 of user core.
Sep 9 05:44:31.941973 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 9 05:44:32.417541 sshd[6221]: Connection closed by 10.200.16.10 port 44924
Sep 9 05:44:32.418074 sshd-session[6218]: pam_unix(sshd:session): session closed for user core
Sep 9 05:44:32.421184 systemd-logind[1701]: Session 22 logged out. Waiting for processes to exit.
Sep 9 05:44:32.423018 systemd[1]: sshd@19-10.200.8.13:22-10.200.16.10:44924.service: Deactivated successfully.
Sep 9 05:44:32.425139 systemd[1]: session-22.scope: Deactivated successfully.
Sep 9 05:44:32.426588 systemd-logind[1701]: Removed session 22.
Sep 9 05:44:37.534588 systemd[1]: Started sshd@20-10.200.8.13:22-10.200.16.10:44932.service - OpenSSH per-connection server daemon (10.200.16.10:44932).
Sep 9 05:44:38.163355 sshd[6235]: Accepted publickey for core from 10.200.16.10 port 44932 ssh2: RSA SHA256:sZgtQeKACOXr8pJrh4GrBINrKp/VHU8dqG1ZDK7f3fs
Sep 9 05:44:38.166533 sshd-session[6235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:44:38.173024 systemd-logind[1701]: New session 23 of user core.
Sep 9 05:44:38.179000 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 9 05:44:38.317870 containerd[1723]: time="2025-09-09T05:44:38.317829521Z" level=info msg="TaskExit event in podsandbox handler container_id:\"84e05ce642707feec06a2c468b3d440ec565c729fb177c7a2a581d38cf1107c8\" id:\"111ee662c5b904c934a3db767c083a0b63023db43249cd86c59f5b22fcdb80d8\" pid:6253 exited_at:{seconds:1757396678 nanos:317047017}"
Sep 9 05:44:38.719709 sshd[6238]: Connection closed by 10.200.16.10 port 44932
Sep 9 05:44:38.720054 sshd-session[6235]: pam_unix(sshd:session): session closed for user core
Sep 9 05:44:38.726312 systemd[1]: sshd@20-10.200.8.13:22-10.200.16.10:44932.service: Deactivated successfully.
Sep 9 05:44:38.729625 systemd[1]: session-23.scope: Deactivated successfully.
Sep 9 05:44:38.731586 systemd-logind[1701]: Session 23 logged out. Waiting for processes to exit.
Sep 9 05:44:38.733175 systemd-logind[1701]: Removed session 23.
Sep 9 05:44:43.832004 systemd[1]: Started sshd@21-10.200.8.13:22-10.200.16.10:42018.service - OpenSSH per-connection server daemon (10.200.16.10:42018).
Sep 9 05:44:44.458201 sshd[6273]: Accepted publickey for core from 10.200.16.10 port 42018 ssh2: RSA SHA256:sZgtQeKACOXr8pJrh4GrBINrKp/VHU8dqG1ZDK7f3fs
Sep 9 05:44:44.459369 sshd-session[6273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:44:44.464232 systemd-logind[1701]: New session 24 of user core.
Sep 9 05:44:44.468955 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 9 05:44:44.945462 sshd[6276]: Connection closed by 10.200.16.10 port 42018
Sep 9 05:44:44.947030 sshd-session[6273]: pam_unix(sshd:session): session closed for user core
Sep 9 05:44:44.950134 systemd[1]: sshd@21-10.200.8.13:22-10.200.16.10:42018.service: Deactivated successfully.
Sep 9 05:44:44.952166 systemd[1]: session-24.scope: Deactivated successfully.
Sep 9 05:44:44.952962 systemd-logind[1701]: Session 24 logged out. Waiting for processes to exit.
Sep 9 05:44:44.954487 systemd-logind[1701]: Removed session 24.
Sep 9 05:44:45.367679 containerd[1723]: time="2025-09-09T05:44:45.367288408Z" level=info msg="TaskExit event in podsandbox handler container_id:\"05972370f01a334443d54ebce5fb84f95c412020745bf203e62022b8c8918cbf\" id:\"7133d3786b554c6081b52188a34b1be97a769abc056b66bc26ed65a6a629bf25\" pid:6299 exited_at:{seconds:1757396685 nanos:367007541}"
Sep 9 05:44:50.059059 systemd[1]: Started sshd@22-10.200.8.13:22-10.200.16.10:32968.service - OpenSSH per-connection server daemon (10.200.16.10:32968).
Sep 9 05:44:50.700108 sshd[6311]: Accepted publickey for core from 10.200.16.10 port 32968 ssh2: RSA SHA256:sZgtQeKACOXr8pJrh4GrBINrKp/VHU8dqG1ZDK7f3fs
Sep 9 05:44:50.702011 sshd-session[6311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:44:50.708985 systemd-logind[1701]: New session 25 of user core.
Sep 9 05:44:50.712952 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 9 05:44:51.243628 sshd[6314]: Connection closed by 10.200.16.10 port 32968
Sep 9 05:44:51.244220 sshd-session[6311]: pam_unix(sshd:session): session closed for user core
Sep 9 05:44:51.248104 systemd[1]: sshd@22-10.200.8.13:22-10.200.16.10:32968.service: Deactivated successfully.
Sep 9 05:44:51.249906 systemd[1]: session-25.scope: Deactivated successfully.
Sep 9 05:44:51.250608 systemd-logind[1701]: Session 25 logged out. Waiting for processes to exit.
Sep 9 05:44:51.252196 systemd-logind[1701]: Removed session 25.
Sep 9 05:44:55.125734 containerd[1723]: time="2025-09-09T05:44:55.125692429Z" level=info msg="TaskExit event in podsandbox handler container_id:\"05972370f01a334443d54ebce5fb84f95c412020745bf203e62022b8c8918cbf\" id:\"22166ff98d16292d3cd062c20b87798c39664a913b8abee25579e6ef1a73c6ae\" pid:6340 exited_at:{seconds:1757396695 nanos:125394423}"