Sep 12 17:47:31.923425 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 15:34:39 -00 2025
Sep 12 17:47:31.923452 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=271a44cc8ea1639cfb6fdf777202a5f025fda0b3ce9b293cc4e0e7047aecb858
Sep 12 17:47:31.923462 kernel: BIOS-provided physical RAM map:
Sep 12 17:47:31.923468 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 12 17:47:31.923474 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Sep 12 17:47:31.923480 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable
Sep 12 17:47:31.923489 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved
Sep 12 17:47:31.923503 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable
Sep 12 17:47:31.923509 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved
Sep 12 17:47:31.923517 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Sep 12 17:47:31.923523 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Sep 12 17:47:31.923529 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Sep 12 17:47:31.923536 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Sep 12 17:47:31.923542 kernel: printk: legacy bootconsole [earlyser0] enabled
Sep 12 17:47:31.923554 kernel: NX (Execute Disable) protection: active
Sep 12 17:47:31.923560 kernel: APIC: Static calls initialized
Sep 12 17:47:31.923568 kernel: efi: EFI v2.7 by Microsoft
Sep 12 17:47:31.923575 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3e9da518 RNG=0x3ffd2018
Sep 12 17:47:31.923583 kernel: random: crng init done
Sep 12 17:47:31.923590 kernel: secureboot: Secure boot disabled
Sep 12 17:47:31.923597 kernel: SMBIOS 3.1.0 present.
Sep 12 17:47:31.923603 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/28/2025
Sep 12 17:47:31.923610 kernel: DMI: Memory slots populated: 2/2
Sep 12 17:47:31.923620 kernel: Hypervisor detected: Microsoft Hyper-V
Sep 12 17:47:31.923627 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2
Sep 12 17:47:31.923635 kernel: Hyper-V: Nested features: 0x3e0101
Sep 12 17:47:31.923642 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Sep 12 17:47:31.923649 kernel: Hyper-V: Using hypercall for remote TLB flush
Sep 12 17:47:31.923678 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Sep 12 17:47:31.923687 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Sep 12 17:47:31.923694 kernel: tsc: Detected 2300.000 MHz processor
Sep 12 17:47:31.923702 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 17:47:31.923711 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 17:47:31.923718 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000
Sep 12 17:47:31.923729 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 12 17:47:31.923737 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 17:47:31.923744 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved
Sep 12 17:47:31.923752 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000
Sep 12 17:47:31.923760 kernel: Using GB pages for direct mapping
Sep 12 17:47:31.923768 kernel: ACPI: Early table checksum verification disabled
Sep 12 17:47:31.923781 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Sep 12 17:47:31.923793 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 17:47:31.923801 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 17:47:31.923808 kernel: ACPI: DSDT 0x000000003FFD6000 01E27A (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Sep 12 17:47:31.923814 kernel: ACPI: FACS 0x000000003FFFE000 000040
Sep 12 17:47:31.923822 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 17:47:31.923830 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 17:47:31.923840 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 17:47:31.923848 kernel: ACPI: APIC 0x000000003FFD5000 000052 (v05 HVLITE HVLITETB 00000000 MSHV 00000000)
Sep 12 17:47:31.923854 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000)
Sep 12 17:47:31.923860 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 17:47:31.923867 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Sep 12 17:47:31.923873 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4279]
Sep 12 17:47:31.923880 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Sep 12 17:47:31.923887 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Sep 12 17:47:31.923894 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Sep 12 17:47:31.923902 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Sep 12 17:47:31.923909 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5051]
Sep 12 17:47:31.923915 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f]
Sep 12 17:47:31.923922 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Sep 12 17:47:31.923929 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Sep 12 17:47:31.923936 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff]
Sep 12 17:47:31.923943 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff]
Sep 12 17:47:31.923950 kernel: NODE_DATA(0) allocated [mem 0x2bfff6dc0-0x2bfffdfff]
Sep 12 17:47:31.923957 kernel: Zone ranges:
Sep 12 17:47:31.923965 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 17:47:31.923972 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Sep 12 17:47:31.923979 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Sep 12 17:47:31.923986 kernel: Device empty
Sep 12 17:47:31.923993 kernel: Movable zone start for each node
Sep 12 17:47:31.924000 kernel: Early memory node ranges
Sep 12 17:47:31.924007 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 12 17:47:31.924014 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff]
Sep 12 17:47:31.924021 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff]
Sep 12 17:47:31.924029 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Sep 12 17:47:31.924036 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Sep 12 17:47:31.924043 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Sep 12 17:47:31.924050 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 17:47:31.924057 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 12 17:47:31.924064 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Sep 12 17:47:31.924071 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Sep 12 17:47:31.924078 kernel: ACPI: PM-Timer IO Port: 0x408
Sep 12 17:47:31.924085 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 12 17:47:31.924093 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 17:47:31.924100 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 17:47:31.924107 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Sep 12 17:47:31.924114 kernel: TSC deadline timer available
Sep 12 17:47:31.924121 kernel: CPU topo: Max. logical packages: 1
Sep 12 17:47:31.924128 kernel: CPU topo: Max. logical dies: 1
Sep 12 17:47:31.924135 kernel: CPU topo: Max. dies per package: 1
Sep 12 17:47:31.924142 kernel: CPU topo: Max. threads per core: 2
Sep 12 17:47:31.924149 kernel: CPU topo: Num. cores per package: 1
Sep 12 17:47:31.924158 kernel: CPU topo: Num. threads per package: 2
Sep 12 17:47:31.924165 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Sep 12 17:47:31.924172 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Sep 12 17:47:31.924179 kernel: Booting paravirtualized kernel on Hyper-V
Sep 12 17:47:31.924229 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 17:47:31.924237 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 12 17:47:31.924244 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Sep 12 17:47:31.924251 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Sep 12 17:47:31.924258 kernel: pcpu-alloc: [0] 0 1
Sep 12 17:47:31.924267 kernel: Hyper-V: PV spinlocks enabled
Sep 12 17:47:31.924274 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 12 17:47:31.924283 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=271a44cc8ea1639cfb6fdf777202a5f025fda0b3ce9b293cc4e0e7047aecb858
Sep 12 17:47:31.924291 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 17:47:31.924298 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Sep 12 17:47:31.924306 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 17:47:31.924313 kernel: Fallback order for Node 0: 0
Sep 12 17:47:31.924320 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807
Sep 12 17:47:31.924328 kernel: Policy zone: Normal
Sep 12 17:47:31.924335 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 17:47:31.924342 kernel: software IO TLB: area num 2.
Sep 12 17:47:31.924350 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 12 17:47:31.924357 kernel: ftrace: allocating 40125 entries in 157 pages
Sep 12 17:47:31.924364 kernel: ftrace: allocated 157 pages with 5 groups
Sep 12 17:47:31.924371 kernel: Dynamic Preempt: voluntary
Sep 12 17:47:31.924378 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 17:47:31.924387 kernel: rcu: RCU event tracing is enabled.
Sep 12 17:47:31.924400 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 12 17:47:31.924408 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 17:47:31.924416 kernel: Rude variant of Tasks RCU enabled.
Sep 12 17:47:31.924425 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 17:47:31.924433 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 17:47:31.924441 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 12 17:47:31.924448 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:47:31.924456 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:47:31.924464 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:47:31.924472 kernel: Using NULL legacy PIC
Sep 12 17:47:31.924480 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Sep 12 17:47:31.924488 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 17:47:31.924496 kernel: Console: colour dummy device 80x25
Sep 12 17:47:31.924503 kernel: printk: legacy console [tty1] enabled
Sep 12 17:47:31.924511 kernel: printk: legacy console [ttyS0] enabled
Sep 12 17:47:31.924519 kernel: printk: legacy bootconsole [earlyser0] disabled
Sep 12 17:47:31.924526 kernel: ACPI: Core revision 20240827
Sep 12 17:47:31.924536 kernel: Failed to register legacy timer interrupt
Sep 12 17:47:31.924543 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 17:47:31.924551 kernel: x2apic enabled
Sep 12 17:47:31.924559 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 12 17:47:31.924566 kernel: Hyper-V: Host Build 10.0.26100.1293-1-0
Sep 12 17:47:31.924574 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Sep 12 17:47:31.924581 kernel: Hyper-V: Disabling IBT because of Hyper-V bug
Sep 12 17:47:31.924589 kernel: Hyper-V: Using IPI hypercalls
Sep 12 17:47:31.924597 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Sep 12 17:47:31.924606 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Sep 12 17:47:31.924614 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Sep 12 17:47:31.924621 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Sep 12 17:47:31.924629 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Sep 12 17:47:31.924637 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Sep 12 17:47:31.924645 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns
Sep 12 17:47:31.924653 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4600.00 BogoMIPS (lpj=2300000)
Sep 12 17:47:31.924661 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 12 17:47:31.924668 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Sep 12 17:47:31.924677 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Sep 12 17:47:31.924685 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 17:47:31.924693 kernel: Spectre V2 : Mitigation: Retpolines
Sep 12 17:47:31.924700 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 12 17:47:31.924708 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Sep 12 17:47:31.924715 kernel: RETBleed: Vulnerable
Sep 12 17:47:31.924723 kernel: Speculative Store Bypass: Vulnerable
Sep 12 17:47:31.924731 kernel: active return thunk: its_return_thunk
Sep 12 17:47:31.924738 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 12 17:47:31.924746 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 17:47:31.924754 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 17:47:31.924763 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 17:47:31.924770 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Sep 12 17:47:31.924777 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Sep 12 17:47:31.924784 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Sep 12 17:47:31.924791 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers'
Sep 12 17:47:31.924798 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config'
Sep 12 17:47:31.924806 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data'
Sep 12 17:47:31.924813 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 17:47:31.924820 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Sep 12 17:47:31.924827 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Sep 12 17:47:31.924836 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Sep 12 17:47:31.924843 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16
Sep 12 17:47:31.924851 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64
Sep 12 17:47:31.924859 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192
Sep 12 17:47:31.924866 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format.
Sep 12 17:47:31.924874 kernel: Freeing SMP alternatives memory: 32K
Sep 12 17:47:31.924881 kernel: pid_max: default: 32768 minimum: 301
Sep 12 17:47:31.924888 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 12 17:47:31.924896 kernel: landlock: Up and running.
Sep 12 17:47:31.924903 kernel: SELinux: Initializing.
Sep 12 17:47:31.924910 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 12 17:47:31.924918 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 12 17:47:31.924927 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2)
Sep 12 17:47:31.924934 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only.
Sep 12 17:47:31.924942 kernel: signal: max sigframe size: 11952
Sep 12 17:47:31.924950 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 17:47:31.924958 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 17:47:31.924965 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 12 17:47:31.924973 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 12 17:47:31.924981 kernel: smp: Bringing up secondary CPUs ...
Sep 12 17:47:31.924988 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 17:47:31.924997 kernel: .... node #0, CPUs: #1
Sep 12 17:47:31.925005 kernel: smp: Brought up 1 node, 2 CPUs
Sep 12 17:47:31.925013 kernel: smpboot: Total of 2 processors activated (9200.00 BogoMIPS)
Sep 12 17:47:31.925021 kernel: Memory: 8077280K/8383228K available (14336K kernel code, 2432K rwdata, 9960K rodata, 54040K init, 2924K bss, 299996K reserved, 0K cma-reserved)
Sep 12 17:47:31.925029 kernel: devtmpfs: initialized
Sep 12 17:47:31.925036 kernel: x86/mm: Memory block size: 128MB
Sep 12 17:47:31.925044 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Sep 12 17:47:31.925052 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 17:47:31.925060 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 12 17:47:31.925069 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 17:47:31.925076 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 17:47:31.925084 kernel: audit: initializing netlink subsys (disabled)
Sep 12 17:47:31.925092 kernel: audit: type=2000 audit(1757699249.028:1): state=initialized audit_enabled=0 res=1
Sep 12 17:47:31.925099 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 17:47:31.925107 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 17:47:31.925114 kernel: cpuidle: using governor menu
Sep 12 17:47:31.925122 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 17:47:31.925130 kernel: dca service started, version 1.12.1
Sep 12 17:47:31.925138 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff]
Sep 12 17:47:31.925146 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Sep 12 17:47:31.925153 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 17:47:31.925161 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 17:47:31.925168 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 17:47:31.925175 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 17:47:31.925182 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 17:47:31.925200 kernel: ACPI: Added _OSI(Module Device)
Sep 12 17:47:31.925208 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 17:47:31.925216 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 17:47:31.925223 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 17:47:31.925230 kernel: ACPI: Interpreter enabled
Sep 12 17:47:31.925237 kernel: ACPI: PM: (supports S0 S5)
Sep 12 17:47:31.925244 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 17:47:31.925251 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 17:47:31.925259 kernel: PCI: Ignoring E820 reservations for host bridge windows
Sep 12 17:47:31.925265 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Sep 12 17:47:31.925272 kernel: iommu: Default domain type: Translated
Sep 12 17:47:31.925281 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 17:47:31.925288 kernel: efivars: Registered efivars operations
Sep 12 17:47:31.925296 kernel: PCI: Using ACPI for IRQ routing
Sep 12 17:47:31.925303 kernel: PCI: System does not support PCI
Sep 12 17:47:31.925310 kernel: vgaarb: loaded
Sep 12 17:47:31.925318 kernel: clocksource: Switched to clocksource tsc-early
Sep 12 17:47:31.925325 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 17:47:31.925333 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 17:47:31.925341 kernel: pnp: PnP ACPI init
Sep 12 17:47:31.925349 kernel: pnp: PnP ACPI: found 3 devices
Sep 12 17:47:31.925357 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 17:47:31.925364 kernel: NET: Registered PF_INET protocol family
Sep 12 17:47:31.925371 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 12 17:47:31.925378 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Sep 12 17:47:31.925385 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 17:47:31.925392 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 17:47:31.925399 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Sep 12 17:47:31.925406 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Sep 12 17:47:31.925415 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 12 17:47:31.925422 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 12 17:47:31.925429 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 17:47:31.925436 kernel: NET: Registered PF_XDP protocol family
Sep 12 17:47:31.925443 kernel: PCI: CLS 0 bytes, default 64
Sep 12 17:47:31.925450 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Sep 12 17:47:31.925456 kernel: software IO TLB: mapped [mem 0x000000003a9da000-0x000000003e9da000] (64MB)
Sep 12 17:47:31.925463 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer
Sep 12 17:47:31.925471 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules
Sep 12 17:47:31.925478 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns
Sep 12 17:47:31.925486 kernel: clocksource: Switched to clocksource tsc
Sep 12 17:47:31.925493 kernel: Initialise system trusted keyrings
Sep 12 17:47:31.925500 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Sep 12 17:47:31.925507 kernel: Key type asymmetric registered
Sep 12 17:47:31.925514 kernel: Asymmetric key parser 'x509' registered
Sep 12 17:47:31.925521 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 12 17:47:31.925528 kernel: io scheduler mq-deadline registered
Sep 12 17:47:31.925534 kernel: io scheduler kyber registered
Sep 12 17:47:31.925543 kernel: io scheduler bfq registered
Sep 12 17:47:31.925550 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 12 17:47:31.925558 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 17:47:31.925565 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 12 17:47:31.925573 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Sep 12 17:47:31.925580 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A
Sep 12 17:47:31.925587 kernel: i8042: PNP: No PS/2 controller found.
Sep 12 17:47:31.925701 kernel: rtc_cmos 00:02: registered as rtc0
Sep 12 17:47:31.925767 kernel: rtc_cmos 00:02: setting system clock to 2025-09-12T17:47:31 UTC (1757699251)
Sep 12 17:47:31.925826 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Sep 12 17:47:31.925835 kernel: intel_pstate: Intel P-state driver initializing
Sep 12 17:47:31.925843 kernel: efifb: probing for efifb
Sep 12 17:47:31.925850 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Sep 12 17:47:31.925857 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Sep 12 17:47:31.925865 kernel: efifb: scrolling: redraw
Sep 12 17:47:31.925872 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 12 17:47:31.925881 kernel: Console: switching to colour frame buffer device 128x48
Sep 12 17:47:31.925888 kernel: fb0: EFI VGA frame buffer device
Sep 12 17:47:31.925895 kernel: pstore: Using crash dump compression: deflate
Sep 12 17:47:31.925902 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 12 17:47:31.925910 kernel: NET: Registered PF_INET6 protocol family
Sep 12 17:47:31.925917 kernel: Segment Routing with IPv6
Sep 12 17:47:31.925924 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 17:47:31.925931 kernel: NET: Registered PF_PACKET protocol family
Sep 12 17:47:31.925938 kernel: Key type dns_resolver registered
Sep 12 17:47:31.925945 kernel: IPI shorthand broadcast: enabled
Sep 12 17:47:31.925953 kernel: sched_clock: Marking stable (2749003498, 86660488)->(3130826692, -295162706)
Sep 12 17:47:31.925961 kernel: registered taskstats version 1
Sep 12 17:47:31.925968 kernel: Loading compiled-in X.509 certificates
Sep 12 17:47:31.925975 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: f1ae8d6e9bfae84d90f4136cf098b0465b2a5bd7'
Sep 12 17:47:31.925983 kernel: Demotion targets for Node 0: null
Sep 12 17:47:31.925990 kernel: Key type .fscrypt registered
Sep 12 17:47:31.925997 kernel: Key type fscrypt-provisioning registered
Sep 12 17:47:31.926004 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 17:47:31.926013 kernel: ima: Allocated hash algorithm: sha1
Sep 12 17:47:31.926020 kernel: ima: No architecture policies found
Sep 12 17:47:31.926027 kernel: clk: Disabling unused clocks
Sep 12 17:47:31.926034 kernel: Warning: unable to open an initial console.
Sep 12 17:47:31.926041 kernel: Freeing unused kernel image (initmem) memory: 54040K
Sep 12 17:47:31.926048 kernel: Write protecting the kernel read-only data: 24576k
Sep 12 17:47:31.926056 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K
Sep 12 17:47:31.926063 kernel: Run /init as init process
Sep 12 17:47:31.926070 kernel: with arguments:
Sep 12 17:47:31.926078 kernel: /init
Sep 12 17:47:31.926085 kernel: with environment:
Sep 12 17:47:31.926092 kernel: HOME=/
Sep 12 17:47:31.926099 kernel: TERM=linux
Sep 12 17:47:31.926106 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 17:47:31.926114 systemd[1]: Successfully made /usr/ read-only.
Sep 12 17:47:31.926124 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 17:47:31.926132 systemd[1]: Detected virtualization microsoft.
Sep 12 17:47:31.926141 systemd[1]: Detected architecture x86-64.
Sep 12 17:47:31.926149 systemd[1]: Running in initrd.
Sep 12 17:47:31.926155 systemd[1]: No hostname configured, using default hostname.
Sep 12 17:47:31.926163 systemd[1]: Hostname set to .
Sep 12 17:47:31.926171 systemd[1]: Initializing machine ID from random generator.
Sep 12 17:47:31.926178 systemd[1]: Queued start job for default target initrd.target.
Sep 12 17:47:31.926197 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:47:31.926206 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:47:31.926216 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 17:47:31.926224 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:47:31.926232 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 17:47:31.926241 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 17:47:31.926250 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 17:47:31.926258 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 17:47:31.926265 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:47:31.926274 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:47:31.926282 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:47:31.926289 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:47:31.926297 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:47:31.926305 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:47:31.926313 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:47:31.926321 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:47:31.926328 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 17:47:31.926336 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 12 17:47:31.926345 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:47:31.926353 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:47:31.926361 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:47:31.926368 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:47:31.926376 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 17:47:31.926384 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:47:31.926391 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 17:47:31.926399 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 12 17:47:31.926409 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 17:47:31.926417 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:47:31.926431 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:47:31.926455 systemd-journald[205]: Collecting audit messages is disabled.
Sep 12 17:47:31.926476 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:47:31.926485 systemd-journald[205]: Journal started
Sep 12 17:47:31.926506 systemd-journald[205]: Runtime Journal (/run/log/journal/c21a6b816e5c4211beeb3fc2020a64bc) is 8M, max 158.9M, 150.9M free.
Sep 12 17:47:31.934203 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:47:31.935335 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 17:47:31.937290 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:47:31.942967 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 17:47:31.947782 systemd-modules-load[207]: Inserted module 'overlay'
Sep 12 17:47:31.949275 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 17:47:31.958288 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:47:31.970053 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:47:31.982606 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 17:47:31.980295 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:47:31.982037 systemd-tmpfiles[219]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 12 17:47:31.992312 kernel: Bridge firewalling registered
Sep 12 17:47:31.986995 systemd-modules-load[207]: Inserted module 'br_netfilter'
Sep 12 17:47:31.989121 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:47:31.992294 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:47:31.992753 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:47:31.995293 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:47:31.997291 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:47:32.009843 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:47:32.019216 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:47:32.019756 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:47:32.020528 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 17:47:32.033291 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:47:32.045889 dracut-cmdline[244]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=271a44cc8ea1639cfb6fdf777202a5f025fda0b3ce9b293cc4e0e7047aecb858
Sep 12 17:47:32.080581 systemd-resolved[245]: Positive Trust Anchors:
Sep 12 17:47:32.080592 systemd-resolved[245]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:47:32.080623 systemd-resolved[245]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:47:32.097840 systemd-resolved[245]: Defaulting to hostname 'linux'.
Sep 12 17:47:32.100312 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:47:32.103308 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:47:32.120207 kernel: SCSI subsystem initialized
Sep 12 17:47:32.127204 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 17:47:32.134204 kernel: iscsi: registered transport (tcp)
Sep 12 17:47:32.149486 kernel: iscsi: registered transport (qla4xxx)
Sep 12 17:47:32.149519 kernel: QLogic iSCSI HBA Driver
Sep 12 17:47:32.160624 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 17:47:32.173014 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 17:47:32.173720 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 17:47:32.202887 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:47:32.206309 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 17:47:32.244205 kernel: raid6: avx512x4 gen() 45791 MB/s Sep 12 17:47:32.261199 kernel: raid6: avx512x2 gen() 45909 MB/s Sep 12 17:47:32.278201 kernel: raid6: avx512x1 gen() 30096 MB/s Sep 12 17:47:32.296198 kernel: raid6: avx2x4 gen() 42934 MB/s Sep 12 17:47:32.313197 kernel: raid6: avx2x2 gen() 43895 MB/s Sep 12 17:47:32.330687 kernel: raid6: avx2x1 gen() 33172 MB/s Sep 12 17:47:32.330701 kernel: raid6: using algorithm avx512x2 gen() 45909 MB/s Sep 12 17:47:32.349514 kernel: raid6: .... xor() 37601 MB/s, rmw enabled Sep 12 17:47:32.349538 kernel: raid6: using avx512x2 recovery algorithm Sep 12 17:47:32.365204 kernel: xor: automatically using best checksumming function avx Sep 12 17:47:32.468203 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 17:47:32.472049 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:47:32.475299 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:47:32.488832 systemd-udevd[454]: Using default interface naming scheme 'v255'. Sep 12 17:47:32.492335 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:47:32.498621 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 17:47:32.515315 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation Sep 12 17:47:32.531067 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:47:32.533285 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:47:32.558925 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:47:32.565127 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 12 17:47:32.601202 kernel: cryptd: max_cpu_qlen set to 1000 Sep 12 17:47:32.609207 kernel: AES CTR mode by8 optimization enabled Sep 12 17:47:32.624289 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Sep 12 17:47:32.624420 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:47:32.631836 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:47:32.646394 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:47:32.653378 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:47:32.653446 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:47:32.658204 kernel: hv_vmbus: Vmbus version:5.3 Sep 12 17:47:32.661056 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:47:32.670715 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 12 17:47:32.670754 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 12 17:47:32.672553 kernel: hv_vmbus: registering driver hyperv_keyboard Sep 12 17:47:32.686222 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Sep 12 17:47:32.688613 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:47:32.691216 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 12 17:47:32.696069 kernel: PTP clock support registered Sep 12 17:47:32.696098 kernel: hv_vmbus: registering driver hv_pci Sep 12 17:47:32.701217 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Sep 12 17:47:32.705989 kernel: hv_utils: Registering HyperV Utility Driver Sep 12 17:47:32.706020 kernel: hv_vmbus: registering driver hv_utils Sep 12 17:47:32.710971 kernel: hv_utils: Shutdown IC version 3.2 Sep 12 17:47:32.710998 kernel: hv_utils: Heartbeat IC version 3.0 Sep 12 17:47:32.711012 kernel: hv_utils: TimeSync IC version 4.0 Sep 12 17:47:32.917330 systemd-resolved[245]: Clock change detected. Flushing caches. 
Sep 12 17:47:32.920393 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Sep 12 17:47:32.923408 kernel: hv_vmbus: registering driver hid_hyperv Sep 12 17:47:32.923441 kernel: hv_vmbus: registering driver hv_storvsc Sep 12 17:47:32.923454 kernel: hv_vmbus: registering driver hv_netvsc Sep 12 17:47:32.923464 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Sep 12 17:47:32.927739 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Sep 12 17:47:32.927887 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Sep 12 17:47:32.931386 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Sep 12 17:47:32.934211 kernel: scsi host0: storvsc_host_t Sep 12 17:47:32.935341 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Sep 12 17:47:32.940083 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Sep 12 17:47:32.940211 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Sep 12 17:47:32.941867 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d2dae44 (unnamed net_device) (uninitialized): VF slot 1 added Sep 12 17:47:32.961529 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Sep 12 17:47:32.961700 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 12 17:47:32.961711 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Sep 12 17:47:32.965197 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Sep 12 17:47:32.965731 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Sep 12 17:47:32.977647 kernel: nvme nvme0: pci function c05b:00:00.0 Sep 12 17:47:32.980383 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Sep 12 17:47:32.982890 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#205 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Sep 12 17:47:32.998647 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#214 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Sep 12 17:47:33.125765 kernel: nvme nvme0: 2/0/0 default/read/poll queues Sep 12 17:47:33.130650 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 12 17:47:33.387269 kernel: nvme nvme0: using unchecked data buffer Sep 12 17:47:33.581666 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Sep 12 17:47:33.595852 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Sep 12 17:47:33.604992 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A. Sep 12 17:47:33.609890 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Sep 12 17:47:33.618299 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Sep 12 17:47:33.632429 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 17:47:33.638824 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 17:47:33.641688 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:47:33.647713 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:47:33.655092 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 12 17:47:33.650688 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:47:33.657810 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 17:47:33.665659 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 12 17:47:33.673691 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Sep 12 17:47:33.969301 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Sep 12 17:47:33.969436 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Sep 12 17:47:33.971290 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Sep 12 17:47:33.973000 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Sep 12 17:47:33.976720 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Sep 12 17:47:33.980753 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Sep 12 17:47:33.984668 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Sep 12 17:47:33.986898 kernel: pci 7870:00:00.0: enabling Extended Tags Sep 12 17:47:34.000204 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Sep 12 17:47:34.000374 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Sep 12 17:47:34.005121 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Sep 12 17:47:34.008301 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Sep 12 17:47:34.017649 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Sep 12 17:47:34.020028 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d2dae44 eth0: VF registering: eth1 Sep 12 17:47:34.020178 kernel: mana 7870:00:00.0 eth1: joined to eth0 Sep 12 17:47:34.023649 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Sep 12 17:47:34.658684 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 12 17:47:34.658950 disk-uuid[671]: The operation has completed successfully. Sep 12 17:47:34.704032 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 17:47:34.704103 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 17:47:34.732402 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Sep 12 17:47:34.748372 sh[715]: Success Sep 12 17:47:34.776844 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 12 17:47:34.776890 kernel: device-mapper: uevent: version 1.0.3 Sep 12 17:47:34.777959 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 12 17:47:34.786653 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 12 17:47:35.003492 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 17:47:35.010126 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 17:47:35.027022 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 12 17:47:35.037840 kernel: BTRFS: device fsid 74707491-1b86-4926-8bdb-c533ce2a0c32 devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (728) Sep 12 17:47:35.037868 kernel: BTRFS info (device dm-0): first mount of filesystem 74707491-1b86-4926-8bdb-c533ce2a0c32 Sep 12 17:47:35.038665 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:47:35.282758 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 12 17:47:35.282796 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 17:47:35.284026 kernel: BTRFS info (device dm-0): enabling free space tree Sep 12 17:47:35.324444 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 17:47:35.325061 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 12 17:47:35.330539 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 17:47:35.333695 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 17:47:35.337848 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Sep 12 17:47:35.359594 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (751) Sep 12 17:47:35.359647 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761 Sep 12 17:47:35.362664 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:47:35.380785 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 12 17:47:35.380836 kernel: BTRFS info (device nvme0n1p6): turning on async discard Sep 12 17:47:35.381837 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 12 17:47:35.386652 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761 Sep 12 17:47:35.386690 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 17:47:35.390747 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 12 17:47:35.419244 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:47:35.423780 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:47:35.445919 systemd-networkd[897]: lo: Link UP Sep 12 17:47:35.451254 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Sep 12 17:47:35.445925 systemd-networkd[897]: lo: Gained carrier Sep 12 17:47:35.456711 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Sep 12 17:47:35.456890 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d2dae44 eth0: Data path switched to VF: enP30832s1 Sep 12 17:47:35.447322 systemd-networkd[897]: Enumeration completed Sep 12 17:47:35.447579 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:47:35.447664 systemd-networkd[897]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 12 17:47:35.447667 systemd-networkd[897]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:47:35.450938 systemd[1]: Reached target network.target - Network. Sep 12 17:47:35.457073 systemd-networkd[897]: enP30832s1: Link UP Sep 12 17:47:35.457131 systemd-networkd[897]: eth0: Link UP Sep 12 17:47:35.457258 systemd-networkd[897]: eth0: Gained carrier Sep 12 17:47:35.457268 systemd-networkd[897]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:47:35.459929 systemd-networkd[897]: enP30832s1: Gained carrier Sep 12 17:47:35.466662 systemd-networkd[897]: eth0: DHCPv4 address 10.200.8.42/24, gateway 10.200.8.1 acquired from 168.63.129.16 Sep 12 17:47:36.406623 ignition[840]: Ignition 2.21.0 Sep 12 17:47:36.406658 ignition[840]: Stage: fetch-offline Sep 12 17:47:36.406729 ignition[840]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:47:36.406736 ignition[840]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 17:47:36.409440 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:47:36.408295 ignition[840]: parsed url from cmdline: "" Sep 12 17:47:36.411750 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Sep 12 17:47:36.408300 ignition[840]: no config URL provided Sep 12 17:47:36.408306 ignition[840]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 17:47:36.408315 ignition[840]: no config at "/usr/lib/ignition/user.ign" Sep 12 17:47:36.408320 ignition[840]: failed to fetch config: resource requires networking Sep 12 17:47:36.408488 ignition[840]: Ignition finished successfully Sep 12 17:47:36.432345 ignition[908]: Ignition 2.21.0 Sep 12 17:47:36.432354 ignition[908]: Stage: fetch Sep 12 17:47:36.432508 ignition[908]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:47:36.432515 ignition[908]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 17:47:36.434109 ignition[908]: parsed url from cmdline: "" Sep 12 17:47:36.434113 ignition[908]: no config URL provided Sep 12 17:47:36.434117 ignition[908]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 17:47:36.434124 ignition[908]: no config at "/usr/lib/ignition/user.ign" Sep 12 17:47:36.434152 ignition[908]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Sep 12 17:47:36.494242 ignition[908]: GET result: OK Sep 12 17:47:36.494294 ignition[908]: config has been read from IMDS userdata Sep 12 17:47:36.494318 ignition[908]: parsing config with SHA512: 9087ab24e8b191b36365f192f5664d95b484e640c04480c718e624d1d5f2239c713a5e13f88e780af3b33e9902e996954bef601b0c135698a1805bada76285ed Sep 12 17:47:36.499664 unknown[908]: fetched base config from "system" Sep 12 17:47:36.499673 unknown[908]: fetched base config from "system" Sep 12 17:47:36.499955 ignition[908]: fetch: fetch complete Sep 12 17:47:36.499676 unknown[908]: fetched user config from "azure" Sep 12 17:47:36.499959 ignition[908]: fetch: fetch passed Sep 12 17:47:36.501989 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). 
Sep 12 17:47:36.499988 ignition[908]: Ignition finished successfully Sep 12 17:47:36.504722 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 12 17:47:36.524491 ignition[914]: Ignition 2.21.0 Sep 12 17:47:36.524501 ignition[914]: Stage: kargs Sep 12 17:47:36.524654 ignition[914]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:47:36.524661 ignition[914]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 17:47:36.529303 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 17:47:36.525868 ignition[914]: kargs: kargs passed Sep 12 17:47:36.534073 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 12 17:47:36.525905 ignition[914]: Ignition finished successfully Sep 12 17:47:36.547647 ignition[921]: Ignition 2.21.0 Sep 12 17:47:36.547656 ignition[921]: Stage: disks Sep 12 17:47:36.547802 ignition[921]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:47:36.550071 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 17:47:36.547808 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 17:47:36.555945 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 17:47:36.548401 ignition[921]: disks: disks passed Sep 12 17:47:36.558109 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 17:47:36.548428 ignition[921]: Ignition finished successfully Sep 12 17:47:36.560675 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:47:36.564668 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:47:36.568663 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:47:36.572729 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Sep 12 17:47:36.631013 systemd-fsck[930]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Sep 12 17:47:36.634284 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 17:47:36.638922 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 17:47:36.813740 systemd-networkd[897]: eth0: Gained IPv6LL Sep 12 17:47:36.884585 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 26739aba-b0be-4ce3-bfbd-ca4dbcbe2426 r/w with ordered data mode. Quota mode: none. Sep 12 17:47:36.883589 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 17:47:36.885954 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 17:47:36.904829 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:47:36.918706 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 17:47:36.922215 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 12 17:47:36.928308 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 17:47:36.936088 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (939) Sep 12 17:47:36.936116 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761 Sep 12 17:47:36.928334 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:47:36.941287 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:47:36.933098 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 17:47:36.943290 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Sep 12 17:47:36.948917 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 12 17:47:36.948951 kernel: BTRFS info (device nvme0n1p6): turning on async discard Sep 12 17:47:36.948962 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 12 17:47:36.954419 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 17:47:37.305046 coreos-metadata[941]: Sep 12 17:47:37.304 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 12 17:47:37.307776 coreos-metadata[941]: Sep 12 17:47:37.307 INFO Fetch successful Sep 12 17:47:37.310687 coreos-metadata[941]: Sep 12 17:47:37.308 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Sep 12 17:47:37.315015 coreos-metadata[941]: Sep 12 17:47:37.314 INFO Fetch successful Sep 12 17:47:37.348461 coreos-metadata[941]: Sep 12 17:47:37.348 INFO wrote hostname ci-4426.1.0-a-49404e8b93 to /sysroot/etc/hostname Sep 12 17:47:37.349967 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 12 17:47:37.469697 initrd-setup-root[969]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 17:47:37.503716 initrd-setup-root[976]: cut: /sysroot/etc/group: No such file or directory Sep 12 17:47:37.508338 initrd-setup-root[983]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 17:47:37.527505 initrd-setup-root[990]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 17:47:38.343191 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 17:47:38.346770 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 17:47:38.353200 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 17:47:38.366459 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Sep 12 17:47:38.368167 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761 Sep 12 17:47:38.388159 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 17:47:38.390253 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 17:47:38.398729 ignition[1058]: INFO : Ignition 2.21.0 Sep 12 17:47:38.398729 ignition[1058]: INFO : Stage: mount Sep 12 17:47:38.398729 ignition[1058]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:47:38.398729 ignition[1058]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 17:47:38.398729 ignition[1058]: INFO : mount: mount passed Sep 12 17:47:38.398729 ignition[1058]: INFO : Ignition finished successfully Sep 12 17:47:38.392700 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 17:47:38.402929 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:47:38.423934 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1070) Sep 12 17:47:38.423960 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761 Sep 12 17:47:38.423972 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:47:38.423984 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 12 17:47:38.425128 kernel: BTRFS info (device nvme0n1p6): turning on async discard Sep 12 17:47:38.426475 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 12 17:47:38.428127 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 12 17:47:38.453768 ignition[1086]: INFO : Ignition 2.21.0 Sep 12 17:47:38.453768 ignition[1086]: INFO : Stage: files Sep 12 17:47:38.456160 ignition[1086]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:47:38.456160 ignition[1086]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 17:47:38.460803 ignition[1086]: DEBUG : files: compiled without relabeling support, skipping Sep 12 17:47:38.460803 ignition[1086]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 17:47:38.460803 ignition[1086]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 17:47:38.505031 ignition[1086]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 17:47:38.507155 ignition[1086]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 17:47:38.509705 ignition[1086]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 17:47:38.508738 unknown[1086]: wrote ssh authorized keys file for user: core Sep 12 17:47:38.524292 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 12 17:47:38.526653 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 12 17:47:38.829615 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 12 17:47:39.104925 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 12 17:47:39.104925 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 12 17:47:39.111991 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Sep 12 17:47:39.111991 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:47:39.111991 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:47:39.111991 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:47:39.111991 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:47:39.111991 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:47:39.111991 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:47:39.111991 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:47:39.111991 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:47:39.111991 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 17:47:39.143663 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 17:47:39.143663 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 17:47:39.143663 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 12 17:47:39.726197 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 12 17:47:40.325547 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 17:47:40.325547 ignition[1086]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 12 17:47:40.354685 ignition[1086]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:47:40.360507 ignition[1086]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:47:40.360507 ignition[1086]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 12 17:47:40.371753 ignition[1086]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Sep 12 17:47:40.371753 ignition[1086]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 17:47:40.371753 ignition[1086]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:47:40.371753 ignition[1086]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:47:40.371753 ignition[1086]: INFO : files: files passed Sep 12 17:47:40.371753 ignition[1086]: INFO : Ignition finished successfully Sep 12 17:47:40.364300 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 17:47:40.370393 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 17:47:40.383743 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Sep 12 17:47:40.401285 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 17:47:40.401366 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 17:47:40.418148 initrd-setup-root-after-ignition[1117]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:47:40.418148 initrd-setup-root-after-ignition[1117]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:47:40.424007 initrd-setup-root-after-ignition[1121]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:47:40.423060 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:47:40.428811 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 17:47:40.432725 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 17:47:40.466016 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 17:47:40.466112 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 17:47:40.471819 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 17:47:40.472416 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 17:47:40.472473 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 17:47:40.473031 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 17:47:40.489610 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:47:40.493115 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 17:47:40.505677 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:47:40.506151 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:47:40.513241 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 17:47:40.515878 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 17:47:40.516416 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:47:40.522762 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 17:47:40.525750 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 17:47:40.527915 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 17:47:40.531778 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:47:40.534259 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 17:47:40.536938 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 12 17:47:40.538170 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 17:47:40.538435 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:47:40.538732 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 17:47:40.539260 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 17:47:40.539779 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 17:47:40.540023 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 17:47:40.540129 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:47:40.546674 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:47:40.549604 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:47:40.552509 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 17:47:40.552964 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:47:40.556748 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 17:47:40.556852 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:47:40.560959 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 17:47:40.561059 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:47:40.565786 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 17:47:40.565882 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 17:47:40.567907 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 12 17:47:40.568000 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 12 17:47:40.572800 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 17:47:40.576693 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 17:47:40.612040 ignition[1141]: INFO : Ignition 2.21.0
Sep 12 17:47:40.612040 ignition[1141]: INFO : Stage: umount
Sep 12 17:47:40.612040 ignition[1141]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:47:40.612040 ignition[1141]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 17:47:40.612040 ignition[1141]: INFO : umount: umount passed
Sep 12 17:47:40.612040 ignition[1141]: INFO : Ignition finished successfully
Sep 12 17:47:40.576839 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:47:40.594821 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 17:47:40.600356 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 17:47:40.601433 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:47:40.610042 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 17:47:40.610146 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:47:40.613429 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 17:47:40.613497 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 17:47:40.617174 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 17:47:40.617244 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 17:47:40.622994 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 17:47:40.623039 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 17:47:40.631345 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 17:47:40.631385 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 17:47:40.633716 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 12 17:47:40.633746 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 12 17:47:40.637494 systemd[1]: Stopped target network.target - Network.
Sep 12 17:47:40.639492 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 17:47:40.639531 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:47:40.640757 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 17:47:40.641733 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 17:47:40.644662 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:47:40.646697 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 17:47:40.649300 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 17:47:40.654794 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 17:47:40.654864 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:47:40.656948 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 17:47:40.656985 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:47:40.662819 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 17:47:40.663717 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 17:47:40.666862 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 17:47:40.666897 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 17:47:40.671239 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 17:47:40.674323 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 17:47:40.678209 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 17:47:40.678277 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 17:47:40.690513 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 17:47:40.690571 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 12 17:47:40.690921 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 17:47:40.690989 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 17:47:40.702363 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 12 17:47:40.762217 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d2dae44 eth0: Data path switched from VF: enP30832s1
Sep 12 17:47:40.762363 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Sep 12 17:47:40.702951 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 12 17:47:40.706709 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 17:47:40.706740 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:47:40.712091 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 17:47:40.717695 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 17:47:40.717743 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:47:40.722828 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 17:47:40.724815 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:47:40.728989 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 17:47:40.729016 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:47:40.734060 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 17:47:40.734091 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:47:40.738301 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:47:40.744859 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 12 17:47:40.744912 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 12 17:47:40.754162 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 17:47:40.754286 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:47:40.760891 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 17:47:40.760969 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:47:40.765848 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 17:47:40.765871 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:47:40.770358 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 17:47:40.771504 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:47:40.795841 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 17:47:40.795887 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:47:40.798512 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:47:40.798552 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:47:40.802349 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 17:47:40.802709 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 12 17:47:40.802752 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 17:47:40.804665 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 17:47:40.804709 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:47:40.807158 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:47:40.807196 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:47:40.809440 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 12 17:47:40.809486 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 12 17:47:40.809515 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 12 17:47:40.809757 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 17:47:40.809859 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 17:47:40.830827 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 17:47:40.837994 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 17:47:40.841897 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 17:47:40.841958 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 17:47:40.845006 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 17:47:40.848688 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 17:47:40.848741 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 17:47:40.854069 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 17:47:40.875074 systemd[1]: Switching root.
Sep 12 17:47:40.936380 systemd-journald[205]: Journal stopped
Sep 12 17:47:44.424223 systemd-journald[205]: Received SIGTERM from PID 1 (systemd).
Sep 12 17:47:44.424256 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 17:47:44.424272 kernel: SELinux: policy capability open_perms=1
Sep 12 17:47:44.424281 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 17:47:44.424289 kernel: SELinux: policy capability always_check_network=0
Sep 12 17:47:44.424297 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 17:47:44.424306 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 17:47:44.424316 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 17:47:44.424326 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 17:47:44.424336 kernel: SELinux: policy capability userspace_initial_context=0
Sep 12 17:47:44.424343 kernel: audit: type=1403 audit(1757699262.225:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 17:47:44.424356 systemd[1]: Successfully loaded SELinux policy in 155.851ms.
Sep 12 17:47:44.424366 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 3.377ms.
Sep 12 17:47:44.424379 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 17:47:44.424397 systemd[1]: Detected virtualization microsoft.
Sep 12 17:47:44.424409 systemd[1]: Detected architecture x86-64.
Sep 12 17:47:44.424421 systemd[1]: Detected first boot.
Sep 12 17:47:44.424433 systemd[1]: Hostname set to .
Sep 12 17:47:44.424448 systemd[1]: Initializing machine ID from random generator.
Sep 12 17:47:44.424459 zram_generator::config[1186]: No configuration found.
Sep 12 17:47:44.424471 kernel: Guest personality initialized and is inactive
Sep 12 17:47:44.424480 kernel: VMCI host device registered (name=vmci, major=10, minor=124)
Sep 12 17:47:44.424491 kernel: Initialized host personality
Sep 12 17:47:44.424504 kernel: NET: Registered PF_VSOCK protocol family
Sep 12 17:47:44.424515 systemd[1]: Populated /etc with preset unit settings.
Sep 12 17:47:44.424529 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 12 17:47:44.424541 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 12 17:47:44.424552 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 12 17:47:44.424560 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 12 17:47:44.424570 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 17:47:44.424580 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 17:47:44.424589 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 17:47:44.424597 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 17:47:44.424606 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 17:47:44.424614 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 17:47:44.424622 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 17:47:44.424639 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 17:47:44.424647 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:47:44.424655 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:47:44.424664 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 17:47:44.424673 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 17:47:44.424682 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 17:47:44.424691 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:47:44.424699 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 12 17:47:44.424708 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:47:44.424716 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:47:44.424724 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 12 17:47:44.424732 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 12 17:47:44.424742 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:47:44.424751 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 17:47:44.424759 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:47:44.424768 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:47:44.424776 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:47:44.424784 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:47:44.424793 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 17:47:44.424801 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 17:47:44.424811 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 12 17:47:44.424820 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:47:44.424828 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:47:44.424837 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:47:44.424846 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 17:47:44.424855 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 17:47:44.424864 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 17:47:44.424872 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 17:47:44.424881 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:47:44.424890 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 17:47:44.424899 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 17:47:44.424907 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 17:47:44.424917 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 17:47:44.424925 systemd[1]: Reached target machines.target - Containers.
Sep 12 17:47:44.424935 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 17:47:44.424943 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:47:44.424952 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:47:44.424960 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 17:47:44.424968 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:47:44.424977 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:47:44.424985 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:47:44.424994 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 17:47:44.425004 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:47:44.425012 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 17:47:44.425021 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 12 17:47:44.425029 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 12 17:47:44.425038 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 12 17:47:44.425046 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 12 17:47:44.425055 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 17:47:44.425064 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:47:44.425075 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:47:44.425083 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 17:47:44.425092 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 17:47:44.425101 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 12 17:47:44.425109 kernel: fuse: init (API version 7.41)
Sep 12 17:47:44.425117 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:47:44.425126 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 12 17:47:44.425134 systemd[1]: Stopped verity-setup.service.
Sep 12 17:47:44.425143 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:47:44.425153 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 17:47:44.425161 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 17:47:44.425169 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 17:47:44.425177 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 17:47:44.425186 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 17:47:44.425194 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 17:47:44.425203 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:47:44.425211 kernel: loop: module loaded
Sep 12 17:47:44.425220 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 17:47:44.425228 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 17:47:44.425237 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:47:44.425245 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:47:44.425254 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:47:44.425263 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:47:44.425287 systemd-journald[1279]: Collecting audit messages is disabled.
Sep 12 17:47:44.425310 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 17:47:44.425319 systemd-journald[1279]: Journal started
Sep 12 17:47:44.425338 systemd-journald[1279]: Runtime Journal (/run/log/journal/5f58610be7fe406387e65a39caf9bffb) is 8M, max 158.9M, 150.9M free.
Sep 12 17:47:44.056782 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 17:47:44.067002 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Sep 12 17:47:44.067306 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 12 17:47:44.430176 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:47:44.431831 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 17:47:44.432032 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 17:47:44.434532 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:47:44.434729 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:47:44.437253 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:47:44.440022 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 17:47:44.450133 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 12 17:47:44.454069 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 12 17:47:44.472746 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 17:47:44.477561 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 17:47:44.481262 kernel: ACPI: bus type drm_connector registered
Sep 12 17:47:44.483714 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 17:47:44.485352 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 17:47:44.485380 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:47:44.487953 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 12 17:47:44.490603 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 12 17:47:44.492336 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:47:44.494063 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 12 17:47:44.507112 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 12 17:47:44.508939 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:47:44.511080 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 12 17:47:44.512799 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:47:44.513485 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:47:44.516380 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 12 17:47:44.521361 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 12 17:47:44.524717 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:47:44.524963 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:47:44.527522 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:47:44.530927 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 12 17:47:44.533924 systemd-journald[1279]: Time spent on flushing to /var/log/journal/5f58610be7fe406387e65a39caf9bffb is 14.988ms for 991 entries.
Sep 12 17:47:44.533924 systemd-journald[1279]: System Journal (/var/log/journal/5f58610be7fe406387e65a39caf9bffb) is 8M, max 2.6G, 2.6G free.
Sep 12 17:47:44.586962 systemd-journald[1279]: Received client request to flush runtime journal.
Sep 12 17:47:44.587003 kernel: loop0: detected capacity change from 0 to 221472
Sep 12 17:47:44.587017 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 12 17:47:44.537109 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 12 17:47:44.544867 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 12 17:47:44.549852 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 12 17:47:44.552478 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 12 17:47:44.588404 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 12 17:47:44.606501 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 12 17:47:44.621188 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:47:44.626726 kernel: loop1: detected capacity change from 0 to 111000
Sep 12 17:47:44.645757 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 12 17:47:44.649179 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:47:44.676596 systemd-tmpfiles[1344]: ACLs are not supported, ignoring.
Sep 12 17:47:44.676609 systemd-tmpfiles[1344]: ACLs are not supported, ignoring.
Sep 12 17:47:44.678709 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:47:44.948652 kernel: loop2: detected capacity change from 0 to 29272
Sep 12 17:47:45.068418 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 12 17:47:45.348646 kernel: loop3: detected capacity change from 0 to 128016
Sep 12 17:47:45.384484 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 12 17:47:45.386876 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:47:45.410770 systemd-udevd[1350]: Using default interface naming scheme 'v255'.
Sep 12 17:47:45.534852 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:47:45.541268 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:47:45.581098 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 12 17:47:45.613735 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 12 17:47:45.647645 kernel: loop4: detected capacity change from 0 to 221472
Sep 12 17:47:45.664659 kernel: loop5: detected capacity change from 0 to 111000
Sep 12 17:47:45.674649 kernel: loop6: detected capacity change from 0 to 29272
Sep 12 17:47:45.684675 kernel: loop7: detected capacity change from 0 to 128016
Sep 12 17:47:45.691659 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#233 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Sep 12 17:47:45.701054 kernel: hv_vmbus: registering driver hyperv_fb
Sep 12 17:47:45.701070 kernel: hv_vmbus: registering driver hv_balloon
Sep 12 17:47:45.701604 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 12 17:47:45.704109 (sd-merge)[1390]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Sep 12 17:47:45.705313 (sd-merge)[1390]: Merged extensions into '/usr'.
Sep 12 17:47:45.712965 systemd[1]: Reload requested from client PID 1326 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 12 17:47:45.712981 systemd[1]: Reloading...
Sep 12 17:47:45.716651 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Sep 12 17:47:45.720115 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Sep 12 17:47:45.720155 kernel: mousedev: PS/2 mouse device common for all mice
Sep 12 17:47:45.720169 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Sep 12 17:47:45.725512 kernel: Console: switching to colour dummy device 80x25
Sep 12 17:47:45.733656 kernel: Console: switching to colour frame buffer device 128x48
Sep 12 17:47:45.866674 zram_generator::config[1457]: No configuration found.
Sep 12 17:47:45.898671 systemd-networkd[1362]: lo: Link UP
Sep 12 17:47:45.900701 systemd-networkd[1362]: lo: Gained carrier
Sep 12 17:47:45.902140 systemd-networkd[1362]: Enumeration completed
Sep 12 17:47:45.902730 systemd-networkd[1362]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:47:45.903088 systemd-networkd[1362]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:47:45.904650 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16
Sep 12 17:47:45.909982 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Sep 12 17:47:45.910168 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d2dae44 eth0: Data path switched to VF: enP30832s1
Sep 12 17:47:45.911920 systemd-networkd[1362]: enP30832s1: Link UP
Sep 12 17:47:45.912051 systemd-networkd[1362]: eth0: Link UP
Sep 12 17:47:45.912094 systemd-networkd[1362]: eth0: Gained carrier
Sep 12 17:47:45.912135 systemd-networkd[1362]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:47:45.916838 systemd-networkd[1362]: enP30832s1: Gained carrier
Sep 12 17:47:45.922744 systemd-networkd[1362]: eth0: DHCPv4 address 10.200.8.42/24, gateway 10.200.8.1 acquired from 168.63.129.16
Sep 12 17:47:46.122150 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
Sep 12 17:47:46.123968 systemd[1]: Reloading finished in 410 ms.
Sep 12 17:47:46.146247 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:47:46.156030 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 12 17:47:46.157277 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Sep 12 17:47:46.190383 systemd[1]: Starting ensure-sysext.service...
Sep 12 17:47:46.194813 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 12 17:47:46.197965 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 12 17:47:46.201810 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 12 17:47:46.205886 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:47:46.217311 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:47:46.236729 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 12 17:47:46.242930 systemd[1]: Reload requested from client PID 1520 ('systemctl') (unit ensure-sysext.service)...
Sep 12 17:47:46.242940 systemd[1]: Reloading...
Sep 12 17:47:46.243411 systemd-tmpfiles[1524]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 12 17:47:46.243435 systemd-tmpfiles[1524]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 12 17:47:46.243653 systemd-tmpfiles[1524]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 12 17:47:46.243807 systemd-tmpfiles[1524]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 12 17:47:46.244204 systemd-tmpfiles[1524]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 12 17:47:46.244354 systemd-tmpfiles[1524]: ACLs are not supported, ignoring.
Sep 12 17:47:46.244394 systemd-tmpfiles[1524]: ACLs are not supported, ignoring.
Sep 12 17:47:46.251492 systemd-tmpfiles[1524]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:47:46.251499 systemd-tmpfiles[1524]: Skipping /boot
Sep 12 17:47:46.257165 systemd-tmpfiles[1524]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:47:46.257176 systemd-tmpfiles[1524]: Skipping /boot
Sep 12 17:47:46.294657 zram_generator::config[1562]: No configuration found.
Sep 12 17:47:46.446726 systemd[1]: Reloading finished in 203 ms.
Sep 12 17:47:46.470296 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 12 17:47:46.472088 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:47:46.477975 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 17:47:46.484709 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 12 17:47:46.486942 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 12 17:47:46.492814 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:47:46.495678 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 12 17:47:46.502325 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:47:46.502461 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:47:46.503773 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:47:46.508843 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:47:46.512854 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:47:46.513346 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:47:46.513429 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 17:47:46.513500 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:47:46.525619 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 12 17:47:46.527202 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:47:46.527325 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:47:46.527979 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:47:46.528086 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:47:46.532834 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:47:46.533398 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:47:46.535372 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:47:46.540722 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:47:46.543577 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:47:46.544093 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:47:46.544176 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 17:47:46.544300 systemd[1]: Reached target time-set.target - System Time Set.
Sep 12 17:47:46.544403 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:47:46.545207 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:47:46.545329 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:47:46.548568 systemd[1]: Finished ensure-sysext.service.
Sep 12 17:47:46.554669 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:47:46.554820 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:47:46.556626 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:47:46.571565 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:47:46.571740 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:47:46.572226 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:47:46.574915 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:47:46.575813 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:47:46.586958 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 12 17:47:46.591625 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:47:46.609367 systemd-resolved[1626]: Positive Trust Anchors:
Sep 12 17:47:46.609529 systemd-resolved[1626]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:47:46.609577 systemd-resolved[1626]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:47:46.612083 systemd-resolved[1626]: Using system hostname 'ci-4426.1.0-a-49404e8b93'.
Sep 12 17:47:46.613204 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:47:46.615753 systemd[1]: Reached target network.target - Network.
Sep 12 17:47:46.619724 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:47:46.655227 augenrules[1666]: No rules
Sep 12 17:47:46.655917 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 17:47:46.656070 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 17:47:46.865505 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 12 17:47:46.868842 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 17:47:47.629726 systemd-networkd[1362]: eth0: Gained IPv6LL
Sep 12 17:47:47.631112 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 12 17:47:47.634846 systemd[1]: Reached target network-online.target - Network is Online.
Sep 12 17:47:48.526961 ldconfig[1321]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 12 17:47:48.536649 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 12 17:47:48.539003 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 12 17:47:48.553418 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 12 17:47:48.554866 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:47:48.556192 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 12 17:47:48.559746 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 12 17:47:48.561143 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 12 17:47:48.562580 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 12 17:47:48.565715 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 12 17:47:48.568671 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 12 17:47:48.570176 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 12 17:47:48.570203 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:47:48.572671 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:47:48.575245 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 12 17:47:48.578407 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 12 17:47:48.581972 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 12 17:47:48.584785 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 12 17:47:48.586117 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 12 17:47:48.592051 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 12 17:47:48.595874 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 12 17:47:48.597608 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 12 17:47:48.601275 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:47:48.602455 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:47:48.604721 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 12 17:47:48.604744 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 12 17:47:48.606555 systemd[1]: Starting chronyd.service - NTP client/server...
Sep 12 17:47:48.608887 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 12 17:47:48.618260 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 12 17:47:48.621219 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 12 17:47:48.625526 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 12 17:47:48.630376 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 12 17:47:48.636447 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 12 17:47:48.638876 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 12 17:47:48.644081 jq[1686]: false
Sep 12 17:47:48.641207 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 12 17:47:48.643450 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio).
Sep 12 17:47:48.645789 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Sep 12 17:47:48.648038 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Sep 12 17:47:48.650827 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:47:48.658754 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 12 17:47:48.663819 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 12 17:47:48.667584 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 12 17:47:48.672135 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 12 17:47:48.673333 KVP[1690]: KVP starting; pid is:1690
Sep 12 17:47:48.678668 kernel: hv_utils: KVP IC version 4.0
Sep 12 17:47:48.680720 KVP[1690]: KVP LIC Version: 3.1
Sep 12 17:47:48.683507 extend-filesystems[1688]: Found /dev/nvme0n1p6
Sep 12 17:47:48.684552 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 12 17:47:48.693742 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 12 17:47:48.694521 google_oslogin_nss_cache[1689]: oslogin_cache_refresh[1689]: Refreshing passwd entry cache
Sep 12 17:47:48.692358 oslogin_cache_refresh[1689]: Refreshing passwd entry cache
Sep 12 17:47:48.696758 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 12 17:47:48.697120 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 12 17:47:48.702415 systemd[1]: Starting update-engine.service - Update Engine...
Sep 12 17:47:48.705192 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 12 17:47:48.711250 chronyd[1679]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG)
Sep 12 17:47:48.714321 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 12 17:47:48.715691 chronyd[1679]: Timezone right/UTC failed leap second check, ignoring
Sep 12 17:47:48.715810 chronyd[1679]: Loaded seccomp filter (level 2)
Sep 12 17:47:48.716316 systemd[1]: Started chronyd.service - NTP client/server.
Sep 12 17:47:48.719991 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 12 17:47:48.720143 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 12 17:47:48.722544 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 12 17:47:48.723937 oslogin_cache_refresh[1689]: Failure getting users, quitting
Sep 12 17:47:48.724367 google_oslogin_nss_cache[1689]: oslogin_cache_refresh[1689]: Failure getting users, quitting
Sep 12 17:47:48.724367 google_oslogin_nss_cache[1689]: oslogin_cache_refresh[1689]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 12 17:47:48.724367 google_oslogin_nss_cache[1689]: oslogin_cache_refresh[1689]: Refreshing group entry cache
Sep 12 17:47:48.723309 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 12 17:47:48.723950 oslogin_cache_refresh[1689]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 12 17:47:48.723977 oslogin_cache_refresh[1689]: Refreshing group entry cache
Sep 12 17:47:48.731700 jq[1707]: true
Sep 12 17:47:48.739852 (ntainerd)[1714]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 12 17:47:48.745211 extend-filesystems[1688]: Found /dev/nvme0n1p9
Sep 12 17:47:48.750121 extend-filesystems[1688]: Checking size of /dev/nvme0n1p9
Sep 12 17:47:48.761114 google_oslogin_nss_cache[1689]: oslogin_cache_refresh[1689]: Failure getting groups, quitting
Sep 12 17:47:48.761114 google_oslogin_nss_cache[1689]: oslogin_cache_refresh[1689]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 12 17:47:48.760306 oslogin_cache_refresh[1689]: Failure getting groups, quitting
Sep 12 17:47:48.760314 oslogin_cache_refresh[1689]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 12 17:47:48.761558 systemd[1]: motdgen.service: Deactivated successfully.
Sep 12 17:47:48.765795 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 12 17:47:48.769591 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 12 17:47:48.769759 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 12 17:47:48.771620 jq[1717]: true
Sep 12 17:47:48.782755 extend-filesystems[1688]: Old size kept for /dev/nvme0n1p9
Sep 12 17:47:48.786262 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 12 17:47:48.788322 update_engine[1704]: I20250912 17:47:48.787935 1704 main.cc:92] Flatcar Update Engine starting
Sep 12 17:47:48.786472 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 12 17:47:48.806076 systemd-logind[1701]: New seat seat0.
Sep 12 17:47:48.808410 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 12 17:47:48.815033 systemd-logind[1701]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 12 17:47:48.815148 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 12 17:47:48.819322 dbus-daemon[1682]: [system] SELinux support is enabled
Sep 12 17:47:48.819421 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 12 17:47:48.825149 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 12 17:47:48.825177 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 12 17:47:48.829736 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 12 17:47:48.829759 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 12 17:47:48.836782 tar[1711]: linux-amd64/helm
Sep 12 17:47:48.841524 systemd[1]: Started update-engine.service - Update Engine.
Sep 12 17:47:48.842257 update_engine[1704]: I20250912 17:47:48.842136 1704 update_check_scheduler.cc:74] Next update check in 8m52s
Sep 12 17:47:48.866079 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 12 17:47:48.877056 bash[1757]: Updated "/home/core/.ssh/authorized_keys"
Sep 12 17:47:48.874335 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 12 17:47:48.878711 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 12 17:47:48.932717 sshd_keygen[1706]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 12 17:47:48.938053 coreos-metadata[1681]: Sep 12 17:47:48.936 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Sep 12 17:47:48.941868 coreos-metadata[1681]: Sep 12 17:47:48.940 INFO Fetch successful
Sep 12 17:47:48.941970 coreos-metadata[1681]: Sep 12 17:47:48.941 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Sep 12 17:47:48.947867 coreos-metadata[1681]: Sep 12 17:47:48.947 INFO Fetch successful
Sep 12 17:47:48.947867 coreos-metadata[1681]: Sep 12 17:47:48.947 INFO Fetching http://168.63.129.16/machine/4f93c401-53df-4ccb-b36a-e442395813ed/5f70bfa2%2D07cb%2D400f%2Db279%2D28990bf51844.%5Fci%2D4426.1.0%2Da%2D49404e8b93?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Sep 12 17:47:48.949662 coreos-metadata[1681]: Sep 12 17:47:48.948 INFO Fetch successful
Sep 12 17:47:48.949985 coreos-metadata[1681]: Sep 12 17:47:48.949 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Sep 12 17:47:48.957324 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 12 17:47:48.962552 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 12 17:47:48.963941 coreos-metadata[1681]: Sep 12 17:47:48.961 INFO Fetch successful
Sep 12 17:47:48.965919 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Sep 12 17:47:48.997830 systemd[1]: issuegen.service: Deactivated successfully.
Sep 12 17:47:48.998165 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 12 17:47:49.007850 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 12 17:47:49.045569 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 12 17:47:49.049081 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 12 17:47:49.052295 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 12 17:47:49.056373 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Sep 12 17:47:49.062916 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 12 17:47:49.069369 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 12 17:47:49.080973 systemd[1]: Reached target getty.target - Login Prompts.
Sep 12 17:47:49.134901 locksmithd[1765]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 12 17:47:49.350960 tar[1711]: linux-amd64/LICENSE
Sep 12 17:47:49.351058 tar[1711]: linux-amd64/README.md
Sep 12 17:47:49.363942 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 12 17:47:49.476993 containerd[1714]: time="2025-09-12T17:47:49Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 12 17:47:49.478188 containerd[1714]: time="2025-09-12T17:47:49.477346189Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 12 17:47:49.486268 containerd[1714]: time="2025-09-12T17:47:49.485622232Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.282µs"
Sep 12 17:47:49.486268 containerd[1714]: time="2025-09-12T17:47:49.485931966Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 12 17:47:49.486268 containerd[1714]: time="2025-09-12T17:47:49.485953618Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 12 17:47:49.486268 containerd[1714]: time="2025-09-12T17:47:49.486060337Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 12 17:47:49.486268 containerd[1714]: time="2025-09-12T17:47:49.486071204Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 12 17:47:49.486268 containerd[1714]: time="2025-09-12T17:47:49.486089304Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 12 17:47:49.486268 containerd[1714]: time="2025-09-12T17:47:49.486129388Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 12 17:47:49.486268 containerd[1714]: time="2025-09-12T17:47:49.486142177Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 12 17:47:49.486446 containerd[1714]: time="2025-09-12T17:47:49.486311494Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 12 17:47:49.486446 containerd[1714]: time="2025-09-12T17:47:49.486342153Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 12 17:47:49.486446 containerd[1714]: time="2025-09-12T17:47:49.486351856Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 12 17:47:49.486446 containerd[1714]: time="2025-09-12T17:47:49.486359652Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 12 17:47:49.486446 containerd[1714]: time="2025-09-12T17:47:49.486414079Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 12 17:47:49.486590 containerd[1714]: time="2025-09-12T17:47:49.486568295Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 12 17:47:49.486626 containerd[1714]: time="2025-09-12T17:47:49.486595849Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 12 17:47:49.486626 containerd[1714]: time="2025-09-12T17:47:49.486605119Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 12 17:47:49.486676 containerd[1714]: time="2025-09-12T17:47:49.486625311Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 12 17:47:49.486806 containerd[1714]: time="2025-09-12T17:47:49.486792231Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 12 17:47:49.486847 containerd[1714]: time="2025-09-12T17:47:49.486836075Z" level=info msg="metadata content store policy set" policy=shared
Sep 12 17:47:49.500272 containerd[1714]: time="2025-09-12T17:47:49.499793666Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 12 17:47:49.500272 containerd[1714]: time="2025-09-12T17:47:49.499861465Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 12 17:47:49.500272 containerd[1714]: time="2025-09-12T17:47:49.499877854Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 12 17:47:49.500272 containerd[1714]: time="2025-09-12T17:47:49.499894527Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 12 17:47:49.500272 containerd[1714]: time="2025-09-12T17:47:49.499946437Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 12 17:47:49.500272 containerd[1714]: time="2025-09-12T17:47:49.499960412Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 12 17:47:49.500272 containerd[1714]: time="2025-09-12T17:47:49.499973915Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 12 17:47:49.500272 containerd[1714]: time="2025-09-12T17:47:49.499983954Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 12 17:47:49.500272 containerd[1714]: time="2025-09-12T17:47:49.499993799Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 12 17:47:49.500272 containerd[1714]: time="2025-09-12T17:47:49.500002851Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 12 17:47:49.500272 containerd[1714]: time="2025-09-12T17:47:49.500012478Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 12 17:47:49.500272 containerd[1714]: time="2025-09-12T17:47:49.500024662Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 12 17:47:49.500272 containerd[1714]: time="2025-09-12T17:47:49.500127967Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 12 17:47:49.500272 containerd[1714]: time="2025-09-12T17:47:49.500143383Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 12 17:47:49.500529 containerd[1714]: time="2025-09-12T17:47:49.500156268Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 12 17:47:49.500529 containerd[1714]: time="2025-09-12T17:47:49.500170256Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 12 17:47:49.500529 containerd[1714]: time="2025-09-12T17:47:49.500182431Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 12 17:47:49.500529 containerd[1714]: time="2025-09-12T17:47:49.500191810Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 12 17:47:49.500529 containerd[1714]: time="2025-09-12T17:47:49.500201456Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 12 17:47:49.500529 containerd[1714]: time="2025-09-12T17:47:49.500209760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 12 17:47:49.500529 containerd[1714]: time="2025-09-12T17:47:49.500221060Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 12 17:47:49.500529 containerd[1714]: time="2025-09-12T17:47:49.500230607Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 12 17:47:49.500529 containerd[1714]: time="2025-09-12T17:47:49.500239753Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 12 17:47:49.500529 containerd[1714]: time="2025-09-12T17:47:49.500315323Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 12 17:47:49.500529 containerd[1714]: time="2025-09-12T17:47:49.500326489Z" level=info msg="Start snapshots syncer"
Sep 12 17:47:49.500529 containerd[1714]: time="2025-09-12T17:47:49.500357123Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 12 17:47:49.500745 containerd[1714]: time="2025-09-12T17:47:49.500612082Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 12 17:47:49.500745 containerd[1714]: time="2025-09-12T17:47:49.500668015Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 12 17:47:49.500855 containerd[1714]: time="2025-09-12T17:47:49.500749576Z" level=info
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 12 17:47:49.500872 containerd[1714]: time="2025-09-12T17:47:49.500853338Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 12 17:47:49.500872 containerd[1714]: time="2025-09-12T17:47:49.500868890Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 12 17:47:49.500907 containerd[1714]: time="2025-09-12T17:47:49.500878771Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 12 17:47:49.500907 containerd[1714]: time="2025-09-12T17:47:49.500888571Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 12 17:47:49.500907 containerd[1714]: time="2025-09-12T17:47:49.500899453Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 12 17:47:49.500957 containerd[1714]: time="2025-09-12T17:47:49.500908635Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 12 17:47:49.500957 containerd[1714]: time="2025-09-12T17:47:49.500919143Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 12 17:47:49.500957 containerd[1714]: time="2025-09-12T17:47:49.500938371Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 12 17:47:49.500957 containerd[1714]: time="2025-09-12T17:47:49.500948111Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 12 17:47:49.501018 containerd[1714]: time="2025-09-12T17:47:49.500956957Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 12 17:47:49.501018 containerd[1714]: time="2025-09-12T17:47:49.500984102Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 17:47:49.501018 containerd[1714]: time="2025-09-12T17:47:49.500996434Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 17:47:49.501018 containerd[1714]: time="2025-09-12T17:47:49.501004111Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 17:47:49.501018 containerd[1714]: time="2025-09-12T17:47:49.501012563Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 17:47:49.501097 containerd[1714]: time="2025-09-12T17:47:49.501019407Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 12 17:47:49.501097 containerd[1714]: time="2025-09-12T17:47:49.501027856Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 12 17:47:49.501097 containerd[1714]: time="2025-09-12T17:47:49.501059963Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 12 17:47:49.501097 containerd[1714]: time="2025-09-12T17:47:49.501073285Z" level=info msg="runtime interface created" Sep 12 17:47:49.501097 containerd[1714]: time="2025-09-12T17:47:49.501078364Z" level=info msg="created NRI interface" Sep 12 17:47:49.501097 containerd[1714]: time="2025-09-12T17:47:49.501085750Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 12 17:47:49.501097 containerd[1714]: time="2025-09-12T17:47:49.501095645Z" level=info msg="Connect containerd service" Sep 12 17:47:49.501218 containerd[1714]: time="2025-09-12T17:47:49.501116558Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 17:47:49.502172 
containerd[1714]: time="2025-09-12T17:47:49.501736939Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:47:49.782335 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:47:49.790895 (kubelet)[1841]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:47:49.944727 containerd[1714]: time="2025-09-12T17:47:49.944595502Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 17:47:49.944727 containerd[1714]: time="2025-09-12T17:47:49.944658206Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 17:47:49.944727 containerd[1714]: time="2025-09-12T17:47:49.944679787Z" level=info msg="Start subscribing containerd event" Sep 12 17:47:49.944727 containerd[1714]: time="2025-09-12T17:47:49.944699971Z" level=info msg="Start recovering state" Sep 12 17:47:49.944836 containerd[1714]: time="2025-09-12T17:47:49.944767553Z" level=info msg="Start event monitor" Sep 12 17:47:49.944836 containerd[1714]: time="2025-09-12T17:47:49.944776555Z" level=info msg="Start cni network conf syncer for default" Sep 12 17:47:49.944836 containerd[1714]: time="2025-09-12T17:47:49.944784165Z" level=info msg="Start streaming server" Sep 12 17:47:49.944836 containerd[1714]: time="2025-09-12T17:47:49.944794946Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 12 17:47:49.944836 containerd[1714]: time="2025-09-12T17:47:49.944801568Z" level=info msg="runtime interface starting up..." Sep 12 17:47:49.944836 containerd[1714]: time="2025-09-12T17:47:49.944806700Z" level=info msg="starting plugins..." 
Sep 12 17:47:49.944836 containerd[1714]: time="2025-09-12T17:47:49.944816291Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 12 17:47:49.944943 containerd[1714]: time="2025-09-12T17:47:49.944895633Z" level=info msg="containerd successfully booted in 0.468963s"
Sep 12 17:47:49.945705 systemd[1]: Started containerd.service - containerd container runtime.
Sep 12 17:47:49.947796 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 12 17:47:49.950141 systemd[1]: Startup finished in 2.860s (kernel) + 10.179s (initrd) + 7.879s (userspace) = 20.919s.
Sep 12 17:47:50.228834 kubelet[1841]: E0912 17:47:50.228777 1841 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 17:47:50.230795 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 17:47:50.230992 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 17:47:50.231428 systemd[1]: kubelet.service: Consumed 792ms CPU time, 265.1M memory peak.
Sep 12 17:47:50.234305 login[1813]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Sep 12 17:47:50.240196 login[1814]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Sep 12 17:47:50.243156 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 12 17:47:50.245836 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 12 17:47:50.255031 systemd-logind[1701]: New session 2 of user core.
Sep 12 17:47:50.261674 systemd-logind[1701]: New session 1 of user core.
Sep 12 17:47:50.265332 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 12 17:47:50.267899 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 12 17:47:50.277179 (systemd)[1859]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 12 17:47:50.278500 systemd-logind[1701]: New session c1 of user core.
Sep 12 17:47:50.413053 systemd[1859]: Queued start job for default target default.target.
Sep 12 17:47:50.423261 systemd[1859]: Created slice app.slice - User Application Slice.
Sep 12 17:47:50.423286 systemd[1859]: Reached target paths.target - Paths.
Sep 12 17:47:50.423312 systemd[1859]: Reached target timers.target - Timers.
Sep 12 17:47:50.424044 systemd[1859]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 12 17:47:50.430655 systemd[1859]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 12 17:47:50.430702 systemd[1859]: Reached target sockets.target - Sockets.
Sep 12 17:47:50.430732 systemd[1859]: Reached target basic.target - Basic System.
Sep 12 17:47:50.430789 systemd[1859]: Reached target default.target - Main User Target.
Sep 12 17:47:50.430808 systemd[1859]: Startup finished in 148ms.
Sep 12 17:47:50.430810 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 12 17:47:50.431827 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 12 17:47:50.432451 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 12 17:47:50.466621 waagent[1811]: 2025-09-12T17:47:50.466567Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4
Sep 12 17:47:50.466912 waagent[1811]: 2025-09-12T17:47:50.466885Z INFO Daemon Daemon OS: flatcar 4426.1.0
Sep 12 17:47:50.467253 waagent[1811]: 2025-09-12T17:47:50.466961Z INFO Daemon Daemon Python: 3.11.13
Sep 12 17:47:50.467253 waagent[1811]: 2025-09-12T17:47:50.467142Z INFO Daemon Daemon Run daemon
Sep 12 17:47:50.467313 waagent[1811]: 2025-09-12T17:47:50.467281Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4426.1.0'
Sep 12 17:47:50.472023 waagent[1811]: 2025-09-12T17:47:50.467341Z INFO Daemon Daemon Using waagent for provisioning
Sep 12 17:47:50.472023 waagent[1811]: 2025-09-12T17:47:50.467476Z INFO Daemon Daemon Activate resource disk
Sep 12 17:47:50.472023 waagent[1811]: 2025-09-12T17:47:50.467919Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Sep 12 17:47:50.472023 waagent[1811]: 2025-09-12T17:47:50.469310Z INFO Daemon Daemon Found device: None
Sep 12 17:47:50.472023 waagent[1811]: 2025-09-12T17:47:50.469388Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Sep 12 17:47:50.472023 waagent[1811]: 2025-09-12T17:47:50.469828Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Sep 12 17:47:50.472023 waagent[1811]: 2025-09-12T17:47:50.470253Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Sep 12 17:47:50.472023 waagent[1811]: 2025-09-12T17:47:50.470531Z INFO Daemon Daemon Running default provisioning handler
Sep 12 17:47:50.483997 waagent[1811]: 2025-09-12T17:47:50.483561Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Sep 12 17:47:50.484226 waagent[1811]: 2025-09-12T17:47:50.484200Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Sep 12 17:47:50.484399 waagent[1811]: 2025-09-12T17:47:50.484381Z INFO Daemon Daemon cloud-init is enabled: False
Sep 12 17:47:50.484702 waagent[1811]: 2025-09-12T17:47:50.484686Z INFO Daemon Daemon Copying ovf-env.xml
Sep 12 17:47:50.551127 waagent[1811]: 2025-09-12T17:47:50.551087Z INFO Daemon Daemon Successfully mounted dvd
Sep 12 17:47:50.573695 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Sep 12 17:47:50.575382 waagent[1811]: 2025-09-12T17:47:50.575342Z INFO Daemon Daemon Detect protocol endpoint
Sep 12 17:47:50.576756 waagent[1811]: 2025-09-12T17:47:50.575913Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Sep 12 17:47:50.576756 waagent[1811]: 2025-09-12T17:47:50.576159Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Sep 12 17:47:50.576756 waagent[1811]: 2025-09-12T17:47:50.576398Z INFO Daemon Daemon Test for route to 168.63.129.16
Sep 12 17:47:50.576756 waagent[1811]: 2025-09-12T17:47:50.576516Z INFO Daemon Daemon Route to 168.63.129.16 exists
Sep 12 17:47:50.577076 waagent[1811]: 2025-09-12T17:47:50.577058Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Sep 12 17:47:50.591856 waagent[1811]: 2025-09-12T17:47:50.591826Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Sep 12 17:47:50.593256 waagent[1811]: 2025-09-12T17:47:50.592427Z INFO Daemon Daemon Wire protocol version:2012-11-30
Sep 12 17:47:50.593256 waagent[1811]: 2025-09-12T17:47:50.592669Z INFO Daemon Daemon Server preferred version:2015-04-05
Sep 12 17:47:50.672235 waagent[1811]: 2025-09-12T17:47:50.672190Z INFO Daemon Daemon Initializing goal state during protocol detection
Sep 12 17:47:50.673129 waagent[1811]: 2025-09-12T17:47:50.672967Z INFO Daemon Daemon Forcing an update of the goal state.
Sep 12 17:47:50.676910 waagent[1811]: 2025-09-12T17:47:50.676878Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Sep 12 17:47:50.690277 waagent[1811]: 2025-09-12T17:47:50.690253Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175
Sep 12 17:47:50.693880 waagent[1811]: 2025-09-12T17:47:50.691038Z INFO Daemon
Sep 12 17:47:50.693880 waagent[1811]: 2025-09-12T17:47:50.691350Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 5d772207-f36e-4dfa-9b91-0c6a8f6573bb eTag: 13574958646153622320 source: Fabric]
Sep 12 17:47:50.693880 waagent[1811]: 2025-09-12T17:47:50.691574Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Sep 12 17:47:50.693880 waagent[1811]: 2025-09-12T17:47:50.691837Z INFO Daemon
Sep 12 17:47:50.693880 waagent[1811]: 2025-09-12T17:47:50.691961Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Sep 12 17:47:50.698535 waagent[1811]: 2025-09-12T17:47:50.698508Z INFO Daemon Daemon Downloading artifacts profile blob
Sep 12 17:47:50.773316 waagent[1811]: 2025-09-12T17:47:50.773251Z INFO Daemon Downloaded certificate {'thumbprint': '67FD50F5F5CB7F66216840DBB2EBD7669C375D9C', 'hasPrivateKey': True}
Sep 12 17:47:50.775415 waagent[1811]: 2025-09-12T17:47:50.775385Z INFO Daemon Fetch goal state completed
Sep 12 17:47:50.780574 waagent[1811]: 2025-09-12T17:47:50.780533Z INFO Daemon Daemon Starting provisioning
Sep 12 17:47:50.781603 waagent[1811]: 2025-09-12T17:47:50.781537Z INFO Daemon Daemon Handle ovf-env.xml.
Sep 12 17:47:50.782390 waagent[1811]: 2025-09-12T17:47:50.781827Z INFO Daemon Daemon Set hostname [ci-4426.1.0-a-49404e8b93]
Sep 12 17:47:50.798862 waagent[1811]: 2025-09-12T17:47:50.798829Z INFO Daemon Daemon Publish hostname [ci-4426.1.0-a-49404e8b93]
Sep 12 17:47:50.800381 waagent[1811]: 2025-09-12T17:47:50.800349Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Sep 12 17:47:50.801629 waagent[1811]: 2025-09-12T17:47:50.801602Z INFO Daemon Daemon Primary interface is [eth0]
Sep 12 17:47:50.807403 systemd-networkd[1362]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:47:50.807408 systemd-networkd[1362]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:47:50.807426 systemd-networkd[1362]: eth0: DHCP lease lost
Sep 12 17:47:50.808043 waagent[1811]: 2025-09-12T17:47:50.807998Z INFO Daemon Daemon Create user account if not exists
Sep 12 17:47:50.809173 waagent[1811]: 2025-09-12T17:47:50.808507Z INFO Daemon Daemon User core already exists, skip useradd
Sep 12 17:47:50.809173 waagent[1811]: 2025-09-12T17:47:50.808772Z INFO Daemon Daemon Configure sudoer
Sep 12 17:47:50.815547 waagent[1811]: 2025-09-12T17:47:50.815504Z INFO Daemon Daemon Configure sshd
Sep 12 17:47:50.820881 waagent[1811]: 2025-09-12T17:47:50.820844Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Sep 12 17:47:50.824668 waagent[1811]: 2025-09-12T17:47:50.821270Z INFO Daemon Daemon Deploy ssh public key.
Sep 12 17:47:50.831671 systemd-networkd[1362]: eth0: DHCPv4 address 10.200.8.42/24, gateway 10.200.8.1 acquired from 168.63.129.16
Sep 12 17:47:51.883095 waagent[1811]: 2025-09-12T17:47:51.883050Z INFO Daemon Daemon Provisioning complete
Sep 12 17:47:51.891022 waagent[1811]: 2025-09-12T17:47:51.890997Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Sep 12 17:47:51.891708 waagent[1811]: 2025-09-12T17:47:51.891508Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Sep 12 17:47:51.891708 waagent[1811]: 2025-09-12T17:47:51.891758Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent
Sep 12 17:47:51.981486 waagent[1913]: 2025-09-12T17:47:51.981430Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4)
Sep 12 17:47:51.981681 waagent[1913]: 2025-09-12T17:47:51.981512Z INFO ExtHandler ExtHandler OS: flatcar 4426.1.0
Sep 12 17:47:51.981681 waagent[1913]: 2025-09-12T17:47:51.981551Z INFO ExtHandler ExtHandler Python: 3.11.13
Sep 12 17:47:51.981681 waagent[1913]: 2025-09-12T17:47:51.981588Z INFO ExtHandler ExtHandler CPU Arch: x86_64
Sep 12 17:47:52.019454 waagent[1913]: 2025-09-12T17:47:52.019406Z INFO ExtHandler ExtHandler Distro: flatcar-4426.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0;
Sep 12 17:47:52.019589 waagent[1913]: 2025-09-12T17:47:52.019563Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Sep 12 17:47:52.019653 waagent[1913]: 2025-09-12T17:47:52.019616Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Sep 12 17:47:52.026332 waagent[1913]: 2025-09-12T17:47:52.026285Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Sep 12 17:47:52.030959 waagent[1913]: 2025-09-12T17:47:52.030930Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175
Sep 12 17:47:52.031257 waagent[1913]: 2025-09-12T17:47:52.031229Z INFO ExtHandler
Sep 12 17:47:52.031294 waagent[1913]: 2025-09-12T17:47:52.031275Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: b2f92498-07e2-44c8-9705-caeb0fe5fe66 eTag: 13574958646153622320 source: Fabric]
Sep 12 17:47:52.031462 waagent[1913]: 2025-09-12T17:47:52.031439Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Sep 12 17:47:52.031765 waagent[1913]: 2025-09-12T17:47:52.031740Z INFO ExtHandler
Sep 12 17:47:52.031798 waagent[1913]: 2025-09-12T17:47:52.031778Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Sep 12 17:47:52.034041 waagent[1913]: 2025-09-12T17:47:52.034011Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Sep 12 17:47:52.111735 waagent[1913]: 2025-09-12T17:47:52.111693Z INFO ExtHandler Downloaded certificate {'thumbprint': '67FD50F5F5CB7F66216840DBB2EBD7669C375D9C', 'hasPrivateKey': True}
Sep 12 17:47:52.112005 waagent[1913]: 2025-09-12T17:47:52.111981Z INFO ExtHandler Fetch goal state completed
Sep 12 17:47:52.121462 waagent[1913]: 2025-09-12T17:47:52.121424Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.1 11 Feb 2025 (Library: OpenSSL 3.4.1 11 Feb 2025)
Sep 12 17:47:52.125029 waagent[1913]: 2025-09-12T17:47:52.124987Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1913
Sep 12 17:47:52.125109 waagent[1913]: 2025-09-12T17:47:52.125087Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Sep 12 17:47:52.125314 waagent[1913]: 2025-09-12T17:47:52.125295Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ********
Sep 12 17:47:52.126167 waagent[1913]: 2025-09-12T17:47:52.126139Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4426.1.0', '', 'Flatcar Container Linux by Kinvolk']
Sep 12 17:47:52.126418 waagent[1913]: 2025-09-12T17:47:52.126396Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4426.1.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported
Sep 12 17:47:52.126509 waagent[1913]: 2025-09-12T17:47:52.126492Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False
Sep 12 17:47:52.126890 waagent[1913]: 2025-09-12T17:47:52.126866Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Sep 12 17:47:52.144727 waagent[1913]: 2025-09-12T17:47:52.144676Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Sep 12 17:47:52.144811 waagent[1913]: 2025-09-12T17:47:52.144790Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Sep 12 17:47:52.149173 waagent[1913]: 2025-09-12T17:47:52.148874Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Sep 12 17:47:52.153064 systemd[1]: Reload requested from client PID 1928 ('systemctl') (unit waagent.service)...
Sep 12 17:47:52.153075 systemd[1]: Reloading...
Sep 12 17:47:52.216691 zram_generator::config[1967]: No configuration found.
Sep 12 17:47:52.371608 systemd[1]: Reloading finished in 218 ms.
Sep 12 17:47:52.387647 waagent[1913]: 2025-09-12T17:47:52.385765Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Sep 12 17:47:52.387647 waagent[1913]: 2025-09-12T17:47:52.385850Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Sep 12 17:47:52.559796 waagent[1913]: 2025-09-12T17:47:52.559685Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Sep 12 17:47:52.560069 waagent[1913]: 2025-09-12T17:47:52.560030Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True]
Sep 12 17:47:52.560558 waagent[1913]: 2025-09-12T17:47:52.560530Z INFO ExtHandler ExtHandler Starting env monitor service.
Sep 12 17:47:52.560913 waagent[1913]: 2025-09-12T17:47:52.560889Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Sep 12 17:47:52.560959 waagent[1913]: 2025-09-12T17:47:52.560927Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Sep 12 17:47:52.560986 waagent[1913]: 2025-09-12T17:47:52.560973Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Sep 12 17:47:52.561099 waagent[1913]: 2025-09-12T17:47:52.561081Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Sep 12 17:47:52.561328 waagent[1913]: 2025-09-12T17:47:52.561242Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Sep 12 17:47:52.561547 waagent[1913]: 2025-09-12T17:47:52.561523Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Sep 12 17:47:52.561597 waagent[1913]: 2025-09-12T17:47:52.561576Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Sep 12 17:47:52.561756 waagent[1913]: 2025-09-12T17:47:52.561737Z INFO EnvHandler ExtHandler Configure routes
Sep 12 17:47:52.561796 waagent[1913]: 2025-09-12T17:47:52.561779Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Sep 12 17:47:52.561796 waagent[1913]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Sep 12 17:47:52.561796 waagent[1913]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0
Sep 12 17:47:52.561796 waagent[1913]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Sep 12 17:47:52.561796 waagent[1913]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Sep 12 17:47:52.561796 waagent[1913]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Sep 12 17:47:52.561796 waagent[1913]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Sep 12 17:47:52.562051 waagent[1913]: 2025-09-12T17:47:52.562026Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Sep 12 17:47:52.562114 waagent[1913]: 2025-09-12T17:47:52.562079Z INFO EnvHandler ExtHandler Gateway:None
Sep 12 17:47:52.562164 waagent[1913]: 2025-09-12T17:47:52.562135Z INFO EnvHandler ExtHandler Routes:None
Sep 12 17:47:52.562469 waagent[1913]: 2025-09-12T17:47:52.562442Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Sep 12 17:47:52.562801 waagent[1913]: 2025-09-12T17:47:52.562778Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Sep 12 17:47:52.562850 waagent[1913]: 2025-09-12T17:47:52.562808Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Sep 12 17:47:52.566785 waagent[1913]: 2025-09-12T17:47:52.566761Z INFO ExtHandler ExtHandler
Sep 12 17:47:52.567247 waagent[1913]: 2025-09-12T17:47:52.567228Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: b4a7b813-b365-46a4-872a-a74febad4974 correlation 1c842843-8b32-4ace-a74e-9530ce77dee5 created: 2025-09-12T17:47:04.842766Z]
Sep 12 17:47:52.567457 waagent[1913]: 2025-09-12T17:47:52.567436Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Sep 12 17:47:52.567819 waagent[1913]: 2025-09-12T17:47:52.567800Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms]
Sep 12 17:47:52.594822 waagent[1913]: 2025-09-12T17:47:52.594784Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command
Sep 12 17:47:52.594822 waagent[1913]: Try `iptables -h' or 'iptables --help' for more information.)
Sep 12 17:47:52.595076 waagent[1913]: 2025-09-12T17:47:52.595053Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: B04AF188-F10E-4329-920D-C0683033D24C;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;]
Sep 12 17:47:52.598915 waagent[1913]: 2025-09-12T17:47:52.598877Z INFO MonitorHandler ExtHandler Network interfaces:
Sep 12 17:47:52.598915 waagent[1913]: Executing ['ip', '-a', '-o', 'link']:
Sep 12 17:47:52.598915 waagent[1913]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Sep 12 17:47:52.598915 waagent[1913]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:2d:ae:44 brd ff:ff:ff:ff:ff:ff\ alias Network Device
Sep 12 17:47:52.598915 waagent[1913]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:2d:ae:44 brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0
Sep 12 17:47:52.598915 waagent[1913]: Executing ['ip', '-4', '-a', '-o', 'address']:
Sep 12 17:47:52.598915 waagent[1913]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Sep 12 17:47:52.598915 waagent[1913]: 2: eth0 inet 10.200.8.42/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Sep 12 17:47:52.598915 waagent[1913]: Executing ['ip', '-6', '-a', '-o', 'address']:
Sep 12 17:47:52.598915 waagent[1913]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Sep 12 17:47:52.598915 waagent[1913]: 2: eth0 inet6 fe80::7eed:8dff:fe2d:ae44/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Sep 12 17:47:52.625592 waagent[1913]: 2025-09-12T17:47:52.625551Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric:
Sep 12 17:47:52.625592 waagent[1913]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Sep 12 17:47:52.625592 waagent[1913]: pkts bytes target prot opt in out source destination
Sep 12 17:47:52.625592 waagent[1913]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Sep 12 17:47:52.625592 waagent[1913]: pkts bytes target prot opt in out source destination
Sep 12 17:47:52.625592 waagent[1913]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Sep 12 17:47:52.625592 waagent[1913]: pkts bytes target prot opt in out source destination
Sep 12 17:47:52.625592 waagent[1913]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Sep 12 17:47:52.625592 waagent[1913]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Sep 12 17:47:52.625592 waagent[1913]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Sep 12 17:47:52.628337 waagent[1913]: 2025-09-12T17:47:52.628295Z INFO EnvHandler ExtHandler Current Firewall rules:
Sep 12 17:47:52.628337 waagent[1913]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Sep 12 17:47:52.628337 waagent[1913]: pkts bytes target prot opt in out source destination
Sep 12 17:47:52.628337 waagent[1913]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Sep 12 17:47:52.628337 waagent[1913]: pkts bytes target prot opt in out source destination
Sep 12 17:47:52.628337 waagent[1913]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Sep 12 17:47:52.628337 waagent[1913]: pkts bytes target prot opt in out source destination
Sep 12 17:47:52.628337 waagent[1913]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Sep 12 17:47:52.628337 waagent[1913]: 4 406 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Sep 12 17:47:52.628337 waagent[1913]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Sep 12 17:48:00.481749 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 12 17:48:00.482985 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:48:01.018423 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:48:01.023864 (kubelet)[2065]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:48:01.056973 kubelet[2065]: E0912 17:48:01.056934 2065 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 17:48:01.059459 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 17:48:01.059552 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 17:48:01.059965 systemd[1]: kubelet.service: Consumed 115ms CPU time, 110.7M memory peak.
Sep 12 17:48:11.310264 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 12 17:48:11.311599 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:48:11.729488 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:48:11.734864 (kubelet)[2080]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:48:11.769818 kubelet[2080]: E0912 17:48:11.769791 2080 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:48:11.771369 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:48:11.771465 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:48:11.771715 systemd[1]: kubelet.service: Consumed 114ms CPU time, 111.2M memory peak. Sep 12 17:48:12.495141 chronyd[1679]: Selected source PHC0 Sep 12 17:48:21.675427 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 17:48:21.676304 systemd[1]: Started sshd@0-10.200.8.42:22-10.200.16.10:60766.service - OpenSSH per-connection server daemon (10.200.16.10:60766). Sep 12 17:48:21.798334 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 12 17:48:21.799266 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:48:22.325331 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 17:48:22.330904 (kubelet)[2099]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:48:22.356139 sshd[2088]: Accepted publickey for core from 10.200.16.10 port 60766 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg Sep 12 17:48:22.357222 sshd-session[2088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:48:22.360184 kubelet[2099]: E0912 17:48:22.360161 2099 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:48:22.362617 systemd-logind[1701]: New session 3 of user core. Sep 12 17:48:22.363243 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:48:22.363349 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:48:22.363571 systemd[1]: kubelet.service: Consumed 108ms CPU time, 110.2M memory peak. Sep 12 17:48:22.366864 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 17:48:22.905703 systemd[1]: Started sshd@1-10.200.8.42:22-10.200.16.10:60776.service - OpenSSH per-connection server daemon (10.200.16.10:60776). Sep 12 17:48:23.527645 sshd[2109]: Accepted publickey for core from 10.200.16.10 port 60776 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg Sep 12 17:48:23.528435 sshd-session[2109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:48:23.531913 systemd-logind[1701]: New session 4 of user core. Sep 12 17:48:23.540730 systemd[1]: Started session-4.scope - Session 4 of User core. 
Sep 12 17:48:23.965406 sshd[2112]: Connection closed by 10.200.16.10 port 60776 Sep 12 17:48:23.965737 sshd-session[2109]: pam_unix(sshd:session): session closed for user core Sep 12 17:48:23.968093 systemd[1]: sshd@1-10.200.8.42:22-10.200.16.10:60776.service: Deactivated successfully. Sep 12 17:48:23.969281 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 17:48:23.969927 systemd-logind[1701]: Session 4 logged out. Waiting for processes to exit. Sep 12 17:48:23.970813 systemd-logind[1701]: Removed session 4. Sep 12 17:48:24.078476 systemd[1]: Started sshd@2-10.200.8.42:22-10.200.16.10:60790.service - OpenSSH per-connection server daemon (10.200.16.10:60790). Sep 12 17:48:24.708752 sshd[2118]: Accepted publickey for core from 10.200.16.10 port 60790 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg Sep 12 17:48:24.709506 sshd-session[2118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:48:24.712836 systemd-logind[1701]: New session 5 of user core. Sep 12 17:48:24.720739 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 17:48:25.144566 sshd[2121]: Connection closed by 10.200.16.10 port 60790 Sep 12 17:48:25.144909 sshd-session[2118]: pam_unix(sshd:session): session closed for user core Sep 12 17:48:25.147114 systemd[1]: sshd@2-10.200.8.42:22-10.200.16.10:60790.service: Deactivated successfully. Sep 12 17:48:25.148266 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 17:48:25.148889 systemd-logind[1701]: Session 5 logged out. Waiting for processes to exit. Sep 12 17:48:25.149792 systemd-logind[1701]: Removed session 5. Sep 12 17:48:25.262584 systemd[1]: Started sshd@3-10.200.8.42:22-10.200.16.10:60804.service - OpenSSH per-connection server daemon (10.200.16.10:60804). 
Sep 12 17:48:25.886668 sshd[2127]: Accepted publickey for core from 10.200.16.10 port 60804 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg Sep 12 17:48:25.887437 sshd-session[2127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:48:25.890675 systemd-logind[1701]: New session 6 of user core. Sep 12 17:48:25.900753 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 17:48:26.325682 sshd[2130]: Connection closed by 10.200.16.10 port 60804 Sep 12 17:48:26.326015 sshd-session[2127]: pam_unix(sshd:session): session closed for user core Sep 12 17:48:26.328419 systemd[1]: sshd@3-10.200.8.42:22-10.200.16.10:60804.service: Deactivated successfully. Sep 12 17:48:26.329694 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 17:48:26.330317 systemd-logind[1701]: Session 6 logged out. Waiting for processes to exit. Sep 12 17:48:26.331153 systemd-logind[1701]: Removed session 6. Sep 12 17:48:26.437483 systemd[1]: Started sshd@4-10.200.8.42:22-10.200.16.10:60816.service - OpenSSH per-connection server daemon (10.200.16.10:60816). Sep 12 17:48:27.061278 sshd[2136]: Accepted publickey for core from 10.200.16.10 port 60816 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg Sep 12 17:48:27.062031 sshd-session[2136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:48:27.065347 systemd-logind[1701]: New session 7 of user core. Sep 12 17:48:27.070748 systemd[1]: Started session-7.scope - Session 7 of User core. 
Sep 12 17:48:27.491328 sudo[2140]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 17:48:27.491522 sudo[2140]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:48:27.516202 sudo[2140]: pam_unix(sudo:session): session closed for user root Sep 12 17:48:27.627507 sshd[2139]: Connection closed by 10.200.16.10 port 60816 Sep 12 17:48:27.627958 sshd-session[2136]: pam_unix(sshd:session): session closed for user core Sep 12 17:48:27.630510 systemd[1]: sshd@4-10.200.8.42:22-10.200.16.10:60816.service: Deactivated successfully. Sep 12 17:48:27.631623 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 17:48:27.632232 systemd-logind[1701]: Session 7 logged out. Waiting for processes to exit. Sep 12 17:48:27.633129 systemd-logind[1701]: Removed session 7. Sep 12 17:48:27.739546 systemd[1]: Started sshd@5-10.200.8.42:22-10.200.16.10:60822.service - OpenSSH per-connection server daemon (10.200.16.10:60822). Sep 12 17:48:28.363520 sshd[2146]: Accepted publickey for core from 10.200.16.10 port 60822 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg Sep 12 17:48:28.364331 sshd-session[2146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:48:28.367925 systemd-logind[1701]: New session 8 of user core. Sep 12 17:48:28.372748 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 12 17:48:28.705723 sudo[2151]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 17:48:28.705913 sudo[2151]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:48:28.711493 sudo[2151]: pam_unix(sudo:session): session closed for user root Sep 12 17:48:28.715138 sudo[2150]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 12 17:48:28.715346 sudo[2150]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:48:28.722255 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 17:48:28.751757 augenrules[2173]: No rules Sep 12 17:48:28.752536 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:48:28.752721 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 17:48:28.753393 sudo[2150]: pam_unix(sudo:session): session closed for user root Sep 12 17:48:28.852887 sshd[2149]: Connection closed by 10.200.16.10 port 60822 Sep 12 17:48:28.853190 sshd-session[2146]: pam_unix(sshd:session): session closed for user core Sep 12 17:48:28.855141 systemd[1]: sshd@5-10.200.8.42:22-10.200.16.10:60822.service: Deactivated successfully. Sep 12 17:48:28.856261 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 17:48:28.857191 systemd-logind[1701]: Session 8 logged out. Waiting for processes to exit. Sep 12 17:48:28.858226 systemd-logind[1701]: Removed session 8. Sep 12 17:48:28.963399 systemd[1]: Started sshd@6-10.200.8.42:22-10.200.16.10:60832.service - OpenSSH per-connection server daemon (10.200.16.10:60832). 
Sep 12 17:48:29.587726 sshd[2182]: Accepted publickey for core from 10.200.16.10 port 60832 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg Sep 12 17:48:29.588496 sshd-session[2182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:48:29.592189 systemd-logind[1701]: New session 9 of user core. Sep 12 17:48:29.597777 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 17:48:29.926824 sudo[2186]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 17:48:29.927011 sudo[2186]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:48:31.130363 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 17:48:31.138854 (dockerd)[2204]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 17:48:32.232391 dockerd[2204]: time="2025-09-12T17:48:32.232342991Z" level=info msg="Starting up" Sep 12 17:48:32.234661 dockerd[2204]: time="2025-09-12T17:48:32.234053276Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 12 17:48:32.242814 dockerd[2204]: time="2025-09-12T17:48:32.242784908Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 12 17:48:32.350712 dockerd[2204]: time="2025-09-12T17:48:32.350676098Z" level=info msg="Loading containers: start." Sep 12 17:48:32.375342 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 12 17:48:32.379791 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:48:32.381647 kernel: Initializing XFRM netlink socket Sep 12 17:48:32.996314 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 17:48:33.007878 (kubelet)[2356]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:48:33.038977 systemd-networkd[1362]: docker0: Link UP Sep 12 17:48:33.046908 kubelet[2356]: E0912 17:48:33.046863 2356 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:48:33.048155 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:48:33.048268 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:48:33.048570 systemd[1]: kubelet.service: Consumed 115ms CPU time, 108.6M memory peak. Sep 12 17:48:33.052116 dockerd[2204]: time="2025-09-12T17:48:33.052095882Z" level=info msg="Loading containers: done." 
Sep 12 17:48:33.084428 dockerd[2204]: time="2025-09-12T17:48:33.084403596Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 17:48:33.084527 dockerd[2204]: time="2025-09-12T17:48:33.084457647Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 12 17:48:33.084527 dockerd[2204]: time="2025-09-12T17:48:33.084516335Z" level=info msg="Initializing buildkit" Sep 12 17:48:33.120470 dockerd[2204]: time="2025-09-12T17:48:33.120431668Z" level=info msg="Completed buildkit initialization" Sep 12 17:48:33.126027 dockerd[2204]: time="2025-09-12T17:48:33.125986260Z" level=info msg="Daemon has completed initialization" Sep 12 17:48:33.126221 dockerd[2204]: time="2025-09-12T17:48:33.126113853Z" level=info msg="API listen on /run/docker.sock" Sep 12 17:48:33.126169 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 17:48:33.823661 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Sep 12 17:48:34.032365 update_engine[1704]: I20250912 17:48:34.032320 1704 update_attempter.cc:509] Updating boot flags... Sep 12 17:48:34.186767 containerd[1714]: time="2025-09-12T17:48:34.186596638Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 12 17:48:34.908928 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3930727676.mount: Deactivated successfully. 
Sep 12 17:48:35.976584 containerd[1714]: time="2025-09-12T17:48:35.976551406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:48:35.979005 containerd[1714]: time="2025-09-12T17:48:35.978974711Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117132" Sep 12 17:48:35.981718 containerd[1714]: time="2025-09-12T17:48:35.981685209Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:48:35.985410 containerd[1714]: time="2025-09-12T17:48:35.985252358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:48:35.985813 containerd[1714]: time="2025-09-12T17:48:35.985793640Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 1.799164799s" Sep 12 17:48:35.985850 containerd[1714]: time="2025-09-12T17:48:35.985821391Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 12 17:48:35.986352 containerd[1714]: time="2025-09-12T17:48:35.986332244Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 12 17:48:37.172328 containerd[1714]: time="2025-09-12T17:48:37.172296421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:48:37.175404 containerd[1714]: time="2025-09-12T17:48:37.175375998Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716640" Sep 12 17:48:37.178994 containerd[1714]: time="2025-09-12T17:48:37.178961049Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:48:37.183143 containerd[1714]: time="2025-09-12T17:48:37.182990843Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:48:37.183532 containerd[1714]: time="2025-09-12T17:48:37.183513063Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 1.197156657s" Sep 12 17:48:37.183572 containerd[1714]: time="2025-09-12T17:48:37.183538960Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 12 17:48:37.184075 containerd[1714]: time="2025-09-12T17:48:37.184056724Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 12 17:48:38.330624 containerd[1714]: time="2025-09-12T17:48:38.330594158Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:48:38.333040 containerd[1714]: 
time="2025-09-12T17:48:38.333012119Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787706" Sep 12 17:48:38.339225 containerd[1714]: time="2025-09-12T17:48:38.339192736Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:48:38.343253 containerd[1714]: time="2025-09-12T17:48:38.343217651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:48:38.344163 containerd[1714]: time="2025-09-12T17:48:38.343769992Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 1.159690494s" Sep 12 17:48:38.344163 containerd[1714]: time="2025-09-12T17:48:38.343796004Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 12 17:48:38.344416 containerd[1714]: time="2025-09-12T17:48:38.344402571Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 12 17:48:39.184772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2055314262.mount: Deactivated successfully. 
Sep 12 17:48:39.501413 containerd[1714]: time="2025-09-12T17:48:39.501343291Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:48:39.503681 containerd[1714]: time="2025-09-12T17:48:39.503649884Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410260" Sep 12 17:48:39.506288 containerd[1714]: time="2025-09-12T17:48:39.506255437Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:48:39.509665 containerd[1714]: time="2025-09-12T17:48:39.509291773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:48:39.509665 containerd[1714]: time="2025-09-12T17:48:39.509533078Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 1.165059332s" Sep 12 17:48:39.509665 containerd[1714]: time="2025-09-12T17:48:39.509553830Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 12 17:48:39.510115 containerd[1714]: time="2025-09-12T17:48:39.510092167Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 12 17:48:40.127154 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount903986119.mount: Deactivated successfully. 
Sep 12 17:48:40.983589 containerd[1714]: time="2025-09-12T17:48:40.983559042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:48:40.985807 containerd[1714]: time="2025-09-12T17:48:40.985779549Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Sep 12 17:48:40.989868 containerd[1714]: time="2025-09-12T17:48:40.989833958Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:48:40.993916 containerd[1714]: time="2025-09-12T17:48:40.993880644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:48:40.994589 containerd[1714]: time="2025-09-12T17:48:40.994469751Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.484355662s" Sep 12 17:48:40.994589 containerd[1714]: time="2025-09-12T17:48:40.994496178Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 12 17:48:40.995108 containerd[1714]: time="2025-09-12T17:48:40.995088988Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 17:48:41.506800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4021572132.mount: Deactivated successfully. 
Sep 12 17:48:41.524811 containerd[1714]: time="2025-09-12T17:48:41.524787157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:48:41.527272 containerd[1714]: time="2025-09-12T17:48:41.527251012Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Sep 12 17:48:41.530006 containerd[1714]: time="2025-09-12T17:48:41.529973545Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:48:41.533504 containerd[1714]: time="2025-09-12T17:48:41.533468492Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:48:41.534155 containerd[1714]: time="2025-09-12T17:48:41.533811420Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 538.641649ms" Sep 12 17:48:41.534155 containerd[1714]: time="2025-09-12T17:48:41.533832999Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 12 17:48:41.534415 containerd[1714]: time="2025-09-12T17:48:41.534397220Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 12 17:48:42.110356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2823251056.mount: 
Deactivated successfully. Sep 12 17:48:43.299353 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Sep 12 17:48:43.301070 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:48:43.793322 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:48:43.798835 (kubelet)[2637]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:48:43.908794 kubelet[2637]: E0912 17:48:43.828337 2637 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:48:43.829617 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:48:43.829737 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:48:43.830001 systemd[1]: kubelet.service: Consumed 120ms CPU time, 108.2M memory peak. 
Sep 12 17:48:44.039823 containerd[1714]: time="2025-09-12T17:48:44.039789178Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:48:44.042195 containerd[1714]: time="2025-09-12T17:48:44.042076644Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910717" Sep 12 17:48:44.044551 containerd[1714]: time="2025-09-12T17:48:44.044502592Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:48:44.047858 containerd[1714]: time="2025-09-12T17:48:44.047832821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:48:44.048554 containerd[1714]: time="2025-09-12T17:48:44.048451017Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.514031479s" Sep 12 17:48:44.048554 containerd[1714]: time="2025-09-12T17:48:44.048476422Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 12 17:48:46.110318 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:48:46.110462 systemd[1]: kubelet.service: Consumed 120ms CPU time, 108.2M memory peak. Sep 12 17:48:46.112585 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:48:46.132577 systemd[1]: Reload requested from client PID 2673 ('systemctl') (unit session-9.scope)... 
Sep 12 17:48:46.132673 systemd[1]: Reloading... Sep 12 17:48:46.211677 zram_generator::config[2729]: No configuration found. Sep 12 17:48:46.420416 systemd[1]: Reloading finished in 287 ms. Sep 12 17:48:46.466990 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 17:48:46.467052 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 12 17:48:46.467270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:48:46.468712 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:48:46.932702 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:48:46.938870 (kubelet)[2787]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:48:46.969183 kubelet[2787]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:48:46.969183 kubelet[2787]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 17:48:46.969183 kubelet[2787]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 12 17:48:46.969407 kubelet[2787]: I0912 17:48:46.969228 2787 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:48:47.070669 kubelet[2787]: I0912 17:48:47.070649 2787 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 17:48:47.070669 kubelet[2787]: I0912 17:48:47.070666 2787 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:48:47.070830 kubelet[2787]: I0912 17:48:47.070820 2787 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 17:48:47.097786 kubelet[2787]: E0912 17:48:47.097760 2787 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.42:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.42:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:48:47.099662 kubelet[2787]: I0912 17:48:47.099440 2787 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:48:47.110592 kubelet[2787]: I0912 17:48:47.110577 2787 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 17:48:47.113126 kubelet[2787]: I0912 17:48:47.113110 2787 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:48:47.113611 kubelet[2787]: I0912 17:48:47.113595 2787 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 17:48:47.113720 kubelet[2787]: I0912 17:48:47.113695 2787 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:48:47.113843 kubelet[2787]: I0912 17:48:47.113719 2787 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4426.1.0-a-49404e8b93","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolog
yManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:48:47.113945 kubelet[2787]: I0912 17:48:47.113850 2787 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:48:47.113945 kubelet[2787]: I0912 17:48:47.113859 2787 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 17:48:47.113945 kubelet[2787]: I0912 17:48:47.113931 2787 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:48:47.116901 kubelet[2787]: I0912 17:48:47.116450 2787 kubelet.go:408] "Attempting to sync node with API server" Sep 12 17:48:47.116901 kubelet[2787]: I0912 17:48:47.116468 2787 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:48:47.116901 kubelet[2787]: I0912 17:48:47.116493 2787 kubelet.go:314] "Adding apiserver pod source" Sep 12 17:48:47.116901 kubelet[2787]: I0912 17:48:47.116510 2787 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:48:47.121785 kubelet[2787]: W0912 17:48:47.121742 2787 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4426.1.0-a-49404e8b93&limit=500&resourceVersion=0": dial tcp 10.200.8.42:6443: connect: connection refused Sep 12 17:48:47.121840 kubelet[2787]: E0912 17:48:47.121788 2787 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4426.1.0-a-49404e8b93&limit=500&resourceVersion=0\": dial tcp 10.200.8.42:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:48:47.124147 kubelet[2787]: W0912 17:48:47.124120 2787 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.42:6443: connect: 
connection refused Sep 12 17:48:47.124681 kubelet[2787]: E0912 17:48:47.124663 2787 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.42:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:48:47.124786 kubelet[2787]: I0912 17:48:47.124777 2787 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 17:48:47.125114 kubelet[2787]: I0912 17:48:47.125106 2787 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:48:47.126025 kubelet[2787]: W0912 17:48:47.125698 2787 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 17:48:47.127275 kubelet[2787]: I0912 17:48:47.127264 2787 server.go:1274] "Started kubelet" Sep 12 17:48:47.128164 kubelet[2787]: I0912 17:48:47.127785 2787 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:48:47.128996 kubelet[2787]: I0912 17:48:47.128477 2787 server.go:449] "Adding debug handlers to kubelet server" Sep 12 17:48:47.129988 kubelet[2787]: I0912 17:48:47.129962 2787 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:48:47.130212 kubelet[2787]: I0912 17:48:47.130203 2787 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:48:47.131409 kubelet[2787]: E0912 17:48:47.130350 2787 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.42:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.42:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4426.1.0-a-49404e8b93.18649a39af5cd3a6 default 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4426.1.0-a-49404e8b93,UID:ci-4426.1.0-a-49404e8b93,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4426.1.0-a-49404e8b93,},FirstTimestamp:2025-09-12 17:48:47.127245734 +0000 UTC m=+0.185660247,LastTimestamp:2025-09-12 17:48:47.127245734 +0000 UTC m=+0.185660247,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4426.1.0-a-49404e8b93,}" Sep 12 17:48:47.132649 kubelet[2787]: I0912 17:48:47.132156 2787 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:48:47.133658 kubelet[2787]: I0912 17:48:47.133000 2787 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:48:47.135387 kubelet[2787]: E0912 17:48:47.135373 2787 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:48:47.135618 kubelet[2787]: E0912 17:48:47.135608 2787 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4426.1.0-a-49404e8b93\" not found" Sep 12 17:48:47.135703 kubelet[2787]: I0912 17:48:47.135698 2787 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 17:48:47.135899 kubelet[2787]: I0912 17:48:47.135892 2787 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 17:48:47.135972 kubelet[2787]: I0912 17:48:47.135967 2787 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:48:47.136554 kubelet[2787]: I0912 17:48:47.136541 2787 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:48:47.136691 kubelet[2787]: I0912 17:48:47.136677 2787 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:48:47.137035 kubelet[2787]: W0912 17:48:47.137011 2787 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.42:6443: connect: connection refused Sep 12 17:48:47.137107 kubelet[2787]: E0912 17:48:47.137096 2787 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.42:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:48:47.138175 kubelet[2787]: E0912 17:48:47.138154 2787 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.200.8.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4426.1.0-a-49404e8b93?timeout=10s\": dial tcp 10.200.8.42:6443: connect: connection refused" interval="200ms" Sep 12 17:48:47.138374 kubelet[2787]: I0912 17:48:47.138360 2787 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:48:47.158695 kubelet[2787]: I0912 17:48:47.158677 2787 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 17:48:47.158782 kubelet[2787]: I0912 17:48:47.158777 2787 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 17:48:47.158938 kubelet[2787]: I0912 17:48:47.158829 2787 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:48:47.164150 kubelet[2787]: I0912 17:48:47.164120 2787 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:48:47.165658 kubelet[2787]: I0912 17:48:47.165397 2787 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 17:48:47.165658 kubelet[2787]: I0912 17:48:47.165421 2787 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 17:48:47.165755 kubelet[2787]: I0912 17:48:47.165745 2787 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 17:48:47.165974 kubelet[2787]: E0912 17:48:47.165779 2787 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:48:47.167189 kubelet[2787]: W0912 17:48:47.167159 2787 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.42:6443: connect: connection refused Sep 12 17:48:47.167279 kubelet[2787]: E0912 17:48:47.167267 2787 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://10.200.8.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.42:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:48:47.202021 kubelet[2787]: I0912 17:48:47.201897 2787 policy_none.go:49] "None policy: Start" Sep 12 17:48:47.202533 kubelet[2787]: I0912 17:48:47.202522 2787 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 17:48:47.202700 kubelet[2787]: I0912 17:48:47.202558 2787 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:48:47.210913 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 17:48:47.227340 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 17:48:47.230024 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 17:48:47.235878 kubelet[2787]: E0912 17:48:47.235862 2787 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4426.1.0-a-49404e8b93\" not found" Sep 12 17:48:47.238275 kubelet[2787]: I0912 17:48:47.238191 2787 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:48:47.238328 kubelet[2787]: I0912 17:48:47.238311 2787 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:48:47.238350 kubelet[2787]: I0912 17:48:47.238318 2787 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:48:47.238744 kubelet[2787]: I0912 17:48:47.238726 2787 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:48:47.240004 kubelet[2787]: E0912 17:48:47.239990 2787 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4426.1.0-a-49404e8b93\" not found" Sep 12 17:48:47.272736 systemd[1]: Created slice 
kubepods-burstable-podb1f5f7a6e6efb0f06f6854dee4bc4fec.slice - libcontainer container kubepods-burstable-podb1f5f7a6e6efb0f06f6854dee4bc4fec.slice. Sep 12 17:48:47.287325 systemd[1]: Created slice kubepods-burstable-podb23535d53434aa6dbadf29a3e042f4ed.slice - libcontainer container kubepods-burstable-podb23535d53434aa6dbadf29a3e042f4ed.slice. Sep 12 17:48:47.296512 systemd[1]: Created slice kubepods-burstable-podff5252c6728aa20f3e29f023d6c66e3f.slice - libcontainer container kubepods-burstable-podff5252c6728aa20f3e29f023d6c66e3f.slice. Sep 12 17:48:47.337279 kubelet[2787]: I0912 17:48:47.337251 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b23535d53434aa6dbadf29a3e042f4ed-k8s-certs\") pod \"kube-controller-manager-ci-4426.1.0-a-49404e8b93\" (UID: \"b23535d53434aa6dbadf29a3e042f4ed\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-a-49404e8b93" Sep 12 17:48:47.337342 kubelet[2787]: I0912 17:48:47.337285 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ff5252c6728aa20f3e29f023d6c66e3f-kubeconfig\") pod \"kube-scheduler-ci-4426.1.0-a-49404e8b93\" (UID: \"ff5252c6728aa20f3e29f023d6c66e3f\") " pod="kube-system/kube-scheduler-ci-4426.1.0-a-49404e8b93" Sep 12 17:48:47.337342 kubelet[2787]: I0912 17:48:47.337299 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b1f5f7a6e6efb0f06f6854dee4bc4fec-ca-certs\") pod \"kube-apiserver-ci-4426.1.0-a-49404e8b93\" (UID: \"b1f5f7a6e6efb0f06f6854dee4bc4fec\") " pod="kube-system/kube-apiserver-ci-4426.1.0-a-49404e8b93" Sep 12 17:48:47.337342 kubelet[2787]: I0912 17:48:47.337313 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/b1f5f7a6e6efb0f06f6854dee4bc4fec-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4426.1.0-a-49404e8b93\" (UID: \"b1f5f7a6e6efb0f06f6854dee4bc4fec\") " pod="kube-system/kube-apiserver-ci-4426.1.0-a-49404e8b93" Sep 12 17:48:47.337342 kubelet[2787]: I0912 17:48:47.337328 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b23535d53434aa6dbadf29a3e042f4ed-kubeconfig\") pod \"kube-controller-manager-ci-4426.1.0-a-49404e8b93\" (UID: \"b23535d53434aa6dbadf29a3e042f4ed\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-a-49404e8b93" Sep 12 17:48:47.337342 kubelet[2787]: I0912 17:48:47.337340 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b23535d53434aa6dbadf29a3e042f4ed-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4426.1.0-a-49404e8b93\" (UID: \"b23535d53434aa6dbadf29a3e042f4ed\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-a-49404e8b93" Sep 12 17:48:47.337448 kubelet[2787]: I0912 17:48:47.337353 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b1f5f7a6e6efb0f06f6854dee4bc4fec-k8s-certs\") pod \"kube-apiserver-ci-4426.1.0-a-49404e8b93\" (UID: \"b1f5f7a6e6efb0f06f6854dee4bc4fec\") " pod="kube-system/kube-apiserver-ci-4426.1.0-a-49404e8b93" Sep 12 17:48:47.337448 kubelet[2787]: I0912 17:48:47.337366 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b23535d53434aa6dbadf29a3e042f4ed-ca-certs\") pod \"kube-controller-manager-ci-4426.1.0-a-49404e8b93\" (UID: \"b23535d53434aa6dbadf29a3e042f4ed\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-a-49404e8b93" Sep 12 17:48:47.337448 
kubelet[2787]: I0912 17:48:47.337382 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b23535d53434aa6dbadf29a3e042f4ed-flexvolume-dir\") pod \"kube-controller-manager-ci-4426.1.0-a-49404e8b93\" (UID: \"b23535d53434aa6dbadf29a3e042f4ed\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-a-49404e8b93" Sep 12 17:48:47.338568 kubelet[2787]: E0912 17:48:47.338545 2787 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4426.1.0-a-49404e8b93?timeout=10s\": dial tcp 10.200.8.42:6443: connect: connection refused" interval="400ms" Sep 12 17:48:47.339339 kubelet[2787]: I0912 17:48:47.339314 2787 kubelet_node_status.go:72] "Attempting to register node" node="ci-4426.1.0-a-49404e8b93" Sep 12 17:48:47.339614 kubelet[2787]: E0912 17:48:47.339597 2787 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.42:6443/api/v1/nodes\": dial tcp 10.200.8.42:6443: connect: connection refused" node="ci-4426.1.0-a-49404e8b93" Sep 12 17:48:47.541048 kubelet[2787]: I0912 17:48:47.540992 2787 kubelet_node_status.go:72] "Attempting to register node" node="ci-4426.1.0-a-49404e8b93" Sep 12 17:48:47.541211 kubelet[2787]: E0912 17:48:47.541183 2787 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.42:6443/api/v1/nodes\": dial tcp 10.200.8.42:6443: connect: connection refused" node="ci-4426.1.0-a-49404e8b93" Sep 12 17:48:47.586160 containerd[1714]: time="2025-09-12T17:48:47.586134134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4426.1.0-a-49404e8b93,Uid:b1f5f7a6e6efb0f06f6854dee4bc4fec,Namespace:kube-system,Attempt:0,}" Sep 12 17:48:47.595506 containerd[1714]: time="2025-09-12T17:48:47.595481111Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4426.1.0-a-49404e8b93,Uid:b23535d53434aa6dbadf29a3e042f4ed,Namespace:kube-system,Attempt:0,}" Sep 12 17:48:47.599233 containerd[1714]: time="2025-09-12T17:48:47.599206922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4426.1.0-a-49404e8b93,Uid:ff5252c6728aa20f3e29f023d6c66e3f,Namespace:kube-system,Attempt:0,}" Sep 12 17:48:47.656406 containerd[1714]: time="2025-09-12T17:48:47.656379897Z" level=info msg="connecting to shim d7879b23eb4635ada78227401a077ef2b65cf3639fd5cadd69cdc2f887528cfc" address="unix:///run/containerd/s/86c422be7faea2658d79db4a32ef7d1ea6f3899acc4b346d2df76da00fe8c469" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:48:47.674172 containerd[1714]: time="2025-09-12T17:48:47.674107506Z" level=info msg="connecting to shim 098f5dbbfdacd75e7f72b627f206ad160b9336d3a287bd0a0d75b59746cd6dcb" address="unix:///run/containerd/s/372dbffba6ac6e817464245e6574299493517ecc34ec825c2bdaac219981d729" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:48:47.688961 systemd[1]: Started cri-containerd-d7879b23eb4635ada78227401a077ef2b65cf3639fd5cadd69cdc2f887528cfc.scope - libcontainer container d7879b23eb4635ada78227401a077ef2b65cf3639fd5cadd69cdc2f887528cfc. Sep 12 17:48:47.692843 containerd[1714]: time="2025-09-12T17:48:47.692429637Z" level=info msg="connecting to shim d3dbe9737e80322956205b40453881ab3b04cb9b742b8a58a58619bcb55ebf2d" address="unix:///run/containerd/s/1b91d106d9d7b629fdd93522832068ad1b3f025230823f664dde4d3d42dd64e3" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:48:47.714853 systemd[1]: Started cri-containerd-098f5dbbfdacd75e7f72b627f206ad160b9336d3a287bd0a0d75b59746cd6dcb.scope - libcontainer container 098f5dbbfdacd75e7f72b627f206ad160b9336d3a287bd0a0d75b59746cd6dcb. 
Sep 12 17:48:47.718368 systemd[1]: Started cri-containerd-d3dbe9737e80322956205b40453881ab3b04cb9b742b8a58a58619bcb55ebf2d.scope - libcontainer container d3dbe9737e80322956205b40453881ab3b04cb9b742b8a58a58619bcb55ebf2d. Sep 12 17:48:47.739001 kubelet[2787]: E0912 17:48:47.738973 2787 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4426.1.0-a-49404e8b93?timeout=10s\": dial tcp 10.200.8.42:6443: connect: connection refused" interval="800ms" Sep 12 17:48:47.753967 containerd[1714]: time="2025-09-12T17:48:47.753852589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4426.1.0-a-49404e8b93,Uid:b1f5f7a6e6efb0f06f6854dee4bc4fec,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7879b23eb4635ada78227401a077ef2b65cf3639fd5cadd69cdc2f887528cfc\"" Sep 12 17:48:47.756286 containerd[1714]: time="2025-09-12T17:48:47.756260331Z" level=info msg="CreateContainer within sandbox \"d7879b23eb4635ada78227401a077ef2b65cf3639fd5cadd69cdc2f887528cfc\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 17:48:47.777594 containerd[1714]: time="2025-09-12T17:48:47.777564136Z" level=info msg="Container dc28003218d71b8f1ba86f152162894895e8e4250cce49f96ffb1b75670d69e9: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:48:47.779334 containerd[1714]: time="2025-09-12T17:48:47.779264989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4426.1.0-a-49404e8b93,Uid:b23535d53434aa6dbadf29a3e042f4ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"098f5dbbfdacd75e7f72b627f206ad160b9336d3a287bd0a0d75b59746cd6dcb\"" Sep 12 17:48:47.782149 containerd[1714]: time="2025-09-12T17:48:47.782126115Z" level=info msg="CreateContainer within sandbox \"098f5dbbfdacd75e7f72b627f206ad160b9336d3a287bd0a0d75b59746cd6dcb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 
17:48:47.793057 containerd[1714]: time="2025-09-12T17:48:47.792999992Z" level=info msg="CreateContainer within sandbox \"d7879b23eb4635ada78227401a077ef2b65cf3639fd5cadd69cdc2f887528cfc\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dc28003218d71b8f1ba86f152162894895e8e4250cce49f96ffb1b75670d69e9\"" Sep 12 17:48:47.793427 containerd[1714]: time="2025-09-12T17:48:47.793407306Z" level=info msg="StartContainer for \"dc28003218d71b8f1ba86f152162894895e8e4250cce49f96ffb1b75670d69e9\"" Sep 12 17:48:47.794289 containerd[1714]: time="2025-09-12T17:48:47.794270158Z" level=info msg="connecting to shim dc28003218d71b8f1ba86f152162894895e8e4250cce49f96ffb1b75670d69e9" address="unix:///run/containerd/s/86c422be7faea2658d79db4a32ef7d1ea6f3899acc4b346d2df76da00fe8c469" protocol=ttrpc version=3 Sep 12 17:48:47.802406 containerd[1714]: time="2025-09-12T17:48:47.802375121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4426.1.0-a-49404e8b93,Uid:ff5252c6728aa20f3e29f023d6c66e3f,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3dbe9737e80322956205b40453881ab3b04cb9b742b8a58a58619bcb55ebf2d\"" Sep 12 17:48:47.804756 containerd[1714]: time="2025-09-12T17:48:47.804489967Z" level=info msg="CreateContainer within sandbox \"d3dbe9737e80322956205b40453881ab3b04cb9b742b8a58a58619bcb55ebf2d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 17:48:47.807328 containerd[1714]: time="2025-09-12T17:48:47.807311208Z" level=info msg="Container f417a87c18f5a525e9d7ef2187c32533fa32c805cd1ffb9a8e9e844852a1c8f5: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:48:47.808819 systemd[1]: Started cri-containerd-dc28003218d71b8f1ba86f152162894895e8e4250cce49f96ffb1b75670d69e9.scope - libcontainer container dc28003218d71b8f1ba86f152162894895e8e4250cce49f96ffb1b75670d69e9. 
Sep 12 17:48:47.823824 containerd[1714]: time="2025-09-12T17:48:47.823650409Z" level=info msg="CreateContainer within sandbox \"098f5dbbfdacd75e7f72b627f206ad160b9336d3a287bd0a0d75b59746cd6dcb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f417a87c18f5a525e9d7ef2187c32533fa32c805cd1ffb9a8e9e844852a1c8f5\"" Sep 12 17:48:47.824652 containerd[1714]: time="2025-09-12T17:48:47.823971983Z" level=info msg="StartContainer for \"f417a87c18f5a525e9d7ef2187c32533fa32c805cd1ffb9a8e9e844852a1c8f5\"" Sep 12 17:48:47.824981 containerd[1714]: time="2025-09-12T17:48:47.824964811Z" level=info msg="connecting to shim f417a87c18f5a525e9d7ef2187c32533fa32c805cd1ffb9a8e9e844852a1c8f5" address="unix:///run/containerd/s/372dbffba6ac6e817464245e6574299493517ecc34ec825c2bdaac219981d729" protocol=ttrpc version=3 Sep 12 17:48:47.827941 containerd[1714]: time="2025-09-12T17:48:47.827915130Z" level=info msg="Container fea558d5e947ecb5a0cec6ad3fb48f3616c7444c5115ec5c9953f08b7cc708ab: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:48:47.839761 systemd[1]: Started cri-containerd-f417a87c18f5a525e9d7ef2187c32533fa32c805cd1ffb9a8e9e844852a1c8f5.scope - libcontainer container f417a87c18f5a525e9d7ef2187c32533fa32c805cd1ffb9a8e9e844852a1c8f5. 
Sep 12 17:48:47.848276 containerd[1714]: time="2025-09-12T17:48:47.848202620Z" level=info msg="CreateContainer within sandbox \"d3dbe9737e80322956205b40453881ab3b04cb9b742b8a58a58619bcb55ebf2d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fea558d5e947ecb5a0cec6ad3fb48f3616c7444c5115ec5c9953f08b7cc708ab\"" Sep 12 17:48:47.848857 containerd[1714]: time="2025-09-12T17:48:47.848769665Z" level=info msg="StartContainer for \"fea558d5e947ecb5a0cec6ad3fb48f3616c7444c5115ec5c9953f08b7cc708ab\"" Sep 12 17:48:47.851698 containerd[1714]: time="2025-09-12T17:48:47.851669154Z" level=info msg="connecting to shim fea558d5e947ecb5a0cec6ad3fb48f3616c7444c5115ec5c9953f08b7cc708ab" address="unix:///run/containerd/s/1b91d106d9d7b629fdd93522832068ad1b3f025230823f664dde4d3d42dd64e3" protocol=ttrpc version=3 Sep 12 17:48:47.854579 containerd[1714]: time="2025-09-12T17:48:47.854530629Z" level=info msg="StartContainer for \"dc28003218d71b8f1ba86f152162894895e8e4250cce49f96ffb1b75670d69e9\" returns successfully" Sep 12 17:48:47.877743 systemd[1]: Started cri-containerd-fea558d5e947ecb5a0cec6ad3fb48f3616c7444c5115ec5c9953f08b7cc708ab.scope - libcontainer container fea558d5e947ecb5a0cec6ad3fb48f3616c7444c5115ec5c9953f08b7cc708ab. 
Sep 12 17:48:47.923283 containerd[1714]: time="2025-09-12T17:48:47.923230666Z" level=info msg="StartContainer for \"f417a87c18f5a525e9d7ef2187c32533fa32c805cd1ffb9a8e9e844852a1c8f5\" returns successfully" Sep 12 17:48:47.944152 kubelet[2787]: I0912 17:48:47.944140 2787 kubelet_node_status.go:72] "Attempting to register node" node="ci-4426.1.0-a-49404e8b93" Sep 12 17:48:48.008774 containerd[1714]: time="2025-09-12T17:48:48.008756481Z" level=info msg="StartContainer for \"fea558d5e947ecb5a0cec6ad3fb48f3616c7444c5115ec5c9953f08b7cc708ab\" returns successfully" Sep 12 17:48:49.737370 kubelet[2787]: E0912 17:48:49.737319 2787 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4426.1.0-a-49404e8b93\" not found" node="ci-4426.1.0-a-49404e8b93" Sep 12 17:48:49.819762 kubelet[2787]: I0912 17:48:49.819735 2787 kubelet_node_status.go:75] "Successfully registered node" node="ci-4426.1.0-a-49404e8b93" Sep 12 17:48:49.819762 kubelet[2787]: E0912 17:48:49.819758 2787 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4426.1.0-a-49404e8b93\": node \"ci-4426.1.0-a-49404e8b93\" not found" Sep 12 17:48:50.127478 kubelet[2787]: I0912 17:48:50.127448 2787 apiserver.go:52] "Watching apiserver" Sep 12 17:48:50.137016 kubelet[2787]: I0912 17:48:50.136999 2787 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 17:48:51.882907 systemd[1]: Reload requested from client PID 3060 ('systemctl') (unit session-9.scope)... Sep 12 17:48:51.882920 systemd[1]: Reloading... Sep 12 17:48:51.959704 zram_generator::config[3116]: No configuration found. Sep 12 17:48:52.114908 systemd[1]: Reloading finished in 231 ms. Sep 12 17:48:52.146907 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:48:52.162331 systemd[1]: kubelet.service: Deactivated successfully. 
Sep 12 17:48:52.162517 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:48:52.162559 systemd[1]: kubelet.service: Consumed 425ms CPU time, 129M memory peak. Sep 12 17:48:52.163721 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:48:52.671594 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:48:52.677873 (kubelet)[3174]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:48:52.713234 kubelet[3174]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:48:52.713234 kubelet[3174]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 17:48:52.713234 kubelet[3174]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 12 17:48:52.713442 kubelet[3174]: I0912 17:48:52.713289 3174 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:48:52.719651 kubelet[3174]: I0912 17:48:52.719224 3174 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 17:48:52.719651 kubelet[3174]: I0912 17:48:52.719251 3174 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:48:52.719651 kubelet[3174]: I0912 17:48:52.719574 3174 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 17:48:52.721421 kubelet[3174]: I0912 17:48:52.721408 3174 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 12 17:48:52.723827 kubelet[3174]: I0912 17:48:52.723812 3174 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:48:52.726505 kubelet[3174]: I0912 17:48:52.726489 3174 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 17:48:52.729731 kubelet[3174]: I0912 17:48:52.729708 3174 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:48:52.730057 kubelet[3174]: I0912 17:48:52.730044 3174 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 17:48:52.730150 kubelet[3174]: I0912 17:48:52.730126 3174 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:48:52.730264 kubelet[3174]: I0912 17:48:52.730149 3174 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4426.1.0-a-49404e8b93","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolog
yManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:48:52.730343 kubelet[3174]: I0912 17:48:52.730270 3174 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:48:52.730343 kubelet[3174]: I0912 17:48:52.730279 3174 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 17:48:52.730343 kubelet[3174]: I0912 17:48:52.730301 3174 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:48:52.730419 kubelet[3174]: I0912 17:48:52.730372 3174 kubelet.go:408] "Attempting to sync node with API server" Sep 12 17:48:52.730419 kubelet[3174]: I0912 17:48:52.730381 3174 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:48:52.730419 kubelet[3174]: I0912 17:48:52.730406 3174 kubelet.go:314] "Adding apiserver pod source" Sep 12 17:48:52.730419 kubelet[3174]: I0912 17:48:52.730418 3174 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:48:52.733310 kubelet[3174]: I0912 17:48:52.731749 3174 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 17:48:52.733310 kubelet[3174]: I0912 17:48:52.732124 3174 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:48:52.733310 kubelet[3174]: I0912 17:48:52.732478 3174 server.go:1274] "Started kubelet" Sep 12 17:48:52.734159 kubelet[3174]: I0912 17:48:52.734145 3174 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:48:52.743992 kubelet[3174]: I0912 17:48:52.743967 3174 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:48:52.744522 kubelet[3174]: I0912 17:48:52.744506 3174 server.go:449] "Adding debug handlers to kubelet server" Sep 12 17:48:52.749894 kubelet[3174]: I0912 17:48:52.747320 3174 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 17:48:52.749894 kubelet[3174]: I0912 17:48:52.748755 3174 ratelimit.go:55] "Setting rate limiting for 
endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:48:52.749894 kubelet[3174]: I0912 17:48:52.748897 3174 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:48:52.749894 kubelet[3174]: I0912 17:48:52.749349 3174 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:48:52.753670 kubelet[3174]: I0912 17:48:52.753656 3174 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 17:48:52.755413 kubelet[3174]: I0912 17:48:52.755399 3174 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:48:52.756622 kubelet[3174]: I0912 17:48:52.756604 3174 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:48:52.758481 kubelet[3174]: I0912 17:48:52.758416 3174 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 17:48:52.758481 kubelet[3174]: I0912 17:48:52.758434 3174 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 17:48:52.758481 kubelet[3174]: I0912 17:48:52.758446 3174 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 17:48:52.758481 kubelet[3174]: E0912 17:48:52.758473 3174 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:48:52.759665 kubelet[3174]: I0912 17:48:52.759651 3174 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:48:52.759740 kubelet[3174]: I0912 17:48:52.759734 3174 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:48:52.759863 kubelet[3174]: I0912 17:48:52.759826 3174 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such 
file or directory Sep 12 17:48:52.815873 kubelet[3174]: I0912 17:48:52.815815 3174 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 17:48:52.816020 kubelet[3174]: I0912 17:48:52.816012 3174 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 17:48:52.816060 kubelet[3174]: I0912 17:48:52.816057 3174 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:48:52.816166 kubelet[3174]: I0912 17:48:52.816161 3174 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 17:48:52.816198 kubelet[3174]: I0912 17:48:52.816189 3174 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 17:48:52.816219 kubelet[3174]: I0912 17:48:52.816217 3174 policy_none.go:49] "None policy: Start" Sep 12 17:48:52.816699 kubelet[3174]: I0912 17:48:52.816685 3174 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 17:48:52.816759 kubelet[3174]: I0912 17:48:52.816708 3174 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:48:52.816850 kubelet[3174]: I0912 17:48:52.816840 3174 state_mem.go:75] "Updated machine memory state" Sep 12 17:48:52.820376 kubelet[3174]: I0912 17:48:52.820358 3174 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:48:52.820482 kubelet[3174]: I0912 17:48:52.820471 3174 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:48:52.820657 kubelet[3174]: I0912 17:48:52.820483 3174 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:48:52.821106 kubelet[3174]: I0912 17:48:52.821079 3174 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:48:52.866939 kubelet[3174]: W0912 17:48:52.866924 3174 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:48:52.871793 kubelet[3174]: W0912 17:48:52.871774 3174 
warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:48:52.872056 kubelet[3174]: W0912 17:48:52.871971 3174 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:48:52.929323 kubelet[3174]: I0912 17:48:52.929276 3174 kubelet_node_status.go:72] "Attempting to register node" node="ci-4426.1.0-a-49404e8b93" Sep 12 17:48:52.943866 kubelet[3174]: I0912 17:48:52.943797 3174 kubelet_node_status.go:111] "Node was previously registered" node="ci-4426.1.0-a-49404e8b93" Sep 12 17:48:52.943866 kubelet[3174]: I0912 17:48:52.943846 3174 kubelet_node_status.go:75] "Successfully registered node" node="ci-4426.1.0-a-49404e8b93" Sep 12 17:48:53.056426 kubelet[3174]: I0912 17:48:53.056310 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b23535d53434aa6dbadf29a3e042f4ed-ca-certs\") pod \"kube-controller-manager-ci-4426.1.0-a-49404e8b93\" (UID: \"b23535d53434aa6dbadf29a3e042f4ed\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-a-49404e8b93" Sep 12 17:48:53.056426 kubelet[3174]: I0912 17:48:53.056337 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b23535d53434aa6dbadf29a3e042f4ed-k8s-certs\") pod \"kube-controller-manager-ci-4426.1.0-a-49404e8b93\" (UID: \"b23535d53434aa6dbadf29a3e042f4ed\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-a-49404e8b93" Sep 12 17:48:53.056426 kubelet[3174]: I0912 17:48:53.056353 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b23535d53434aa6dbadf29a3e042f4ed-usr-share-ca-certificates\") 
pod \"kube-controller-manager-ci-4426.1.0-a-49404e8b93\" (UID: \"b23535d53434aa6dbadf29a3e042f4ed\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-a-49404e8b93" Sep 12 17:48:53.056426 kubelet[3174]: I0912 17:48:53.056365 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ff5252c6728aa20f3e29f023d6c66e3f-kubeconfig\") pod \"kube-scheduler-ci-4426.1.0-a-49404e8b93\" (UID: \"ff5252c6728aa20f3e29f023d6c66e3f\") " pod="kube-system/kube-scheduler-ci-4426.1.0-a-49404e8b93" Sep 12 17:48:53.056426 kubelet[3174]: I0912 17:48:53.056375 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b1f5f7a6e6efb0f06f6854dee4bc4fec-k8s-certs\") pod \"kube-apiserver-ci-4426.1.0-a-49404e8b93\" (UID: \"b1f5f7a6e6efb0f06f6854dee4bc4fec\") " pod="kube-system/kube-apiserver-ci-4426.1.0-a-49404e8b93" Sep 12 17:48:53.056585 kubelet[3174]: I0912 17:48:53.056385 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b1f5f7a6e6efb0f06f6854dee4bc4fec-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4426.1.0-a-49404e8b93\" (UID: \"b1f5f7a6e6efb0f06f6854dee4bc4fec\") " pod="kube-system/kube-apiserver-ci-4426.1.0-a-49404e8b93" Sep 12 17:48:53.056585 kubelet[3174]: I0912 17:48:53.056395 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b23535d53434aa6dbadf29a3e042f4ed-flexvolume-dir\") pod \"kube-controller-manager-ci-4426.1.0-a-49404e8b93\" (UID: \"b23535d53434aa6dbadf29a3e042f4ed\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-a-49404e8b93" Sep 12 17:48:53.056585 kubelet[3174]: I0912 17:48:53.056409 3174 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b23535d53434aa6dbadf29a3e042f4ed-kubeconfig\") pod \"kube-controller-manager-ci-4426.1.0-a-49404e8b93\" (UID: \"b23535d53434aa6dbadf29a3e042f4ed\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-a-49404e8b93" Sep 12 17:48:53.056585 kubelet[3174]: I0912 17:48:53.056428 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b1f5f7a6e6efb0f06f6854dee4bc4fec-ca-certs\") pod \"kube-apiserver-ci-4426.1.0-a-49404e8b93\" (UID: \"b1f5f7a6e6efb0f06f6854dee4bc4fec\") " pod="kube-system/kube-apiserver-ci-4426.1.0-a-49404e8b93" Sep 12 17:48:53.731241 kubelet[3174]: I0912 17:48:53.731219 3174 apiserver.go:52] "Watching apiserver" Sep 12 17:48:53.756057 kubelet[3174]: I0912 17:48:53.756023 3174 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 17:48:53.842171 kubelet[3174]: I0912 17:48:53.842028 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4426.1.0-a-49404e8b93" podStartSLOduration=1.842012956 podStartE2EDuration="1.842012956s" podCreationTimestamp="2025-09-12 17:48:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:48:53.827873341 +0000 UTC m=+1.146713401" watchObservedRunningTime="2025-09-12 17:48:53.842012956 +0000 UTC m=+1.160853018" Sep 12 17:48:53.852426 kubelet[3174]: I0912 17:48:53.852357 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4426.1.0-a-49404e8b93" podStartSLOduration=1.852206549 podStartE2EDuration="1.852206549s" podCreationTimestamp="2025-09-12 17:48:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-09-12 17:48:53.84277857 +0000 UTC m=+1.161618628" watchObservedRunningTime="2025-09-12 17:48:53.852206549 +0000 UTC m=+1.171046661" Sep 12 17:48:53.860070 kubelet[3174]: I0912 17:48:53.859952 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4426.1.0-a-49404e8b93" podStartSLOduration=1.859934333 podStartE2EDuration="1.859934333s" podCreationTimestamp="2025-09-12 17:48:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:48:53.852563318 +0000 UTC m=+1.171403374" watchObservedRunningTime="2025-09-12 17:48:53.859934333 +0000 UTC m=+1.178774390" Sep 12 17:48:57.643088 kubelet[3174]: I0912 17:48:57.643049 3174 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 17:48:57.643593 containerd[1714]: time="2025-09-12T17:48:57.643323148Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 17:48:57.643906 kubelet[3174]: I0912 17:48:57.643890 3174 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 17:48:58.413181 systemd[1]: Created slice kubepods-besteffort-pod08434829_39fd_4402_95ed_c62b94496df1.slice - libcontainer container kubepods-besteffort-pod08434829_39fd_4402_95ed_c62b94496df1.slice. 
Sep 12 17:48:58.493313 kubelet[3174]: I0912 17:48:58.491787 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/08434829-39fd-4402-95ed-c62b94496df1-kube-proxy\") pod \"kube-proxy-qlrv5\" (UID: \"08434829-39fd-4402-95ed-c62b94496df1\") " pod="kube-system/kube-proxy-qlrv5" Sep 12 17:48:58.493576 kubelet[3174]: I0912 17:48:58.493513 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59ksj\" (UniqueName: \"kubernetes.io/projected/08434829-39fd-4402-95ed-c62b94496df1-kube-api-access-59ksj\") pod \"kube-proxy-qlrv5\" (UID: \"08434829-39fd-4402-95ed-c62b94496df1\") " pod="kube-system/kube-proxy-qlrv5" Sep 12 17:48:58.493576 kubelet[3174]: I0912 17:48:58.493545 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/08434829-39fd-4402-95ed-c62b94496df1-xtables-lock\") pod \"kube-proxy-qlrv5\" (UID: \"08434829-39fd-4402-95ed-c62b94496df1\") " pod="kube-system/kube-proxy-qlrv5" Sep 12 17:48:58.493576 kubelet[3174]: I0912 17:48:58.493562 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/08434829-39fd-4402-95ed-c62b94496df1-lib-modules\") pod \"kube-proxy-qlrv5\" (UID: \"08434829-39fd-4402-95ed-c62b94496df1\") " pod="kube-system/kube-proxy-qlrv5" Sep 12 17:48:58.533419 systemd[1]: Created slice kubepods-besteffort-pod7232064b_ad15_4479_8ed9_664dbcb48717.slice - libcontainer container kubepods-besteffort-pod7232064b_ad15_4479_8ed9_664dbcb48717.slice. 
Sep 12 17:48:58.594792 kubelet[3174]: I0912 17:48:58.594763 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c9xn\" (UniqueName: \"kubernetes.io/projected/7232064b-ad15-4479-8ed9-664dbcb48717-kube-api-access-7c9xn\") pod \"tigera-operator-58fc44c59b-pgqbk\" (UID: \"7232064b-ad15-4479-8ed9-664dbcb48717\") " pod="tigera-operator/tigera-operator-58fc44c59b-pgqbk" Sep 12 17:48:58.594877 kubelet[3174]: I0912 17:48:58.594837 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7232064b-ad15-4479-8ed9-664dbcb48717-var-lib-calico\") pod \"tigera-operator-58fc44c59b-pgqbk\" (UID: \"7232064b-ad15-4479-8ed9-664dbcb48717\") " pod="tigera-operator/tigera-operator-58fc44c59b-pgqbk" Sep 12 17:48:58.725212 containerd[1714]: time="2025-09-12T17:48:58.725132269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qlrv5,Uid:08434829-39fd-4402-95ed-c62b94496df1,Namespace:kube-system,Attempt:0,}" Sep 12 17:48:58.766293 containerd[1714]: time="2025-09-12T17:48:58.766223512Z" level=info msg="connecting to shim 379a63dded8d5802251423d130d49d4fef4914d4499499a6a2a7788b386abe91" address="unix:///run/containerd/s/6994378827b8815b874d95aa6ce25bbfeb690ae0c596c898f26374fcf52dae71" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:48:58.789779 systemd[1]: Started cri-containerd-379a63dded8d5802251423d130d49d4fef4914d4499499a6a2a7788b386abe91.scope - libcontainer container 379a63dded8d5802251423d130d49d4fef4914d4499499a6a2a7788b386abe91. 
Sep 12 17:48:58.812976 containerd[1714]: time="2025-09-12T17:48:58.812951229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qlrv5,Uid:08434829-39fd-4402-95ed-c62b94496df1,Namespace:kube-system,Attempt:0,} returns sandbox id \"379a63dded8d5802251423d130d49d4fef4914d4499499a6a2a7788b386abe91\"" Sep 12 17:48:58.815341 containerd[1714]: time="2025-09-12T17:48:58.815317780Z" level=info msg="CreateContainer within sandbox \"379a63dded8d5802251423d130d49d4fef4914d4499499a6a2a7788b386abe91\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 17:48:58.832020 containerd[1714]: time="2025-09-12T17:48:58.831625198Z" level=info msg="Container 5058d3e8c7714ecf97ed44308627dd230a45d12c14e7406dcce02d6f230117ec: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:48:58.837955 containerd[1714]: time="2025-09-12T17:48:58.837931542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-pgqbk,Uid:7232064b-ad15-4479-8ed9-664dbcb48717,Namespace:tigera-operator,Attempt:0,}" Sep 12 17:48:58.866005 containerd[1714]: time="2025-09-12T17:48:58.865982052Z" level=info msg="CreateContainer within sandbox \"379a63dded8d5802251423d130d49d4fef4914d4499499a6a2a7788b386abe91\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5058d3e8c7714ecf97ed44308627dd230a45d12c14e7406dcce02d6f230117ec\"" Sep 12 17:48:58.866592 containerd[1714]: time="2025-09-12T17:48:58.866292474Z" level=info msg="StartContainer for \"5058d3e8c7714ecf97ed44308627dd230a45d12c14e7406dcce02d6f230117ec\"" Sep 12 17:48:58.867595 containerd[1714]: time="2025-09-12T17:48:58.867567869Z" level=info msg="connecting to shim 5058d3e8c7714ecf97ed44308627dd230a45d12c14e7406dcce02d6f230117ec" address="unix:///run/containerd/s/6994378827b8815b874d95aa6ce25bbfeb690ae0c596c898f26374fcf52dae71" protocol=ttrpc version=3 Sep 12 17:48:58.884772 systemd[1]: Started cri-containerd-5058d3e8c7714ecf97ed44308627dd230a45d12c14e7406dcce02d6f230117ec.scope - 
libcontainer container 5058d3e8c7714ecf97ed44308627dd230a45d12c14e7406dcce02d6f230117ec. Sep 12 17:48:58.893396 containerd[1714]: time="2025-09-12T17:48:58.893372410Z" level=info msg="connecting to shim 1f71e9fd068da0c3b16f42a3555aae9a212a523b14ed02fb37abce062588bcbd" address="unix:///run/containerd/s/0bf21fee6d73235ec820105e18c6d498a04ac46f3b8dc6f215f361118ca75330" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:48:58.912824 systemd[1]: Started cri-containerd-1f71e9fd068da0c3b16f42a3555aae9a212a523b14ed02fb37abce062588bcbd.scope - libcontainer container 1f71e9fd068da0c3b16f42a3555aae9a212a523b14ed02fb37abce062588bcbd. Sep 12 17:48:58.924522 containerd[1714]: time="2025-09-12T17:48:58.924501499Z" level=info msg="StartContainer for \"5058d3e8c7714ecf97ed44308627dd230a45d12c14e7406dcce02d6f230117ec\" returns successfully" Sep 12 17:48:58.961060 containerd[1714]: time="2025-09-12T17:48:58.961000116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-pgqbk,Uid:7232064b-ad15-4479-8ed9-664dbcb48717,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1f71e9fd068da0c3b16f42a3555aae9a212a523b14ed02fb37abce062588bcbd\"" Sep 12 17:48:58.962991 containerd[1714]: time="2025-09-12T17:48:58.962804395Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 12 17:49:00.840234 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1656990656.mount: Deactivated successfully. 
Sep 12 17:49:01.392398 containerd[1714]: time="2025-09-12T17:49:01.392364855Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:49:01.394653 containerd[1714]: time="2025-09-12T17:49:01.394579281Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 12 17:49:01.397278 containerd[1714]: time="2025-09-12T17:49:01.397243198Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:49:01.400424 containerd[1714]: time="2025-09-12T17:49:01.400388204Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:49:01.400782 containerd[1714]: time="2025-09-12T17:49:01.400702820Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.437581206s" Sep 12 17:49:01.400782 containerd[1714]: time="2025-09-12T17:49:01.400726993Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 12 17:49:01.402311 containerd[1714]: time="2025-09-12T17:49:01.402284701Z" level=info msg="CreateContainer within sandbox \"1f71e9fd068da0c3b16f42a3555aae9a212a523b14ed02fb37abce062588bcbd\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 12 17:49:01.422413 containerd[1714]: time="2025-09-12T17:49:01.421923166Z" level=info msg="Container 
259d02469e6e50b583bcc3b92d3d7a40cc442bb028381a135a11bd654749c669: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:49:01.434136 containerd[1714]: time="2025-09-12T17:49:01.434112616Z" level=info msg="CreateContainer within sandbox \"1f71e9fd068da0c3b16f42a3555aae9a212a523b14ed02fb37abce062588bcbd\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"259d02469e6e50b583bcc3b92d3d7a40cc442bb028381a135a11bd654749c669\"" Sep 12 17:49:01.434740 containerd[1714]: time="2025-09-12T17:49:01.434651002Z" level=info msg="StartContainer for \"259d02469e6e50b583bcc3b92d3d7a40cc442bb028381a135a11bd654749c669\"" Sep 12 17:49:01.435539 containerd[1714]: time="2025-09-12T17:49:01.435515847Z" level=info msg="connecting to shim 259d02469e6e50b583bcc3b92d3d7a40cc442bb028381a135a11bd654749c669" address="unix:///run/containerd/s/0bf21fee6d73235ec820105e18c6d498a04ac46f3b8dc6f215f361118ca75330" protocol=ttrpc version=3 Sep 12 17:49:01.450843 systemd[1]: Started cri-containerd-259d02469e6e50b583bcc3b92d3d7a40cc442bb028381a135a11bd654749c669.scope - libcontainer container 259d02469e6e50b583bcc3b92d3d7a40cc442bb028381a135a11bd654749c669. 
Sep 12 17:49:01.476024 containerd[1714]: time="2025-09-12T17:49:01.475995155Z" level=info msg="StartContainer for \"259d02469e6e50b583bcc3b92d3d7a40cc442bb028381a135a11bd654749c669\" returns successfully" Sep 12 17:49:01.821390 kubelet[3174]: I0912 17:49:01.820880 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qlrv5" podStartSLOduration=3.820863368 podStartE2EDuration="3.820863368s" podCreationTimestamp="2025-09-12 17:48:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:48:59.811691538 +0000 UTC m=+7.130531607" watchObservedRunningTime="2025-09-12 17:49:01.820863368 +0000 UTC m=+9.139703424" Sep 12 17:49:01.821390 kubelet[3174]: I0912 17:49:01.820981 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-pgqbk" podStartSLOduration=1.38150398 podStartE2EDuration="3.820975847s" podCreationTimestamp="2025-09-12 17:48:58 +0000 UTC" firstStartedPulling="2025-09-12 17:48:58.961963076 +0000 UTC m=+6.280803134" lastFinishedPulling="2025-09-12 17:49:01.401434945 +0000 UTC m=+8.720275001" observedRunningTime="2025-09-12 17:49:01.820939185 +0000 UTC m=+9.139779264" watchObservedRunningTime="2025-09-12 17:49:01.820975847 +0000 UTC m=+9.139815905" Sep 12 17:49:06.739895 sudo[2186]: pam_unix(sudo:session): session closed for user root Sep 12 17:49:06.840032 sshd[2185]: Connection closed by 10.200.16.10 port 60832 Sep 12 17:49:06.840783 sshd-session[2182]: pam_unix(sshd:session): session closed for user core Sep 12 17:49:06.845813 systemd[1]: sshd@6-10.200.8.42:22-10.200.16.10:60832.service: Deactivated successfully. Sep 12 17:49:06.849105 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 17:49:06.850038 systemd[1]: session-9.scope: Consumed 2.886s CPU time, 226.9M memory peak. Sep 12 17:49:06.852974 systemd-logind[1701]: Session 9 logged out. 
Waiting for processes to exit. Sep 12 17:49:06.855931 systemd-logind[1701]: Removed session 9. Sep 12 17:49:09.517304 systemd[1]: Created slice kubepods-besteffort-pod3a92148d_caf1_4540_a3fe_818389314729.slice - libcontainer container kubepods-besteffort-pod3a92148d_caf1_4540_a3fe_818389314729.slice. Sep 12 17:49:09.566968 kubelet[3174]: I0912 17:49:09.566938 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a92148d-caf1-4540-a3fe-818389314729-tigera-ca-bundle\") pod \"calico-typha-87cdd5b44-z8tn6\" (UID: \"3a92148d-caf1-4540-a3fe-818389314729\") " pod="calico-system/calico-typha-87cdd5b44-z8tn6" Sep 12 17:49:09.567351 kubelet[3174]: I0912 17:49:09.567337 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqdpk\" (UniqueName: \"kubernetes.io/projected/3a92148d-caf1-4540-a3fe-818389314729-kube-api-access-qqdpk\") pod \"calico-typha-87cdd5b44-z8tn6\" (UID: \"3a92148d-caf1-4540-a3fe-818389314729\") " pod="calico-system/calico-typha-87cdd5b44-z8tn6" Sep 12 17:49:09.567450 kubelet[3174]: I0912 17:49:09.567440 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3a92148d-caf1-4540-a3fe-818389314729-typha-certs\") pod \"calico-typha-87cdd5b44-z8tn6\" (UID: \"3a92148d-caf1-4540-a3fe-818389314729\") " pod="calico-system/calico-typha-87cdd5b44-z8tn6" Sep 12 17:49:09.823808 containerd[1714]: time="2025-09-12T17:49:09.823775463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-87cdd5b44-z8tn6,Uid:3a92148d-caf1-4540-a3fe-818389314729,Namespace:calico-system,Attempt:0,}" Sep 12 17:49:09.859031 systemd[1]: Created slice kubepods-besteffort-podbe711e75_1fd9_49aa_aeb6_9f8fd5cc491c.slice - libcontainer container kubepods-besteffort-podbe711e75_1fd9_49aa_aeb6_9f8fd5cc491c.slice. 
Sep 12 17:49:09.870057 kubelet[3174]: I0912 17:49:09.869755 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be711e75-1fd9-49aa-aeb6-9f8fd5cc491c-tigera-ca-bundle\") pod \"calico-node-xnc8x\" (UID: \"be711e75-1fd9-49aa-aeb6-9f8fd5cc491c\") " pod="calico-system/calico-node-xnc8x" Sep 12 17:49:09.870057 kubelet[3174]: I0912 17:49:09.869785 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/be711e75-1fd9-49aa-aeb6-9f8fd5cc491c-var-lib-calico\") pod \"calico-node-xnc8x\" (UID: \"be711e75-1fd9-49aa-aeb6-9f8fd5cc491c\") " pod="calico-system/calico-node-xnc8x" Sep 12 17:49:09.870057 kubelet[3174]: I0912 17:49:09.869803 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be711e75-1fd9-49aa-aeb6-9f8fd5cc491c-xtables-lock\") pod \"calico-node-xnc8x\" (UID: \"be711e75-1fd9-49aa-aeb6-9f8fd5cc491c\") " pod="calico-system/calico-node-xnc8x" Sep 12 17:49:09.870057 kubelet[3174]: I0912 17:49:09.869820 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvdcq\" (UniqueName: \"kubernetes.io/projected/be711e75-1fd9-49aa-aeb6-9f8fd5cc491c-kube-api-access-zvdcq\") pod \"calico-node-xnc8x\" (UID: \"be711e75-1fd9-49aa-aeb6-9f8fd5cc491c\") " pod="calico-system/calico-node-xnc8x" Sep 12 17:49:09.870057 kubelet[3174]: I0912 17:49:09.869839 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be711e75-1fd9-49aa-aeb6-9f8fd5cc491c-lib-modules\") pod \"calico-node-xnc8x\" (UID: \"be711e75-1fd9-49aa-aeb6-9f8fd5cc491c\") " pod="calico-system/calico-node-xnc8x" Sep 12 17:49:09.870223 kubelet[3174]: I0912 
17:49:09.869855 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/be711e75-1fd9-49aa-aeb6-9f8fd5cc491c-node-certs\") pod \"calico-node-xnc8x\" (UID: \"be711e75-1fd9-49aa-aeb6-9f8fd5cc491c\") " pod="calico-system/calico-node-xnc8x" Sep 12 17:49:09.870223 kubelet[3174]: I0912 17:49:09.869871 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/be711e75-1fd9-49aa-aeb6-9f8fd5cc491c-cni-net-dir\") pod \"calico-node-xnc8x\" (UID: \"be711e75-1fd9-49aa-aeb6-9f8fd5cc491c\") " pod="calico-system/calico-node-xnc8x" Sep 12 17:49:09.870223 kubelet[3174]: I0912 17:49:09.869888 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/be711e75-1fd9-49aa-aeb6-9f8fd5cc491c-policysync\") pod \"calico-node-xnc8x\" (UID: \"be711e75-1fd9-49aa-aeb6-9f8fd5cc491c\") " pod="calico-system/calico-node-xnc8x" Sep 12 17:49:09.870223 kubelet[3174]: I0912 17:49:09.869904 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/be711e75-1fd9-49aa-aeb6-9f8fd5cc491c-cni-log-dir\") pod \"calico-node-xnc8x\" (UID: \"be711e75-1fd9-49aa-aeb6-9f8fd5cc491c\") " pod="calico-system/calico-node-xnc8x" Sep 12 17:49:09.870223 kubelet[3174]: I0912 17:49:09.869919 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/be711e75-1fd9-49aa-aeb6-9f8fd5cc491c-var-run-calico\") pod \"calico-node-xnc8x\" (UID: \"be711e75-1fd9-49aa-aeb6-9f8fd5cc491c\") " pod="calico-system/calico-node-xnc8x" Sep 12 17:49:09.870324 kubelet[3174]: I0912 17:49:09.869935 3174 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/be711e75-1fd9-49aa-aeb6-9f8fd5cc491c-cni-bin-dir\") pod \"calico-node-xnc8x\" (UID: \"be711e75-1fd9-49aa-aeb6-9f8fd5cc491c\") " pod="calico-system/calico-node-xnc8x" Sep 12 17:49:09.870324 kubelet[3174]: I0912 17:49:09.869950 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/be711e75-1fd9-49aa-aeb6-9f8fd5cc491c-flexvol-driver-host\") pod \"calico-node-xnc8x\" (UID: \"be711e75-1fd9-49aa-aeb6-9f8fd5cc491c\") " pod="calico-system/calico-node-xnc8x" Sep 12 17:49:09.876402 containerd[1714]: time="2025-09-12T17:49:09.876375764Z" level=info msg="connecting to shim 6223778b320d67a3100efe127b9bae08f63b67229e79beec05dc21fcade40931" address="unix:///run/containerd/s/25d297734edaa47fab835302b7db21377b352801eae4cbe33d5bb36b2eb3f3c0" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:49:09.902760 systemd[1]: Started cri-containerd-6223778b320d67a3100efe127b9bae08f63b67229e79beec05dc21fcade40931.scope - libcontainer container 6223778b320d67a3100efe127b9bae08f63b67229e79beec05dc21fcade40931. 
Sep 12 17:49:09.940088 containerd[1714]: time="2025-09-12T17:49:09.940070570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-87cdd5b44-z8tn6,Uid:3a92148d-caf1-4540-a3fe-818389314729,Namespace:calico-system,Attempt:0,} returns sandbox id \"6223778b320d67a3100efe127b9bae08f63b67229e79beec05dc21fcade40931\"" Sep 12 17:49:09.941014 containerd[1714]: time="2025-09-12T17:49:09.940989072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 12 17:49:09.971556 kubelet[3174]: E0912 17:49:09.971536 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:09.971623 kubelet[3174]: W0912 17:49:09.971553 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:09.971623 kubelet[3174]: E0912 17:49:09.971583 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:49:09.971769 kubelet[3174]: E0912 17:49:09.971759 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:09.971769 kubelet[3174]: W0912 17:49:09.971768 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:09.971818 kubelet[3174]: E0912 17:49:09.971778 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:49:09.971913 kubelet[3174]: E0912 17:49:09.971902 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:09.971913 kubelet[3174]: W0912 17:49:09.971911 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:09.971961 kubelet[3174]: E0912 17:49:09.971918 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:49:09.972098 kubelet[3174]: E0912 17:49:09.972088 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:09.972135 kubelet[3174]: W0912 17:49:09.972098 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:09.972135 kubelet[3174]: E0912 17:49:09.972110 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:49:09.972321 kubelet[3174]: E0912 17:49:09.972285 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:09.972321 kubelet[3174]: W0912 17:49:09.972291 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:09.972321 kubelet[3174]: E0912 17:49:09.972301 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:49:09.972491 kubelet[3174]: E0912 17:49:09.972415 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:09.972491 kubelet[3174]: W0912 17:49:09.972441 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:09.972491 kubelet[3174]: E0912 17:49:09.972453 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:49:09.972744 kubelet[3174]: E0912 17:49:09.972732 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:09.972744 kubelet[3174]: W0912 17:49:09.972741 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:09.972851 kubelet[3174]: E0912 17:49:09.972830 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:49:09.973079 kubelet[3174]: E0912 17:49:09.973067 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:09.973079 kubelet[3174]: W0912 17:49:09.973078 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:09.973402 kubelet[3174]: E0912 17:49:09.973388 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:49:09.973453 kubelet[3174]: E0912 17:49:09.973442 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:09.973453 kubelet[3174]: W0912 17:49:09.973450 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:09.973593 kubelet[3174]: E0912 17:49:09.973578 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:49:09.973735 kubelet[3174]: E0912 17:49:09.973724 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:09.973773 kubelet[3174]: W0912 17:49:09.973734 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:09.973881 kubelet[3174]: E0912 17:49:09.973792 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:49:09.973960 kubelet[3174]: E0912 17:49:09.973915 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:09.973960 kubelet[3174]: W0912 17:49:09.973922 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:09.973960 kubelet[3174]: E0912 17:49:09.973935 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:49:09.974094 kubelet[3174]: E0912 17:49:09.974084 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:09.974094 kubelet[3174]: W0912 17:49:09.974091 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:09.974153 kubelet[3174]: E0912 17:49:09.974103 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:49:09.974383 kubelet[3174]: E0912 17:49:09.974339 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:09.974383 kubelet[3174]: W0912 17:49:09.974347 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:09.974383 kubelet[3174]: E0912 17:49:09.974356 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:49:09.974520 kubelet[3174]: E0912 17:49:09.974509 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:09.974549 kubelet[3174]: W0912 17:49:09.974519 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:09.974549 kubelet[3174]: E0912 17:49:09.974528 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:49:09.974775 kubelet[3174]: E0912 17:49:09.974674 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:09.974775 kubelet[3174]: W0912 17:49:09.974681 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:09.974775 kubelet[3174]: E0912 17:49:09.974689 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:49:09.974977 kubelet[3174]: E0912 17:49:09.974947 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:09.974977 kubelet[3174]: W0912 17:49:09.974955 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:09.974977 kubelet[3174]: E0912 17:49:09.974964 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:49:09.975666 kubelet[3174]: E0912 17:49:09.975051 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:09.975666 kubelet[3174]: W0912 17:49:09.975057 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:09.975666 kubelet[3174]: E0912 17:49:09.975063 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:49:09.975666 kubelet[3174]: E0912 17:49:09.975183 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:09.975666 kubelet[3174]: W0912 17:49:09.975189 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:09.975666 kubelet[3174]: E0912 17:49:09.975196 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:49:09.975666 kubelet[3174]: E0912 17:49:09.975283 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:09.975666 kubelet[3174]: W0912 17:49:09.975288 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:09.975666 kubelet[3174]: E0912 17:49:09.975295 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:49:09.975666 kubelet[3174]: E0912 17:49:09.975366 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:09.975916 kubelet[3174]: W0912 17:49:09.975371 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:09.975916 kubelet[3174]: E0912 17:49:09.975377 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:49:09.975916 kubelet[3174]: E0912 17:49:09.975477 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:09.975916 kubelet[3174]: W0912 17:49:09.975482 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:09.975916 kubelet[3174]: E0912 17:49:09.975488 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:49:09.980973 kubelet[3174]: E0912 17:49:09.980955 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:09.980973 kubelet[3174]: W0912 17:49:09.980969 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:09.981080 kubelet[3174]: E0912 17:49:09.980981 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:49:09.984048 kubelet[3174]: E0912 17:49:09.984032 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:09.984116 kubelet[3174]: W0912 17:49:09.984072 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:09.984116 kubelet[3174]: E0912 17:49:09.984084 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:49:10.049001 kubelet[3174]: E0912 17:49:10.048970 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s6pvs" podUID="99b0530c-5bfd-4544-b93d-7c5fee8711e7" Sep 12 17:49:10.057109 kubelet[3174]: E0912 17:49:10.057056 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.057109 kubelet[3174]: W0912 17:49:10.057071 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.057109 kubelet[3174]: E0912 17:49:10.057083 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:49:10.057378 kubelet[3174]: E0912 17:49:10.057337 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.057378 kubelet[3174]: W0912 17:49:10.057345 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.057378 kubelet[3174]: E0912 17:49:10.057354 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:49:10.057557 kubelet[3174]: E0912 17:49:10.057523 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.057557 kubelet[3174]: W0912 17:49:10.057529 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.057557 kubelet[3174]: E0912 17:49:10.057536 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:49:10.057741 kubelet[3174]: E0912 17:49:10.057698 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.057741 kubelet[3174]: W0912 17:49:10.057705 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.057741 kubelet[3174]: E0912 17:49:10.057712 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:49:10.057897 kubelet[3174]: E0912 17:49:10.057867 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.057897 kubelet[3174]: W0912 17:49:10.057873 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.057897 kubelet[3174]: E0912 17:49:10.057879 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:49:10.058071 kubelet[3174]: E0912 17:49:10.058021 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.058071 kubelet[3174]: W0912 17:49:10.058027 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.058071 kubelet[3174]: E0912 17:49:10.058048 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:49:10.058233 kubelet[3174]: E0912 17:49:10.058200 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.058233 kubelet[3174]: W0912 17:49:10.058206 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.058233 kubelet[3174]: E0912 17:49:10.058213 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:49:10.058401 kubelet[3174]: E0912 17:49:10.058361 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.058401 kubelet[3174]: W0912 17:49:10.058367 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.058401 kubelet[3174]: E0912 17:49:10.058373 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:49:10.058556 kubelet[3174]: E0912 17:49:10.058526 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.058556 kubelet[3174]: W0912 17:49:10.058532 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.058556 kubelet[3174]: E0912 17:49:10.058538 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:49:10.059683 kubelet[3174]: E0912 17:49:10.059653 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.059816 kubelet[3174]: W0912 17:49:10.059761 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.059816 kubelet[3174]: E0912 17:49:10.059778 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:49:10.060034 kubelet[3174]: E0912 17:49:10.059983 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.060034 kubelet[3174]: W0912 17:49:10.059992 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.060034 kubelet[3174]: E0912 17:49:10.060003 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:49:10.060259 kubelet[3174]: E0912 17:49:10.060201 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.060259 kubelet[3174]: W0912 17:49:10.060209 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.060259 kubelet[3174]: E0912 17:49:10.060218 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:49:10.060465 kubelet[3174]: E0912 17:49:10.060423 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.060465 kubelet[3174]: W0912 17:49:10.060431 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.060465 kubelet[3174]: E0912 17:49:10.060441 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:49:10.060657 kubelet[3174]: E0912 17:49:10.060616 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.060657 kubelet[3174]: W0912 17:49:10.060623 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.060744 kubelet[3174]: E0912 17:49:10.060714 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:49:10.060885 kubelet[3174]: E0912 17:49:10.060843 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.060885 kubelet[3174]: W0912 17:49:10.060851 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.060885 kubelet[3174]: E0912 17:49:10.060861 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:49:10.061086 kubelet[3174]: E0912 17:49:10.061030 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.061086 kubelet[3174]: W0912 17:49:10.061038 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.061086 kubelet[3174]: E0912 17:49:10.061047 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:49:10.061340 kubelet[3174]: E0912 17:49:10.061313 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.061340 kubelet[3174]: W0912 17:49:10.061337 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.061408 kubelet[3174]: E0912 17:49:10.061346 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:49:10.061471 kubelet[3174]: E0912 17:49:10.061462 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.061471 kubelet[3174]: W0912 17:49:10.061469 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.061520 kubelet[3174]: E0912 17:49:10.061475 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:49:10.061610 kubelet[3174]: E0912 17:49:10.061588 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.061610 kubelet[3174]: W0912 17:49:10.061608 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.061681 kubelet[3174]: E0912 17:49:10.061613 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:49:10.061780 kubelet[3174]: E0912 17:49:10.061755 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.061780 kubelet[3174]: W0912 17:49:10.061776 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.061828 kubelet[3174]: E0912 17:49:10.061782 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:49:10.071320 kubelet[3174]: E0912 17:49:10.071304 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.071320 kubelet[3174]: W0912 17:49:10.071317 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.071472 kubelet[3174]: E0912 17:49:10.071329 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:49:10.071472 kubelet[3174]: I0912 17:49:10.071349 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/99b0530c-5bfd-4544-b93d-7c5fee8711e7-varrun\") pod \"csi-node-driver-s6pvs\" (UID: \"99b0530c-5bfd-4544-b93d-7c5fee8711e7\") " pod="calico-system/csi-node-driver-s6pvs" Sep 12 17:49:10.071552 kubelet[3174]: E0912 17:49:10.071542 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.071576 kubelet[3174]: W0912 17:49:10.071551 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.071576 kubelet[3174]: E0912 17:49:10.071561 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:49:10.071576 kubelet[3174]: I0912 17:49:10.071575 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/99b0530c-5bfd-4544-b93d-7c5fee8711e7-kubelet-dir\") pod \"csi-node-driver-s6pvs\" (UID: \"99b0530c-5bfd-4544-b93d-7c5fee8711e7\") " pod="calico-system/csi-node-driver-s6pvs" Sep 12 17:49:10.071746 kubelet[3174]: E0912 17:49:10.071736 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.071746 kubelet[3174]: W0912 17:49:10.071744 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.071790 kubelet[3174]: E0912 17:49:10.071755 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:49:10.071790 kubelet[3174]: I0912 17:49:10.071769 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncqhr\" (UniqueName: \"kubernetes.io/projected/99b0530c-5bfd-4544-b93d-7c5fee8711e7-kube-api-access-ncqhr\") pod \"csi-node-driver-s6pvs\" (UID: \"99b0530c-5bfd-4544-b93d-7c5fee8711e7\") " pod="calico-system/csi-node-driver-s6pvs" Sep 12 17:49:10.071904 kubelet[3174]: E0912 17:49:10.071888 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.071904 kubelet[3174]: W0912 17:49:10.071900 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.071943 kubelet[3174]: E0912 17:49:10.071918 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:49:10.071943 kubelet[3174]: I0912 17:49:10.071933 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/99b0530c-5bfd-4544-b93d-7c5fee8711e7-socket-dir\") pod \"csi-node-driver-s6pvs\" (UID: \"99b0530c-5bfd-4544-b93d-7c5fee8711e7\") " pod="calico-system/csi-node-driver-s6pvs" Sep 12 17:49:10.072057 kubelet[3174]: E0912 17:49:10.072048 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.072057 kubelet[3174]: W0912 17:49:10.072056 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.072057 kubelet[3174]: E0912 17:49:10.072065 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:49:10.072151 kubelet[3174]: I0912 17:49:10.072079 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/99b0530c-5bfd-4544-b93d-7c5fee8711e7-registration-dir\") pod \"csi-node-driver-s6pvs\" (UID: \"99b0530c-5bfd-4544-b93d-7c5fee8711e7\") " pod="calico-system/csi-node-driver-s6pvs" Sep 12 17:49:10.072271 kubelet[3174]: E0912 17:49:10.072210 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.072271 kubelet[3174]: W0912 17:49:10.072221 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.072271 kubelet[3174]: E0912 17:49:10.072234 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:49:10.072501 kubelet[3174]: E0912 17:49:10.072448 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.072501 kubelet[3174]: W0912 17:49:10.072455 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.072501 kubelet[3174]: E0912 17:49:10.072470 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:49:10.072721 kubelet[3174]: E0912 17:49:10.072708 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.072721 kubelet[3174]: W0912 17:49:10.072721 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.072964 kubelet[3174]: E0912 17:49:10.072733 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:49:10.073165 kubelet[3174]: E0912 17:49:10.073152 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.073376 kubelet[3174]: W0912 17:49:10.073165 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.073376 kubelet[3174]: E0912 17:49:10.073244 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:49:10.073376 kubelet[3174]: E0912 17:49:10.073350 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.073376 kubelet[3174]: W0912 17:49:10.073356 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.073567 kubelet[3174]: E0912 17:49:10.073497 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:49:10.073654 kubelet[3174]: E0912 17:49:10.073643 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.073680 kubelet[3174]: W0912 17:49:10.073654 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.073800 kubelet[3174]: E0912 17:49:10.073780 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.073800 kubelet[3174]: W0912 17:49:10.073786 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.074277 kubelet[3174]: E0912 17:49:10.073893 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.074277 kubelet[3174]: W0912 17:49:10.073898 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.074277 kubelet[3174]: E0912 17:49:10.073906 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:49:10.074277 kubelet[3174]: E0912 17:49:10.073921 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:49:10.074277 kubelet[3174]: E0912 17:49:10.074009 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.074277 kubelet[3174]: W0912 17:49:10.074013 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.074277 kubelet[3174]: E0912 17:49:10.074020 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:49:10.074277 kubelet[3174]: E0912 17:49:10.074152 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:49:10.074564 kubelet[3174]: E0912 17:49:10.074533 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.074564 kubelet[3174]: W0912 17:49:10.074541 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.074564 kubelet[3174]: E0912 17:49:10.074550 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:49:10.163238 containerd[1714]: time="2025-09-12T17:49:10.163213362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xnc8x,Uid:be711e75-1fd9-49aa-aeb6-9f8fd5cc491c,Namespace:calico-system,Attempt:0,}" Sep 12 17:49:10.172710 kubelet[3174]: E0912 17:49:10.172695 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.172710 kubelet[3174]: W0912 17:49:10.172707 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.172808 kubelet[3174]: E0912 17:49:10.172720 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:49:10.173169 kubelet[3174]: E0912 17:49:10.173089 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.173169 kubelet[3174]: W0912 17:49:10.173105 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.173169 kubelet[3174]: E0912 17:49:10.173126 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:49:10.173290 kubelet[3174]: E0912 17:49:10.173251 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.173290 kubelet[3174]: W0912 17:49:10.173257 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.173290 kubelet[3174]: E0912 17:49:10.173266 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:49:10.173450 kubelet[3174]: E0912 17:49:10.173378 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.173450 kubelet[3174]: W0912 17:49:10.173384 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.173450 kubelet[3174]: E0912 17:49:10.173390 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:49:10.173557 kubelet[3174]: E0912 17:49:10.173492 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.173557 kubelet[3174]: W0912 17:49:10.173499 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.173704 kubelet[3174]: E0912 17:49:10.173610 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:49:10.173829 kubelet[3174]: E0912 17:49:10.173812 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.173829 kubelet[3174]: W0912 17:49:10.173820 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.173908 kubelet[3174]: E0912 17:49:10.173888 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:49:10.174027 kubelet[3174]: E0912 17:49:10.174004 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.174027 kubelet[3174]: W0912 17:49:10.174026 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.174082 kubelet[3174]: E0912 17:49:10.174039 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:49:10.174232 kubelet[3174]: E0912 17:49:10.174212 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.174232 kubelet[3174]: W0912 17:49:10.174231 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.174284 kubelet[3174]: E0912 17:49:10.174241 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:49:10.174338 kubelet[3174]: E0912 17:49:10.174329 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.174338 kubelet[3174]: W0912 17:49:10.174335 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.174391 kubelet[3174]: E0912 17:49:10.174342 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:49:10.174443 kubelet[3174]: E0912 17:49:10.174430 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.174443 kubelet[3174]: W0912 17:49:10.174442 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.174542 kubelet[3174]: E0912 17:49:10.174521 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:49:10.174572 kubelet[3174]: E0912 17:49:10.174567 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.174593 kubelet[3174]: W0912 17:49:10.174572 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.174669 kubelet[3174]: E0912 17:49:10.174645 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:49:10.174700 kubelet[3174]: E0912 17:49:10.174690 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.174700 kubelet[3174]: W0912 17:49:10.174695 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.174796 kubelet[3174]: E0912 17:49:10.174787 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:49:10.174847 kubelet[3174]: E0912 17:49:10.174829 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.174847 kubelet[3174]: W0912 17:49:10.174846 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.174947 kubelet[3174]: E0912 17:49:10.174937 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:49:10.175119 kubelet[3174]: E0912 17:49:10.174956 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.175119 kubelet[3174]: W0912 17:49:10.175007 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.175119 kubelet[3174]: E0912 17:49:10.175016 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:49:10.175251 kubelet[3174]: E0912 17:49:10.175224 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.175251 kubelet[3174]: W0912 17:49:10.175244 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.175306 kubelet[3174]: E0912 17:49:10.175253 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:49:10.175354 kubelet[3174]: E0912 17:49:10.175342 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.175354 kubelet[3174]: W0912 17:49:10.175352 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.175402 kubelet[3174]: E0912 17:49:10.175364 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:49:10.175485 kubelet[3174]: E0912 17:49:10.175458 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.175485 kubelet[3174]: W0912 17:49:10.175478 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.175542 kubelet[3174]: E0912 17:49:10.175489 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:49:10.175763 kubelet[3174]: E0912 17:49:10.175737 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.175763 kubelet[3174]: W0912 17:49:10.175761 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.175824 kubelet[3174]: E0912 17:49:10.175776 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:49:10.176015 kubelet[3174]: E0912 17:49:10.176005 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.176015 kubelet[3174]: W0912 17:49:10.176015 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.176253 kubelet[3174]: E0912 17:49:10.176026 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:49:10.176253 kubelet[3174]: E0912 17:49:10.176150 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.176253 kubelet[3174]: W0912 17:49:10.176155 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.176253 kubelet[3174]: E0912 17:49:10.176162 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:49:10.176722 kubelet[3174]: E0912 17:49:10.176471 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.176722 kubelet[3174]: W0912 17:49:10.176481 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.176722 kubelet[3174]: E0912 17:49:10.176683 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:49:10.176977 kubelet[3174]: E0912 17:49:10.176952 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.176977 kubelet[3174]: W0912 17:49:10.176975 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.177311 kubelet[3174]: E0912 17:49:10.177298 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:49:10.177561 kubelet[3174]: E0912 17:49:10.177547 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.177561 kubelet[3174]: W0912 17:49:10.177559 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.177693 kubelet[3174]: E0912 17:49:10.177587 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:49:10.177717 kubelet[3174]: E0912 17:49:10.177704 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.177717 kubelet[3174]: W0912 17:49:10.177711 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.177757 kubelet[3174]: E0912 17:49:10.177720 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:49:10.177879 kubelet[3174]: E0912 17:49:10.177870 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.177879 kubelet[3174]: W0912 17:49:10.177877 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.177948 kubelet[3174]: E0912 17:49:10.177884 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:49:10.182856 kubelet[3174]: E0912 17:49:10.182802 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:10.182856 kubelet[3174]: W0912 17:49:10.182853 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:10.182941 kubelet[3174]: E0912 17:49:10.182865 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:49:10.214209 containerd[1714]: time="2025-09-12T17:49:10.213962987Z" level=info msg="connecting to shim 2cd5cd6ee576e5828c9df7207feba59184c2da1eb50568ec79247e22aba84a7d" address="unix:///run/containerd/s/b1a0d007a8d9ff8e90584837896a2ec3d2f0798107ffeecaa3a71cdd84be627d" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:49:10.240766 systemd[1]: Started cri-containerd-2cd5cd6ee576e5828c9df7207feba59184c2da1eb50568ec79247e22aba84a7d.scope - libcontainer container 2cd5cd6ee576e5828c9df7207feba59184c2da1eb50568ec79247e22aba84a7d. Sep 12 17:49:10.267353 containerd[1714]: time="2025-09-12T17:49:10.267320823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xnc8x,Uid:be711e75-1fd9-49aa-aeb6-9f8fd5cc491c,Namespace:calico-system,Attempt:0,} returns sandbox id \"2cd5cd6ee576e5828c9df7207feba59184c2da1eb50568ec79247e22aba84a7d\"" Sep 12 17:49:11.284197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount131241015.mount: Deactivated successfully. 
Sep 12 17:49:11.759529 kubelet[3174]: E0912 17:49:11.759489 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s6pvs" podUID="99b0530c-5bfd-4544-b93d-7c5fee8711e7" Sep 12 17:49:12.276940 containerd[1714]: time="2025-09-12T17:49:12.276912211Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:49:12.279274 containerd[1714]: time="2025-09-12T17:49:12.279210549Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 12 17:49:12.281701 containerd[1714]: time="2025-09-12T17:49:12.281680820Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:49:12.285659 containerd[1714]: time="2025-09-12T17:49:12.284931376Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:49:12.285659 containerd[1714]: time="2025-09-12T17:49:12.285472959Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.344456883s" Sep 12 17:49:12.285659 containerd[1714]: time="2025-09-12T17:49:12.285499415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference 
\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 12 17:49:12.290388 containerd[1714]: time="2025-09-12T17:49:12.290365816Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 12 17:49:12.300183 containerd[1714]: time="2025-09-12T17:49:12.300157501Z" level=info msg="CreateContainer within sandbox \"6223778b320d67a3100efe127b9bae08f63b67229e79beec05dc21fcade40931\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 12 17:49:12.319314 containerd[1714]: time="2025-09-12T17:49:12.317905748Z" level=info msg="Container f5e4619930a492cb6d6753d0cd81aba8b96ac8bf555932533f79ee9fc0befc64: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:49:12.333430 containerd[1714]: time="2025-09-12T17:49:12.333407405Z" level=info msg="CreateContainer within sandbox \"6223778b320d67a3100efe127b9bae08f63b67229e79beec05dc21fcade40931\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f5e4619930a492cb6d6753d0cd81aba8b96ac8bf555932533f79ee9fc0befc64\"" Sep 12 17:49:12.333756 containerd[1714]: time="2025-09-12T17:49:12.333738823Z" level=info msg="StartContainer for \"f5e4619930a492cb6d6753d0cd81aba8b96ac8bf555932533f79ee9fc0befc64\"" Sep 12 17:49:12.334718 containerd[1714]: time="2025-09-12T17:49:12.334692224Z" level=info msg="connecting to shim f5e4619930a492cb6d6753d0cd81aba8b96ac8bf555932533f79ee9fc0befc64" address="unix:///run/containerd/s/25d297734edaa47fab835302b7db21377b352801eae4cbe33d5bb36b2eb3f3c0" protocol=ttrpc version=3 Sep 12 17:49:12.351771 systemd[1]: Started cri-containerd-f5e4619930a492cb6d6753d0cd81aba8b96ac8bf555932533f79ee9fc0befc64.scope - libcontainer container f5e4619930a492cb6d6753d0cd81aba8b96ac8bf555932533f79ee9fc0befc64. 
Sep 12 17:49:12.395258 containerd[1714]: time="2025-09-12T17:49:12.395241292Z" level=info msg="StartContainer for \"f5e4619930a492cb6d6753d0cd81aba8b96ac8bf555932533f79ee9fc0befc64\" returns successfully" Sep 12 17:49:12.837818 kubelet[3174]: I0912 17:49:12.837776 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-87cdd5b44-z8tn6" podStartSLOduration=1.489625972 podStartE2EDuration="3.837760934s" podCreationTimestamp="2025-09-12 17:49:09 +0000 UTC" firstStartedPulling="2025-09-12 17:49:09.940825266 +0000 UTC m=+17.259665316" lastFinishedPulling="2025-09-12 17:49:12.288960226 +0000 UTC m=+19.607800278" observedRunningTime="2025-09-12 17:49:12.83751184 +0000 UTC m=+20.156351894" watchObservedRunningTime="2025-09-12 17:49:12.837760934 +0000 UTC m=+20.156601002" Sep 12 17:49:12.879435 kubelet[3174]: E0912 17:49:12.879416 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:49:12.879435 kubelet[3174]: W0912 17:49:12.879432 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:49:12.879535 kubelet[3174]: E0912 17:49:12.879448 3174 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:49:13.752907 containerd[1714]: time="2025-09-12T17:49:13.752871359Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:49:13.755334 containerd[1714]: time="2025-09-12T17:49:13.755159902Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 12 17:49:13.757937 containerd[1714]: time="2025-09-12T17:49:13.757911338Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:49:13.758992 kubelet[3174]: E0912 17:49:13.758960 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s6pvs" podUID="99b0530c-5bfd-4544-b93d-7c5fee8711e7" Sep 12 17:49:13.761342 containerd[1714]: time="2025-09-12T17:49:13.761298099Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:49:13.761875 containerd[1714]: time="2025-09-12T17:49:13.761688454Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.471177421s" Sep 12 17:49:13.761875 containerd[1714]: time="2025-09-12T17:49:13.761714843Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 12 17:49:13.763149 containerd[1714]: time="2025-09-12T17:49:13.763126831Z" level=info msg="CreateContainer within sandbox \"2cd5cd6ee576e5828c9df7207feba59184c2da1eb50568ec79247e22aba84a7d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 12 17:49:13.780692 containerd[1714]: time="2025-09-12T17:49:13.780667540Z" level=info msg="Container d2e84463b9716da270c0d96777df6804fd47c1ed821ab47181892d2d875f3a75: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:49:13.795519 containerd[1714]: time="2025-09-12T17:49:13.795494516Z" level=info msg="CreateContainer within sandbox \"2cd5cd6ee576e5828c9df7207feba59184c2da1eb50568ec79247e22aba84a7d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d2e84463b9716da270c0d96777df6804fd47c1ed821ab47181892d2d875f3a75\"" Sep 12 17:49:13.796023 containerd[1714]: time="2025-09-12T17:49:13.796003457Z" level=info msg="StartContainer for \"d2e84463b9716da270c0d96777df6804fd47c1ed821ab47181892d2d875f3a75\"" Sep 12 17:49:13.797253 containerd[1714]: time="2025-09-12T17:49:13.797225829Z" level=info msg="connecting to shim d2e84463b9716da270c0d96777df6804fd47c1ed821ab47181892d2d875f3a75" address="unix:///run/containerd/s/b1a0d007a8d9ff8e90584837896a2ec3d2f0798107ffeecaa3a71cdd84be627d" protocol=ttrpc version=3 Sep 12 17:49:13.817814 systemd[1]: Started cri-containerd-d2e84463b9716da270c0d96777df6804fd47c1ed821ab47181892d2d875f3a75.scope - libcontainer container d2e84463b9716da270c0d96777df6804fd47c1ed821ab47181892d2d875f3a75. 
Sep 12 17:49:13.832430 kubelet[3174]: I0912 17:49:13.832171 3174 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:49:13.854595 containerd[1714]: time="2025-09-12T17:49:13.854574486Z" level=info msg="StartContainer for \"d2e84463b9716da270c0d96777df6804fd47c1ed821ab47181892d2d875f3a75\" returns successfully" Sep 12 17:49:13.859403 systemd[1]: cri-containerd-d2e84463b9716da270c0d96777df6804fd47c1ed821ab47181892d2d875f3a75.scope: Deactivated successfully. Sep 12 17:49:13.861565 containerd[1714]: time="2025-09-12T17:49:13.861510037Z" level=info msg="received exit event container_id:\"d2e84463b9716da270c0d96777df6804fd47c1ed821ab47181892d2d875f3a75\" id:\"d2e84463b9716da270c0d96777df6804fd47c1ed821ab47181892d2d875f3a75\" pid:3857 exited_at:{seconds:1757699353 nanos:861117447}" Sep 12 17:49:13.861670 containerd[1714]: time="2025-09-12T17:49:13.861545507Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d2e84463b9716da270c0d96777df6804fd47c1ed821ab47181892d2d875f3a75\" id:\"d2e84463b9716da270c0d96777df6804fd47c1ed821ab47181892d2d875f3a75\" pid:3857 exited_at:{seconds:1757699353 nanos:861117447}" Sep 12 17:49:13.882149 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d2e84463b9716da270c0d96777df6804fd47c1ed821ab47181892d2d875f3a75-rootfs.mount: Deactivated successfully. 
Sep 12 17:49:15.759322 kubelet[3174]: E0912 17:49:15.759251 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s6pvs" podUID="99b0530c-5bfd-4544-b93d-7c5fee8711e7"
Sep 12 17:49:16.840899 containerd[1714]: time="2025-09-12T17:49:16.840863478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 12 17:49:17.758896 kubelet[3174]: E0912 17:49:17.758865 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s6pvs" podUID="99b0530c-5bfd-4544-b93d-7c5fee8711e7"
Sep 12 17:49:19.403603 containerd[1714]: time="2025-09-12T17:49:19.403571820Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:49:19.405696 containerd[1714]: time="2025-09-12T17:49:19.405619340Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613"
Sep 12 17:49:19.407918 containerd[1714]: time="2025-09-12T17:49:19.407895252Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:49:19.411302 containerd[1714]: time="2025-09-12T17:49:19.411278495Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:49:19.411771 containerd[1714]: time="2025-09-12T17:49:19.411694008Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 2.570799082s"
Sep 12 17:49:19.411771 containerd[1714]: time="2025-09-12T17:49:19.411716259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\""
Sep 12 17:49:19.413914 containerd[1714]: time="2025-09-12T17:49:19.413886007Z" level=info msg="CreateContainer within sandbox \"2cd5cd6ee576e5828c9df7207feba59184c2da1eb50568ec79247e22aba84a7d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 12 17:49:19.434666 containerd[1714]: time="2025-09-12T17:49:19.434095298Z" level=info msg="Container 857cbb27fe759251bb3ec8822d2738e5929d9991c82b850f1a7a7ed8264b33d8: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:49:19.451023 containerd[1714]: time="2025-09-12T17:49:19.450999969Z" level=info msg="CreateContainer within sandbox \"2cd5cd6ee576e5828c9df7207feba59184c2da1eb50568ec79247e22aba84a7d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"857cbb27fe759251bb3ec8822d2738e5929d9991c82b850f1a7a7ed8264b33d8\""
Sep 12 17:49:19.452058 containerd[1714]: time="2025-09-12T17:49:19.451280148Z" level=info msg="StartContainer for \"857cbb27fe759251bb3ec8822d2738e5929d9991c82b850f1a7a7ed8264b33d8\""
Sep 12 17:49:19.453015 containerd[1714]: time="2025-09-12T17:49:19.452992555Z" level=info msg="connecting to shim 857cbb27fe759251bb3ec8822d2738e5929d9991c82b850f1a7a7ed8264b33d8" address="unix:///run/containerd/s/b1a0d007a8d9ff8e90584837896a2ec3d2f0798107ffeecaa3a71cdd84be627d" protocol=ttrpc version=3
Sep 12 17:49:19.472821 systemd[1]: Started cri-containerd-857cbb27fe759251bb3ec8822d2738e5929d9991c82b850f1a7a7ed8264b33d8.scope - libcontainer container 857cbb27fe759251bb3ec8822d2738e5929d9991c82b850f1a7a7ed8264b33d8.
Sep 12 17:49:19.505169 containerd[1714]: time="2025-09-12T17:49:19.504946179Z" level=info msg="StartContainer for \"857cbb27fe759251bb3ec8822d2738e5929d9991c82b850f1a7a7ed8264b33d8\" returns successfully"
Sep 12 17:49:19.759014 kubelet[3174]: E0912 17:49:19.758944 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s6pvs" podUID="99b0530c-5bfd-4544-b93d-7c5fee8711e7"
Sep 12 17:49:20.777046 containerd[1714]: time="2025-09-12T17:49:20.776988120Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 12 17:49:20.778615 systemd[1]: cri-containerd-857cbb27fe759251bb3ec8822d2738e5929d9991c82b850f1a7a7ed8264b33d8.scope: Deactivated successfully.
Sep 12 17:49:20.779507 systemd[1]: cri-containerd-857cbb27fe759251bb3ec8822d2738e5929d9991c82b850f1a7a7ed8264b33d8.scope: Consumed 349ms CPU time, 191.2M memory peak, 171.3M written to disk.
Sep 12 17:49:20.781445 containerd[1714]: time="2025-09-12T17:49:20.781417995Z" level=info msg="received exit event container_id:\"857cbb27fe759251bb3ec8822d2738e5929d9991c82b850f1a7a7ed8264b33d8\" id:\"857cbb27fe759251bb3ec8822d2738e5929d9991c82b850f1a7a7ed8264b33d8\" pid:3920 exited_at:{seconds:1757699360 nanos:781252730}"
Sep 12 17:49:20.781840 containerd[1714]: time="2025-09-12T17:49:20.781820596Z" level=info msg="TaskExit event in podsandbox handler container_id:\"857cbb27fe759251bb3ec8822d2738e5929d9991c82b850f1a7a7ed8264b33d8\" id:\"857cbb27fe759251bb3ec8822d2738e5929d9991c82b850f1a7a7ed8264b33d8\" pid:3920 exited_at:{seconds:1757699360 nanos:781252730}"
Sep 12 17:49:20.799054 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-857cbb27fe759251bb3ec8822d2738e5929d9991c82b850f1a7a7ed8264b33d8-rootfs.mount: Deactivated successfully.
Sep 12 17:49:20.836727 kubelet[3174]: I0912 17:49:20.836657 3174 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Sep 12 17:49:20.874032 systemd[1]: Created slice kubepods-burstable-podc512cc85_9497_4540_b387_f122715cba11.slice - libcontainer container kubepods-burstable-podc512cc85_9497_4540_b387_f122715cba11.slice.
Sep 12 17:49:20.887140 systemd[1]: Created slice kubepods-besteffort-pod2922ec9b_0391_44f8_a413_50f02efd5bd2.slice - libcontainer container kubepods-besteffort-pod2922ec9b_0391_44f8_a413_50f02efd5bd2.slice.
Sep 12 17:49:20.895372 systemd[1]: Created slice kubepods-besteffort-pode8b27c91_3783_4996_a1f6_5caf22d96c35.slice - libcontainer container kubepods-besteffort-pode8b27c91_3783_4996_a1f6_5caf22d96c35.slice.
Sep 12 17:49:20.902570 systemd[1]: Created slice kubepods-besteffort-pod36b7ada8_b514_46fb_ab6f_43e1b1c3dc6e.slice - libcontainer container kubepods-besteffort-pod36b7ada8_b514_46fb_ab6f_43e1b1c3dc6e.slice.
Sep 12 17:49:20.908317 systemd[1]: Created slice kubepods-besteffort-pod8da1c4ea_8982_43c2_a817_5e4387c1aa69.slice - libcontainer container kubepods-besteffort-pod8da1c4ea_8982_43c2_a817_5e4387c1aa69.slice.
Sep 12 17:49:20.915869 systemd[1]: Created slice kubepods-burstable-podc8e5c3e4_8614_46d3_bca5_f5c1f1e4abfc.slice - libcontainer container kubepods-burstable-podc8e5c3e4_8614_46d3_bca5_f5c1f1e4abfc.slice.
Sep 12 17:49:20.920908 systemd[1]: Created slice kubepods-besteffort-podd61f5835_12df_456c_8f74_9315a7448ca0.slice - libcontainer container kubepods-besteffort-podd61f5835_12df_456c_8f74_9315a7448ca0.slice.
Sep 12 17:49:20.940041 kubelet[3174]: I0912 17:49:20.940023 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxzkm\" (UniqueName: \"kubernetes.io/projected/2922ec9b-0391-44f8-a413-50f02efd5bd2-kube-api-access-cxzkm\") pod \"goldmane-7988f88666-h8gbr\" (UID: \"2922ec9b-0391-44f8-a413-50f02efd5bd2\") " pod="calico-system/goldmane-7988f88666-h8gbr"
Sep 12 17:49:20.940145 kubelet[3174]: I0912 17:49:20.940138 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d61f5835-12df-456c-8f74-9315a7448ca0-calico-apiserver-certs\") pod \"calico-apiserver-866bfd7cf-22shl\" (UID: \"d61f5835-12df-456c-8f74-9315a7448ca0\") " pod="calico-apiserver/calico-apiserver-866bfd7cf-22shl"
Sep 12 17:49:20.940210 kubelet[3174]: I0912 17:49:20.940204 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dgp5\" (UniqueName: \"kubernetes.io/projected/c8e5c3e4-8614-46d3-bca5-f5c1f1e4abfc-kube-api-access-4dgp5\") pod \"coredns-7c65d6cfc9-gm87k\" (UID: \"c8e5c3e4-8614-46d3-bca5-f5c1f1e4abfc\") " pod="kube-system/coredns-7c65d6cfc9-gm87k"
Sep 12 17:49:20.940540 kubelet[3174]: I0912 17:49:20.940250 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6xjj\" (UniqueName: \"kubernetes.io/projected/36b7ada8-b514-46fb-ab6f-43e1b1c3dc6e-kube-api-access-t6xjj\") pod \"whisker-5749cc446b-hxqrm\" (UID: \"36b7ada8-b514-46fb-ab6f-43e1b1c3dc6e\") " pod="calico-system/whisker-5749cc446b-hxqrm"
Sep 12 17:49:20.941204 kubelet[3174]: I0912 17:49:20.941171 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp2cp\" (UniqueName: \"kubernetes.io/projected/d61f5835-12df-456c-8f74-9315a7448ca0-kube-api-access-sp2cp\") pod \"calico-apiserver-866bfd7cf-22shl\" (UID: \"d61f5835-12df-456c-8f74-9315a7448ca0\") " pod="calico-apiserver/calico-apiserver-866bfd7cf-22shl"
Sep 12 17:49:20.942286 kubelet[3174]: I0912 17:49:20.941392 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8da1c4ea-8982-43c2-a817-5e4387c1aa69-tigera-ca-bundle\") pod \"calico-kube-controllers-54d8b7455c-46626\" (UID: \"8da1c4ea-8982-43c2-a817-5e4387c1aa69\") " pod="calico-system/calico-kube-controllers-54d8b7455c-46626"
Sep 12 17:49:20.942286 kubelet[3174]: I0912 17:49:20.941421 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/2922ec9b-0391-44f8-a413-50f02efd5bd2-goldmane-key-pair\") pod \"goldmane-7988f88666-h8gbr\" (UID: \"2922ec9b-0391-44f8-a413-50f02efd5bd2\") " pod="calico-system/goldmane-7988f88666-h8gbr"
Sep 12 17:49:20.942286 kubelet[3174]: I0912 17:49:20.941439 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/36b7ada8-b514-46fb-ab6f-43e1b1c3dc6e-whisker-backend-key-pair\") pod \"whisker-5749cc446b-hxqrm\" (UID: \"36b7ada8-b514-46fb-ab6f-43e1b1c3dc6e\") " pod="calico-system/whisker-5749cc446b-hxqrm"
Sep 12 17:49:20.942286 kubelet[3174]: I0912 17:49:20.941456 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8e5c3e4-8614-46d3-bca5-f5c1f1e4abfc-config-volume\") pod \"coredns-7c65d6cfc9-gm87k\" (UID: \"c8e5c3e4-8614-46d3-bca5-f5c1f1e4abfc\") " pod="kube-system/coredns-7c65d6cfc9-gm87k"
Sep 12 17:49:20.942286 kubelet[3174]: I0912 17:49:20.941474 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c512cc85-9497-4540-b387-f122715cba11-config-volume\") pod \"coredns-7c65d6cfc9-skbrz\" (UID: \"c512cc85-9497-4540-b387-f122715cba11\") " pod="kube-system/coredns-7c65d6cfc9-skbrz"
Sep 12 17:49:20.942414 kubelet[3174]: I0912 17:49:20.941491 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmg7f\" (UniqueName: \"kubernetes.io/projected/c512cc85-9497-4540-b387-f122715cba11-kube-api-access-hmg7f\") pod \"coredns-7c65d6cfc9-skbrz\" (UID: \"c512cc85-9497-4540-b387-f122715cba11\") " pod="kube-system/coredns-7c65d6cfc9-skbrz"
Sep 12 17:49:20.942414 kubelet[3174]: I0912 17:49:20.941508 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2922ec9b-0391-44f8-a413-50f02efd5bd2-goldmane-ca-bundle\") pod \"goldmane-7988f88666-h8gbr\" (UID: \"2922ec9b-0391-44f8-a413-50f02efd5bd2\") " pod="calico-system/goldmane-7988f88666-h8gbr"
Sep 12 17:49:20.942414 kubelet[3174]: I0912 17:49:20.941527 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnndq\" (UniqueName: \"kubernetes.io/projected/8da1c4ea-8982-43c2-a817-5e4387c1aa69-kube-api-access-nnndq\") pod \"calico-kube-controllers-54d8b7455c-46626\" (UID: \"8da1c4ea-8982-43c2-a817-5e4387c1aa69\") " pod="calico-system/calico-kube-controllers-54d8b7455c-46626"
Sep 12 17:49:20.942414 kubelet[3174]: I0912 17:49:20.941545 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2922ec9b-0391-44f8-a413-50f02efd5bd2-config\") pod \"goldmane-7988f88666-h8gbr\" (UID: \"2922ec9b-0391-44f8-a413-50f02efd5bd2\") " pod="calico-system/goldmane-7988f88666-h8gbr"
Sep 12 17:49:20.942414 kubelet[3174]: I0912 17:49:20.941561 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36b7ada8-b514-46fb-ab6f-43e1b1c3dc6e-whisker-ca-bundle\") pod \"whisker-5749cc446b-hxqrm\" (UID: \"36b7ada8-b514-46fb-ab6f-43e1b1c3dc6e\") " pod="calico-system/whisker-5749cc446b-hxqrm"
Sep 12 17:49:20.942491 kubelet[3174]: I0912 17:49:20.941578 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e8b27c91-3783-4996-a1f6-5caf22d96c35-calico-apiserver-certs\") pod \"calico-apiserver-866bfd7cf-6xdw9\" (UID: \"e8b27c91-3783-4996-a1f6-5caf22d96c35\") " pod="calico-apiserver/calico-apiserver-866bfd7cf-6xdw9"
Sep 12 17:49:20.942491 kubelet[3174]: I0912 17:49:20.941599 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x87bg\" (UniqueName: \"kubernetes.io/projected/e8b27c91-3783-4996-a1f6-5caf22d96c35-kube-api-access-x87bg\") pod \"calico-apiserver-866bfd7cf-6xdw9\" (UID: \"e8b27c91-3783-4996-a1f6-5caf22d96c35\") " pod="calico-apiserver/calico-apiserver-866bfd7cf-6xdw9"
Sep 12 17:49:21.181558 containerd[1714]: time="2025-09-12T17:49:21.181525426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-skbrz,Uid:c512cc85-9497-4540-b387-f122715cba11,Namespace:kube-system,Attempt:0,}"
Sep 12 17:49:21.193015 containerd[1714]: time="2025-09-12T17:49:21.192993824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-h8gbr,Uid:2922ec9b-0391-44f8-a413-50f02efd5bd2,Namespace:calico-system,Attempt:0,}"
Sep 12 17:49:21.199671 containerd[1714]: time="2025-09-12T17:49:21.199649686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-866bfd7cf-6xdw9,Uid:e8b27c91-3783-4996-a1f6-5caf22d96c35,Namespace:calico-apiserver,Attempt:0,}"
Sep 12 17:49:21.207215 containerd[1714]: time="2025-09-12T17:49:21.207196695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5749cc446b-hxqrm,Uid:36b7ada8-b514-46fb-ab6f-43e1b1c3dc6e,Namespace:calico-system,Attempt:0,}"
Sep 12 17:49:21.213793 containerd[1714]: time="2025-09-12T17:49:21.213773013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54d8b7455c-46626,Uid:8da1c4ea-8982-43c2-a817-5e4387c1aa69,Namespace:calico-system,Attempt:0,}"
Sep 12 17:49:21.220335 containerd[1714]: time="2025-09-12T17:49:21.220316813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gm87k,Uid:c8e5c3e4-8614-46d3-bca5-f5c1f1e4abfc,Namespace:kube-system,Attempt:0,}"
Sep 12 17:49:21.222779 containerd[1714]: time="2025-09-12T17:49:21.222755889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-866bfd7cf-22shl,Uid:d61f5835-12df-456c-8f74-9315a7448ca0,Namespace:calico-apiserver,Attempt:0,}"
Sep 12 17:49:21.764199 systemd[1]: Created slice kubepods-besteffort-pod99b0530c_5bfd_4544_b93d_7c5fee8711e7.slice - libcontainer container kubepods-besteffort-pod99b0530c_5bfd_4544_b93d_7c5fee8711e7.slice.
Sep 12 17:49:21.773387 containerd[1714]: time="2025-09-12T17:49:21.773362785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s6pvs,Uid:99b0530c-5bfd-4544-b93d-7c5fee8711e7,Namespace:calico-system,Attempt:0,}"
Sep 12 17:49:21.882656 containerd[1714]: time="2025-09-12T17:49:21.882487795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\""
Sep 12 17:49:21.935857 containerd[1714]: time="2025-09-12T17:49:21.935827988Z" level=error msg="Failed to destroy network for sandbox \"d3edef368e22d511d910f06305eb2f4c496d806e9392894d8d6675f9483f0655\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:49:21.939862 systemd[1]: run-netns-cni\x2d9fa73fc9\x2d4661\x2d9a3e\x2d03a8\x2d3cad84fc0680.mount: Deactivated successfully.
Sep 12 17:49:21.943647 containerd[1714]: time="2025-09-12T17:49:21.943376500Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-866bfd7cf-22shl,Uid:d61f5835-12df-456c-8f74-9315a7448ca0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3edef368e22d511d910f06305eb2f4c496d806e9392894d8d6675f9483f0655\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:49:21.944640 kubelet[3174]: E0912 17:49:21.943612 3174 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3edef368e22d511d910f06305eb2f4c496d806e9392894d8d6675f9483f0655\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:49:21.944910 kubelet[3174]: E0912 17:49:21.944694 3174 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3edef368e22d511d910f06305eb2f4c496d806e9392894d8d6675f9483f0655\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-866bfd7cf-22shl"
Sep 12 17:49:21.944910 kubelet[3174]: E0912 17:49:21.944715 3174 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3edef368e22d511d910f06305eb2f4c496d806e9392894d8d6675f9483f0655\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-866bfd7cf-22shl"
Sep 12 17:49:21.944910 kubelet[3174]: E0912 17:49:21.944764 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-866bfd7cf-22shl_calico-apiserver(d61f5835-12df-456c-8f74-9315a7448ca0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-866bfd7cf-22shl_calico-apiserver(d61f5835-12df-456c-8f74-9315a7448ca0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d3edef368e22d511d910f06305eb2f4c496d806e9392894d8d6675f9483f0655\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-866bfd7cf-22shl" podUID="d61f5835-12df-456c-8f74-9315a7448ca0"
Sep 12 17:49:21.948065 containerd[1714]: time="2025-09-12T17:49:21.948033196Z" level=error msg="Failed to destroy network for sandbox \"cb7ee598d8c65c60a77b2542e8c212d1fdbf9358eae904d71423c6a7f283f0b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:49:21.950222 systemd[1]: run-netns-cni\x2defcd22c3\x2d7255\x2d8cef\x2d033c\x2da4a820cd0edc.mount: Deactivated successfully.
Sep 12 17:49:21.953776 containerd[1714]: time="2025-09-12T17:49:21.953705539Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s6pvs,Uid:99b0530c-5bfd-4544-b93d-7c5fee8711e7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb7ee598d8c65c60a77b2542e8c212d1fdbf9358eae904d71423c6a7f283f0b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:49:21.961020 kubelet[3174]: E0912 17:49:21.960969 3174 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb7ee598d8c65c60a77b2542e8c212d1fdbf9358eae904d71423c6a7f283f0b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:49:21.961097 kubelet[3174]: E0912 17:49:21.961008 3174 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb7ee598d8c65c60a77b2542e8c212d1fdbf9358eae904d71423c6a7f283f0b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s6pvs"
Sep 12 17:49:21.961097 kubelet[3174]: E0912 17:49:21.961085 3174 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb7ee598d8c65c60a77b2542e8c212d1fdbf9358eae904d71423c6a7f283f0b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s6pvs"
Sep 12 17:49:21.961224 kubelet[3174]: E0912 17:49:21.961150 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-s6pvs_calico-system(99b0530c-5bfd-4544-b93d-7c5fee8711e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-s6pvs_calico-system(99b0530c-5bfd-4544-b93d-7c5fee8711e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cb7ee598d8c65c60a77b2542e8c212d1fdbf9358eae904d71423c6a7f283f0b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s6pvs" podUID="99b0530c-5bfd-4544-b93d-7c5fee8711e7"
Sep 12 17:49:21.982608 containerd[1714]: time="2025-09-12T17:49:21.982582974Z" level=error msg="Failed to destroy network for sandbox \"b38863096231445e5196035a52992787da139e9a124b50e69558b5391a962e14\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:49:21.986901 systemd[1]: run-netns-cni\x2d59afe63d\x2df575\x2df73c\x2dc3e2\x2d54f5a5d3211b.mount: Deactivated successfully.
Sep 12 17:49:21.999048 containerd[1714]: time="2025-09-12T17:49:21.998979999Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54d8b7455c-46626,Uid:8da1c4ea-8982-43c2-a817-5e4387c1aa69,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b38863096231445e5196035a52992787da139e9a124b50e69558b5391a962e14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:49:21.999227 kubelet[3174]: E0912 17:49:21.999206 3174 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b38863096231445e5196035a52992787da139e9a124b50e69558b5391a962e14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:49:21.999540 kubelet[3174]: E0912 17:49:21.999312 3174 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b38863096231445e5196035a52992787da139e9a124b50e69558b5391a962e14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54d8b7455c-46626"
Sep 12 17:49:22.000457 kubelet[3174]: E0912 17:49:21.999608 3174 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b38863096231445e5196035a52992787da139e9a124b50e69558b5391a962e14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54d8b7455c-46626"
Sep 12 17:49:22.000457 kubelet[3174]: E0912 17:49:21.999677 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-54d8b7455c-46626_calico-system(8da1c4ea-8982-43c2-a817-5e4387c1aa69)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-54d8b7455c-46626_calico-system(8da1c4ea-8982-43c2-a817-5e4387c1aa69)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b38863096231445e5196035a52992787da139e9a124b50e69558b5391a962e14\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54d8b7455c-46626" podUID="8da1c4ea-8982-43c2-a817-5e4387c1aa69"
Sep 12 17:49:22.011930 containerd[1714]: time="2025-09-12T17:49:22.011901756Z" level=error msg="Failed to destroy network for sandbox \"0c71bbc0d6b1e585fe5777b8d891887c0e9e075acacf509ea12f68ba4091186c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:49:22.012503 containerd[1714]: time="2025-09-12T17:49:22.012481270Z" level=error msg="Failed to destroy network for sandbox \"8fdac34a509d9a25cd583d4cff7d156d1b5ce93921b32b77c36216f9bdd4e407\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:49:22.016366 containerd[1714]: time="2025-09-12T17:49:22.016213178Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-skbrz,Uid:c512cc85-9497-4540-b387-f122715cba11,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c71bbc0d6b1e585fe5777b8d891887c0e9e075acacf509ea12f68ba4091186c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:49:22.016879 kubelet[3174]: E0912 17:49:22.016618 3174 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c71bbc0d6b1e585fe5777b8d891887c0e9e075acacf509ea12f68ba4091186c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:49:22.016879 kubelet[3174]: E0912 17:49:22.016816 3174 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c71bbc0d6b1e585fe5777b8d891887c0e9e075acacf509ea12f68ba4091186c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-skbrz"
Sep 12 17:49:22.016879 kubelet[3174]: E0912 17:49:22.016834 3174 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c71bbc0d6b1e585fe5777b8d891887c0e9e075acacf509ea12f68ba4091186c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-skbrz"
Sep 12 17:49:22.017069 kubelet[3174]: E0912 17:49:22.016863 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-skbrz_kube-system(c512cc85-9497-4540-b387-f122715cba11)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-skbrz_kube-system(c512cc85-9497-4540-b387-f122715cba11)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0c71bbc0d6b1e585fe5777b8d891887c0e9e075acacf509ea12f68ba4091186c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-skbrz" podUID="c512cc85-9497-4540-b387-f122715cba11"
Sep 12 17:49:22.018928 containerd[1714]: time="2025-09-12T17:49:22.018884785Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-866bfd7cf-6xdw9,Uid:e8b27c91-3783-4996-a1f6-5caf22d96c35,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fdac34a509d9a25cd583d4cff7d156d1b5ce93921b32b77c36216f9bdd4e407\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:49:22.019168 kubelet[3174]: E0912 17:49:22.019142 3174 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fdac34a509d9a25cd583d4cff7d156d1b5ce93921b32b77c36216f9bdd4e407\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:49:22.019218 kubelet[3174]: E0912 17:49:22.019183 3174 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fdac34a509d9a25cd583d4cff7d156d1b5ce93921b32b77c36216f9bdd4e407\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-866bfd7cf-6xdw9"
Sep 12 17:49:22.019218 kubelet[3174]: E0912 17:49:22.019200 3174 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fdac34a509d9a25cd583d4cff7d156d1b5ce93921b32b77c36216f9bdd4e407\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-866bfd7cf-6xdw9"
Sep 12 17:49:22.019867 kubelet[3174]: E0912 17:49:22.019843 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-866bfd7cf-6xdw9_calico-apiserver(e8b27c91-3783-4996-a1f6-5caf22d96c35)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-866bfd7cf-6xdw9_calico-apiserver(e8b27c91-3783-4996-a1f6-5caf22d96c35)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8fdac34a509d9a25cd583d4cff7d156d1b5ce93921b32b77c36216f9bdd4e407\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-866bfd7cf-6xdw9" podUID="e8b27c91-3783-4996-a1f6-5caf22d96c35"
Sep 12 17:49:22.021718 containerd[1714]: time="2025-09-12T17:49:22.021653370Z" level=error msg="Failed to destroy network for sandbox \"1fb5bdf614bbdd93600d2cad481ac257f207d3817e1c98c3e47a0a3bfc8ed96f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:49:22.024241 containerd[1714]: time="2025-09-12T17:49:22.024164985Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gm87k,Uid:c8e5c3e4-8614-46d3-bca5-f5c1f1e4abfc,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fb5bdf614bbdd93600d2cad481ac257f207d3817e1c98c3e47a0a3bfc8ed96f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:49:22.024324 kubelet[3174]: E0912 17:49:22.024293 3174 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fb5bdf614bbdd93600d2cad481ac257f207d3817e1c98c3e47a0a3bfc8ed96f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:49:22.024364 kubelet[3174]: E0912 17:49:22.024323 3174 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fb5bdf614bbdd93600d2cad481ac257f207d3817e1c98c3e47a0a3bfc8ed96f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-gm87k"
Sep 12 17:49:22.024364 kubelet[3174]: E0912 17:49:22.024339 3174 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fb5bdf614bbdd93600d2cad481ac257f207d3817e1c98c3e47a0a3bfc8ed96f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-gm87k"
Sep 12 17:49:22.024405 kubelet[3174]: E0912 17:49:22.024368 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-gm87k_kube-system(c8e5c3e4-8614-46d3-bca5-f5c1f1e4abfc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-gm87k_kube-system(c8e5c3e4-8614-46d3-bca5-f5c1f1e4abfc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1fb5bdf614bbdd93600d2cad481ac257f207d3817e1c98c3e47a0a3bfc8ed96f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-gm87k" podUID="c8e5c3e4-8614-46d3-bca5-f5c1f1e4abfc"
Sep 12 17:49:22.025824 containerd[1714]: time="2025-09-12T17:49:22.025780253Z" level=error msg="Failed to destroy network for sandbox \"4515085e2e85d986c86763e988753cb7d327d0415b3e8a36fc57ee06a428ce8d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:49:22.028255 containerd[1714]: time="2025-09-12T17:49:22.028218743Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-h8gbr,Uid:2922ec9b-0391-44f8-a413-50f02efd5bd2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4515085e2e85d986c86763e988753cb7d327d0415b3e8a36fc57ee06a428ce8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:49:22.028489 kubelet[3174]: E0912 17:49:22.028466 3174 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4515085e2e85d986c86763e988753cb7d327d0415b3e8a36fc57ee06a428ce8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:49:22.028604 kubelet[3174]: E0912 17:49:22.028529 3174 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4515085e2e85d986c86763e988753cb7d327d0415b3e8a36fc57ee06a428ce8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-h8gbr"
Sep 12 17:49:22.028604 kubelet[3174]: E0912 17:49:22.028545 3174 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4515085e2e85d986c86763e988753cb7d327d0415b3e8a36fc57ee06a428ce8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-h8gbr"
Sep 12 17:49:22.028742 kubelet[3174]: E0912 17:49:22.028691 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-h8gbr_calico-system(2922ec9b-0391-44f8-a413-50f02efd5bd2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-h8gbr_calico-system(2922ec9b-0391-44f8-a413-50f02efd5bd2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4515085e2e85d986c86763e988753cb7d327d0415b3e8a36fc57ee06a428ce8d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-h8gbr" podUID="2922ec9b-0391-44f8-a413-50f02efd5bd2"
Sep 12 17:49:22.032859 containerd[1714]: time="2025-09-12T17:49:22.032834046Z" level=error
msg="Failed to destroy network for sandbox \"52a1514ef64c7024212284f26cba74c6c517a61c997ba9c6e676c597c6fcb17a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:49:22.036196 containerd[1714]: time="2025-09-12T17:49:22.036163915Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5749cc446b-hxqrm,Uid:36b7ada8-b514-46fb-ab6f-43e1b1c3dc6e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"52a1514ef64c7024212284f26cba74c6c517a61c997ba9c6e676c597c6fcb17a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:49:22.036362 kubelet[3174]: E0912 17:49:22.036342 3174 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52a1514ef64c7024212284f26cba74c6c517a61c997ba9c6e676c597c6fcb17a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:49:22.036429 kubelet[3174]: E0912 17:49:22.036372 3174 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52a1514ef64c7024212284f26cba74c6c517a61c997ba9c6e676c597c6fcb17a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5749cc446b-hxqrm" Sep 12 17:49:22.036429 kubelet[3174]: E0912 17:49:22.036387 3174 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"52a1514ef64c7024212284f26cba74c6c517a61c997ba9c6e676c597c6fcb17a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5749cc446b-hxqrm" Sep 12 17:49:22.036429 kubelet[3174]: E0912 17:49:22.036420 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5749cc446b-hxqrm_calico-system(36b7ada8-b514-46fb-ab6f-43e1b1c3dc6e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5749cc446b-hxqrm_calico-system(36b7ada8-b514-46fb-ab6f-43e1b1c3dc6e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"52a1514ef64c7024212284f26cba74c6c517a61c997ba9c6e676c597c6fcb17a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5749cc446b-hxqrm" podUID="36b7ada8-b514-46fb-ab6f-43e1b1c3dc6e" Sep 12 17:49:22.800601 systemd[1]: run-netns-cni\x2dc5d86135\x2dcd0c\x2dffe8\x2d4e0a\x2d8ea3442391c5.mount: Deactivated successfully. Sep 12 17:49:22.800685 systemd[1]: run-netns-cni\x2d9cd319f6\x2db1a2\x2dfe61\x2d2f60\x2d7ee8c192cbca.mount: Deactivated successfully. Sep 12 17:49:22.800731 systemd[1]: run-netns-cni\x2d0b4f48cd\x2dae0e\x2d871a\x2d02e4\x2d71e0b0dddd63.mount: Deactivated successfully. Sep 12 17:49:22.800773 systemd[1]: run-netns-cni\x2d626fb52e\x2dbc42\x2dae80\x2dfc35\x2d03103ffbdf0f.mount: Deactivated successfully. Sep 12 17:49:22.800817 systemd[1]: run-netns-cni\x2d857f893a\x2dc36a\x2dfb21\x2da55a\x2d61589bdc32e9.mount: Deactivated successfully. Sep 12 17:49:28.752750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3155848955.mount: Deactivated successfully. 
Sep 12 17:49:28.777003 containerd[1714]: time="2025-09-12T17:49:28.776963635Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:49:28.779177 containerd[1714]: time="2025-09-12T17:49:28.779106906Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 12 17:49:28.782027 containerd[1714]: time="2025-09-12T17:49:28.782000251Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:49:28.785657 containerd[1714]: time="2025-09-12T17:49:28.785276688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:49:28.785657 containerd[1714]: time="2025-09-12T17:49:28.785546182Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 6.903025003s" Sep 12 17:49:28.785657 containerd[1714]: time="2025-09-12T17:49:28.785568524Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 12 17:49:28.796953 containerd[1714]: time="2025-09-12T17:49:28.796923406Z" level=info msg="CreateContainer within sandbox \"2cd5cd6ee576e5828c9df7207feba59184c2da1eb50568ec79247e22aba84a7d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 17:49:28.817655 containerd[1714]: time="2025-09-12T17:49:28.817207903Z" level=info msg="Container 
98d9ce3227bcdbde6b4a764c3a51ece4fe45d5cce54b2ce3099a88e679f02a49: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:49:28.834858 containerd[1714]: time="2025-09-12T17:49:28.834831502Z" level=info msg="CreateContainer within sandbox \"2cd5cd6ee576e5828c9df7207feba59184c2da1eb50568ec79247e22aba84a7d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"98d9ce3227bcdbde6b4a764c3a51ece4fe45d5cce54b2ce3099a88e679f02a49\"" Sep 12 17:49:28.836650 containerd[1714]: time="2025-09-12T17:49:28.835313871Z" level=info msg="StartContainer for \"98d9ce3227bcdbde6b4a764c3a51ece4fe45d5cce54b2ce3099a88e679f02a49\"" Sep 12 17:49:28.836650 containerd[1714]: time="2025-09-12T17:49:28.836560738Z" level=info msg="connecting to shim 98d9ce3227bcdbde6b4a764c3a51ece4fe45d5cce54b2ce3099a88e679f02a49" address="unix:///run/containerd/s/b1a0d007a8d9ff8e90584837896a2ec3d2f0798107ffeecaa3a71cdd84be627d" protocol=ttrpc version=3 Sep 12 17:49:28.853766 systemd[1]: Started cri-containerd-98d9ce3227bcdbde6b4a764c3a51ece4fe45d5cce54b2ce3099a88e679f02a49.scope - libcontainer container 98d9ce3227bcdbde6b4a764c3a51ece4fe45d5cce54b2ce3099a88e679f02a49. Sep 12 17:49:28.889280 containerd[1714]: time="2025-09-12T17:49:28.889263169Z" level=info msg="StartContainer for \"98d9ce3227bcdbde6b4a764c3a51ece4fe45d5cce54b2ce3099a88e679f02a49\" returns successfully" Sep 12 17:49:29.224000 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 17:49:29.224071 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 12 17:49:29.484670 kubelet[3174]: I0912 17:49:29.484580 3174 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6xjj\" (UniqueName: \"kubernetes.io/projected/36b7ada8-b514-46fb-ab6f-43e1b1c3dc6e-kube-api-access-t6xjj\") pod \"36b7ada8-b514-46fb-ab6f-43e1b1c3dc6e\" (UID: \"36b7ada8-b514-46fb-ab6f-43e1b1c3dc6e\") " Sep 12 17:49:29.484670 kubelet[3174]: I0912 17:49:29.484614 3174 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/36b7ada8-b514-46fb-ab6f-43e1b1c3dc6e-whisker-backend-key-pair\") pod \"36b7ada8-b514-46fb-ab6f-43e1b1c3dc6e\" (UID: \"36b7ada8-b514-46fb-ab6f-43e1b1c3dc6e\") " Sep 12 17:49:29.486674 kubelet[3174]: I0912 17:49:29.485151 3174 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36b7ada8-b514-46fb-ab6f-43e1b1c3dc6e-whisker-ca-bundle\") pod \"36b7ada8-b514-46fb-ab6f-43e1b1c3dc6e\" (UID: \"36b7ada8-b514-46fb-ab6f-43e1b1c3dc6e\") " Sep 12 17:49:29.486674 kubelet[3174]: I0912 17:49:29.485416 3174 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36b7ada8-b514-46fb-ab6f-43e1b1c3dc6e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "36b7ada8-b514-46fb-ab6f-43e1b1c3dc6e" (UID: "36b7ada8-b514-46fb-ab6f-43e1b1c3dc6e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 12 17:49:29.487837 kubelet[3174]: I0912 17:49:29.487813 3174 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36b7ada8-b514-46fb-ab6f-43e1b1c3dc6e-kube-api-access-t6xjj" (OuterVolumeSpecName: "kube-api-access-t6xjj") pod "36b7ada8-b514-46fb-ab6f-43e1b1c3dc6e" (UID: "36b7ada8-b514-46fb-ab6f-43e1b1c3dc6e"). InnerVolumeSpecName "kube-api-access-t6xjj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 17:49:29.489090 kubelet[3174]: I0912 17:49:29.489067 3174 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36b7ada8-b514-46fb-ab6f-43e1b1c3dc6e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "36b7ada8-b514-46fb-ab6f-43e1b1c3dc6e" (UID: "36b7ada8-b514-46fb-ab6f-43e1b1c3dc6e"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 12 17:49:29.585947 kubelet[3174]: I0912 17:49:29.585923 3174 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6xjj\" (UniqueName: \"kubernetes.io/projected/36b7ada8-b514-46fb-ab6f-43e1b1c3dc6e-kube-api-access-t6xjj\") on node \"ci-4426.1.0-a-49404e8b93\" DevicePath \"\"" Sep 12 17:49:29.585947 kubelet[3174]: I0912 17:49:29.585944 3174 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/36b7ada8-b514-46fb-ab6f-43e1b1c3dc6e-whisker-backend-key-pair\") on node \"ci-4426.1.0-a-49404e8b93\" DevicePath \"\"" Sep 12 17:49:29.586044 kubelet[3174]: I0912 17:49:29.585954 3174 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36b7ada8-b514-46fb-ab6f-43e1b1c3dc6e-whisker-ca-bundle\") on node \"ci-4426.1.0-a-49404e8b93\" DevicePath \"\"" Sep 12 17:49:29.751206 systemd[1]: var-lib-kubelet-pods-36b7ada8\x2db514\x2d46fb\x2dab6f\x2d43e1b1c3dc6e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt6xjj.mount: Deactivated successfully. Sep 12 17:49:29.751281 systemd[1]: var-lib-kubelet-pods-36b7ada8\x2db514\x2d46fb\x2dab6f\x2d43e1b1c3dc6e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 12 17:49:29.899618 systemd[1]: Removed slice kubepods-besteffort-pod36b7ada8_b514_46fb_ab6f_43e1b1c3dc6e.slice - libcontainer container kubepods-besteffort-pod36b7ada8_b514_46fb_ab6f_43e1b1c3dc6e.slice. Sep 12 17:49:29.913696 kubelet[3174]: I0912 17:49:29.913242 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xnc8x" podStartSLOduration=2.395874054 podStartE2EDuration="20.913228467s" podCreationTimestamp="2025-09-12 17:49:09 +0000 UTC" firstStartedPulling="2025-09-12 17:49:10.268879479 +0000 UTC m=+17.587719528" lastFinishedPulling="2025-09-12 17:49:28.786233885 +0000 UTC m=+36.105073941" observedRunningTime="2025-09-12 17:49:29.912238441 +0000 UTC m=+37.231078521" watchObservedRunningTime="2025-09-12 17:49:29.913228467 +0000 UTC m=+37.232068651" Sep 12 17:49:29.978669 systemd[1]: Created slice kubepods-besteffort-podff477b3b_b995_433f_88cd_2a5ceefb97f8.slice - libcontainer container kubepods-besteffort-podff477b3b_b995_433f_88cd_2a5ceefb97f8.slice. 
Sep 12 17:49:29.988372 kubelet[3174]: I0912 17:49:29.988353 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ff477b3b-b995-433f-88cd-2a5ceefb97f8-whisker-backend-key-pair\") pod \"whisker-65f9696c6f-vnfk2\" (UID: \"ff477b3b-b995-433f-88cd-2a5ceefb97f8\") " pod="calico-system/whisker-65f9696c6f-vnfk2" Sep 12 17:49:29.988564 kubelet[3174]: I0912 17:49:29.988552 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9xc9\" (UniqueName: \"kubernetes.io/projected/ff477b3b-b995-433f-88cd-2a5ceefb97f8-kube-api-access-v9xc9\") pod \"whisker-65f9696c6f-vnfk2\" (UID: \"ff477b3b-b995-433f-88cd-2a5ceefb97f8\") " pod="calico-system/whisker-65f9696c6f-vnfk2" Sep 12 17:49:29.988676 kubelet[3174]: I0912 17:49:29.988667 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff477b3b-b995-433f-88cd-2a5ceefb97f8-whisker-ca-bundle\") pod \"whisker-65f9696c6f-vnfk2\" (UID: \"ff477b3b-b995-433f-88cd-2a5ceefb97f8\") " pod="calico-system/whisker-65f9696c6f-vnfk2" Sep 12 17:49:29.991591 containerd[1714]: time="2025-09-12T17:49:29.991560062Z" level=info msg="TaskExit event in podsandbox handler container_id:\"98d9ce3227bcdbde6b4a764c3a51ece4fe45d5cce54b2ce3099a88e679f02a49\" id:\"ffe2c3af34478b780011283e2ce3bbfa5b546a08a67237307d4b2c48a18c4a45\" pid:4260 exit_status:1 exited_at:{seconds:1757699369 nanos:991343357}" Sep 12 17:49:30.284219 containerd[1714]: time="2025-09-12T17:49:30.284192055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65f9696c6f-vnfk2,Uid:ff477b3b-b995-433f-88cd-2a5ceefb97f8,Namespace:calico-system,Attempt:0,}" Sep 12 17:49:30.381896 systemd-networkd[1362]: calibe2eec6ec44: Link UP Sep 12 17:49:30.382039 systemd-networkd[1362]: calibe2eec6ec44: Gained carrier Sep 12 
17:49:30.396778 containerd[1714]: 2025-09-12 17:49:30.307 [INFO][4273] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:49:30.396778 containerd[1714]: 2025-09-12 17:49:30.314 [INFO][4273] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4426.1.0--a--49404e8b93-k8s-whisker--65f9696c6f--vnfk2-eth0 whisker-65f9696c6f- calico-system ff477b3b-b995-433f-88cd-2a5ceefb97f8 899 0 2025-09-12 17:49:29 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:65f9696c6f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4426.1.0-a-49404e8b93 whisker-65f9696c6f-vnfk2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calibe2eec6ec44 [] [] }} ContainerID="d151f43d2837576df7d81bf1ece4cddc4d97c5896b8e662298e89124f825a416" Namespace="calico-system" Pod="whisker-65f9696c6f-vnfk2" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-whisker--65f9696c6f--vnfk2-" Sep 12 17:49:30.396778 containerd[1714]: 2025-09-12 17:49:30.314 [INFO][4273] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d151f43d2837576df7d81bf1ece4cddc4d97c5896b8e662298e89124f825a416" Namespace="calico-system" Pod="whisker-65f9696c6f-vnfk2" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-whisker--65f9696c6f--vnfk2-eth0" Sep 12 17:49:30.396778 containerd[1714]: 2025-09-12 17:49:30.334 [INFO][4285] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d151f43d2837576df7d81bf1ece4cddc4d97c5896b8e662298e89124f825a416" HandleID="k8s-pod-network.d151f43d2837576df7d81bf1ece4cddc4d97c5896b8e662298e89124f825a416" Workload="ci--4426.1.0--a--49404e8b93-k8s-whisker--65f9696c6f--vnfk2-eth0" Sep 12 17:49:30.397674 containerd[1714]: 2025-09-12 17:49:30.334 [INFO][4285] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="d151f43d2837576df7d81bf1ece4cddc4d97c5896b8e662298e89124f825a416" HandleID="k8s-pod-network.d151f43d2837576df7d81bf1ece4cddc4d97c5896b8e662298e89124f825a416" Workload="ci--4426.1.0--a--49404e8b93-k8s-whisker--65f9696c6f--vnfk2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f110), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4426.1.0-a-49404e8b93", "pod":"whisker-65f9696c6f-vnfk2", "timestamp":"2025-09-12 17:49:30.334159938 +0000 UTC"}, Hostname:"ci-4426.1.0-a-49404e8b93", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:49:30.397674 containerd[1714]: 2025-09-12 17:49:30.334 [INFO][4285] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:49:30.397674 containerd[1714]: 2025-09-12 17:49:30.334 [INFO][4285] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:49:30.397674 containerd[1714]: 2025-09-12 17:49:30.334 [INFO][4285] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4426.1.0-a-49404e8b93' Sep 12 17:49:30.397674 containerd[1714]: 2025-09-12 17:49:30.338 [INFO][4285] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d151f43d2837576df7d81bf1ece4cddc4d97c5896b8e662298e89124f825a416" host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:30.397674 containerd[1714]: 2025-09-12 17:49:30.340 [INFO][4285] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:30.397674 containerd[1714]: 2025-09-12 17:49:30.343 [INFO][4285] ipam/ipam.go 511: Trying affinity for 192.168.109.128/26 host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:30.397674 containerd[1714]: 2025-09-12 17:49:30.345 [INFO][4285] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.128/26 host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:30.397674 containerd[1714]: 2025-09-12 17:49:30.346 [INFO][4285] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.128/26 host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:30.397858 containerd[1714]: 2025-09-12 17:49:30.346 [INFO][4285] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.128/26 handle="k8s-pod-network.d151f43d2837576df7d81bf1ece4cddc4d97c5896b8e662298e89124f825a416" host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:30.397858 containerd[1714]: 2025-09-12 17:49:30.347 [INFO][4285] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d151f43d2837576df7d81bf1ece4cddc4d97c5896b8e662298e89124f825a416 Sep 12 17:49:30.397858 containerd[1714]: 2025-09-12 17:49:30.350 [INFO][4285] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.128/26 handle="k8s-pod-network.d151f43d2837576df7d81bf1ece4cddc4d97c5896b8e662298e89124f825a416" host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:30.397858 containerd[1714]: 2025-09-12 17:49:30.357 [INFO][4285] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.109.129/26] block=192.168.109.128/26 handle="k8s-pod-network.d151f43d2837576df7d81bf1ece4cddc4d97c5896b8e662298e89124f825a416" host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:30.397858 containerd[1714]: 2025-09-12 17:49:30.357 [INFO][4285] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.129/26] handle="k8s-pod-network.d151f43d2837576df7d81bf1ece4cddc4d97c5896b8e662298e89124f825a416" host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:30.397858 containerd[1714]: 2025-09-12 17:49:30.357 [INFO][4285] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:49:30.397858 containerd[1714]: 2025-09-12 17:49:30.357 [INFO][4285] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.129/26] IPv6=[] ContainerID="d151f43d2837576df7d81bf1ece4cddc4d97c5896b8e662298e89124f825a416" HandleID="k8s-pod-network.d151f43d2837576df7d81bf1ece4cddc4d97c5896b8e662298e89124f825a416" Workload="ci--4426.1.0--a--49404e8b93-k8s-whisker--65f9696c6f--vnfk2-eth0" Sep 12 17:49:30.397992 containerd[1714]: 2025-09-12 17:49:30.359 [INFO][4273] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d151f43d2837576df7d81bf1ece4cddc4d97c5896b8e662298e89124f825a416" Namespace="calico-system" Pod="whisker-65f9696c6f-vnfk2" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-whisker--65f9696c6f--vnfk2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--a--49404e8b93-k8s-whisker--65f9696c6f--vnfk2-eth0", GenerateName:"whisker-65f9696c6f-", Namespace:"calico-system", SelfLink:"", UID:"ff477b3b-b995-433f-88cd-2a5ceefb97f8", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 49, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"65f9696c6f", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-a-49404e8b93", ContainerID:"", Pod:"whisker-65f9696c6f-vnfk2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.109.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibe2eec6ec44", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:49:30.397992 containerd[1714]: 2025-09-12 17:49:30.359 [INFO][4273] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.129/32] ContainerID="d151f43d2837576df7d81bf1ece4cddc4d97c5896b8e662298e89124f825a416" Namespace="calico-system" Pod="whisker-65f9696c6f-vnfk2" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-whisker--65f9696c6f--vnfk2-eth0" Sep 12 17:49:30.398062 containerd[1714]: 2025-09-12 17:49:30.359 [INFO][4273] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibe2eec6ec44 ContainerID="d151f43d2837576df7d81bf1ece4cddc4d97c5896b8e662298e89124f825a416" Namespace="calico-system" Pod="whisker-65f9696c6f-vnfk2" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-whisker--65f9696c6f--vnfk2-eth0" Sep 12 17:49:30.398062 containerd[1714]: 2025-09-12 17:49:30.382 [INFO][4273] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d151f43d2837576df7d81bf1ece4cddc4d97c5896b8e662298e89124f825a416" Namespace="calico-system" Pod="whisker-65f9696c6f-vnfk2" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-whisker--65f9696c6f--vnfk2-eth0" Sep 12 17:49:30.398105 containerd[1714]: 2025-09-12 17:49:30.382 [INFO][4273] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="d151f43d2837576df7d81bf1ece4cddc4d97c5896b8e662298e89124f825a416" Namespace="calico-system" Pod="whisker-65f9696c6f-vnfk2" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-whisker--65f9696c6f--vnfk2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--a--49404e8b93-k8s-whisker--65f9696c6f--vnfk2-eth0", GenerateName:"whisker-65f9696c6f-", Namespace:"calico-system", SelfLink:"", UID:"ff477b3b-b995-433f-88cd-2a5ceefb97f8", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 49, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"65f9696c6f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-a-49404e8b93", ContainerID:"d151f43d2837576df7d81bf1ece4cddc4d97c5896b8e662298e89124f825a416", Pod:"whisker-65f9696c6f-vnfk2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.109.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibe2eec6ec44", MAC:"7a:c7:19:6e:ab:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:49:30.398153 containerd[1714]: 2025-09-12 17:49:30.392 [INFO][4273] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="d151f43d2837576df7d81bf1ece4cddc4d97c5896b8e662298e89124f825a416" Namespace="calico-system" Pod="whisker-65f9696c6f-vnfk2" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-whisker--65f9696c6f--vnfk2-eth0" Sep 12 17:49:30.465038 containerd[1714]: time="2025-09-12T17:49:30.465008078Z" level=info msg="connecting to shim d151f43d2837576df7d81bf1ece4cddc4d97c5896b8e662298e89124f825a416" address="unix:///run/containerd/s/031bd27711003a91a775346fe8f02cb0e1b4be6a23c30478ae5428d212b70d2c" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:49:30.484750 systemd[1]: Started cri-containerd-d151f43d2837576df7d81bf1ece4cddc4d97c5896b8e662298e89124f825a416.scope - libcontainer container d151f43d2837576df7d81bf1ece4cddc4d97c5896b8e662298e89124f825a416. Sep 12 17:49:30.518588 containerd[1714]: time="2025-09-12T17:49:30.518522565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65f9696c6f-vnfk2,Uid:ff477b3b-b995-433f-88cd-2a5ceefb97f8,Namespace:calico-system,Attempt:0,} returns sandbox id \"d151f43d2837576df7d81bf1ece4cddc4d97c5896b8e662298e89124f825a416\"" Sep 12 17:49:30.521009 containerd[1714]: time="2025-09-12T17:49:30.520471431Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 12 17:49:30.761668 kubelet[3174]: I0912 17:49:30.761512 3174 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36b7ada8-b514-46fb-ab6f-43e1b1c3dc6e" path="/var/lib/kubelet/pods/36b7ada8-b514-46fb-ab6f-43e1b1c3dc6e/volumes" Sep 12 17:49:30.964858 containerd[1714]: time="2025-09-12T17:49:30.964755278Z" level=info msg="TaskExit event in podsandbox handler container_id:\"98d9ce3227bcdbde6b4a764c3a51ece4fe45d5cce54b2ce3099a88e679f02a49\" id:\"938c4d5bb3dc58834f326159510b6e552af3daeff148217517fe957d393bbf4e\" pid:4492 exit_status:1 exited_at:{seconds:1757699370 nanos:964547627}" Sep 12 17:49:31.025135 systemd-networkd[1362]: vxlan.calico: Link UP Sep 12 17:49:31.025141 systemd-networkd[1362]: vxlan.calico: Gained carrier Sep 12 
17:49:32.141761 systemd-networkd[1362]: vxlan.calico: Gained IPv6LL Sep 12 17:49:32.158993 containerd[1714]: time="2025-09-12T17:49:32.158967317Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:49:32.161683 containerd[1714]: time="2025-09-12T17:49:32.161645641Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 12 17:49:32.165217 containerd[1714]: time="2025-09-12T17:49:32.165183966Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:49:32.168365 containerd[1714]: time="2025-09-12T17:49:32.168330875Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:49:32.168874 containerd[1714]: time="2025-09-12T17:49:32.168774939Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.648277694s" Sep 12 17:49:32.168874 containerd[1714]: time="2025-09-12T17:49:32.168798830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 12 17:49:32.171025 containerd[1714]: time="2025-09-12T17:49:32.171003340Z" level=info msg="CreateContainer within sandbox \"d151f43d2837576df7d81bf1ece4cddc4d97c5896b8e662298e89124f825a416\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 12 17:49:32.186461 
containerd[1714]: time="2025-09-12T17:49:32.186436393Z" level=info msg="Container 69a36dd81c30ca4c2bb04311c1a8fb9f698ea5584beb35097f445c1c3a00eed7: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:49:32.202499 containerd[1714]: time="2025-09-12T17:49:32.202478657Z" level=info msg="CreateContainer within sandbox \"d151f43d2837576df7d81bf1ece4cddc4d97c5896b8e662298e89124f825a416\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"69a36dd81c30ca4c2bb04311c1a8fb9f698ea5584beb35097f445c1c3a00eed7\"" Sep 12 17:49:32.202950 containerd[1714]: time="2025-09-12T17:49:32.202918377Z" level=info msg="StartContainer for \"69a36dd81c30ca4c2bb04311c1a8fb9f698ea5584beb35097f445c1c3a00eed7\"" Sep 12 17:49:32.203842 containerd[1714]: time="2025-09-12T17:49:32.203820605Z" level=info msg="connecting to shim 69a36dd81c30ca4c2bb04311c1a8fb9f698ea5584beb35097f445c1c3a00eed7" address="unix:///run/containerd/s/031bd27711003a91a775346fe8f02cb0e1b4be6a23c30478ae5428d212b70d2c" protocol=ttrpc version=3 Sep 12 17:49:32.223782 systemd[1]: Started cri-containerd-69a36dd81c30ca4c2bb04311c1a8fb9f698ea5584beb35097f445c1c3a00eed7.scope - libcontainer container 69a36dd81c30ca4c2bb04311c1a8fb9f698ea5584beb35097f445c1c3a00eed7. 
Sep 12 17:49:32.260763 containerd[1714]: time="2025-09-12T17:49:32.260723905Z" level=info msg="StartContainer for \"69a36dd81c30ca4c2bb04311c1a8fb9f698ea5584beb35097f445c1c3a00eed7\" returns successfully" Sep 12 17:49:32.262626 containerd[1714]: time="2025-09-12T17:49:32.262611526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 12 17:49:32.333734 systemd-networkd[1362]: calibe2eec6ec44: Gained IPv6LL Sep 12 17:49:32.760257 containerd[1714]: time="2025-09-12T17:49:32.760231310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s6pvs,Uid:99b0530c-5bfd-4544-b93d-7c5fee8711e7,Namespace:calico-system,Attempt:0,}" Sep 12 17:49:32.842695 systemd-networkd[1362]: cali318bee28593: Link UP Sep 12 17:49:32.842808 systemd-networkd[1362]: cali318bee28593: Gained carrier Sep 12 17:49:32.855297 containerd[1714]: 2025-09-12 17:49:32.792 [INFO][4599] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4426.1.0--a--49404e8b93-k8s-csi--node--driver--s6pvs-eth0 csi-node-driver- calico-system 99b0530c-5bfd-4544-b93d-7c5fee8711e7 716 0 2025-09-12 17:49:10 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4426.1.0-a-49404e8b93 csi-node-driver-s6pvs eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali318bee28593 [] [] }} ContainerID="354d83ec7036ef74b9eb75d804ca088ca86483942f9ca5450366e598ecd39ba5" Namespace="calico-system" Pod="csi-node-driver-s6pvs" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-csi--node--driver--s6pvs-" Sep 12 17:49:32.855297 containerd[1714]: 2025-09-12 17:49:32.792 [INFO][4599] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="354d83ec7036ef74b9eb75d804ca088ca86483942f9ca5450366e598ecd39ba5" Namespace="calico-system" Pod="csi-node-driver-s6pvs" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-csi--node--driver--s6pvs-eth0" Sep 12 17:49:32.855297 containerd[1714]: 2025-09-12 17:49:32.811 [INFO][4610] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="354d83ec7036ef74b9eb75d804ca088ca86483942f9ca5450366e598ecd39ba5" HandleID="k8s-pod-network.354d83ec7036ef74b9eb75d804ca088ca86483942f9ca5450366e598ecd39ba5" Workload="ci--4426.1.0--a--49404e8b93-k8s-csi--node--driver--s6pvs-eth0" Sep 12 17:49:32.855446 containerd[1714]: 2025-09-12 17:49:32.811 [INFO][4610] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="354d83ec7036ef74b9eb75d804ca088ca86483942f9ca5450366e598ecd39ba5" HandleID="k8s-pod-network.354d83ec7036ef74b9eb75d804ca088ca86483942f9ca5450366e598ecd39ba5" Workload="ci--4426.1.0--a--49404e8b93-k8s-csi--node--driver--s6pvs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024ef60), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4426.1.0-a-49404e8b93", "pod":"csi-node-driver-s6pvs", "timestamp":"2025-09-12 17:49:32.811365978 +0000 UTC"}, Hostname:"ci-4426.1.0-a-49404e8b93", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:49:32.855446 containerd[1714]: 2025-09-12 17:49:32.811 [INFO][4610] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:49:32.855446 containerd[1714]: 2025-09-12 17:49:32.811 [INFO][4610] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:49:32.855446 containerd[1714]: 2025-09-12 17:49:32.811 [INFO][4610] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4426.1.0-a-49404e8b93' Sep 12 17:49:32.855446 containerd[1714]: 2025-09-12 17:49:32.815 [INFO][4610] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.354d83ec7036ef74b9eb75d804ca088ca86483942f9ca5450366e598ecd39ba5" host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:32.855446 containerd[1714]: 2025-09-12 17:49:32.818 [INFO][4610] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:32.855446 containerd[1714]: 2025-09-12 17:49:32.822 [INFO][4610] ipam/ipam.go 511: Trying affinity for 192.168.109.128/26 host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:32.855446 containerd[1714]: 2025-09-12 17:49:32.823 [INFO][4610] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.128/26 host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:32.855446 containerd[1714]: 2025-09-12 17:49:32.825 [INFO][4610] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.128/26 host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:32.855625 containerd[1714]: 2025-09-12 17:49:32.825 [INFO][4610] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.128/26 handle="k8s-pod-network.354d83ec7036ef74b9eb75d804ca088ca86483942f9ca5450366e598ecd39ba5" host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:32.855625 containerd[1714]: 2025-09-12 17:49:32.825 [INFO][4610] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.354d83ec7036ef74b9eb75d804ca088ca86483942f9ca5450366e598ecd39ba5 Sep 12 17:49:32.855625 containerd[1714]: 2025-09-12 17:49:32.831 [INFO][4610] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.128/26 handle="k8s-pod-network.354d83ec7036ef74b9eb75d804ca088ca86483942f9ca5450366e598ecd39ba5" host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:32.855625 containerd[1714]: 2025-09-12 17:49:32.839 [INFO][4610] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.109.130/26] block=192.168.109.128/26 handle="k8s-pod-network.354d83ec7036ef74b9eb75d804ca088ca86483942f9ca5450366e598ecd39ba5" host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:32.855625 containerd[1714]: 2025-09-12 17:49:32.839 [INFO][4610] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.130/26] handle="k8s-pod-network.354d83ec7036ef74b9eb75d804ca088ca86483942f9ca5450366e598ecd39ba5" host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:32.855625 containerd[1714]: 2025-09-12 17:49:32.840 [INFO][4610] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:49:32.855625 containerd[1714]: 2025-09-12 17:49:32.840 [INFO][4610] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.130/26] IPv6=[] ContainerID="354d83ec7036ef74b9eb75d804ca088ca86483942f9ca5450366e598ecd39ba5" HandleID="k8s-pod-network.354d83ec7036ef74b9eb75d804ca088ca86483942f9ca5450366e598ecd39ba5" Workload="ci--4426.1.0--a--49404e8b93-k8s-csi--node--driver--s6pvs-eth0" Sep 12 17:49:32.856185 containerd[1714]: 2025-09-12 17:49:32.841 [INFO][4599] cni-plugin/k8s.go 418: Populated endpoint ContainerID="354d83ec7036ef74b9eb75d804ca088ca86483942f9ca5450366e598ecd39ba5" Namespace="calico-system" Pod="csi-node-driver-s6pvs" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-csi--node--driver--s6pvs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--a--49404e8b93-k8s-csi--node--driver--s6pvs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"99b0530c-5bfd-4544-b93d-7c5fee8711e7", ResourceVersion:"716", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 49, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-a-49404e8b93", ContainerID:"", Pod:"csi-node-driver-s6pvs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.109.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali318bee28593", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:49:32.856254 containerd[1714]: 2025-09-12 17:49:32.841 [INFO][4599] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.130/32] ContainerID="354d83ec7036ef74b9eb75d804ca088ca86483942f9ca5450366e598ecd39ba5" Namespace="calico-system" Pod="csi-node-driver-s6pvs" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-csi--node--driver--s6pvs-eth0" Sep 12 17:49:32.856254 containerd[1714]: 2025-09-12 17:49:32.841 [INFO][4599] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali318bee28593 ContainerID="354d83ec7036ef74b9eb75d804ca088ca86483942f9ca5450366e598ecd39ba5" Namespace="calico-system" Pod="csi-node-driver-s6pvs" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-csi--node--driver--s6pvs-eth0" Sep 12 17:49:32.856254 containerd[1714]: 2025-09-12 17:49:32.842 [INFO][4599] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="354d83ec7036ef74b9eb75d804ca088ca86483942f9ca5450366e598ecd39ba5" Namespace="calico-system" Pod="csi-node-driver-s6pvs" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-csi--node--driver--s6pvs-eth0" Sep 12 17:49:32.856311 
containerd[1714]: 2025-09-12 17:49:32.843 [INFO][4599] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="354d83ec7036ef74b9eb75d804ca088ca86483942f9ca5450366e598ecd39ba5" Namespace="calico-system" Pod="csi-node-driver-s6pvs" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-csi--node--driver--s6pvs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--a--49404e8b93-k8s-csi--node--driver--s6pvs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"99b0530c-5bfd-4544-b93d-7c5fee8711e7", ResourceVersion:"716", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 49, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-a-49404e8b93", ContainerID:"354d83ec7036ef74b9eb75d804ca088ca86483942f9ca5450366e598ecd39ba5", Pod:"csi-node-driver-s6pvs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.109.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali318bee28593", MAC:"06:af:5e:2f:e6:c9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:49:32.856361 containerd[1714]: 
2025-09-12 17:49:32.852 [INFO][4599] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="354d83ec7036ef74b9eb75d804ca088ca86483942f9ca5450366e598ecd39ba5" Namespace="calico-system" Pod="csi-node-driver-s6pvs" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-csi--node--driver--s6pvs-eth0" Sep 12 17:49:32.910816 containerd[1714]: time="2025-09-12T17:49:32.910793628Z" level=info msg="connecting to shim 354d83ec7036ef74b9eb75d804ca088ca86483942f9ca5450366e598ecd39ba5" address="unix:///run/containerd/s/e223ac16f22b7506d12030fdfd643d76d2786e4003371affe2e8f396683f6f65" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:49:32.929749 systemd[1]: Started cri-containerd-354d83ec7036ef74b9eb75d804ca088ca86483942f9ca5450366e598ecd39ba5.scope - libcontainer container 354d83ec7036ef74b9eb75d804ca088ca86483942f9ca5450366e598ecd39ba5. Sep 12 17:49:32.950720 containerd[1714]: time="2025-09-12T17:49:32.950699523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s6pvs,Uid:99b0530c-5bfd-4544-b93d-7c5fee8711e7,Namespace:calico-system,Attempt:0,} returns sandbox id \"354d83ec7036ef74b9eb75d804ca088ca86483942f9ca5450366e598ecd39ba5\"" Sep 12 17:49:34.509768 systemd-networkd[1362]: cali318bee28593: Gained IPv6LL Sep 12 17:49:34.761477 containerd[1714]: time="2025-09-12T17:49:34.761052975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-skbrz,Uid:c512cc85-9497-4540-b387-f122715cba11,Namespace:kube-system,Attempt:0,}" Sep 12 17:49:34.874914 systemd-networkd[1362]: caliea444c7955e: Link UP Sep 12 17:49:34.875064 systemd-networkd[1362]: caliea444c7955e: Gained carrier Sep 12 17:49:34.893709 containerd[1714]: 2025-09-12 17:49:34.808 [INFO][4676] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4426.1.0--a--49404e8b93-k8s-coredns--7c65d6cfc9--skbrz-eth0 coredns-7c65d6cfc9- kube-system c512cc85-9497-4540-b387-f122715cba11 825 0 2025-09-12 17:48:58 +0000 UTC 
map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4426.1.0-a-49404e8b93 coredns-7c65d6cfc9-skbrz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliea444c7955e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="52cc29eb6cf6ba52b76b62a4ddb3a25a3215d1aa7faf56298d7efa40b0988cae" Namespace="kube-system" Pod="coredns-7c65d6cfc9-skbrz" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-coredns--7c65d6cfc9--skbrz-" Sep 12 17:49:34.893709 containerd[1714]: 2025-09-12 17:49:34.808 [INFO][4676] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="52cc29eb6cf6ba52b76b62a4ddb3a25a3215d1aa7faf56298d7efa40b0988cae" Namespace="kube-system" Pod="coredns-7c65d6cfc9-skbrz" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-coredns--7c65d6cfc9--skbrz-eth0" Sep 12 17:49:34.893709 containerd[1714]: 2025-09-12 17:49:34.838 [INFO][4688] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="52cc29eb6cf6ba52b76b62a4ddb3a25a3215d1aa7faf56298d7efa40b0988cae" HandleID="k8s-pod-network.52cc29eb6cf6ba52b76b62a4ddb3a25a3215d1aa7faf56298d7efa40b0988cae" Workload="ci--4426.1.0--a--49404e8b93-k8s-coredns--7c65d6cfc9--skbrz-eth0" Sep 12 17:49:34.893903 containerd[1714]: 2025-09-12 17:49:34.838 [INFO][4688] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="52cc29eb6cf6ba52b76b62a4ddb3a25a3215d1aa7faf56298d7efa40b0988cae" HandleID="k8s-pod-network.52cc29eb6cf6ba52b76b62a4ddb3a25a3215d1aa7faf56298d7efa40b0988cae" Workload="ci--4426.1.0--a--49404e8b93-k8s-coredns--7c65d6cfc9--skbrz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c5bd0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4426.1.0-a-49404e8b93", "pod":"coredns-7c65d6cfc9-skbrz", "timestamp":"2025-09-12 17:49:34.838125064 +0000 UTC"}, 
Hostname:"ci-4426.1.0-a-49404e8b93", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:49:34.893903 containerd[1714]: 2025-09-12 17:49:34.838 [INFO][4688] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:49:34.893903 containerd[1714]: 2025-09-12 17:49:34.838 [INFO][4688] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:49:34.893903 containerd[1714]: 2025-09-12 17:49:34.838 [INFO][4688] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4426.1.0-a-49404e8b93' Sep 12 17:49:34.893903 containerd[1714]: 2025-09-12 17:49:34.843 [INFO][4688] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.52cc29eb6cf6ba52b76b62a4ddb3a25a3215d1aa7faf56298d7efa40b0988cae" host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:34.893903 containerd[1714]: 2025-09-12 17:49:34.847 [INFO][4688] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:34.893903 containerd[1714]: 2025-09-12 17:49:34.851 [INFO][4688] ipam/ipam.go 511: Trying affinity for 192.168.109.128/26 host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:34.893903 containerd[1714]: 2025-09-12 17:49:34.853 [INFO][4688] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.128/26 host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:34.893903 containerd[1714]: 2025-09-12 17:49:34.855 [INFO][4688] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.128/26 host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:34.894104 containerd[1714]: 2025-09-12 17:49:34.856 [INFO][4688] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.128/26 handle="k8s-pod-network.52cc29eb6cf6ba52b76b62a4ddb3a25a3215d1aa7faf56298d7efa40b0988cae" host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:34.894104 containerd[1714]: 2025-09-12 17:49:34.857 
[INFO][4688] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.52cc29eb6cf6ba52b76b62a4ddb3a25a3215d1aa7faf56298d7efa40b0988cae Sep 12 17:49:34.894104 containerd[1714]: 2025-09-12 17:49:34.862 [INFO][4688] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.128/26 handle="k8s-pod-network.52cc29eb6cf6ba52b76b62a4ddb3a25a3215d1aa7faf56298d7efa40b0988cae" host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:34.894104 containerd[1714]: 2025-09-12 17:49:34.870 [INFO][4688] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.109.131/26] block=192.168.109.128/26 handle="k8s-pod-network.52cc29eb6cf6ba52b76b62a4ddb3a25a3215d1aa7faf56298d7efa40b0988cae" host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:34.894104 containerd[1714]: 2025-09-12 17:49:34.870 [INFO][4688] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.131/26] handle="k8s-pod-network.52cc29eb6cf6ba52b76b62a4ddb3a25a3215d1aa7faf56298d7efa40b0988cae" host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:34.894104 containerd[1714]: 2025-09-12 17:49:34.870 [INFO][4688] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:49:34.894104 containerd[1714]: 2025-09-12 17:49:34.870 [INFO][4688] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.131/26] IPv6=[] ContainerID="52cc29eb6cf6ba52b76b62a4ddb3a25a3215d1aa7faf56298d7efa40b0988cae" HandleID="k8s-pod-network.52cc29eb6cf6ba52b76b62a4ddb3a25a3215d1aa7faf56298d7efa40b0988cae" Workload="ci--4426.1.0--a--49404e8b93-k8s-coredns--7c65d6cfc9--skbrz-eth0" Sep 12 17:49:34.894530 containerd[1714]: 2025-09-12 17:49:34.871 [INFO][4676] cni-plugin/k8s.go 418: Populated endpoint ContainerID="52cc29eb6cf6ba52b76b62a4ddb3a25a3215d1aa7faf56298d7efa40b0988cae" Namespace="kube-system" Pod="coredns-7c65d6cfc9-skbrz" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-coredns--7c65d6cfc9--skbrz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--a--49404e8b93-k8s-coredns--7c65d6cfc9--skbrz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c512cc85-9497-4540-b387-f122715cba11", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 48, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-a-49404e8b93", ContainerID:"", Pod:"coredns-7c65d6cfc9-skbrz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"caliea444c7955e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:49:34.894530 containerd[1714]: 2025-09-12 17:49:34.872 [INFO][4676] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.131/32] ContainerID="52cc29eb6cf6ba52b76b62a4ddb3a25a3215d1aa7faf56298d7efa40b0988cae" Namespace="kube-system" Pod="coredns-7c65d6cfc9-skbrz" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-coredns--7c65d6cfc9--skbrz-eth0" Sep 12 17:49:34.894530 containerd[1714]: 2025-09-12 17:49:34.872 [INFO][4676] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliea444c7955e ContainerID="52cc29eb6cf6ba52b76b62a4ddb3a25a3215d1aa7faf56298d7efa40b0988cae" Namespace="kube-system" Pod="coredns-7c65d6cfc9-skbrz" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-coredns--7c65d6cfc9--skbrz-eth0" Sep 12 17:49:34.894530 containerd[1714]: 2025-09-12 17:49:34.875 [INFO][4676] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="52cc29eb6cf6ba52b76b62a4ddb3a25a3215d1aa7faf56298d7efa40b0988cae" Namespace="kube-system" Pod="coredns-7c65d6cfc9-skbrz" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-coredns--7c65d6cfc9--skbrz-eth0" Sep 12 17:49:34.894530 containerd[1714]: 2025-09-12 17:49:34.876 [INFO][4676] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="52cc29eb6cf6ba52b76b62a4ddb3a25a3215d1aa7faf56298d7efa40b0988cae" Namespace="kube-system" Pod="coredns-7c65d6cfc9-skbrz" 
WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-coredns--7c65d6cfc9--skbrz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--a--49404e8b93-k8s-coredns--7c65d6cfc9--skbrz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c512cc85-9497-4540-b387-f122715cba11", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 48, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-a-49404e8b93", ContainerID:"52cc29eb6cf6ba52b76b62a4ddb3a25a3215d1aa7faf56298d7efa40b0988cae", Pod:"coredns-7c65d6cfc9-skbrz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliea444c7955e", MAC:"72:16:ff:2c:f5:42", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:49:34.894530 
containerd[1714]: 2025-09-12 17:49:34.890 [INFO][4676] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="52cc29eb6cf6ba52b76b62a4ddb3a25a3215d1aa7faf56298d7efa40b0988cae" Namespace="kube-system" Pod="coredns-7c65d6cfc9-skbrz" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-coredns--7c65d6cfc9--skbrz-eth0" Sep 12 17:49:34.943595 containerd[1714]: time="2025-09-12T17:49:34.943545888Z" level=info msg="connecting to shim 52cc29eb6cf6ba52b76b62a4ddb3a25a3215d1aa7faf56298d7efa40b0988cae" address="unix:///run/containerd/s/498ccd9c38dcc897394ac0eecbaa0d0bdeab77b224161176cb190d53245ffb59" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:49:34.976893 systemd[1]: Started cri-containerd-52cc29eb6cf6ba52b76b62a4ddb3a25a3215d1aa7faf56298d7efa40b0988cae.scope - libcontainer container 52cc29eb6cf6ba52b76b62a4ddb3a25a3215d1aa7faf56298d7efa40b0988cae. Sep 12 17:49:35.031124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2868548996.mount: Deactivated successfully. Sep 12 17:49:35.031715 containerd[1714]: time="2025-09-12T17:49:35.031695863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-skbrz,Uid:c512cc85-9497-4540-b387-f122715cba11,Namespace:kube-system,Attempt:0,} returns sandbox id \"52cc29eb6cf6ba52b76b62a4ddb3a25a3215d1aa7faf56298d7efa40b0988cae\"" Sep 12 17:49:35.034843 containerd[1714]: time="2025-09-12T17:49:35.034788754Z" level=info msg="CreateContainer within sandbox \"52cc29eb6cf6ba52b76b62a4ddb3a25a3215d1aa7faf56298d7efa40b0988cae\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:49:35.063461 containerd[1714]: time="2025-09-12T17:49:35.063406113Z" level=info msg="Container 9646ecfa08cc5df6b5e28194ff6dfdc2396ec18bcd60a376fe8fa3d642d392ae: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:49:35.089730 containerd[1714]: time="2025-09-12T17:49:35.089711556Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:49:35.090525 containerd[1714]: time="2025-09-12T17:49:35.090491158Z" level=info msg="CreateContainer within sandbox \"52cc29eb6cf6ba52b76b62a4ddb3a25a3215d1aa7faf56298d7efa40b0988cae\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9646ecfa08cc5df6b5e28194ff6dfdc2396ec18bcd60a376fe8fa3d642d392ae\"" Sep 12 17:49:35.090946 containerd[1714]: time="2025-09-12T17:49:35.090925959Z" level=info msg="StartContainer for \"9646ecfa08cc5df6b5e28194ff6dfdc2396ec18bcd60a376fe8fa3d642d392ae\"" Sep 12 17:49:35.091583 containerd[1714]: time="2025-09-12T17:49:35.091529122Z" level=info msg="connecting to shim 9646ecfa08cc5df6b5e28194ff6dfdc2396ec18bcd60a376fe8fa3d642d392ae" address="unix:///run/containerd/s/498ccd9c38dcc897394ac0eecbaa0d0bdeab77b224161176cb190d53245ffb59" protocol=ttrpc version=3 Sep 12 17:49:35.092043 containerd[1714]: time="2025-09-12T17:49:35.092024693Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 12 17:49:35.095519 containerd[1714]: time="2025-09-12T17:49:35.095496762Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:49:35.100795 containerd[1714]: time="2025-09-12T17:49:35.100753193Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:49:35.101305 containerd[1714]: time="2025-09-12T17:49:35.101285911Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.838492156s" Sep 12 17:49:35.101355 containerd[1714]: time="2025-09-12T17:49:35.101310972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 12 17:49:35.103937 containerd[1714]: time="2025-09-12T17:49:35.103790009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 17:49:35.104398 containerd[1714]: time="2025-09-12T17:49:35.104376579Z" level=info msg="CreateContainer within sandbox \"d151f43d2837576df7d81bf1ece4cddc4d97c5896b8e662298e89124f825a416\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 12 17:49:35.107750 systemd[1]: Started cri-containerd-9646ecfa08cc5df6b5e28194ff6dfdc2396ec18bcd60a376fe8fa3d642d392ae.scope - libcontainer container 9646ecfa08cc5df6b5e28194ff6dfdc2396ec18bcd60a376fe8fa3d642d392ae. 
Sep 12 17:49:35.125652 containerd[1714]: time="2025-09-12T17:49:35.125064603Z" level=info msg="Container 4624569989c277db99520db3544ccd940a065ef0baefcb5692a793bd6d4b91b9: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:49:35.140556 containerd[1714]: time="2025-09-12T17:49:35.140531643Z" level=info msg="StartContainer for \"9646ecfa08cc5df6b5e28194ff6dfdc2396ec18bcd60a376fe8fa3d642d392ae\" returns successfully" Sep 12 17:49:35.148730 containerd[1714]: time="2025-09-12T17:49:35.148196099Z" level=info msg="CreateContainer within sandbox \"d151f43d2837576df7d81bf1ece4cddc4d97c5896b8e662298e89124f825a416\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"4624569989c277db99520db3544ccd940a065ef0baefcb5692a793bd6d4b91b9\"" Sep 12 17:49:35.149121 containerd[1714]: time="2025-09-12T17:49:35.149055751Z" level=info msg="StartContainer for \"4624569989c277db99520db3544ccd940a065ef0baefcb5692a793bd6d4b91b9\"" Sep 12 17:49:35.150801 containerd[1714]: time="2025-09-12T17:49:35.150761786Z" level=info msg="connecting to shim 4624569989c277db99520db3544ccd940a065ef0baefcb5692a793bd6d4b91b9" address="unix:///run/containerd/s/031bd27711003a91a775346fe8f02cb0e1b4be6a23c30478ae5428d212b70d2c" protocol=ttrpc version=3 Sep 12 17:49:35.166904 systemd[1]: Started cri-containerd-4624569989c277db99520db3544ccd940a065ef0baefcb5692a793bd6d4b91b9.scope - libcontainer container 4624569989c277db99520db3544ccd940a065ef0baefcb5692a793bd6d4b91b9. 
Sep 12 17:49:35.210577 containerd[1714]: time="2025-09-12T17:49:35.210554823Z" level=info msg="StartContainer for \"4624569989c277db99520db3544ccd940a065ef0baefcb5692a793bd6d4b91b9\" returns successfully" Sep 12 17:49:35.760103 containerd[1714]: time="2025-09-12T17:49:35.760054904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-h8gbr,Uid:2922ec9b-0391-44f8-a413-50f02efd5bd2,Namespace:calico-system,Attempt:0,}" Sep 12 17:49:35.760261 containerd[1714]: time="2025-09-12T17:49:35.760054865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gm87k,Uid:c8e5c3e4-8614-46d3-bca5-f5c1f1e4abfc,Namespace:kube-system,Attempt:0,}" Sep 12 17:49:35.865701 systemd-networkd[1362]: cali6ab66c702d8: Link UP Sep 12 17:49:35.866318 systemd-networkd[1362]: cali6ab66c702d8: Gained carrier Sep 12 17:49:35.879535 containerd[1714]: 2025-09-12 17:49:35.801 [INFO][4819] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4426.1.0--a--49404e8b93-k8s-goldmane--7988f88666--h8gbr-eth0 goldmane-7988f88666- calico-system 2922ec9b-0391-44f8-a413-50f02efd5bd2 832 0 2025-09-12 17:49:09 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4426.1.0-a-49404e8b93 goldmane-7988f88666-h8gbr eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali6ab66c702d8 [] [] }} ContainerID="2f755be201ec294af0adeb6dc158447c3ac735cf75fd30088cac4901710c7032" Namespace="calico-system" Pod="goldmane-7988f88666-h8gbr" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-goldmane--7988f88666--h8gbr-" Sep 12 17:49:35.879535 containerd[1714]: 2025-09-12 17:49:35.801 [INFO][4819] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2f755be201ec294af0adeb6dc158447c3ac735cf75fd30088cac4901710c7032" 
Namespace="calico-system" Pod="goldmane-7988f88666-h8gbr" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-goldmane--7988f88666--h8gbr-eth0" Sep 12 17:49:35.879535 containerd[1714]: 2025-09-12 17:49:35.828 [INFO][4845] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f755be201ec294af0adeb6dc158447c3ac735cf75fd30088cac4901710c7032" HandleID="k8s-pod-network.2f755be201ec294af0adeb6dc158447c3ac735cf75fd30088cac4901710c7032" Workload="ci--4426.1.0--a--49404e8b93-k8s-goldmane--7988f88666--h8gbr-eth0" Sep 12 17:49:35.879535 containerd[1714]: 2025-09-12 17:49:35.828 [INFO][4845] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2f755be201ec294af0adeb6dc158447c3ac735cf75fd30088cac4901710c7032" HandleID="k8s-pod-network.2f755be201ec294af0adeb6dc158447c3ac735cf75fd30088cac4901710c7032" Workload="ci--4426.1.0--a--49404e8b93-k8s-goldmane--7988f88666--h8gbr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4426.1.0-a-49404e8b93", "pod":"goldmane-7988f88666-h8gbr", "timestamp":"2025-09-12 17:49:35.828823906 +0000 UTC"}, Hostname:"ci-4426.1.0-a-49404e8b93", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:49:35.879535 containerd[1714]: 2025-09-12 17:49:35.830 [INFO][4845] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:49:35.879535 containerd[1714]: 2025-09-12 17:49:35.830 [INFO][4845] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:49:35.879535 containerd[1714]: 2025-09-12 17:49:35.830 [INFO][4845] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4426.1.0-a-49404e8b93' Sep 12 17:49:35.879535 containerd[1714]: 2025-09-12 17:49:35.836 [INFO][4845] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2f755be201ec294af0adeb6dc158447c3ac735cf75fd30088cac4901710c7032" host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:35.879535 containerd[1714]: 2025-09-12 17:49:35.839 [INFO][4845] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:35.879535 containerd[1714]: 2025-09-12 17:49:35.844 [INFO][4845] ipam/ipam.go 511: Trying affinity for 192.168.109.128/26 host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:35.879535 containerd[1714]: 2025-09-12 17:49:35.845 [INFO][4845] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.128/26 host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:35.879535 containerd[1714]: 2025-09-12 17:49:35.846 [INFO][4845] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.128/26 host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:35.879535 containerd[1714]: 2025-09-12 17:49:35.848 [INFO][4845] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.128/26 handle="k8s-pod-network.2f755be201ec294af0adeb6dc158447c3ac735cf75fd30088cac4901710c7032" host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:35.879535 containerd[1714]: 2025-09-12 17:49:35.849 [INFO][4845] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2f755be201ec294af0adeb6dc158447c3ac735cf75fd30088cac4901710c7032 Sep 12 17:49:35.879535 containerd[1714]: 2025-09-12 17:49:35.852 [INFO][4845] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.128/26 handle="k8s-pod-network.2f755be201ec294af0adeb6dc158447c3ac735cf75fd30088cac4901710c7032" host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:35.879535 containerd[1714]: 2025-09-12 17:49:35.860 [INFO][4845] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.109.132/26] block=192.168.109.128/26 handle="k8s-pod-network.2f755be201ec294af0adeb6dc158447c3ac735cf75fd30088cac4901710c7032" host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:35.879535 containerd[1714]: 2025-09-12 17:49:35.860 [INFO][4845] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.132/26] handle="k8s-pod-network.2f755be201ec294af0adeb6dc158447c3ac735cf75fd30088cac4901710c7032" host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:35.879535 containerd[1714]: 2025-09-12 17:49:35.860 [INFO][4845] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:49:35.879535 containerd[1714]: 2025-09-12 17:49:35.860 [INFO][4845] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.132/26] IPv6=[] ContainerID="2f755be201ec294af0adeb6dc158447c3ac735cf75fd30088cac4901710c7032" HandleID="k8s-pod-network.2f755be201ec294af0adeb6dc158447c3ac735cf75fd30088cac4901710c7032" Workload="ci--4426.1.0--a--49404e8b93-k8s-goldmane--7988f88666--h8gbr-eth0" Sep 12 17:49:35.880985 containerd[1714]: 2025-09-12 17:49:35.861 [INFO][4819] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2f755be201ec294af0adeb6dc158447c3ac735cf75fd30088cac4901710c7032" Namespace="calico-system" Pod="goldmane-7988f88666-h8gbr" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-goldmane--7988f88666--h8gbr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--a--49404e8b93-k8s-goldmane--7988f88666--h8gbr-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"2922ec9b-0391-44f8-a413-50f02efd5bd2", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 49, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-a-49404e8b93", ContainerID:"", Pod:"goldmane-7988f88666-h8gbr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.109.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6ab66c702d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:49:35.880985 containerd[1714]: 2025-09-12 17:49:35.862 [INFO][4819] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.132/32] ContainerID="2f755be201ec294af0adeb6dc158447c3ac735cf75fd30088cac4901710c7032" Namespace="calico-system" Pod="goldmane-7988f88666-h8gbr" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-goldmane--7988f88666--h8gbr-eth0" Sep 12 17:49:35.880985 containerd[1714]: 2025-09-12 17:49:35.862 [INFO][4819] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6ab66c702d8 ContainerID="2f755be201ec294af0adeb6dc158447c3ac735cf75fd30088cac4901710c7032" Namespace="calico-system" Pod="goldmane-7988f88666-h8gbr" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-goldmane--7988f88666--h8gbr-eth0" Sep 12 17:49:35.880985 containerd[1714]: 2025-09-12 17:49:35.866 [INFO][4819] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f755be201ec294af0adeb6dc158447c3ac735cf75fd30088cac4901710c7032" Namespace="calico-system" Pod="goldmane-7988f88666-h8gbr" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-goldmane--7988f88666--h8gbr-eth0" Sep 12 17:49:35.880985 containerd[1714]: 2025-09-12 17:49:35.866 [INFO][4819] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2f755be201ec294af0adeb6dc158447c3ac735cf75fd30088cac4901710c7032" Namespace="calico-system" Pod="goldmane-7988f88666-h8gbr" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-goldmane--7988f88666--h8gbr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--a--49404e8b93-k8s-goldmane--7988f88666--h8gbr-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"2922ec9b-0391-44f8-a413-50f02efd5bd2", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 49, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-a-49404e8b93", ContainerID:"2f755be201ec294af0adeb6dc158447c3ac735cf75fd30088cac4901710c7032", Pod:"goldmane-7988f88666-h8gbr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.109.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6ab66c702d8", MAC:"ba:04:5b:31:17:cc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:49:35.880985 containerd[1714]: 2025-09-12 17:49:35.877 [INFO][4819] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="2f755be201ec294af0adeb6dc158447c3ac735cf75fd30088cac4901710c7032" Namespace="calico-system" Pod="goldmane-7988f88666-h8gbr" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-goldmane--7988f88666--h8gbr-eth0" Sep 12 17:49:35.918208 systemd-networkd[1362]: caliea444c7955e: Gained IPv6LL Sep 12 17:49:35.924892 containerd[1714]: time="2025-09-12T17:49:35.924866993Z" level=info msg="connecting to shim 2f755be201ec294af0adeb6dc158447c3ac735cf75fd30088cac4901710c7032" address="unix:///run/containerd/s/cb5575177b692c73a8f3344fef14654a0026c4ffb6aac4e4c8410a9b8ff2e581" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:49:35.929241 kubelet[3174]: I0912 17:49:35.928954 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-65f9696c6f-vnfk2" podStartSLOduration=2.34636526 podStartE2EDuration="6.928936655s" podCreationTimestamp="2025-09-12 17:49:29 +0000 UTC" firstStartedPulling="2025-09-12 17:49:30.520233447 +0000 UTC m=+37.839073510" lastFinishedPulling="2025-09-12 17:49:35.102804859 +0000 UTC m=+42.421644905" observedRunningTime="2025-09-12 17:49:35.927972503 +0000 UTC m=+43.246812557" watchObservedRunningTime="2025-09-12 17:49:35.928936655 +0000 UTC m=+43.247776700" Sep 12 17:49:35.950838 systemd[1]: Started cri-containerd-2f755be201ec294af0adeb6dc158447c3ac735cf75fd30088cac4901710c7032.scope - libcontainer container 2f755be201ec294af0adeb6dc158447c3ac735cf75fd30088cac4901710c7032. 
Sep 12 17:49:35.960923 kubelet[3174]: I0912 17:49:35.960890 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-skbrz" podStartSLOduration=37.960876782 podStartE2EDuration="37.960876782s" podCreationTimestamp="2025-09-12 17:48:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:49:35.959978291 +0000 UTC m=+43.278818346" watchObservedRunningTime="2025-09-12 17:49:35.960876782 +0000 UTC m=+43.279716856" Sep 12 17:49:36.004325 systemd-networkd[1362]: calibd05c48310e: Link UP Sep 12 17:49:36.004478 systemd-networkd[1362]: calibd05c48310e: Gained carrier Sep 12 17:49:36.021454 containerd[1714]: 2025-09-12 17:49:35.811 [INFO][4830] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4426.1.0--a--49404e8b93-k8s-coredns--7c65d6cfc9--gm87k-eth0 coredns-7c65d6cfc9- kube-system c8e5c3e4-8614-46d3-bca5-f5c1f1e4abfc 828 0 2025-09-12 17:48:58 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4426.1.0-a-49404e8b93 coredns-7c65d6cfc9-gm87k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibd05c48310e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="62fbbcfdd63a7e3b02d4a56faca938f3012c4994f8bd41ac426e1190cf69ec29" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gm87k" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-coredns--7c65d6cfc9--gm87k-" Sep 12 17:49:36.021454 containerd[1714]: 2025-09-12 17:49:35.811 [INFO][4830] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="62fbbcfdd63a7e3b02d4a56faca938f3012c4994f8bd41ac426e1190cf69ec29" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gm87k" 
WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-coredns--7c65d6cfc9--gm87k-eth0" Sep 12 17:49:36.021454 containerd[1714]: 2025-09-12 17:49:35.840 [INFO][4850] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="62fbbcfdd63a7e3b02d4a56faca938f3012c4994f8bd41ac426e1190cf69ec29" HandleID="k8s-pod-network.62fbbcfdd63a7e3b02d4a56faca938f3012c4994f8bd41ac426e1190cf69ec29" Workload="ci--4426.1.0--a--49404e8b93-k8s-coredns--7c65d6cfc9--gm87k-eth0" Sep 12 17:49:36.021454 containerd[1714]: 2025-09-12 17:49:35.840 [INFO][4850] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="62fbbcfdd63a7e3b02d4a56faca938f3012c4994f8bd41ac426e1190cf69ec29" HandleID="k8s-pod-network.62fbbcfdd63a7e3b02d4a56faca938f3012c4994f8bd41ac426e1190cf69ec29" Workload="ci--4426.1.0--a--49404e8b93-k8s-coredns--7c65d6cfc9--gm87k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5600), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4426.1.0-a-49404e8b93", "pod":"coredns-7c65d6cfc9-gm87k", "timestamp":"2025-09-12 17:49:35.839991231 +0000 UTC"}, Hostname:"ci-4426.1.0-a-49404e8b93", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:49:36.021454 containerd[1714]: 2025-09-12 17:49:35.840 [INFO][4850] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:49:36.021454 containerd[1714]: 2025-09-12 17:49:35.860 [INFO][4850] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:49:36.021454 containerd[1714]: 2025-09-12 17:49:35.860 [INFO][4850] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4426.1.0-a-49404e8b93' Sep 12 17:49:36.021454 containerd[1714]: 2025-09-12 17:49:35.940 [INFO][4850] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.62fbbcfdd63a7e3b02d4a56faca938f3012c4994f8bd41ac426e1190cf69ec29" host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:36.021454 containerd[1714]: 2025-09-12 17:49:35.955 [INFO][4850] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:36.021454 containerd[1714]: 2025-09-12 17:49:35.974 [INFO][4850] ipam/ipam.go 511: Trying affinity for 192.168.109.128/26 host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:36.021454 containerd[1714]: 2025-09-12 17:49:35.978 [INFO][4850] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.128/26 host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:36.021454 containerd[1714]: 2025-09-12 17:49:35.981 [INFO][4850] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.128/26 host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:36.021454 containerd[1714]: 2025-09-12 17:49:35.981 [INFO][4850] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.128/26 handle="k8s-pod-network.62fbbcfdd63a7e3b02d4a56faca938f3012c4994f8bd41ac426e1190cf69ec29" host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:36.021454 containerd[1714]: 2025-09-12 17:49:35.986 [INFO][4850] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.62fbbcfdd63a7e3b02d4a56faca938f3012c4994f8bd41ac426e1190cf69ec29 Sep 12 17:49:36.021454 containerd[1714]: 2025-09-12 17:49:35.994 [INFO][4850] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.128/26 handle="k8s-pod-network.62fbbcfdd63a7e3b02d4a56faca938f3012c4994f8bd41ac426e1190cf69ec29" host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:36.021454 containerd[1714]: 2025-09-12 17:49:35.999 [INFO][4850] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.109.133/26] block=192.168.109.128/26 handle="k8s-pod-network.62fbbcfdd63a7e3b02d4a56faca938f3012c4994f8bd41ac426e1190cf69ec29" host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:36.021454 containerd[1714]: 2025-09-12 17:49:35.999 [INFO][4850] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.133/26] handle="k8s-pod-network.62fbbcfdd63a7e3b02d4a56faca938f3012c4994f8bd41ac426e1190cf69ec29" host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:36.021454 containerd[1714]: 2025-09-12 17:49:35.999 [INFO][4850] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:49:36.021454 containerd[1714]: 2025-09-12 17:49:35.999 [INFO][4850] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.133/26] IPv6=[] ContainerID="62fbbcfdd63a7e3b02d4a56faca938f3012c4994f8bd41ac426e1190cf69ec29" HandleID="k8s-pod-network.62fbbcfdd63a7e3b02d4a56faca938f3012c4994f8bd41ac426e1190cf69ec29" Workload="ci--4426.1.0--a--49404e8b93-k8s-coredns--7c65d6cfc9--gm87k-eth0" Sep 12 17:49:36.022468 containerd[1714]: 2025-09-12 17:49:36.000 [INFO][4830] cni-plugin/k8s.go 418: Populated endpoint ContainerID="62fbbcfdd63a7e3b02d4a56faca938f3012c4994f8bd41ac426e1190cf69ec29" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gm87k" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-coredns--7c65d6cfc9--gm87k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--a--49404e8b93-k8s-coredns--7c65d6cfc9--gm87k-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c8e5c3e4-8614-46d3-bca5-f5c1f1e4abfc", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 48, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-a-49404e8b93", ContainerID:"", Pod:"coredns-7c65d6cfc9-gm87k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibd05c48310e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:49:36.022468 containerd[1714]: 2025-09-12 17:49:36.001 [INFO][4830] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.133/32] ContainerID="62fbbcfdd63a7e3b02d4a56faca938f3012c4994f8bd41ac426e1190cf69ec29" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gm87k" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-coredns--7c65d6cfc9--gm87k-eth0" Sep 12 17:49:36.022468 containerd[1714]: 2025-09-12 17:49:36.001 [INFO][4830] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibd05c48310e ContainerID="62fbbcfdd63a7e3b02d4a56faca938f3012c4994f8bd41ac426e1190cf69ec29" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gm87k" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-coredns--7c65d6cfc9--gm87k-eth0" Sep 12 17:49:36.022468 containerd[1714]: 2025-09-12 17:49:36.002 [INFO][4830] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="62fbbcfdd63a7e3b02d4a56faca938f3012c4994f8bd41ac426e1190cf69ec29" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gm87k" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-coredns--7c65d6cfc9--gm87k-eth0" Sep 12 17:49:36.022468 containerd[1714]: 2025-09-12 17:49:36.004 [INFO][4830] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="62fbbcfdd63a7e3b02d4a56faca938f3012c4994f8bd41ac426e1190cf69ec29" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gm87k" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-coredns--7c65d6cfc9--gm87k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--a--49404e8b93-k8s-coredns--7c65d6cfc9--gm87k-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c8e5c3e4-8614-46d3-bca5-f5c1f1e4abfc", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 48, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-a-49404e8b93", ContainerID:"62fbbcfdd63a7e3b02d4a56faca938f3012c4994f8bd41ac426e1190cf69ec29", Pod:"coredns-7c65d6cfc9-gm87k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibd05c48310e", 
MAC:"26:d8:38:cf:20:62", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:49:36.022468 containerd[1714]: 2025-09-12 17:49:36.016 [INFO][4830] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="62fbbcfdd63a7e3b02d4a56faca938f3012c4994f8bd41ac426e1190cf69ec29" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gm87k" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-coredns--7c65d6cfc9--gm87k-eth0" Sep 12 17:49:36.051867 containerd[1714]: time="2025-09-12T17:49:36.051846922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-h8gbr,Uid:2922ec9b-0391-44f8-a413-50f02efd5bd2,Namespace:calico-system,Attempt:0,} returns sandbox id \"2f755be201ec294af0adeb6dc158447c3ac735cf75fd30088cac4901710c7032\"" Sep 12 17:49:36.081766 containerd[1714]: time="2025-09-12T17:49:36.081688977Z" level=info msg="connecting to shim 62fbbcfdd63a7e3b02d4a56faca938f3012c4994f8bd41ac426e1190cf69ec29" address="unix:///run/containerd/s/32a28f732e64ac2091cde4c2af3d40878ea364775f7f1f0cc078d7531b69922a" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:49:36.096744 systemd[1]: Started cri-containerd-62fbbcfdd63a7e3b02d4a56faca938f3012c4994f8bd41ac426e1190cf69ec29.scope - libcontainer container 62fbbcfdd63a7e3b02d4a56faca938f3012c4994f8bd41ac426e1190cf69ec29. 
Sep 12 17:49:36.130113 containerd[1714]: time="2025-09-12T17:49:36.130089629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gm87k,Uid:c8e5c3e4-8614-46d3-bca5-f5c1f1e4abfc,Namespace:kube-system,Attempt:0,} returns sandbox id \"62fbbcfdd63a7e3b02d4a56faca938f3012c4994f8bd41ac426e1190cf69ec29\"" Sep 12 17:49:36.131756 containerd[1714]: time="2025-09-12T17:49:36.131738012Z" level=info msg="CreateContainer within sandbox \"62fbbcfdd63a7e3b02d4a56faca938f3012c4994f8bd41ac426e1190cf69ec29\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:49:36.156189 containerd[1714]: time="2025-09-12T17:49:36.155822742Z" level=info msg="Container 0a9a8835b862882bdbbbd4613983f6cb1e5bd92ed633e95a947402bd7ecbd13e: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:49:36.166944 containerd[1714]: time="2025-09-12T17:49:36.166921997Z" level=info msg="CreateContainer within sandbox \"62fbbcfdd63a7e3b02d4a56faca938f3012c4994f8bd41ac426e1190cf69ec29\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0a9a8835b862882bdbbbd4613983f6cb1e5bd92ed633e95a947402bd7ecbd13e\"" Sep 12 17:49:36.167380 containerd[1714]: time="2025-09-12T17:49:36.167355234Z" level=info msg="StartContainer for \"0a9a8835b862882bdbbbd4613983f6cb1e5bd92ed633e95a947402bd7ecbd13e\"" Sep 12 17:49:36.168171 containerd[1714]: time="2025-09-12T17:49:36.168140743Z" level=info msg="connecting to shim 0a9a8835b862882bdbbbd4613983f6cb1e5bd92ed633e95a947402bd7ecbd13e" address="unix:///run/containerd/s/32a28f732e64ac2091cde4c2af3d40878ea364775f7f1f0cc078d7531b69922a" protocol=ttrpc version=3 Sep 12 17:49:36.183820 systemd[1]: Started cri-containerd-0a9a8835b862882bdbbbd4613983f6cb1e5bd92ed633e95a947402bd7ecbd13e.scope - libcontainer container 0a9a8835b862882bdbbbd4613983f6cb1e5bd92ed633e95a947402bd7ecbd13e. 
Sep 12 17:49:36.210172 containerd[1714]: time="2025-09-12T17:49:36.210152031Z" level=info msg="StartContainer for \"0a9a8835b862882bdbbbd4613983f6cb1e5bd92ed633e95a947402bd7ecbd13e\" returns successfully" Sep 12 17:49:36.428187 containerd[1714]: time="2025-09-12T17:49:36.428154459Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:49:36.431972 containerd[1714]: time="2025-09-12T17:49:36.431948734Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 12 17:49:36.435410 containerd[1714]: time="2025-09-12T17:49:36.435379840Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:49:36.439699 containerd[1714]: time="2025-09-12T17:49:36.439662799Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:49:36.440076 containerd[1714]: time="2025-09-12T17:49:36.440057433Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.336243724s" Sep 12 17:49:36.440111 containerd[1714]: time="2025-09-12T17:49:36.440081438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 12 17:49:36.441672 containerd[1714]: time="2025-09-12T17:49:36.441132457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 
17:49:36.441941 containerd[1714]: time="2025-09-12T17:49:36.441921923Z" level=info msg="CreateContainer within sandbox \"354d83ec7036ef74b9eb75d804ca088ca86483942f9ca5450366e598ecd39ba5\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 12 17:49:36.461872 containerd[1714]: time="2025-09-12T17:49:36.461825177Z" level=info msg="Container 080693386ed0985ca6a626602f23f48127bff247d677860718b87c3283ad4aa5: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:49:36.478959 containerd[1714]: time="2025-09-12T17:49:36.478937536Z" level=info msg="CreateContainer within sandbox \"354d83ec7036ef74b9eb75d804ca088ca86483942f9ca5450366e598ecd39ba5\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"080693386ed0985ca6a626602f23f48127bff247d677860718b87c3283ad4aa5\"" Sep 12 17:49:36.479921 containerd[1714]: time="2025-09-12T17:49:36.479897907Z" level=info msg="StartContainer for \"080693386ed0985ca6a626602f23f48127bff247d677860718b87c3283ad4aa5\"" Sep 12 17:49:36.484719 containerd[1714]: time="2025-09-12T17:49:36.484688333Z" level=info msg="connecting to shim 080693386ed0985ca6a626602f23f48127bff247d677860718b87c3283ad4aa5" address="unix:///run/containerd/s/e223ac16f22b7506d12030fdfd643d76d2786e4003371affe2e8f396683f6f65" protocol=ttrpc version=3 Sep 12 17:49:36.503752 systemd[1]: Started cri-containerd-080693386ed0985ca6a626602f23f48127bff247d677860718b87c3283ad4aa5.scope - libcontainer container 080693386ed0985ca6a626602f23f48127bff247d677860718b87c3283ad4aa5. 
Sep 12 17:49:36.531891 containerd[1714]: time="2025-09-12T17:49:36.531873942Z" level=info msg="StartContainer for \"080693386ed0985ca6a626602f23f48127bff247d677860718b87c3283ad4aa5\" returns successfully" Sep 12 17:49:36.759814 containerd[1714]: time="2025-09-12T17:49:36.759753756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54d8b7455c-46626,Uid:8da1c4ea-8982-43c2-a817-5e4387c1aa69,Namespace:calico-system,Attempt:0,}" Sep 12 17:49:36.760209 containerd[1714]: time="2025-09-12T17:49:36.760189108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-866bfd7cf-22shl,Uid:d61f5835-12df-456c-8f74-9315a7448ca0,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:49:36.868726 systemd-networkd[1362]: cali457a2167c70: Link UP Sep 12 17:49:36.868915 systemd-networkd[1362]: cali457a2167c70: Gained carrier Sep 12 17:49:36.880345 containerd[1714]: 2025-09-12 17:49:36.799 [INFO][5044] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4426.1.0--a--49404e8b93-k8s-calico--kube--controllers--54d8b7455c--46626-eth0 calico-kube-controllers-54d8b7455c- calico-system 8da1c4ea-8982-43c2-a817-5e4387c1aa69 834 0 2025-09-12 17:49:10 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:54d8b7455c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4426.1.0-a-49404e8b93 calico-kube-controllers-54d8b7455c-46626 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali457a2167c70 [] [] }} ContainerID="10358bfcc08ac2c420d6465e53ee847464aa62c71f773ffccdb02fe2bb4480cd" Namespace="calico-system" Pod="calico-kube-controllers-54d8b7455c-46626" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-calico--kube--controllers--54d8b7455c--46626-" Sep 12 17:49:36.880345 containerd[1714]: 
2025-09-12 17:49:36.799 [INFO][5044] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="10358bfcc08ac2c420d6465e53ee847464aa62c71f773ffccdb02fe2bb4480cd" Namespace="calico-system" Pod="calico-kube-controllers-54d8b7455c-46626" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-calico--kube--controllers--54d8b7455c--46626-eth0" Sep 12 17:49:36.880345 containerd[1714]: 2025-09-12 17:49:36.831 [INFO][5068] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="10358bfcc08ac2c420d6465e53ee847464aa62c71f773ffccdb02fe2bb4480cd" HandleID="k8s-pod-network.10358bfcc08ac2c420d6465e53ee847464aa62c71f773ffccdb02fe2bb4480cd" Workload="ci--4426.1.0--a--49404e8b93-k8s-calico--kube--controllers--54d8b7455c--46626-eth0" Sep 12 17:49:36.880345 containerd[1714]: 2025-09-12 17:49:36.832 [INFO][5068] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="10358bfcc08ac2c420d6465e53ee847464aa62c71f773ffccdb02fe2bb4480cd" HandleID="k8s-pod-network.10358bfcc08ac2c420d6465e53ee847464aa62c71f773ffccdb02fe2bb4480cd" Workload="ci--4426.1.0--a--49404e8b93-k8s-calico--kube--controllers--54d8b7455c--46626-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f070), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4426.1.0-a-49404e8b93", "pod":"calico-kube-controllers-54d8b7455c-46626", "timestamp":"2025-09-12 17:49:36.831918916 +0000 UTC"}, Hostname:"ci-4426.1.0-a-49404e8b93", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:49:36.880345 containerd[1714]: 2025-09-12 17:49:36.832 [INFO][5068] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:49:36.880345 containerd[1714]: 2025-09-12 17:49:36.832 [INFO][5068] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:49:36.880345 containerd[1714]: 2025-09-12 17:49:36.832 [INFO][5068] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4426.1.0-a-49404e8b93' Sep 12 17:49:36.880345 containerd[1714]: 2025-09-12 17:49:36.838 [INFO][5068] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.10358bfcc08ac2c420d6465e53ee847464aa62c71f773ffccdb02fe2bb4480cd" host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:36.880345 containerd[1714]: 2025-09-12 17:49:36.841 [INFO][5068] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:36.880345 containerd[1714]: 2025-09-12 17:49:36.844 [INFO][5068] ipam/ipam.go 511: Trying affinity for 192.168.109.128/26 host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:36.880345 containerd[1714]: 2025-09-12 17:49:36.845 [INFO][5068] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.128/26 host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:36.880345 containerd[1714]: 2025-09-12 17:49:36.847 [INFO][5068] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.128/26 host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:36.880345 containerd[1714]: 2025-09-12 17:49:36.847 [INFO][5068] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.128/26 handle="k8s-pod-network.10358bfcc08ac2c420d6465e53ee847464aa62c71f773ffccdb02fe2bb4480cd" host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:36.880345 containerd[1714]: 2025-09-12 17:49:36.847 [INFO][5068] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.10358bfcc08ac2c420d6465e53ee847464aa62c71f773ffccdb02fe2bb4480cd Sep 12 17:49:36.880345 containerd[1714]: 2025-09-12 17:49:36.851 [INFO][5068] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.128/26 handle="k8s-pod-network.10358bfcc08ac2c420d6465e53ee847464aa62c71f773ffccdb02fe2bb4480cd" host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:36.880345 containerd[1714]: 2025-09-12 17:49:36.862 [INFO][5068] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.109.134/26] block=192.168.109.128/26 handle="k8s-pod-network.10358bfcc08ac2c420d6465e53ee847464aa62c71f773ffccdb02fe2bb4480cd" host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:36.880345 containerd[1714]: 2025-09-12 17:49:36.862 [INFO][5068] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.134/26] handle="k8s-pod-network.10358bfcc08ac2c420d6465e53ee847464aa62c71f773ffccdb02fe2bb4480cd" host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:36.880345 containerd[1714]: 2025-09-12 17:49:36.862 [INFO][5068] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:49:36.880345 containerd[1714]: 2025-09-12 17:49:36.862 [INFO][5068] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.134/26] IPv6=[] ContainerID="10358bfcc08ac2c420d6465e53ee847464aa62c71f773ffccdb02fe2bb4480cd" HandleID="k8s-pod-network.10358bfcc08ac2c420d6465e53ee847464aa62c71f773ffccdb02fe2bb4480cd" Workload="ci--4426.1.0--a--49404e8b93-k8s-calico--kube--controllers--54d8b7455c--46626-eth0" Sep 12 17:49:36.882256 containerd[1714]: 2025-09-12 17:49:36.863 [INFO][5044] cni-plugin/k8s.go 418: Populated endpoint ContainerID="10358bfcc08ac2c420d6465e53ee847464aa62c71f773ffccdb02fe2bb4480cd" Namespace="calico-system" Pod="calico-kube-controllers-54d8b7455c-46626" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-calico--kube--controllers--54d8b7455c--46626-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--a--49404e8b93-k8s-calico--kube--controllers--54d8b7455c--46626-eth0", GenerateName:"calico-kube-controllers-54d8b7455c-", Namespace:"calico-system", SelfLink:"", UID:"8da1c4ea-8982-43c2-a817-5e4387c1aa69", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 49, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54d8b7455c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-a-49404e8b93", ContainerID:"", Pod:"calico-kube-controllers-54d8b7455c-46626", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.109.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali457a2167c70", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:49:36.882256 containerd[1714]: 2025-09-12 17:49:36.863 [INFO][5044] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.134/32] ContainerID="10358bfcc08ac2c420d6465e53ee847464aa62c71f773ffccdb02fe2bb4480cd" Namespace="calico-system" Pod="calico-kube-controllers-54d8b7455c-46626" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-calico--kube--controllers--54d8b7455c--46626-eth0" Sep 12 17:49:36.882256 containerd[1714]: 2025-09-12 17:49:36.863 [INFO][5044] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali457a2167c70 ContainerID="10358bfcc08ac2c420d6465e53ee847464aa62c71f773ffccdb02fe2bb4480cd" Namespace="calico-system" Pod="calico-kube-controllers-54d8b7455c-46626" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-calico--kube--controllers--54d8b7455c--46626-eth0" Sep 12 17:49:36.882256 containerd[1714]: 2025-09-12 17:49:36.866 [INFO][5044] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="10358bfcc08ac2c420d6465e53ee847464aa62c71f773ffccdb02fe2bb4480cd" Namespace="calico-system" Pod="calico-kube-controllers-54d8b7455c-46626" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-calico--kube--controllers--54d8b7455c--46626-eth0" Sep 12 17:49:36.882256 containerd[1714]: 2025-09-12 17:49:36.867 [INFO][5044] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="10358bfcc08ac2c420d6465e53ee847464aa62c71f773ffccdb02fe2bb4480cd" Namespace="calico-system" Pod="calico-kube-controllers-54d8b7455c-46626" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-calico--kube--controllers--54d8b7455c--46626-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--a--49404e8b93-k8s-calico--kube--controllers--54d8b7455c--46626-eth0", GenerateName:"calico-kube-controllers-54d8b7455c-", Namespace:"calico-system", SelfLink:"", UID:"8da1c4ea-8982-43c2-a817-5e4387c1aa69", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 49, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54d8b7455c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-a-49404e8b93", ContainerID:"10358bfcc08ac2c420d6465e53ee847464aa62c71f773ffccdb02fe2bb4480cd", Pod:"calico-kube-controllers-54d8b7455c-46626", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.109.134/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali457a2167c70", MAC:"36:00:de:5a:df:37", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:49:36.882256 containerd[1714]: 2025-09-12 17:49:36.878 [INFO][5044] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="10358bfcc08ac2c420d6465e53ee847464aa62c71f773ffccdb02fe2bb4480cd" Namespace="calico-system" Pod="calico-kube-controllers-54d8b7455c-46626" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-calico--kube--controllers--54d8b7455c--46626-eth0" Sep 12 17:49:36.920194 containerd[1714]: time="2025-09-12T17:49:36.919839374Z" level=info msg="connecting to shim 10358bfcc08ac2c420d6465e53ee847464aa62c71f773ffccdb02fe2bb4480cd" address="unix:///run/containerd/s/f77c429792f16b0a15b76963ba0e2bc13a2db93404bc8a10020965122eed18be" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:49:36.945862 systemd[1]: Started cri-containerd-10358bfcc08ac2c420d6465e53ee847464aa62c71f773ffccdb02fe2bb4480cd.scope - libcontainer container 10358bfcc08ac2c420d6465e53ee847464aa62c71f773ffccdb02fe2bb4480cd. 
Sep 12 17:49:36.946691 kubelet[3174]: I0912 17:49:36.946656 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-gm87k" podStartSLOduration=38.946619781 podStartE2EDuration="38.946619781s" podCreationTimestamp="2025-09-12 17:48:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:49:36.943742149 +0000 UTC m=+44.262582205" watchObservedRunningTime="2025-09-12 17:49:36.946619781 +0000 UTC m=+44.265459837" Sep 12 17:49:37.001298 systemd-networkd[1362]: cali2bbafcb0322: Link UP Sep 12 17:49:37.001460 systemd-networkd[1362]: cali2bbafcb0322: Gained carrier Sep 12 17:49:37.016093 containerd[1714]: 2025-09-12 17:49:36.808 [INFO][5054] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4426.1.0--a--49404e8b93-k8s-calico--apiserver--866bfd7cf--22shl-eth0 calico-apiserver-866bfd7cf- calico-apiserver d61f5835-12df-456c-8f74-9315a7448ca0 830 0 2025-09-12 17:49:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:866bfd7cf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4426.1.0-a-49404e8b93 calico-apiserver-866bfd7cf-22shl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2bbafcb0322 [] [] }} ContainerID="6e9d51f54e8f911d3d8fb61131788e7a9767ec6f7445d26eb717e2c1362e17ad" Namespace="calico-apiserver" Pod="calico-apiserver-866bfd7cf-22shl" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-calico--apiserver--866bfd7cf--22shl-" Sep 12 17:49:37.016093 containerd[1714]: 2025-09-12 17:49:36.809 [INFO][5054] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6e9d51f54e8f911d3d8fb61131788e7a9767ec6f7445d26eb717e2c1362e17ad" 
Namespace="calico-apiserver" Pod="calico-apiserver-866bfd7cf-22shl" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-calico--apiserver--866bfd7cf--22shl-eth0" Sep 12 17:49:37.016093 containerd[1714]: 2025-09-12 17:49:36.841 [INFO][5073] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6e9d51f54e8f911d3d8fb61131788e7a9767ec6f7445d26eb717e2c1362e17ad" HandleID="k8s-pod-network.6e9d51f54e8f911d3d8fb61131788e7a9767ec6f7445d26eb717e2c1362e17ad" Workload="ci--4426.1.0--a--49404e8b93-k8s-calico--apiserver--866bfd7cf--22shl-eth0" Sep 12 17:49:37.016093 containerd[1714]: 2025-09-12 17:49:36.841 [INFO][5073] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6e9d51f54e8f911d3d8fb61131788e7a9767ec6f7445d26eb717e2c1362e17ad" HandleID="k8s-pod-network.6e9d51f54e8f911d3d8fb61131788e7a9767ec6f7445d26eb717e2c1362e17ad" Workload="ci--4426.1.0--a--49404e8b93-k8s-calico--apiserver--866bfd7cf--22shl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f230), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4426.1.0-a-49404e8b93", "pod":"calico-apiserver-866bfd7cf-22shl", "timestamp":"2025-09-12 17:49:36.841295442 +0000 UTC"}, Hostname:"ci-4426.1.0-a-49404e8b93", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:49:37.016093 containerd[1714]: 2025-09-12 17:49:36.841 [INFO][5073] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:49:37.016093 containerd[1714]: 2025-09-12 17:49:36.862 [INFO][5073] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:49:37.016093 containerd[1714]: 2025-09-12 17:49:36.862 [INFO][5073] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4426.1.0-a-49404e8b93' Sep 12 17:49:37.016093 containerd[1714]: 2025-09-12 17:49:36.938 [INFO][5073] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6e9d51f54e8f911d3d8fb61131788e7a9767ec6f7445d26eb717e2c1362e17ad" host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:37.016093 containerd[1714]: 2025-09-12 17:49:36.956 [INFO][5073] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:37.016093 containerd[1714]: 2025-09-12 17:49:36.973 [INFO][5073] ipam/ipam.go 511: Trying affinity for 192.168.109.128/26 host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:37.016093 containerd[1714]: 2025-09-12 17:49:36.976 [INFO][5073] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.128/26 host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:37.016093 containerd[1714]: 2025-09-12 17:49:36.980 [INFO][5073] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.128/26 host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:37.016093 containerd[1714]: 2025-09-12 17:49:36.980 [INFO][5073] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.128/26 handle="k8s-pod-network.6e9d51f54e8f911d3d8fb61131788e7a9767ec6f7445d26eb717e2c1362e17ad" host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:37.016093 containerd[1714]: 2025-09-12 17:49:36.984 [INFO][5073] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6e9d51f54e8f911d3d8fb61131788e7a9767ec6f7445d26eb717e2c1362e17ad Sep 12 17:49:37.016093 containerd[1714]: 2025-09-12 17:49:36.989 [INFO][5073] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.128/26 handle="k8s-pod-network.6e9d51f54e8f911d3d8fb61131788e7a9767ec6f7445d26eb717e2c1362e17ad" host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:37.016093 containerd[1714]: 2025-09-12 17:49:36.997 [INFO][5073] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.109.135/26] block=192.168.109.128/26 handle="k8s-pod-network.6e9d51f54e8f911d3d8fb61131788e7a9767ec6f7445d26eb717e2c1362e17ad" host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:37.016093 containerd[1714]: 2025-09-12 17:49:36.997 [INFO][5073] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.135/26] handle="k8s-pod-network.6e9d51f54e8f911d3d8fb61131788e7a9767ec6f7445d26eb717e2c1362e17ad" host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:37.016093 containerd[1714]: 2025-09-12 17:49:36.997 [INFO][5073] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:49:37.016093 containerd[1714]: 2025-09-12 17:49:36.997 [INFO][5073] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.135/26] IPv6=[] ContainerID="6e9d51f54e8f911d3d8fb61131788e7a9767ec6f7445d26eb717e2c1362e17ad" HandleID="k8s-pod-network.6e9d51f54e8f911d3d8fb61131788e7a9767ec6f7445d26eb717e2c1362e17ad" Workload="ci--4426.1.0--a--49404e8b93-k8s-calico--apiserver--866bfd7cf--22shl-eth0" Sep 12 17:49:37.016528 containerd[1714]: 2025-09-12 17:49:36.998 [INFO][5054] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6e9d51f54e8f911d3d8fb61131788e7a9767ec6f7445d26eb717e2c1362e17ad" Namespace="calico-apiserver" Pod="calico-apiserver-866bfd7cf-22shl" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-calico--apiserver--866bfd7cf--22shl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--a--49404e8b93-k8s-calico--apiserver--866bfd7cf--22shl-eth0", GenerateName:"calico-apiserver-866bfd7cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"d61f5835-12df-456c-8f74-9315a7448ca0", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 49, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"866bfd7cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-a-49404e8b93", ContainerID:"", Pod:"calico-apiserver-866bfd7cf-22shl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2bbafcb0322", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:49:37.016528 containerd[1714]: 2025-09-12 17:49:36.998 [INFO][5054] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.135/32] ContainerID="6e9d51f54e8f911d3d8fb61131788e7a9767ec6f7445d26eb717e2c1362e17ad" Namespace="calico-apiserver" Pod="calico-apiserver-866bfd7cf-22shl" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-calico--apiserver--866bfd7cf--22shl-eth0" Sep 12 17:49:37.016528 containerd[1714]: 2025-09-12 17:49:36.998 [INFO][5054] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2bbafcb0322 ContainerID="6e9d51f54e8f911d3d8fb61131788e7a9767ec6f7445d26eb717e2c1362e17ad" Namespace="calico-apiserver" Pod="calico-apiserver-866bfd7cf-22shl" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-calico--apiserver--866bfd7cf--22shl-eth0" Sep 12 17:49:37.016528 containerd[1714]: 2025-09-12 17:49:37.000 [INFO][5054] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6e9d51f54e8f911d3d8fb61131788e7a9767ec6f7445d26eb717e2c1362e17ad" Namespace="calico-apiserver" 
Pod="calico-apiserver-866bfd7cf-22shl" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-calico--apiserver--866bfd7cf--22shl-eth0" Sep 12 17:49:37.016528 containerd[1714]: 2025-09-12 17:49:37.000 [INFO][5054] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6e9d51f54e8f911d3d8fb61131788e7a9767ec6f7445d26eb717e2c1362e17ad" Namespace="calico-apiserver" Pod="calico-apiserver-866bfd7cf-22shl" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-calico--apiserver--866bfd7cf--22shl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--a--49404e8b93-k8s-calico--apiserver--866bfd7cf--22shl-eth0", GenerateName:"calico-apiserver-866bfd7cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"d61f5835-12df-456c-8f74-9315a7448ca0", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 49, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"866bfd7cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-a-49404e8b93", ContainerID:"6e9d51f54e8f911d3d8fb61131788e7a9767ec6f7445d26eb717e2c1362e17ad", Pod:"calico-apiserver-866bfd7cf-22shl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2bbafcb0322", 
MAC:"96:5f:43:5f:4b:61", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:49:37.016528 containerd[1714]: 2025-09-12 17:49:37.013 [INFO][5054] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6e9d51f54e8f911d3d8fb61131788e7a9767ec6f7445d26eb717e2c1362e17ad" Namespace="calico-apiserver" Pod="calico-apiserver-866bfd7cf-22shl" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-calico--apiserver--866bfd7cf--22shl-eth0" Sep 12 17:49:37.019546 containerd[1714]: time="2025-09-12T17:49:37.019473207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54d8b7455c-46626,Uid:8da1c4ea-8982-43c2-a817-5e4387c1aa69,Namespace:calico-system,Attempt:0,} returns sandbox id \"10358bfcc08ac2c420d6465e53ee847464aa62c71f773ffccdb02fe2bb4480cd\"" Sep 12 17:49:37.049103 containerd[1714]: time="2025-09-12T17:49:37.049058196Z" level=info msg="connecting to shim 6e9d51f54e8f911d3d8fb61131788e7a9767ec6f7445d26eb717e2c1362e17ad" address="unix:///run/containerd/s/00f80ffe0b9e09288d1fec955d6e13cc78206c37ef70029c132a85af38505a45" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:49:37.066753 systemd[1]: Started cri-containerd-6e9d51f54e8f911d3d8fb61131788e7a9767ec6f7445d26eb717e2c1362e17ad.scope - libcontainer container 6e9d51f54e8f911d3d8fb61131788e7a9767ec6f7445d26eb717e2c1362e17ad. 
Sep 12 17:49:37.103149 containerd[1714]: time="2025-09-12T17:49:37.103133610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-866bfd7cf-22shl,Uid:d61f5835-12df-456c-8f74-9315a7448ca0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6e9d51f54e8f911d3d8fb61131788e7a9767ec6f7445d26eb717e2c1362e17ad\"" Sep 12 17:49:37.645737 systemd-networkd[1362]: calibd05c48310e: Gained IPv6LL Sep 12 17:49:37.646144 systemd-networkd[1362]: cali6ab66c702d8: Gained IPv6LL Sep 12 17:49:37.759882 containerd[1714]: time="2025-09-12T17:49:37.759847385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-866bfd7cf-6xdw9,Uid:e8b27c91-3783-4996-a1f6-5caf22d96c35,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:49:37.879137 systemd-networkd[1362]: cali126bcee4c4e: Link UP Sep 12 17:49:37.880722 systemd-networkd[1362]: cali126bcee4c4e: Gained carrier Sep 12 17:49:37.902971 containerd[1714]: 2025-09-12 17:49:37.794 [INFO][5194] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4426.1.0--a--49404e8b93-k8s-calico--apiserver--866bfd7cf--6xdw9-eth0 calico-apiserver-866bfd7cf- calico-apiserver e8b27c91-3783-4996-a1f6-5caf22d96c35 831 0 2025-09-12 17:49:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:866bfd7cf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4426.1.0-a-49404e8b93 calico-apiserver-866bfd7cf-6xdw9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali126bcee4c4e [] [] }} ContainerID="9c9908d5caa358e82d62fa7408a6c4379180e365febcbcbf8f077d68a67e7776" Namespace="calico-apiserver" Pod="calico-apiserver-866bfd7cf-6xdw9" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-calico--apiserver--866bfd7cf--6xdw9-" Sep 12 17:49:37.902971 containerd[1714]: 2025-09-12 
17:49:37.794 [INFO][5194] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9c9908d5caa358e82d62fa7408a6c4379180e365febcbcbf8f077d68a67e7776" Namespace="calico-apiserver" Pod="calico-apiserver-866bfd7cf-6xdw9" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-calico--apiserver--866bfd7cf--6xdw9-eth0" Sep 12 17:49:37.902971 containerd[1714]: 2025-09-12 17:49:37.830 [INFO][5206] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9c9908d5caa358e82d62fa7408a6c4379180e365febcbcbf8f077d68a67e7776" HandleID="k8s-pod-network.9c9908d5caa358e82d62fa7408a6c4379180e365febcbcbf8f077d68a67e7776" Workload="ci--4426.1.0--a--49404e8b93-k8s-calico--apiserver--866bfd7cf--6xdw9-eth0" Sep 12 17:49:37.902971 containerd[1714]: 2025-09-12 17:49:37.830 [INFO][5206] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9c9908d5caa358e82d62fa7408a6c4379180e365febcbcbf8f077d68a67e7776" HandleID="k8s-pod-network.9c9908d5caa358e82d62fa7408a6c4379180e365febcbcbf8f077d68a67e7776" Workload="ci--4426.1.0--a--49404e8b93-k8s-calico--apiserver--866bfd7cf--6xdw9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5860), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4426.1.0-a-49404e8b93", "pod":"calico-apiserver-866bfd7cf-6xdw9", "timestamp":"2025-09-12 17:49:37.829830603 +0000 UTC"}, Hostname:"ci-4426.1.0-a-49404e8b93", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:49:37.902971 containerd[1714]: 2025-09-12 17:49:37.830 [INFO][5206] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:49:37.902971 containerd[1714]: 2025-09-12 17:49:37.830 [INFO][5206] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:49:37.902971 containerd[1714]: 2025-09-12 17:49:37.830 [INFO][5206] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4426.1.0-a-49404e8b93' Sep 12 17:49:37.902971 containerd[1714]: 2025-09-12 17:49:37.837 [INFO][5206] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9c9908d5caa358e82d62fa7408a6c4379180e365febcbcbf8f077d68a67e7776" host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:37.902971 containerd[1714]: 2025-09-12 17:49:37.842 [INFO][5206] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:37.902971 containerd[1714]: 2025-09-12 17:49:37.848 [INFO][5206] ipam/ipam.go 511: Trying affinity for 192.168.109.128/26 host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:37.902971 containerd[1714]: 2025-09-12 17:49:37.850 [INFO][5206] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.128/26 host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:37.902971 containerd[1714]: 2025-09-12 17:49:37.851 [INFO][5206] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.128/26 host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:37.902971 containerd[1714]: 2025-09-12 17:49:37.851 [INFO][5206] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.128/26 handle="k8s-pod-network.9c9908d5caa358e82d62fa7408a6c4379180e365febcbcbf8f077d68a67e7776" host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:37.902971 containerd[1714]: 2025-09-12 17:49:37.853 [INFO][5206] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9c9908d5caa358e82d62fa7408a6c4379180e365febcbcbf8f077d68a67e7776 Sep 12 17:49:37.902971 containerd[1714]: 2025-09-12 17:49:37.862 [INFO][5206] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.128/26 handle="k8s-pod-network.9c9908d5caa358e82d62fa7408a6c4379180e365febcbcbf8f077d68a67e7776" host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:37.902971 containerd[1714]: 2025-09-12 17:49:37.873 [INFO][5206] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.109.136/26] block=192.168.109.128/26 handle="k8s-pod-network.9c9908d5caa358e82d62fa7408a6c4379180e365febcbcbf8f077d68a67e7776" host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:37.902971 containerd[1714]: 2025-09-12 17:49:37.874 [INFO][5206] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.136/26] handle="k8s-pod-network.9c9908d5caa358e82d62fa7408a6c4379180e365febcbcbf8f077d68a67e7776" host="ci-4426.1.0-a-49404e8b93" Sep 12 17:49:37.902971 containerd[1714]: 2025-09-12 17:49:37.874 [INFO][5206] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:49:37.902971 containerd[1714]: 2025-09-12 17:49:37.874 [INFO][5206] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.136/26] IPv6=[] ContainerID="9c9908d5caa358e82d62fa7408a6c4379180e365febcbcbf8f077d68a67e7776" HandleID="k8s-pod-network.9c9908d5caa358e82d62fa7408a6c4379180e365febcbcbf8f077d68a67e7776" Workload="ci--4426.1.0--a--49404e8b93-k8s-calico--apiserver--866bfd7cf--6xdw9-eth0" Sep 12 17:49:37.903597 containerd[1714]: 2025-09-12 17:49:37.876 [INFO][5194] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9c9908d5caa358e82d62fa7408a6c4379180e365febcbcbf8f077d68a67e7776" Namespace="calico-apiserver" Pod="calico-apiserver-866bfd7cf-6xdw9" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-calico--apiserver--866bfd7cf--6xdw9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--a--49404e8b93-k8s-calico--apiserver--866bfd7cf--6xdw9-eth0", GenerateName:"calico-apiserver-866bfd7cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"e8b27c91-3783-4996-a1f6-5caf22d96c35", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 49, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"866bfd7cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-a-49404e8b93", ContainerID:"", Pod:"calico-apiserver-866bfd7cf-6xdw9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali126bcee4c4e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:49:37.903597 containerd[1714]: 2025-09-12 17:49:37.876 [INFO][5194] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.136/32] ContainerID="9c9908d5caa358e82d62fa7408a6c4379180e365febcbcbf8f077d68a67e7776" Namespace="calico-apiserver" Pod="calico-apiserver-866bfd7cf-6xdw9" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-calico--apiserver--866bfd7cf--6xdw9-eth0" Sep 12 17:49:37.903597 containerd[1714]: 2025-09-12 17:49:37.876 [INFO][5194] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali126bcee4c4e ContainerID="9c9908d5caa358e82d62fa7408a6c4379180e365febcbcbf8f077d68a67e7776" Namespace="calico-apiserver" Pod="calico-apiserver-866bfd7cf-6xdw9" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-calico--apiserver--866bfd7cf--6xdw9-eth0" Sep 12 17:49:37.903597 containerd[1714]: 2025-09-12 17:49:37.881 [INFO][5194] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9c9908d5caa358e82d62fa7408a6c4379180e365febcbcbf8f077d68a67e7776" Namespace="calico-apiserver" 
Pod="calico-apiserver-866bfd7cf-6xdw9" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-calico--apiserver--866bfd7cf--6xdw9-eth0" Sep 12 17:49:37.903597 containerd[1714]: 2025-09-12 17:49:37.883 [INFO][5194] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9c9908d5caa358e82d62fa7408a6c4379180e365febcbcbf8f077d68a67e7776" Namespace="calico-apiserver" Pod="calico-apiserver-866bfd7cf-6xdw9" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-calico--apiserver--866bfd7cf--6xdw9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--a--49404e8b93-k8s-calico--apiserver--866bfd7cf--6xdw9-eth0", GenerateName:"calico-apiserver-866bfd7cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"e8b27c91-3783-4996-a1f6-5caf22d96c35", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 49, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"866bfd7cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-a-49404e8b93", ContainerID:"9c9908d5caa358e82d62fa7408a6c4379180e365febcbcbf8f077d68a67e7776", Pod:"calico-apiserver-866bfd7cf-6xdw9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali126bcee4c4e", 
MAC:"8e:3f:ee:9a:c1:ca", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:49:37.903597 containerd[1714]: 2025-09-12 17:49:37.901 [INFO][5194] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9c9908d5caa358e82d62fa7408a6c4379180e365febcbcbf8f077d68a67e7776" Namespace="calico-apiserver" Pod="calico-apiserver-866bfd7cf-6xdw9" WorkloadEndpoint="ci--4426.1.0--a--49404e8b93-k8s-calico--apiserver--866bfd7cf--6xdw9-eth0" Sep 12 17:49:37.959838 containerd[1714]: time="2025-09-12T17:49:37.959771310Z" level=info msg="connecting to shim 9c9908d5caa358e82d62fa7408a6c4379180e365febcbcbf8f077d68a67e7776" address="unix:///run/containerd/s/150dce35035aece5ba3f2b57a2325a12ecc057f25a039fc89dea42e6a46ad806" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:49:37.996917 systemd[1]: Started cri-containerd-9c9908d5caa358e82d62fa7408a6c4379180e365febcbcbf8f077d68a67e7776.scope - libcontainer container 9c9908d5caa358e82d62fa7408a6c4379180e365febcbcbf8f077d68a67e7776. Sep 12 17:49:38.067067 containerd[1714]: time="2025-09-12T17:49:38.067042612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-866bfd7cf-6xdw9,Uid:e8b27c91-3783-4996-a1f6-5caf22d96c35,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9c9908d5caa358e82d62fa7408a6c4379180e365febcbcbf8f077d68a67e7776\"" Sep 12 17:49:38.541743 systemd-networkd[1362]: cali2bbafcb0322: Gained IPv6LL Sep 12 17:49:38.797773 systemd-networkd[1362]: cali457a2167c70: Gained IPv6LL Sep 12 17:49:39.077254 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3461223347.mount: Deactivated successfully. 
Sep 12 17:49:39.309715 systemd-networkd[1362]: cali126bcee4c4e: Gained IPv6LL Sep 12 17:49:39.458441 containerd[1714]: time="2025-09-12T17:49:39.458375795Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:49:39.461234 containerd[1714]: time="2025-09-12T17:49:39.461161932Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 12 17:49:39.463706 containerd[1714]: time="2025-09-12T17:49:39.463683361Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:49:39.467509 containerd[1714]: time="2025-09-12T17:49:39.467477591Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:49:39.468083 containerd[1714]: time="2025-09-12T17:49:39.468000397Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 3.026844053s" Sep 12 17:49:39.468083 containerd[1714]: time="2025-09-12T17:49:39.468025676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 12 17:49:39.469411 containerd[1714]: time="2025-09-12T17:49:39.469229076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 12 17:49:39.470022 containerd[1714]: time="2025-09-12T17:49:39.470002641Z" level=info 
msg="CreateContainer within sandbox \"2f755be201ec294af0adeb6dc158447c3ac735cf75fd30088cac4901710c7032\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 12 17:49:39.486655 containerd[1714]: time="2025-09-12T17:49:39.486180145Z" level=info msg="Container 68e42d77684b6a512e9667dbfef75d02c1b41a389d425ad461e6f94fa51ca6a0: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:49:39.503126 containerd[1714]: time="2025-09-12T17:49:39.503102009Z" level=info msg="CreateContainer within sandbox \"2f755be201ec294af0adeb6dc158447c3ac735cf75fd30088cac4901710c7032\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"68e42d77684b6a512e9667dbfef75d02c1b41a389d425ad461e6f94fa51ca6a0\"" Sep 12 17:49:39.503695 containerd[1714]: time="2025-09-12T17:49:39.503676976Z" level=info msg="StartContainer for \"68e42d77684b6a512e9667dbfef75d02c1b41a389d425ad461e6f94fa51ca6a0\"" Sep 12 17:49:39.504666 containerd[1714]: time="2025-09-12T17:49:39.504619664Z" level=info msg="connecting to shim 68e42d77684b6a512e9667dbfef75d02c1b41a389d425ad461e6f94fa51ca6a0" address="unix:///run/containerd/s/cb5575177b692c73a8f3344fef14654a0026c4ffb6aac4e4c8410a9b8ff2e581" protocol=ttrpc version=3 Sep 12 17:49:39.528760 systemd[1]: Started cri-containerd-68e42d77684b6a512e9667dbfef75d02c1b41a389d425ad461e6f94fa51ca6a0.scope - libcontainer container 68e42d77684b6a512e9667dbfef75d02c1b41a389d425ad461e6f94fa51ca6a0. 
Sep 12 17:49:39.566683 containerd[1714]: time="2025-09-12T17:49:39.566659567Z" level=info msg="StartContainer for \"68e42d77684b6a512e9667dbfef75d02c1b41a389d425ad461e6f94fa51ca6a0\" returns successfully" Sep 12 17:49:39.950852 kubelet[3174]: I0912 17:49:39.950811 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-h8gbr" podStartSLOduration=27.534545499 podStartE2EDuration="30.95079369s" podCreationTimestamp="2025-09-12 17:49:09 +0000 UTC" firstStartedPulling="2025-09-12 17:49:36.052495526 +0000 UTC m=+43.371335584" lastFinishedPulling="2025-09-12 17:49:39.468743728 +0000 UTC m=+46.787583775" observedRunningTime="2025-09-12 17:49:39.950149518 +0000 UTC m=+47.268989575" watchObservedRunningTime="2025-09-12 17:49:39.95079369 +0000 UTC m=+47.269633748" Sep 12 17:49:40.008057 containerd[1714]: time="2025-09-12T17:49:40.008032735Z" level=info msg="TaskExit event in podsandbox handler container_id:\"68e42d77684b6a512e9667dbfef75d02c1b41a389d425ad461e6f94fa51ca6a0\" id:\"48004dca1723f92cc1c8af991676548f0ce9558d181f7e617817be78bb902810\" pid:5331 exit_status:1 exited_at:{seconds:1757699380 nanos:7820623}" Sep 12 17:49:41.001652 containerd[1714]: time="2025-09-12T17:49:41.001599903Z" level=info msg="TaskExit event in podsandbox handler container_id:\"68e42d77684b6a512e9667dbfef75d02c1b41a389d425ad461e6f94fa51ca6a0\" id:\"0de687d815876951d69d1666bafb14daffff1ffbd6f146a27e8c4f0046da81ce\" pid:5355 exit_status:1 exited_at:{seconds:1757699381 nanos:1447092}" Sep 12 17:49:41.155339 containerd[1714]: time="2025-09-12T17:49:41.155313885Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:49:41.157774 containerd[1714]: time="2025-09-12T17:49:41.157751966Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 12 17:49:41.161558 
containerd[1714]: time="2025-09-12T17:49:41.161437052Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:49:41.165664 containerd[1714]: time="2025-09-12T17:49:41.165071680Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:49:41.165664 containerd[1714]: time="2025-09-12T17:49:41.165604341Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 1.696347184s" Sep 12 17:49:41.165664 containerd[1714]: time="2025-09-12T17:49:41.165626672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 12 17:49:41.166859 containerd[1714]: time="2025-09-12T17:49:41.166841431Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 17:49:41.168398 containerd[1714]: time="2025-09-12T17:49:41.168350415Z" level=info msg="CreateContainer within sandbox \"354d83ec7036ef74b9eb75d804ca088ca86483942f9ca5450366e598ecd39ba5\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 12 17:49:41.186654 containerd[1714]: time="2025-09-12T17:49:41.186436258Z" level=info msg="Container b41dde28b4a563f9367f102f7aa68f5955d952bc8bf267a834ac3441a1abd348: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:49:41.205039 containerd[1714]: 
time="2025-09-12T17:49:41.204987679Z" level=info msg="CreateContainer within sandbox \"354d83ec7036ef74b9eb75d804ca088ca86483942f9ca5450366e598ecd39ba5\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b41dde28b4a563f9367f102f7aa68f5955d952bc8bf267a834ac3441a1abd348\"" Sep 12 17:49:41.209619 containerd[1714]: time="2025-09-12T17:49:41.209592288Z" level=info msg="StartContainer for \"b41dde28b4a563f9367f102f7aa68f5955d952bc8bf267a834ac3441a1abd348\"" Sep 12 17:49:41.211216 containerd[1714]: time="2025-09-12T17:49:41.211195589Z" level=info msg="connecting to shim b41dde28b4a563f9367f102f7aa68f5955d952bc8bf267a834ac3441a1abd348" address="unix:///run/containerd/s/e223ac16f22b7506d12030fdfd643d76d2786e4003371affe2e8f396683f6f65" protocol=ttrpc version=3 Sep 12 17:49:41.229781 systemd[1]: Started cri-containerd-b41dde28b4a563f9367f102f7aa68f5955d952bc8bf267a834ac3441a1abd348.scope - libcontainer container b41dde28b4a563f9367f102f7aa68f5955d952bc8bf267a834ac3441a1abd348. 
Sep 12 17:49:41.257569 containerd[1714]: time="2025-09-12T17:49:41.257467279Z" level=info msg="StartContainer for \"b41dde28b4a563f9367f102f7aa68f5955d952bc8bf267a834ac3441a1abd348\" returns successfully" Sep 12 17:49:41.849272 kubelet[3174]: I0912 17:49:41.849250 3174 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 12 17:49:41.849272 kubelet[3174]: I0912 17:49:41.849275 3174 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 12 17:49:41.961873 kubelet[3174]: I0912 17:49:41.961531 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-s6pvs" podStartSLOduration=23.746243514 podStartE2EDuration="31.961516984s" podCreationTimestamp="2025-09-12 17:49:10 +0000 UTC" firstStartedPulling="2025-09-12 17:49:32.951485678 +0000 UTC m=+40.270325728" lastFinishedPulling="2025-09-12 17:49:41.16675914 +0000 UTC m=+48.485599198" observedRunningTime="2025-09-12 17:49:41.961000514 +0000 UTC m=+49.279840572" watchObservedRunningTime="2025-09-12 17:49:41.961516984 +0000 UTC m=+49.280357042" Sep 12 17:49:42.011369 containerd[1714]: time="2025-09-12T17:49:42.011341554Z" level=info msg="TaskExit event in podsandbox handler container_id:\"68e42d77684b6a512e9667dbfef75d02c1b41a389d425ad461e6f94fa51ca6a0\" id:\"f2c8cb2c5884e57aac8225b1f8e5e8809245b1eef98ce752e60b8614b8086fc5\" pid:5420 exit_status:1 exited_at:{seconds:1757699382 nanos:10974383}" Sep 12 17:49:43.793518 containerd[1714]: time="2025-09-12T17:49:43.793477290Z" level=info msg="TaskExit event in podsandbox handler container_id:\"98d9ce3227bcdbde6b4a764c3a51ece4fe45d5cce54b2ce3099a88e679f02a49\" id:\"e0579a54121227a67052b01dfa8d9b481978807d31f36a28d48c05f444f54448\" pid:5448 exited_at:{seconds:1757699383 nanos:793286338}" Sep 12 17:49:44.755170 
containerd[1714]: time="2025-09-12T17:49:44.755133561Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:49:44.758774 containerd[1714]: time="2025-09-12T17:49:44.758664589Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 12 17:49:44.761647 containerd[1714]: time="2025-09-12T17:49:44.761606676Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:49:44.767573 containerd[1714]: time="2025-09-12T17:49:44.767124136Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:49:44.767573 containerd[1714]: time="2025-09-12T17:49:44.767491042Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.600556398s" Sep 12 17:49:44.767573 containerd[1714]: time="2025-09-12T17:49:44.767512075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 12 17:49:44.768865 containerd[1714]: time="2025-09-12T17:49:44.768750765Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:49:44.780916 containerd[1714]: time="2025-09-12T17:49:44.780894594Z" level=info msg="CreateContainer within sandbox 
\"10358bfcc08ac2c420d6465e53ee847464aa62c71f773ffccdb02fe2bb4480cd\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 12 17:49:44.799063 containerd[1714]: time="2025-09-12T17:49:44.798890235Z" level=info msg="Container 794e8ada7c172943fe8ae8680dab249a39a32ba5f06d80bdac279b1f8e40bcbe: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:49:44.818392 containerd[1714]: time="2025-09-12T17:49:44.818368471Z" level=info msg="CreateContainer within sandbox \"10358bfcc08ac2c420d6465e53ee847464aa62c71f773ffccdb02fe2bb4480cd\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"794e8ada7c172943fe8ae8680dab249a39a32ba5f06d80bdac279b1f8e40bcbe\"" Sep 12 17:49:44.818809 containerd[1714]: time="2025-09-12T17:49:44.818782438Z" level=info msg="StartContainer for \"794e8ada7c172943fe8ae8680dab249a39a32ba5f06d80bdac279b1f8e40bcbe\"" Sep 12 17:49:44.819786 containerd[1714]: time="2025-09-12T17:49:44.819763311Z" level=info msg="connecting to shim 794e8ada7c172943fe8ae8680dab249a39a32ba5f06d80bdac279b1f8e40bcbe" address="unix:///run/containerd/s/f77c429792f16b0a15b76963ba0e2bc13a2db93404bc8a10020965122eed18be" protocol=ttrpc version=3 Sep 12 17:49:44.839767 systemd[1]: Started cri-containerd-794e8ada7c172943fe8ae8680dab249a39a32ba5f06d80bdac279b1f8e40bcbe.scope - libcontainer container 794e8ada7c172943fe8ae8680dab249a39a32ba5f06d80bdac279b1f8e40bcbe. 
Sep 12 17:49:44.879329 containerd[1714]: time="2025-09-12T17:49:44.879272686Z" level=info msg="StartContainer for \"794e8ada7c172943fe8ae8680dab249a39a32ba5f06d80bdac279b1f8e40bcbe\" returns successfully" Sep 12 17:49:44.968672 kubelet[3174]: I0912 17:49:44.966482 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-54d8b7455c-46626" podStartSLOduration=27.218791653 podStartE2EDuration="34.9664674s" podCreationTimestamp="2025-09-12 17:49:10 +0000 UTC" firstStartedPulling="2025-09-12 17:49:37.020537834 +0000 UTC m=+44.339377884" lastFinishedPulling="2025-09-12 17:49:44.768213586 +0000 UTC m=+52.087053631" observedRunningTime="2025-09-12 17:49:44.966014513 +0000 UTC m=+52.284854573" watchObservedRunningTime="2025-09-12 17:49:44.9664674 +0000 UTC m=+52.285307457" Sep 12 17:49:44.986887 containerd[1714]: time="2025-09-12T17:49:44.986858774Z" level=info msg="TaskExit event in podsandbox handler container_id:\"794e8ada7c172943fe8ae8680dab249a39a32ba5f06d80bdac279b1f8e40bcbe\" id:\"adecad415bb8c2183eef70d4f4f3a9ec0c8f33004312ed99404ce0f42233f229\" pid:5522 exited_at:{seconds:1757699384 nanos:986516895}" Sep 12 17:49:47.634823 containerd[1714]: time="2025-09-12T17:49:47.634783685Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:49:47.636954 containerd[1714]: time="2025-09-12T17:49:47.636870553Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 12 17:49:47.639883 containerd[1714]: time="2025-09-12T17:49:47.639713019Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:49:47.645013 containerd[1714]: time="2025-09-12T17:49:47.644966877Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:49:47.645541 containerd[1714]: time="2025-09-12T17:49:47.645382377Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 2.876606936s" Sep 12 17:49:47.645541 containerd[1714]: time="2025-09-12T17:49:47.645408715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:49:47.646538 containerd[1714]: time="2025-09-12T17:49:47.646184036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:49:47.647876 containerd[1714]: time="2025-09-12T17:49:47.647849500Z" level=info msg="CreateContainer within sandbox \"6e9d51f54e8f911d3d8fb61131788e7a9767ec6f7445d26eb717e2c1362e17ad\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:49:47.671731 containerd[1714]: time="2025-09-12T17:49:47.671706957Z" level=info msg="Container fd2ce641a7a702cde35d049ce6143eb7b10dc7db51ab3f83b52db727eaf415c2: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:49:47.692219 containerd[1714]: time="2025-09-12T17:49:47.692194529Z" level=info msg="CreateContainer within sandbox \"6e9d51f54e8f911d3d8fb61131788e7a9767ec6f7445d26eb717e2c1362e17ad\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fd2ce641a7a702cde35d049ce6143eb7b10dc7db51ab3f83b52db727eaf415c2\"" Sep 12 17:49:47.692814 containerd[1714]: time="2025-09-12T17:49:47.692672024Z" level=info msg="StartContainer for 
\"fd2ce641a7a702cde35d049ce6143eb7b10dc7db51ab3f83b52db727eaf415c2\"" Sep 12 17:49:47.694462 containerd[1714]: time="2025-09-12T17:49:47.694438211Z" level=info msg="connecting to shim fd2ce641a7a702cde35d049ce6143eb7b10dc7db51ab3f83b52db727eaf415c2" address="unix:///run/containerd/s/00f80ffe0b9e09288d1fec955d6e13cc78206c37ef70029c132a85af38505a45" protocol=ttrpc version=3 Sep 12 17:49:47.716779 systemd[1]: Started cri-containerd-fd2ce641a7a702cde35d049ce6143eb7b10dc7db51ab3f83b52db727eaf415c2.scope - libcontainer container fd2ce641a7a702cde35d049ce6143eb7b10dc7db51ab3f83b52db727eaf415c2. Sep 12 17:49:47.759266 containerd[1714]: time="2025-09-12T17:49:47.759242734Z" level=info msg="StartContainer for \"fd2ce641a7a702cde35d049ce6143eb7b10dc7db51ab3f83b52db727eaf415c2\" returns successfully" Sep 12 17:49:47.970789 kubelet[3174]: I0912 17:49:47.970484 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-866bfd7cf-22shl" podStartSLOduration=30.428197392 podStartE2EDuration="40.970468982s" podCreationTimestamp="2025-09-12 17:49:07 +0000 UTC" firstStartedPulling="2025-09-12 17:49:37.103829068 +0000 UTC m=+44.422669123" lastFinishedPulling="2025-09-12 17:49:47.646100662 +0000 UTC m=+54.964940713" observedRunningTime="2025-09-12 17:49:47.969563111 +0000 UTC m=+55.288403166" watchObservedRunningTime="2025-09-12 17:49:47.970468982 +0000 UTC m=+55.289309037" Sep 12 17:49:47.978712 containerd[1714]: time="2025-09-12T17:49:47.978687382Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:49:47.981809 containerd[1714]: time="2025-09-12T17:49:47.981746934Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 17:49:47.984534 containerd[1714]: time="2025-09-12T17:49:47.984506269Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with 
image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 338.297225ms" Sep 12 17:49:47.984670 containerd[1714]: time="2025-09-12T17:49:47.984533731Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:49:47.988081 containerd[1714]: time="2025-09-12T17:49:47.988033601Z" level=info msg="CreateContainer within sandbox \"9c9908d5caa358e82d62fa7408a6c4379180e365febcbcbf8f077d68a67e7776\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:49:48.010916 containerd[1714]: time="2025-09-12T17:49:48.010833046Z" level=info msg="Container 9e531b3a35cd3c8d65b06a480b1183edb513d0403813e67a00eb1514e3e3b761: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:49:48.026189 containerd[1714]: time="2025-09-12T17:49:48.026113303Z" level=info msg="CreateContainer within sandbox \"9c9908d5caa358e82d62fa7408a6c4379180e365febcbcbf8f077d68a67e7776\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9e531b3a35cd3c8d65b06a480b1183edb513d0403813e67a00eb1514e3e3b761\"" Sep 12 17:49:48.026863 containerd[1714]: time="2025-09-12T17:49:48.026842860Z" level=info msg="StartContainer for \"9e531b3a35cd3c8d65b06a480b1183edb513d0403813e67a00eb1514e3e3b761\"" Sep 12 17:49:48.028141 containerd[1714]: time="2025-09-12T17:49:48.028115117Z" level=info msg="connecting to shim 9e531b3a35cd3c8d65b06a480b1183edb513d0403813e67a00eb1514e3e3b761" address="unix:///run/containerd/s/150dce35035aece5ba3f2b57a2325a12ecc057f25a039fc89dea42e6a46ad806" protocol=ttrpc version=3 Sep 12 17:49:48.052756 systemd[1]: Started cri-containerd-9e531b3a35cd3c8d65b06a480b1183edb513d0403813e67a00eb1514e3e3b761.scope - 
libcontainer container 9e531b3a35cd3c8d65b06a480b1183edb513d0403813e67a00eb1514e3e3b761. Sep 12 17:49:48.108935 containerd[1714]: time="2025-09-12T17:49:48.108908561Z" level=info msg="StartContainer for \"9e531b3a35cd3c8d65b06a480b1183edb513d0403813e67a00eb1514e3e3b761\" returns successfully" Sep 12 17:49:49.965479 kubelet[3174]: I0912 17:49:49.965442 3174 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:49:54.484239 kubelet[3174]: I0912 17:49:54.484204 3174 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:49:54.508744 kubelet[3174]: I0912 17:49:54.508460 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-866bfd7cf-6xdw9" podStartSLOduration=37.592451903 podStartE2EDuration="47.508445103s" podCreationTimestamp="2025-09-12 17:49:07 +0000 UTC" firstStartedPulling="2025-09-12 17:49:38.069712542 +0000 UTC m=+45.388552590" lastFinishedPulling="2025-09-12 17:49:47.985705736 +0000 UTC m=+55.304545790" observedRunningTime="2025-09-12 17:49:48.982342741 +0000 UTC m=+56.301182797" watchObservedRunningTime="2025-09-12 17:49:54.508445103 +0000 UTC m=+61.827285192" Sep 12 17:49:56.969310 containerd[1714]: time="2025-09-12T17:49:56.969206612Z" level=info msg="TaskExit event in podsandbox handler container_id:\"794e8ada7c172943fe8ae8680dab249a39a32ba5f06d80bdac279b1f8e40bcbe\" id:\"72c8d056b17d658b4ef90786e545da617c4894cff85095701428dabd75988ff0\" pid:5642 exited_at:{seconds:1757699396 nanos:968815829}" Sep 12 17:50:05.442354 containerd[1714]: time="2025-09-12T17:50:05.442292986Z" level=info msg="TaskExit event in podsandbox handler container_id:\"68e42d77684b6a512e9667dbfef75d02c1b41a389d425ad461e6f94fa51ca6a0\" id:\"7feeda7edd94a11d05d44d79c8906b89a58cbd6a0a9b32e14fa4284f287380e1\" pid:5669 exited_at:{seconds:1757699405 nanos:442059248}" Sep 12 17:50:07.205014 containerd[1714]: time="2025-09-12T17:50:07.204973194Z" level=info msg="TaskExit event in 
podsandbox handler container_id:\"794e8ada7c172943fe8ae8680dab249a39a32ba5f06d80bdac279b1f8e40bcbe\" id:\"85c1fe5aebefa325740937bc610dddb5811b2c66dba2f2a0d65c8e1c00929bcb\" pid:5693 exited_at:{seconds:1757699407 nanos:204511895}" Sep 12 17:50:13.825604 containerd[1714]: time="2025-09-12T17:50:13.825544803Z" level=info msg="TaskExit event in podsandbox handler container_id:\"98d9ce3227bcdbde6b4a764c3a51ece4fe45d5cce54b2ce3099a88e679f02a49\" id:\"31cf37f4380d3722e229f56933a16a9d7643759636603166d65f410872fec4c8\" pid:5720 exited_at:{seconds:1757699413 nanos:824669848}" Sep 12 17:50:16.831801 containerd[1714]: time="2025-09-12T17:50:16.831765255Z" level=info msg="TaskExit event in podsandbox handler container_id:\"68e42d77684b6a512e9667dbfef75d02c1b41a389d425ad461e6f94fa51ca6a0\" id:\"c419d3cc5145b89e3766ff7fbd1e11965c37b15d64b8f4805ead74e85bc28565\" pid:5745 exited_at:{seconds:1757699416 nanos:831374121}" Sep 12 17:50:35.435969 containerd[1714]: time="2025-09-12T17:50:35.435928041Z" level=info msg="TaskExit event in podsandbox handler container_id:\"68e42d77684b6a512e9667dbfef75d02c1b41a389d425ad461e6f94fa51ca6a0\" id:\"0f487578c4d6376eef41f8c0ba63b5ebf79c254dda87b4ac84741c8d86f4d90a\" pid:5775 exited_at:{seconds:1757699435 nanos:435695971}" Sep 12 17:50:37.199810 containerd[1714]: time="2025-09-12T17:50:37.199769586Z" level=info msg="TaskExit event in podsandbox handler container_id:\"794e8ada7c172943fe8ae8680dab249a39a32ba5f06d80bdac279b1f8e40bcbe\" id:\"ef0105386749e5c3c96110c773f66bc41391d37227ac265db50c7c4d65b8cfb7\" pid:5799 exited_at:{seconds:1757699437 nanos:199417123}" Sep 12 17:50:43.158781 systemd[1]: Started sshd@7-10.200.8.42:22-10.200.16.10:60876.service - OpenSSH per-connection server daemon (10.200.16.10:60876). 
Sep 12 17:50:43.784702 sshd[5812]: Accepted publickey for core from 10.200.16.10 port 60876 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg Sep 12 17:50:43.786775 sshd-session[5812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:50:43.792230 systemd-logind[1701]: New session 10 of user core. Sep 12 17:50:43.796816 containerd[1714]: time="2025-09-12T17:50:43.796770824Z" level=info msg="TaskExit event in podsandbox handler container_id:\"98d9ce3227bcdbde6b4a764c3a51ece4fe45d5cce54b2ce3099a88e679f02a49\" id:\"a46acc224201ec7630917d96e5cadb2a8fbd9cd64aff9ddc4f0d7bace5752d93\" pid:5827 exited_at:{seconds:1757699443 nanos:796517672}" Sep 12 17:50:43.797848 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 17:50:44.275539 sshd[5839]: Connection closed by 10.200.16.10 port 60876 Sep 12 17:50:44.275927 sshd-session[5812]: pam_unix(sshd:session): session closed for user core Sep 12 17:50:44.278593 systemd[1]: sshd@7-10.200.8.42:22-10.200.16.10:60876.service: Deactivated successfully. Sep 12 17:50:44.280219 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 17:50:44.281156 systemd-logind[1701]: Session 10 logged out. Waiting for processes to exit. Sep 12 17:50:44.282123 systemd-logind[1701]: Removed session 10. Sep 12 17:50:49.386827 systemd[1]: Started sshd@8-10.200.8.42:22-10.200.16.10:60878.service - OpenSSH per-connection server daemon (10.200.16.10:60878). Sep 12 17:50:50.024989 sshd[5852]: Accepted publickey for core from 10.200.16.10 port 60878 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg Sep 12 17:50:50.025994 sshd-session[5852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:50:50.029934 systemd-logind[1701]: New session 11 of user core. Sep 12 17:50:50.032907 systemd[1]: Started session-11.scope - Session 11 of User core. 
Sep 12 17:50:50.508779 sshd[5855]: Connection closed by 10.200.16.10 port 60878 Sep 12 17:50:50.509124 sshd-session[5852]: pam_unix(sshd:session): session closed for user core Sep 12 17:50:50.511321 systemd[1]: sshd@8-10.200.8.42:22-10.200.16.10:60878.service: Deactivated successfully. Sep 12 17:50:50.512935 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 17:50:50.514507 systemd-logind[1701]: Session 11 logged out. Waiting for processes to exit. Sep 12 17:50:50.515224 systemd-logind[1701]: Removed session 11. Sep 12 17:50:55.620389 systemd[1]: Started sshd@9-10.200.8.42:22-10.200.16.10:42236.service - OpenSSH per-connection server daemon (10.200.16.10:42236). Sep 12 17:50:56.243814 sshd[5875]: Accepted publickey for core from 10.200.16.10 port 42236 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg Sep 12 17:50:56.244673 sshd-session[5875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:50:56.248079 systemd-logind[1701]: New session 12 of user core. Sep 12 17:50:56.253780 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 17:50:56.725496 sshd[5878]: Connection closed by 10.200.16.10 port 42236 Sep 12 17:50:56.726583 sshd-session[5875]: pam_unix(sshd:session): session closed for user core Sep 12 17:50:56.729145 systemd-logind[1701]: Session 12 logged out. Waiting for processes to exit. Sep 12 17:50:56.729346 systemd[1]: sshd@9-10.200.8.42:22-10.200.16.10:42236.service: Deactivated successfully. Sep 12 17:50:56.731215 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 17:50:56.732733 systemd-logind[1701]: Removed session 12. Sep 12 17:50:56.834872 systemd[1]: Started sshd@10-10.200.8.42:22-10.200.16.10:42248.service - OpenSSH per-connection server daemon (10.200.16.10:42248). 
Sep 12 17:50:56.970942 containerd[1714]: time="2025-09-12T17:50:56.970896221Z" level=info msg="TaskExit event in podsandbox handler container_id:\"794e8ada7c172943fe8ae8680dab249a39a32ba5f06d80bdac279b1f8e40bcbe\" id:\"843eb0642df5b1529bf122aa1b6b6c4bbdbe91041405a24e3d243f049bf06cbe\" pid:5908 exited_at:{seconds:1757699456 nanos:970707440}"
Sep 12 17:50:57.463120 sshd[5891]: Accepted publickey for core from 10.200.16.10 port 42248 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg
Sep 12 17:50:57.463935 sshd-session[5891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:50:57.467481 systemd-logind[1701]: New session 13 of user core.
Sep 12 17:50:57.472739 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 12 17:50:57.970907 sshd[5919]: Connection closed by 10.200.16.10 port 42248
Sep 12 17:50:57.972072 sshd-session[5891]: pam_unix(sshd:session): session closed for user core
Sep 12 17:50:57.974603 systemd-logind[1701]: Session 13 logged out. Waiting for processes to exit.
Sep 12 17:50:57.975288 systemd[1]: sshd@10-10.200.8.42:22-10.200.16.10:42248.service: Deactivated successfully.
Sep 12 17:50:57.976534 systemd[1]: session-13.scope: Deactivated successfully.
Sep 12 17:50:57.978054 systemd-logind[1701]: Removed session 13.
Sep 12 17:50:58.080952 systemd[1]: Started sshd@11-10.200.8.42:22-10.200.16.10:42262.service - OpenSSH per-connection server daemon (10.200.16.10:42262).
Sep 12 17:50:58.705469 sshd[5931]: Accepted publickey for core from 10.200.16.10 port 42262 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg
Sep 12 17:50:58.706311 sshd-session[5931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:50:58.710452 systemd-logind[1701]: New session 14 of user core.
Sep 12 17:50:58.713771 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 12 17:50:59.187785 sshd[5934]: Connection closed by 10.200.16.10 port 42262
Sep 12 17:50:59.188768 sshd-session[5931]: pam_unix(sshd:session): session closed for user core
Sep 12 17:50:59.191307 systemd-logind[1701]: Session 14 logged out. Waiting for processes to exit.
Sep 12 17:50:59.191489 systemd[1]: sshd@11-10.200.8.42:22-10.200.16.10:42262.service: Deactivated successfully.
Sep 12 17:50:59.193071 systemd[1]: session-14.scope: Deactivated successfully.
Sep 12 17:50:59.194434 systemd-logind[1701]: Removed session 14.
Sep 12 17:51:04.304196 systemd[1]: Started sshd@12-10.200.8.42:22-10.200.16.10:42788.service - OpenSSH per-connection server daemon (10.200.16.10:42788).
Sep 12 17:51:04.925052 sshd[5952]: Accepted publickey for core from 10.200.16.10 port 42788 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg
Sep 12 17:51:04.925903 sshd-session[5952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:51:04.929702 systemd-logind[1701]: New session 15 of user core.
Sep 12 17:51:04.932773 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 12 17:51:05.415938 sshd[5962]: Connection closed by 10.200.16.10 port 42788
Sep 12 17:51:05.416760 sshd-session[5952]: pam_unix(sshd:session): session closed for user core
Sep 12 17:51:05.420076 systemd-logind[1701]: Session 15 logged out. Waiting for processes to exit.
Sep 12 17:51:05.420162 systemd[1]: sshd@12-10.200.8.42:22-10.200.16.10:42788.service: Deactivated successfully.
Sep 12 17:51:05.423148 systemd[1]: session-15.scope: Deactivated successfully.
Sep 12 17:51:05.425335 systemd-logind[1701]: Removed session 15.
Sep 12 17:51:05.443196 containerd[1714]: time="2025-09-12T17:51:05.443160438Z" level=info msg="TaskExit event in podsandbox handler container_id:\"68e42d77684b6a512e9667dbfef75d02c1b41a389d425ad461e6f94fa51ca6a0\" id:\"ed6f69573cfe63f30ad7f355df894a596ede45f049b1ef305ade0578d9dcffe2\" pid:5983 exited_at:{seconds:1757699465 nanos:442940463}"
Sep 12 17:51:07.196062 containerd[1714]: time="2025-09-12T17:51:07.196027763Z" level=info msg="TaskExit event in podsandbox handler container_id:\"794e8ada7c172943fe8ae8680dab249a39a32ba5f06d80bdac279b1f8e40bcbe\" id:\"a0a8fc292d0b693da044a05f002c5c2c0e7442a19855893a417a67dba5c788ca\" pid:6008 exited_at:{seconds:1757699467 nanos:195757799}"
Sep 12 17:51:10.529312 systemd[1]: Started sshd@13-10.200.8.42:22-10.200.16.10:38962.service - OpenSSH per-connection server daemon (10.200.16.10:38962).
Sep 12 17:51:11.156667 sshd[6032]: Accepted publickey for core from 10.200.16.10 port 38962 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg
Sep 12 17:51:11.157513 sshd-session[6032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:51:11.160902 systemd-logind[1701]: New session 16 of user core.
Sep 12 17:51:11.167752 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 12 17:51:11.640054 sshd[6035]: Connection closed by 10.200.16.10 port 38962
Sep 12 17:51:11.640389 sshd-session[6032]: pam_unix(sshd:session): session closed for user core
Sep 12 17:51:11.642929 systemd[1]: sshd@13-10.200.8.42:22-10.200.16.10:38962.service: Deactivated successfully.
Sep 12 17:51:11.644320 systemd[1]: session-16.scope: Deactivated successfully.
Sep 12 17:51:11.645134 systemd-logind[1701]: Session 16 logged out. Waiting for processes to exit.
Sep 12 17:51:11.646225 systemd-logind[1701]: Removed session 16.
Sep 12 17:51:13.798188 containerd[1714]: time="2025-09-12T17:51:13.798149397Z" level=info msg="TaskExit event in podsandbox handler container_id:\"98d9ce3227bcdbde6b4a764c3a51ece4fe45d5cce54b2ce3099a88e679f02a49\" id:\"c4cfdc60c8bca3a2a9c2c70fd28f9e6b87cb29be8ba953dd7ac8f53dd1c8c9a9\" pid:6057 exited_at:{seconds:1757699473 nanos:797923054}"
Sep 12 17:51:16.751540 systemd[1]: Started sshd@14-10.200.8.42:22-10.200.16.10:38976.service - OpenSSH per-connection server daemon (10.200.16.10:38976).
Sep 12 17:51:16.825995 containerd[1714]: time="2025-09-12T17:51:16.825969401Z" level=info msg="TaskExit event in podsandbox handler container_id:\"68e42d77684b6a512e9667dbfef75d02c1b41a389d425ad461e6f94fa51ca6a0\" id:\"fe5c85fd707f9d18db31a95d9e7b8b7dca0dc1b162368861d602f8d8aab57d86\" pid:6085 exited_at:{seconds:1757699476 nanos:825554001}"
Sep 12 17:51:17.385494 sshd[6069]: Accepted publickey for core from 10.200.16.10 port 38976 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg
Sep 12 17:51:17.388301 sshd-session[6069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:51:17.393717 systemd-logind[1701]: New session 17 of user core.
Sep 12 17:51:17.403760 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 12 17:51:17.871548 sshd[6096]: Connection closed by 10.200.16.10 port 38976
Sep 12 17:51:17.871686 sshd-session[6069]: pam_unix(sshd:session): session closed for user core
Sep 12 17:51:17.874858 systemd[1]: sshd@14-10.200.8.42:22-10.200.16.10:38976.service: Deactivated successfully.
Sep 12 17:51:17.876143 systemd[1]: session-17.scope: Deactivated successfully.
Sep 12 17:51:17.877501 systemd-logind[1701]: Session 17 logged out. Waiting for processes to exit.
Sep 12 17:51:17.878319 systemd-logind[1701]: Removed session 17.
Sep 12 17:51:17.981542 systemd[1]: Started sshd@15-10.200.8.42:22-10.200.16.10:38992.service - OpenSSH per-connection server daemon (10.200.16.10:38992).
Sep 12 17:51:18.611406 sshd[6109]: Accepted publickey for core from 10.200.16.10 port 38992 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg
Sep 12 17:51:18.612621 sshd-session[6109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:51:18.616810 systemd-logind[1701]: New session 18 of user core.
Sep 12 17:51:18.621773 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 12 17:51:19.223663 sshd[6112]: Connection closed by 10.200.16.10 port 38992
Sep 12 17:51:19.224102 sshd-session[6109]: pam_unix(sshd:session): session closed for user core
Sep 12 17:51:19.227030 systemd[1]: sshd@15-10.200.8.42:22-10.200.16.10:38992.service: Deactivated successfully.
Sep 12 17:51:19.228575 systemd[1]: session-18.scope: Deactivated successfully.
Sep 12 17:51:19.229258 systemd-logind[1701]: Session 18 logged out. Waiting for processes to exit.
Sep 12 17:51:19.230701 systemd-logind[1701]: Removed session 18.
Sep 12 17:51:19.334599 systemd[1]: Started sshd@16-10.200.8.42:22-10.200.16.10:38996.service - OpenSSH per-connection server daemon (10.200.16.10:38996).
Sep 12 17:51:19.959738 sshd[6121]: Accepted publickey for core from 10.200.16.10 port 38996 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg
Sep 12 17:51:19.960510 sshd-session[6121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:51:19.964080 systemd-logind[1701]: New session 19 of user core.
Sep 12 17:51:19.969776 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 12 17:51:21.763915 sshd[6124]: Connection closed by 10.200.16.10 port 38996
Sep 12 17:51:21.764377 sshd-session[6121]: pam_unix(sshd:session): session closed for user core
Sep 12 17:51:21.767850 systemd[1]: sshd@16-10.200.8.42:22-10.200.16.10:38996.service: Deactivated successfully.
Sep 12 17:51:21.768023 systemd-logind[1701]: Session 19 logged out. Waiting for processes to exit.
Sep 12 17:51:21.769993 systemd[1]: session-19.scope: Deactivated successfully.
Sep 12 17:51:21.770175 systemd[1]: session-19.scope: Consumed 380ms CPU time, 73.4M memory peak.
Sep 12 17:51:21.772319 systemd-logind[1701]: Removed session 19.
Sep 12 17:51:21.877053 systemd[1]: Started sshd@17-10.200.8.42:22-10.200.16.10:58410.service - OpenSSH per-connection server daemon (10.200.16.10:58410).
Sep 12 17:51:22.498321 sshd[6141]: Accepted publickey for core from 10.200.16.10 port 58410 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg
Sep 12 17:51:22.499124 sshd-session[6141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:51:22.502904 systemd-logind[1701]: New session 20 of user core.
Sep 12 17:51:22.513750 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 12 17:51:23.066334 sshd[6144]: Connection closed by 10.200.16.10 port 58410
Sep 12 17:51:23.066757 sshd-session[6141]: pam_unix(sshd:session): session closed for user core
Sep 12 17:51:23.069524 systemd[1]: sshd@17-10.200.8.42:22-10.200.16.10:58410.service: Deactivated successfully.
Sep 12 17:51:23.071187 systemd[1]: session-20.scope: Deactivated successfully.
Sep 12 17:51:23.071865 systemd-logind[1701]: Session 20 logged out. Waiting for processes to exit.
Sep 12 17:51:23.072948 systemd-logind[1701]: Removed session 20.
Sep 12 17:51:23.178170 systemd[1]: Started sshd@18-10.200.8.42:22-10.200.16.10:58412.service - OpenSSH per-connection server daemon (10.200.16.10:58412).
Sep 12 17:51:23.804176 sshd[6154]: Accepted publickey for core from 10.200.16.10 port 58412 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg
Sep 12 17:51:23.804976 sshd-session[6154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:51:23.808495 systemd-logind[1701]: New session 21 of user core.
Sep 12 17:51:23.812750 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 12 17:51:24.286895 sshd[6157]: Connection closed by 10.200.16.10 port 58412
Sep 12 17:51:24.287330 sshd-session[6154]: pam_unix(sshd:session): session closed for user core
Sep 12 17:51:24.289826 systemd[1]: sshd@18-10.200.8.42:22-10.200.16.10:58412.service: Deactivated successfully.
Sep 12 17:51:24.291242 systemd[1]: session-21.scope: Deactivated successfully.
Sep 12 17:51:24.292335 systemd-logind[1701]: Session 21 logged out. Waiting for processes to exit.
Sep 12 17:51:24.293462 systemd-logind[1701]: Removed session 21.
Sep 12 17:51:29.398300 systemd[1]: Started sshd@19-10.200.8.42:22-10.200.16.10:58426.service - OpenSSH per-connection server daemon (10.200.16.10:58426).
Sep 12 17:51:30.024037 sshd[6174]: Accepted publickey for core from 10.200.16.10 port 58426 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg
Sep 12 17:51:30.025499 sshd-session[6174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:51:30.029612 systemd-logind[1701]: New session 22 of user core.
Sep 12 17:51:30.040794 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 12 17:51:30.543304 sshd[6177]: Connection closed by 10.200.16.10 port 58426
Sep 12 17:51:30.543831 sshd-session[6174]: pam_unix(sshd:session): session closed for user core
Sep 12 17:51:30.550173 systemd[1]: sshd@19-10.200.8.42:22-10.200.16.10:58426.service: Deactivated successfully.
Sep 12 17:51:30.552495 systemd[1]: session-22.scope: Deactivated successfully.
Sep 12 17:51:30.553546 systemd-logind[1701]: Session 22 logged out. Waiting for processes to exit.
Sep 12 17:51:30.556339 systemd-logind[1701]: Removed session 22.
Sep 12 17:51:35.433131 containerd[1714]: time="2025-09-12T17:51:35.433056173Z" level=info msg="TaskExit event in podsandbox handler container_id:\"68e42d77684b6a512e9667dbfef75d02c1b41a389d425ad461e6f94fa51ca6a0\" id:\"d85b7177eac3cfa0982194ad485286dccb5d231b35b3b73892e1cb6acfca582b\" pid:6200 exited_at:{seconds:1757699495 nanos:432830227}"
Sep 12 17:51:35.656286 systemd[1]: Started sshd@20-10.200.8.42:22-10.200.16.10:34712.service - OpenSSH per-connection server daemon (10.200.16.10:34712).
Sep 12 17:51:36.276572 sshd[6213]: Accepted publickey for core from 10.200.16.10 port 34712 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg
Sep 12 17:51:36.277419 sshd-session[6213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:51:36.281104 systemd-logind[1701]: New session 23 of user core.
Sep 12 17:51:36.286751 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 12 17:51:36.760650 sshd[6216]: Connection closed by 10.200.16.10 port 34712
Sep 12 17:51:36.760968 sshd-session[6213]: pam_unix(sshd:session): session closed for user core
Sep 12 17:51:36.764298 systemd-logind[1701]: Session 23 logged out. Waiting for processes to exit.
Sep 12 17:51:36.764751 systemd[1]: sshd@20-10.200.8.42:22-10.200.16.10:34712.service: Deactivated successfully.
Sep 12 17:51:36.766170 systemd[1]: session-23.scope: Deactivated successfully.
Sep 12 17:51:36.767338 systemd-logind[1701]: Removed session 23.
Sep 12 17:51:37.195838 containerd[1714]: time="2025-09-12T17:51:37.195807740Z" level=info msg="TaskExit event in podsandbox handler container_id:\"794e8ada7c172943fe8ae8680dab249a39a32ba5f06d80bdac279b1f8e40bcbe\" id:\"72470b7cfb43f3d7434ab89c2737155e8100a87ab96fd1e09d7ccebaabeaf990\" pid:6239 exited_at:{seconds:1757699497 nanos:195581677}"
Sep 12 17:51:41.877293 systemd[1]: Started sshd@21-10.200.8.42:22-10.200.16.10:34780.service - OpenSSH per-connection server daemon (10.200.16.10:34780).
Sep 12 17:51:42.505925 sshd[6249]: Accepted publickey for core from 10.200.16.10 port 34780 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg
Sep 12 17:51:42.506871 sshd-session[6249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:51:42.510688 systemd-logind[1701]: New session 24 of user core.
Sep 12 17:51:42.513776 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 12 17:51:42.995236 sshd[6252]: Connection closed by 10.200.16.10 port 34780
Sep 12 17:51:42.997190 sshd-session[6249]: pam_unix(sshd:session): session closed for user core
Sep 12 17:51:42.999375 systemd[1]: sshd@21-10.200.8.42:22-10.200.16.10:34780.service: Deactivated successfully.
Sep 12 17:51:43.001051 systemd[1]: session-24.scope: Deactivated successfully.
Sep 12 17:51:43.002739 systemd-logind[1701]: Session 24 logged out. Waiting for processes to exit.
Sep 12 17:51:43.003591 systemd-logind[1701]: Removed session 24.
Sep 12 17:51:43.798104 containerd[1714]: time="2025-09-12T17:51:43.798072530Z" level=info msg="TaskExit event in podsandbox handler container_id:\"98d9ce3227bcdbde6b4a764c3a51ece4fe45d5cce54b2ce3099a88e679f02a49\" id:\"0233e920dd92ff92d750c1c5f77c988a59311221e3d1b3be6b35b27d4d39ed78\" pid:6276 exited_at:{seconds:1757699503 nanos:797851146}"
Sep 12 17:51:48.111869 systemd[1]: Started sshd@22-10.200.8.42:22-10.200.16.10:34792.service - OpenSSH per-connection server daemon (10.200.16.10:34792).
Sep 12 17:51:48.755652 sshd[6289]: Accepted publickey for core from 10.200.16.10 port 34792 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg
Sep 12 17:51:48.757438 sshd-session[6289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:51:48.768690 systemd-logind[1701]: New session 25 of user core.
Sep 12 17:51:48.773783 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 12 17:51:49.236206 sshd[6292]: Connection closed by 10.200.16.10 port 34792
Sep 12 17:51:49.237494 sshd-session[6289]: pam_unix(sshd:session): session closed for user core
Sep 12 17:51:49.240370 systemd[1]: sshd@22-10.200.8.42:22-10.200.16.10:34792.service: Deactivated successfully.
Sep 12 17:51:49.242221 systemd[1]: session-25.scope: Deactivated successfully.
Sep 12 17:51:49.242954 systemd-logind[1701]: Session 25 logged out. Waiting for processes to exit.
Sep 12 17:51:49.243883 systemd-logind[1701]: Removed session 25.
Sep 12 17:51:54.347376 systemd[1]: Started sshd@23-10.200.8.42:22-10.200.16.10:59294.service - OpenSSH per-connection server daemon (10.200.16.10:59294).
Sep 12 17:51:54.974925 sshd[6306]: Accepted publickey for core from 10.200.16.10 port 59294 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg
Sep 12 17:51:54.975844 sshd-session[6306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:51:54.979552 systemd-logind[1701]: New session 26 of user core.
Sep 12 17:51:54.982782 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 12 17:51:55.469655 sshd[6309]: Connection closed by 10.200.16.10 port 59294
Sep 12 17:51:55.469992 sshd-session[6306]: pam_unix(sshd:session): session closed for user core
Sep 12 17:51:55.472585 systemd[1]: sshd@23-10.200.8.42:22-10.200.16.10:59294.service: Deactivated successfully.
Sep 12 17:51:55.474337 systemd[1]: session-26.scope: Deactivated successfully.
Sep 12 17:51:55.474977 systemd-logind[1701]: Session 26 logged out. Waiting for processes to exit.
Sep 12 17:51:55.475817 systemd-logind[1701]: Removed session 26.
Sep 12 17:51:56.969427 containerd[1714]: time="2025-09-12T17:51:56.969395410Z" level=info msg="TaskExit event in podsandbox handler container_id:\"794e8ada7c172943fe8ae8680dab249a39a32ba5f06d80bdac279b1f8e40bcbe\" id:\"6649267fab49e72ef3b745e0bd902159534f801b6a79b858e29c3e8b8356dfb7\" pid:6334 exited_at:{seconds:1757699516 nanos:968821132}"