Sep 11 00:27:50.005364 kernel: Linux version 6.12.46-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 10 22:25:29 -00 2025 Sep 11 00:27:50.005398 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=24178014e7d1a618b6c727661dc98ca9324f7f5aeefcaa5f4996d4d839e6e63a Sep 11 00:27:50.005409 kernel: BIOS-provided physical RAM map: Sep 11 00:27:50.005415 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 11 00:27:50.005422 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Sep 11 00:27:50.005429 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable Sep 11 00:27:50.005438 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved Sep 11 00:27:50.005458 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable Sep 11 00:27:50.005467 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved Sep 11 00:27:50.005474 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Sep 11 00:27:50.005481 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Sep 11 00:27:50.005488 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Sep 11 00:27:50.005495 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Sep 11 00:27:50.005502 kernel: printk: legacy bootconsole [earlyser0] enabled Sep 11 00:27:50.005513 kernel: NX (Execute Disable) protection: active Sep 11 00:27:50.005521 kernel: APIC: Static calls initialized Sep 11 00:27:50.005528 kernel: efi: EFI v2.7 by Microsoft Sep 11 00:27:50.005536 kernel: efi: 
ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3ead5518 RNG=0x3ffd2018 Sep 11 00:27:50.005544 kernel: random: crng init done Sep 11 00:27:50.005551 kernel: secureboot: Secure boot disabled Sep 11 00:27:50.005559 kernel: SMBIOS 3.1.0 present. Sep 11 00:27:50.005567 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/28/2025 Sep 11 00:27:50.005576 kernel: DMI: Memory slots populated: 2/2 Sep 11 00:27:50.005583 kernel: Hypervisor detected: Microsoft Hyper-V Sep 11 00:27:50.005591 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2 Sep 11 00:27:50.005598 kernel: Hyper-V: Nested features: 0x3e0101 Sep 11 00:27:50.005606 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Sep 11 00:27:50.005613 kernel: Hyper-V: Using hypercall for remote TLB flush Sep 11 00:27:50.005621 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Sep 11 00:27:50.005629 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Sep 11 00:27:50.005636 kernel: tsc: Detected 2300.000 MHz processor Sep 11 00:27:50.005644 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 11 00:27:50.005653 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 11 00:27:50.005663 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000 Sep 11 00:27:50.005671 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Sep 11 00:27:50.005680 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 11 00:27:50.005688 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved Sep 11 00:27:50.005695 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000 Sep 11 00:27:50.005703 kernel: Using GB pages for direct mapping Sep 11 00:27:50.005711 kernel: ACPI: Early table checksum verification 
disabled Sep 11 00:27:50.005722 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Sep 11 00:27:50.005732 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 11 00:27:50.005740 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 11 00:27:50.005748 kernel: ACPI: DSDT 0x000000003FFD6000 01E27A (v02 MSFTVM DSDT01 00000001 INTL 20230628) Sep 11 00:27:50.005756 kernel: ACPI: FACS 0x000000003FFFE000 000040 Sep 11 00:27:50.005765 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 11 00:27:50.005773 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 11 00:27:50.005783 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 11 00:27:50.005791 kernel: ACPI: APIC 0x000000003FFD5000 000052 (v05 HVLITE HVLITETB 00000000 MSHV 00000000) Sep 11 00:27:50.005799 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000) Sep 11 00:27:50.005807 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 11 00:27:50.005815 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Sep 11 00:27:50.005824 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4279] Sep 11 00:27:50.005832 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Sep 11 00:27:50.005840 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Sep 11 00:27:50.005850 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Sep 11 00:27:50.005858 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Sep 11 00:27:50.005866 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5051] Sep 11 00:27:50.005875 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f] Sep 11 00:27:50.005883 kernel: ACPI: Reserving BGRT table 
memory at [mem 0x3ffd3000-0x3ffd3037] Sep 11 00:27:50.005891 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Sep 11 00:27:50.005900 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] Sep 11 00:27:50.005908 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff] Sep 11 00:27:50.005916 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff] Sep 11 00:27:50.005926 kernel: Zone ranges: Sep 11 00:27:50.005935 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 11 00:27:50.005943 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Sep 11 00:27:50.005951 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Sep 11 00:27:50.005959 kernel: Device empty Sep 11 00:27:50.005967 kernel: Movable zone start for each node Sep 11 00:27:50.005976 kernel: Early memory node ranges Sep 11 00:27:50.005984 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Sep 11 00:27:50.005992 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff] Sep 11 00:27:50.005999 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff] Sep 11 00:27:50.006009 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Sep 11 00:27:50.006017 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Sep 11 00:27:50.006026 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Sep 11 00:27:50.006034 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 11 00:27:50.006044 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 11 00:27:50.006053 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Sep 11 00:27:50.006064 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges Sep 11 00:27:50.006073 kernel: ACPI: PM-Timer IO Port: 0x408 Sep 11 00:27:50.006081 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 11 00:27:50.006090 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 11 00:27:50.006097 
kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 11 00:27:50.006103 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Sep 11 00:27:50.006110 kernel: TSC deadline timer available Sep 11 00:27:50.006117 kernel: CPU topo: Max. logical packages: 1 Sep 11 00:27:50.006124 kernel: CPU topo: Max. logical dies: 1 Sep 11 00:27:50.006130 kernel: CPU topo: Max. dies per package: 1 Sep 11 00:27:50.006137 kernel: CPU topo: Max. threads per core: 2 Sep 11 00:27:50.006144 kernel: CPU topo: Num. cores per package: 1 Sep 11 00:27:50.006152 kernel: CPU topo: Num. threads per package: 2 Sep 11 00:27:50.006160 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Sep 11 00:27:50.006167 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Sep 11 00:27:50.006174 kernel: Booting paravirtualized kernel on Hyper-V Sep 11 00:27:50.006182 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 11 00:27:50.006190 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Sep 11 00:27:50.006197 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Sep 11 00:27:50.006205 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Sep 11 00:27:50.006212 kernel: pcpu-alloc: [0] 0 1 Sep 11 00:27:50.006221 kernel: Hyper-V: PV spinlocks enabled Sep 11 00:27:50.006229 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 11 00:27:50.006239 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=24178014e7d1a618b6c727661dc98ca9324f7f5aeefcaa5f4996d4d839e6e63a Sep 11 00:27:50.006247 kernel: Unknown kernel command line 
parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 11 00:27:50.006255 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Sep 11 00:27:50.006262 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 11 00:27:50.006269 kernel: Fallback order for Node 0: 0 Sep 11 00:27:50.006277 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807 Sep 11 00:27:50.006286 kernel: Policy zone: Normal Sep 11 00:27:50.006294 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 11 00:27:50.006302 kernel: software IO TLB: area num 2. Sep 11 00:27:50.006311 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 11 00:27:50.006319 kernel: ftrace: allocating 40103 entries in 157 pages Sep 11 00:27:50.006327 kernel: ftrace: allocated 157 pages with 5 groups Sep 11 00:27:50.006335 kernel: Dynamic Preempt: voluntary Sep 11 00:27:50.006343 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 11 00:27:50.006352 kernel: rcu: RCU event tracing is enabled. Sep 11 00:27:50.006368 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 11 00:27:50.006376 kernel: Trampoline variant of Tasks RCU enabled. Sep 11 00:27:50.006385 kernel: Rude variant of Tasks RCU enabled. Sep 11 00:27:50.006395 kernel: Tracing variant of Tasks RCU enabled. Sep 11 00:27:50.006404 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 11 00:27:50.006413 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 11 00:27:50.006422 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 11 00:27:50.006430 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 11 00:27:50.006439 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Sep 11 00:27:50.006463 kernel: Using NULL legacy PIC Sep 11 00:27:50.006474 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Sep 11 00:27:50.006483 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 11 00:27:50.006492 kernel: Console: colour dummy device 80x25 Sep 11 00:27:50.006500 kernel: printk: legacy console [tty1] enabled Sep 11 00:27:50.006509 kernel: printk: legacy console [ttyS0] enabled Sep 11 00:27:50.006518 kernel: printk: legacy bootconsole [earlyser0] disabled Sep 11 00:27:50.006528 kernel: ACPI: Core revision 20240827 Sep 11 00:27:50.006537 kernel: Failed to register legacy timer interrupt Sep 11 00:27:50.006546 kernel: APIC: Switch to symmetric I/O mode setup Sep 11 00:27:50.006554 kernel: x2apic enabled Sep 11 00:27:50.006563 kernel: APIC: Switched APIC routing to: physical x2apic Sep 11 00:27:50.006572 kernel: Hyper-V: Host Build 10.0.26100.1293-1-0 Sep 11 00:27:50.006580 kernel: Hyper-V: enabling crash_kexec_post_notifiers Sep 11 00:27:50.006589 kernel: Hyper-V: Disabling IBT because of Hyper-V bug Sep 11 00:27:50.006598 kernel: Hyper-V: Using IPI hypercalls Sep 11 00:27:50.006609 kernel: APIC: send_IPI() replaced with hv_send_ipi() Sep 11 00:27:50.006618 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Sep 11 00:27:50.006627 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Sep 11 00:27:50.006636 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Sep 11 00:27:50.006644 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Sep 11 00:27:50.006652 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Sep 11 00:27:50.006661 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns Sep 11 00:27:50.006669 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
4600.00 BogoMIPS (lpj=2300000) Sep 11 00:27:50.006678 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 11 00:27:50.006688 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Sep 11 00:27:50.006695 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Sep 11 00:27:50.006704 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 11 00:27:50.006712 kernel: Spectre V2 : Mitigation: Retpolines Sep 11 00:27:50.006720 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 11 00:27:50.006729 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Sep 11 00:27:50.006738 kernel: RETBleed: Vulnerable Sep 11 00:27:50.006746 kernel: Speculative Store Bypass: Vulnerable Sep 11 00:27:50.006754 kernel: active return thunk: its_return_thunk Sep 11 00:27:50.006762 kernel: ITS: Mitigation: Aligned branch/return thunks Sep 11 00:27:50.006771 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 11 00:27:50.006780 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 11 00:27:50.006789 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 11 00:27:50.006797 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Sep 11 00:27:50.006811 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Sep 11 00:27:50.006819 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Sep 11 00:27:50.006828 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers' Sep 11 00:27:50.006836 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config' Sep 11 00:27:50.006845 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data' Sep 11 00:27:50.006853 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 11 00:27:50.006861 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Sep 11 00:27:50.006870 
kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Sep 11 00:27:50.006880 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Sep 11 00:27:50.006888 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16 Sep 11 00:27:50.006896 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64 Sep 11 00:27:50.006905 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192 Sep 11 00:27:50.006913 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format. Sep 11 00:27:50.006921 kernel: Freeing SMP alternatives memory: 32K Sep 11 00:27:50.006930 kernel: pid_max: default: 32768 minimum: 301 Sep 11 00:27:50.006938 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 11 00:27:50.006946 kernel: landlock: Up and running. Sep 11 00:27:50.006954 kernel: SELinux: Initializing. Sep 11 00:27:50.006963 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 11 00:27:50.006973 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 11 00:27:50.006981 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2) Sep 11 00:27:50.006990 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only. Sep 11 00:27:50.006998 kernel: signal: max sigframe size: 11952 Sep 11 00:27:50.007007 kernel: rcu: Hierarchical SRCU implementation. Sep 11 00:27:50.007015 kernel: rcu: Max phase no-delay instances is 400. Sep 11 00:27:50.007024 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 11 00:27:50.007033 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 11 00:27:50.007041 kernel: smp: Bringing up secondary CPUs ... Sep 11 00:27:50.007050 kernel: smpboot: x86: Booting SMP configuration: Sep 11 00:27:50.007060 kernel: .... 
node #0, CPUs: #1 Sep 11 00:27:50.007068 kernel: smp: Brought up 1 node, 2 CPUs Sep 11 00:27:50.007077 kernel: smpboot: Total of 2 processors activated (9200.00 BogoMIPS) Sep 11 00:27:50.007086 kernel: Memory: 8079080K/8383228K available (14336K kernel code, 2429K rwdata, 9960K rodata, 53832K init, 1088K bss, 297940K reserved, 0K cma-reserved) Sep 11 00:27:50.007095 kernel: devtmpfs: initialized Sep 11 00:27:50.007103 kernel: x86/mm: Memory block size: 128MB Sep 11 00:27:50.007112 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Sep 11 00:27:50.007121 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 11 00:27:50.007130 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 11 00:27:50.007140 kernel: pinctrl core: initialized pinctrl subsystem Sep 11 00:27:50.007148 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 11 00:27:50.007157 kernel: audit: initializing netlink subsys (disabled) Sep 11 00:27:50.007166 kernel: audit: type=2000 audit(1757550466.029:1): state=initialized audit_enabled=0 res=1 Sep 11 00:27:50.007174 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 11 00:27:50.007182 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 11 00:27:50.007191 kernel: cpuidle: using governor menu Sep 11 00:27:50.007199 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 11 00:27:50.007208 kernel: dca service started, version 1.12.1 Sep 11 00:27:50.007218 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff] Sep 11 00:27:50.007226 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff] Sep 11 00:27:50.007235 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 11 00:27:50.007244 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 11 00:27:50.007252 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 11 00:27:50.007261 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 11 00:27:50.007270 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 11 00:27:50.007278 kernel: ACPI: Added _OSI(Module Device) Sep 11 00:27:50.007288 kernel: ACPI: Added _OSI(Processor Device) Sep 11 00:27:50.007297 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 11 00:27:50.007305 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 11 00:27:50.007314 kernel: ACPI: Interpreter enabled Sep 11 00:27:50.007322 kernel: ACPI: PM: (supports S0 S5) Sep 11 00:27:50.007331 kernel: ACPI: Using IOAPIC for interrupt routing Sep 11 00:27:50.007339 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 11 00:27:50.007348 kernel: PCI: Ignoring E820 reservations for host bridge windows Sep 11 00:27:50.007356 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Sep 11 00:27:50.007365 kernel: iommu: Default domain type: Translated Sep 11 00:27:50.007375 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 11 00:27:50.007383 kernel: efivars: Registered efivars operations Sep 11 00:27:50.007391 kernel: PCI: Using ACPI for IRQ routing Sep 11 00:27:50.007400 kernel: PCI: System does not support PCI Sep 11 00:27:50.007408 kernel: vgaarb: loaded Sep 11 00:27:50.007417 kernel: clocksource: Switched to clocksource tsc-early Sep 11 00:27:50.007425 kernel: VFS: Disk quotas dquot_6.6.0 Sep 11 00:27:50.007434 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 11 00:27:50.007443 kernel: pnp: PnP ACPI init Sep 11 00:27:50.007462 kernel: pnp: PnP ACPI: found 3 devices Sep 11 00:27:50.007471 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 11 
00:27:50.007480 kernel: NET: Registered PF_INET protocol family Sep 11 00:27:50.007488 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 11 00:27:50.007497 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Sep 11 00:27:50.007506 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 11 00:27:50.007514 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 11 00:27:50.007523 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Sep 11 00:27:50.007533 kernel: TCP: Hash tables configured (established 65536 bind 65536) Sep 11 00:27:50.007542 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Sep 11 00:27:50.007551 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Sep 11 00:27:50.007560 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 11 00:27:50.007568 kernel: NET: Registered PF_XDP protocol family Sep 11 00:27:50.007577 kernel: PCI: CLS 0 bytes, default 64 Sep 11 00:27:50.007585 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Sep 11 00:27:50.007594 kernel: software IO TLB: mapped [mem 0x000000003a9d3000-0x000000003e9d3000] (64MB) Sep 11 00:27:50.007603 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer Sep 11 00:27:50.007613 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules Sep 11 00:27:50.007622 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns Sep 11 00:27:50.007630 kernel: clocksource: Switched to clocksource tsc Sep 11 00:27:50.007639 kernel: Initialise system trusted keyrings Sep 11 00:27:50.007648 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Sep 11 00:27:50.007656 kernel: Key type asymmetric registered Sep 11 00:27:50.007665 kernel: Asymmetric key parser 'x509' registered Sep 11 00:27:50.007673 kernel: Block layer SCSI 
generic (bsg) driver version 0.4 loaded (major 250) Sep 11 00:27:50.007682 kernel: io scheduler mq-deadline registered Sep 11 00:27:50.007692 kernel: io scheduler kyber registered Sep 11 00:27:50.007701 kernel: io scheduler bfq registered Sep 11 00:27:50.007709 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 11 00:27:50.007718 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 11 00:27:50.007726 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 11 00:27:50.007735 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Sep 11 00:27:50.007744 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A Sep 11 00:27:50.007752 kernel: i8042: PNP: No PS/2 controller found. Sep 11 00:27:50.007884 kernel: rtc_cmos 00:02: registered as rtc0 Sep 11 00:27:50.007960 kernel: rtc_cmos 00:02: setting system clock to 2025-09-11T00:27:49 UTC (1757550469) Sep 11 00:27:50.008028 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Sep 11 00:27:50.008038 kernel: intel_pstate: Intel P-state driver initializing Sep 11 00:27:50.008047 kernel: efifb: probing for efifb Sep 11 00:27:50.008055 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Sep 11 00:27:50.008064 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Sep 11 00:27:50.008072 kernel: efifb: scrolling: redraw Sep 11 00:27:50.008081 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 11 00:27:50.008091 kernel: Console: switching to colour frame buffer device 128x48 Sep 11 00:27:50.008099 kernel: fb0: EFI VGA frame buffer device Sep 11 00:27:50.008108 kernel: pstore: Using crash dump compression: deflate Sep 11 00:27:50.008116 kernel: pstore: Registered efi_pstore as persistent store backend Sep 11 00:27:50.008124 kernel: NET: Registered PF_INET6 protocol family Sep 11 00:27:50.008132 kernel: Segment Routing with IPv6 Sep 11 00:27:50.008140 kernel: In-situ OAM (IOAM) with IPv6 Sep 11 
00:27:50.008148 kernel: NET: Registered PF_PACKET protocol family Sep 11 00:27:50.008156 kernel: Key type dns_resolver registered Sep 11 00:27:50.008166 kernel: IPI shorthand broadcast: enabled Sep 11 00:27:50.008176 kernel: sched_clock: Marking stable (3165004572, 108441363)->(3630649025, -357203090) Sep 11 00:27:50.008184 kernel: registered taskstats version 1 Sep 11 00:27:50.008193 kernel: Loading compiled-in X.509 certificates Sep 11 00:27:50.008201 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.46-flatcar: 8138ce5002a1b572fd22b23ac238f29bab3f249f' Sep 11 00:27:50.008210 kernel: Demotion targets for Node 0: null Sep 11 00:27:50.008219 kernel: Key type .fscrypt registered Sep 11 00:27:50.008227 kernel: Key type fscrypt-provisioning registered Sep 11 00:27:50.008236 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 11 00:27:50.008246 kernel: ima: Allocated hash algorithm: sha1 Sep 11 00:27:50.008255 kernel: ima: No architecture policies found Sep 11 00:27:50.008265 kernel: clk: Disabling unused clocks Sep 11 00:27:50.008274 kernel: Warning: unable to open an initial console. Sep 11 00:27:50.008283 kernel: Freeing unused kernel image (initmem) memory: 53832K Sep 11 00:27:50.008293 kernel: Write protecting the kernel read-only data: 24576k Sep 11 00:27:50.008302 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K Sep 11 00:27:50.008311 kernel: Run /init as init process Sep 11 00:27:50.008320 kernel: with arguments: Sep 11 00:27:50.008331 kernel: /init Sep 11 00:27:50.008340 kernel: with environment: Sep 11 00:27:50.008349 kernel: HOME=/ Sep 11 00:27:50.008358 kernel: TERM=linux Sep 11 00:27:50.008367 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 11 00:27:50.008378 systemd[1]: Successfully made /usr/ read-only. 
Sep 11 00:27:50.008391 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 11 00:27:50.008403 systemd[1]: Detected virtualization microsoft. Sep 11 00:27:50.008413 systemd[1]: Detected architecture x86-64. Sep 11 00:27:50.008422 systemd[1]: Running in initrd. Sep 11 00:27:50.008431 systemd[1]: No hostname configured, using default hostname. Sep 11 00:27:50.008440 systemd[1]: Hostname set to . Sep 11 00:27:50.008467 systemd[1]: Initializing machine ID from random generator. Sep 11 00:27:50.008477 systemd[1]: Queued start job for default target initrd.target. Sep 11 00:27:50.008486 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 11 00:27:50.008495 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 11 00:27:50.008506 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 11 00:27:50.008516 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 11 00:27:50.008832 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 11 00:27:50.009121 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 11 00:27:50.009651 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 11 00:27:50.009662 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... 
Sep 11 00:27:50.009676 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 11 00:27:50.009745 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 11 00:27:50.009755 systemd[1]: Reached target paths.target - Path Units. Sep 11 00:27:50.009764 systemd[1]: Reached target slices.target - Slice Units. Sep 11 00:27:50.009774 systemd[1]: Reached target swap.target - Swaps. Sep 11 00:27:50.009784 systemd[1]: Reached target timers.target - Timer Units. Sep 11 00:27:50.009793 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 11 00:27:50.009803 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 11 00:27:50.009813 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 11 00:27:50.009824 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 11 00:27:50.009834 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 11 00:27:50.009844 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 11 00:27:50.009853 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 11 00:27:50.009863 systemd[1]: Reached target sockets.target - Socket Units. Sep 11 00:27:50.009873 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 11 00:27:50.009882 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 11 00:27:50.009892 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 11 00:27:50.009904 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 11 00:27:50.009914 systemd[1]: Starting systemd-fsck-usr.service... Sep 11 00:27:50.009924 systemd[1]: Starting systemd-journald.service - Journal Service... 
Sep 11 00:27:50.009942 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 11 00:27:50.009953 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 11 00:27:50.009963 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 11 00:27:50.009995 systemd-journald[205]: Collecting audit messages is disabled. Sep 11 00:27:50.010021 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 11 00:27:50.010032 systemd[1]: Finished systemd-fsck-usr.service. Sep 11 00:27:50.010043 systemd-journald[205]: Journal started Sep 11 00:27:50.010068 systemd-journald[205]: Runtime Journal (/run/log/journal/394bf674bc6844a59da066c39a52b131) is 8M, max 158.9M, 150.9M free. Sep 11 00:27:50.010722 systemd-modules-load[206]: Inserted module 'overlay' Sep 11 00:27:50.022529 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 11 00:27:50.028468 systemd[1]: Started systemd-journald.service - Journal Service. Sep 11 00:27:50.036024 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 11 00:27:50.045499 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 00:27:50.050112 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 11 00:27:50.057482 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 11 00:27:50.057504 kernel: Bridge firewalling registered Sep 11 00:27:50.056623 systemd-modules-load[206]: Inserted module 'br_netfilter' Sep 11 00:27:50.059282 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 11 00:27:50.065708 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Sep 11 00:27:50.069294 systemd-tmpfiles[221]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 11 00:27:50.069937 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 11 00:27:50.074835 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 11 00:27:50.080692 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 11 00:27:50.089175 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 11 00:27:50.097688 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 11 00:27:50.103838 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 11 00:27:50.109536 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 11 00:27:50.116564 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 11 00:27:50.125031 dracut-cmdline[244]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=24178014e7d1a618b6c727661dc98ca9324f7f5aeefcaa5f4996d4d839e6e63a
Sep 11 00:27:50.166563 systemd-resolved[249]: Positive Trust Anchors:
Sep 11 00:27:50.166575 systemd-resolved[249]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 11 00:27:50.166606 systemd-resolved[249]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 11 00:27:50.169862 systemd-resolved[249]: Defaulting to hostname 'linux'.
Sep 11 00:27:50.170696 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 11 00:27:50.181442 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 11 00:27:50.216473 kernel: SCSI subsystem initialized
Sep 11 00:27:50.224463 kernel: Loading iSCSI transport class v2.0-870.
Sep 11 00:27:50.233473 kernel: iscsi: registered transport (tcp)
Sep 11 00:27:50.251538 kernel: iscsi: registered transport (qla4xxx)
Sep 11 00:27:50.251577 kernel: QLogic iSCSI HBA Driver
Sep 11 00:27:50.265086 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 11 00:27:50.279771 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 11 00:27:50.282661 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 11 00:27:50.314190 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 11 00:27:50.319364 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 11 00:27:50.374470 kernel: raid6: avx512x4 gen() 44893 MB/s
Sep 11 00:27:50.391459 kernel: raid6: avx512x2 gen() 43744 MB/s
Sep 11 00:27:50.408456 kernel: raid6: avx512x1 gen() 25312 MB/s
Sep 11 00:27:50.426458 kernel: raid6: avx2x4 gen() 35515 MB/s
Sep 11 00:27:50.444459 kernel: raid6: avx2x2 gen() 37784 MB/s
Sep 11 00:27:50.462726 kernel: raid6: avx2x1 gen() 30823 MB/s
Sep 11 00:27:50.462751 kernel: raid6: using algorithm avx512x4 gen() 44893 MB/s
Sep 11 00:27:50.481156 kernel: raid6: .... xor() 7507 MB/s, rmw enabled
Sep 11 00:27:50.481180 kernel: raid6: using avx512x2 recovery algorithm
Sep 11 00:27:50.500472 kernel: xor: automatically using best checksumming function avx
Sep 11 00:27:50.625469 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 11 00:27:50.630396 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 11 00:27:50.635649 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 11 00:27:50.660965 systemd-udevd[455]: Using default interface naming scheme 'v255'.
Sep 11 00:27:50.665941 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 11 00:27:50.673556 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 11 00:27:50.693134 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation
Sep 11 00:27:50.711612 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 11 00:27:50.714563 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 11 00:27:50.747183 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 11 00:27:50.754883 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 11 00:27:50.798473 kernel: cryptd: max_cpu_qlen set to 1000
Sep 11 00:27:50.806464 kernel: AES CTR mode by8 optimization enabled
Sep 11 00:27:50.848794 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 11 00:27:50.848907 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 00:27:50.853539 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 00:27:50.859674 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 00:27:50.863694 kernel: hv_vmbus: Vmbus version:5.3
Sep 11 00:27:50.875462 kernel: hv_vmbus: registering driver hyperv_keyboard
Sep 11 00:27:50.878992 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 11 00:27:50.879025 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 11 00:27:50.878790 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 11 00:27:50.878892 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 00:27:50.893462 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Sep 11 00:27:50.896554 kernel: hv_vmbus: registering driver hv_storvsc
Sep 11 00:27:50.900003 kernel: PTP clock support registered
Sep 11 00:27:50.900038 kernel: scsi host0: storvsc_host_t
Sep 11 00:27:50.903571 kernel: hv_vmbus: registering driver hv_netvsc
Sep 11 00:27:50.901947 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 00:27:50.906983 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Sep 11 00:27:50.909461 kernel: hv_vmbus: registering driver hv_pci
Sep 11 00:27:50.922365 kernel: hv_utils: Registering HyperV Utility Driver
Sep 11 00:27:50.922400 kernel: hv_vmbus: registering driver hv_utils
Sep 11 00:27:50.925200 kernel: hv_utils: Shutdown IC version 3.2
Sep 11 00:27:50.926111 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004
Sep 11 00:27:50.926282 kernel: hv_utils: Heartbeat IC version 3.0
Sep 11 00:27:50.929494 kernel: hv_utils: TimeSync IC version 4.0
Sep 11 00:27:50.632501 systemd-resolved[249]: Clock change detected. Flushing caches.
Sep 11 00:27:50.642618 systemd-journald[205]: Time jumped backwards, rotating.
Sep 11 00:27:50.642663 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52fca61a (unnamed net_device) (uninitialized): VF slot 1 added
Sep 11 00:27:50.642794 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00
Sep 11 00:27:50.648449 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window]
Sep 11 00:27:50.653343 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff]
Sep 11 00:27:50.657695 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint
Sep 11 00:27:50.661875 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 11 00:27:50.661908 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]
Sep 11 00:27:50.666490 kernel: hv_vmbus: registering driver hid_hyperv
Sep 11 00:27:50.670400 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Sep 11 00:27:50.670423 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Sep 11 00:27:50.673062 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Sep 11 00:27:50.673598 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 11 00:27:50.679952 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 00:27:50.685396 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Sep 11 00:27:50.695939 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00
Sep 11 00:27:50.696140 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned
Sep 11 00:27:50.705398 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#200 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Sep 11 00:27:50.715667 kernel: nvme nvme0: pci function c05b:00:00.0
Sep 11 00:27:50.715815 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002)
Sep 11 00:27:50.730053 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#233 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Sep 11 00:27:50.872431 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 11 00:27:50.878398 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 11 00:27:50.944404 kernel: nvme nvme0: using unchecked data buffer
Sep 11 00:27:51.011319 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
Sep 11 00:27:51.016316 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 11 00:27:51.030135 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM.
Sep 11 00:27:51.047011 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT.
Sep 11 00:27:51.057237 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A.
Sep 11 00:27:51.057723 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A.
Sep 11 00:27:51.064470 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 11 00:27:51.067427 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 11 00:27:51.068893 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 11 00:27:51.069704 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 11 00:27:51.071531 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 11 00:27:51.094196 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 11 00:27:51.100931 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 11 00:27:51.106404 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 11 00:27:51.688378 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004
Sep 11 00:27:51.688613 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00
Sep 11 00:27:51.691470 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window]
Sep 11 00:27:51.693003 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff]
Sep 11 00:27:51.699596 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint
Sep 11 00:27:51.704547 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]
Sep 11 00:27:51.709460 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]
Sep 11 00:27:51.709489 kernel: pci 7870:00:00.0: enabling Extended Tags
Sep 11 00:27:51.727411 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00
Sep 11 00:27:51.727596 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned
Sep 11 00:27:51.731531 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned
Sep 11 00:27:51.735613 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002)
Sep 11 00:27:51.746401 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1
Sep 11 00:27:51.749084 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52fca61a eth0: VF registering: eth1
Sep 11 00:27:51.749253 kernel: mana 7870:00:00.0 eth1: joined to eth0
Sep 11 00:27:51.753402 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1
Sep 11 00:27:52.114425 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 11 00:27:52.115497 disk-uuid[678]: The operation has completed successfully.
Sep 11 00:27:52.166779 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 11 00:27:52.166879 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 11 00:27:52.200637 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 11 00:27:52.213548 sh[715]: Success
Sep 11 00:27:52.233438 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 11 00:27:52.233657 kernel: device-mapper: uevent: version 1.0.3
Sep 11 00:27:52.234808 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 11 00:27:52.244417 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Sep 11 00:27:52.313880 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 11 00:27:52.317980 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 11 00:27:52.331694 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 11 00:27:52.344133 kernel: BTRFS: device fsid f1eb5eb7-34cc-49c0-9f2b-e603bd772d66 devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (728)
Sep 11 00:27:52.344240 kernel: BTRFS info (device dm-0): first mount of filesystem f1eb5eb7-34cc-49c0-9f2b-e603bd772d66
Sep 11 00:27:52.345466 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 11 00:27:52.408591 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 11 00:27:52.408637 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 11 00:27:52.409592 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 11 00:27:52.418184 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 11 00:27:52.420929 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 11 00:27:52.425542 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 11 00:27:52.426128 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 11 00:27:52.443663 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 11 00:27:52.468414 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (751)
Sep 11 00:27:52.472419 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e
Sep 11 00:27:52.472455 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 11 00:27:52.483780 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 11 00:27:52.483825 kernel: BTRFS info (device nvme0n1p6): turning on async discard
Sep 11 00:27:52.485420 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Sep 11 00:27:52.492069 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e
Sep 11 00:27:52.490828 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 11 00:27:52.497513 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 11 00:27:52.528308 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 11 00:27:52.531655 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 11 00:27:52.568693 systemd-networkd[897]: lo: Link UP
Sep 11 00:27:52.568701 systemd-networkd[897]: lo: Gained carrier
Sep 11 00:27:52.576095 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16
Sep 11 00:27:52.570496 systemd-networkd[897]: Enumeration completed
Sep 11 00:27:52.580888 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Sep 11 00:27:52.570875 systemd-networkd[897]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 11 00:27:52.587442 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52fca61a eth0: Data path switched to VF: enP30832s1
Sep 11 00:27:52.570878 systemd-networkd[897]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 11 00:27:52.571040 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 11 00:27:52.576123 systemd[1]: Reached target network.target - Network.
Sep 11 00:27:52.583657 systemd-networkd[897]: enP30832s1: Link UP
Sep 11 00:27:52.583725 systemd-networkd[897]: eth0: Link UP
Sep 11 00:27:52.583868 systemd-networkd[897]: eth0: Gained carrier
Sep 11 00:27:52.583879 systemd-networkd[897]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 11 00:27:52.592407 systemd-networkd[897]: enP30832s1: Gained carrier
Sep 11 00:27:52.602434 systemd-networkd[897]: eth0: DHCPv4 address 10.200.8.15/24, gateway 10.200.8.1 acquired from 168.63.129.16
Sep 11 00:27:52.735218 ignition[835]: Ignition 2.21.0
Sep 11 00:27:52.735230 ignition[835]: Stage: fetch-offline
Sep 11 00:27:52.737458 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 11 00:27:52.735323 ignition[835]: no configs at "/usr/lib/ignition/base.d"
Sep 11 00:27:52.741537 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 11 00:27:52.735330 ignition[835]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 11 00:27:52.735440 ignition[835]: parsed url from cmdline: ""
Sep 11 00:27:52.735443 ignition[835]: no config URL provided
Sep 11 00:27:52.735447 ignition[835]: reading system config file "/usr/lib/ignition/user.ign"
Sep 11 00:27:52.735453 ignition[835]: no config at "/usr/lib/ignition/user.ign"
Sep 11 00:27:52.735457 ignition[835]: failed to fetch config: resource requires networking
Sep 11 00:27:52.735962 ignition[835]: Ignition finished successfully
Sep 11 00:27:52.769856 ignition[907]: Ignition 2.21.0
Sep 11 00:27:52.769866 ignition[907]: Stage: fetch
Sep 11 00:27:52.770055 ignition[907]: no configs at "/usr/lib/ignition/base.d"
Sep 11 00:27:52.770063 ignition[907]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 11 00:27:52.770131 ignition[907]: parsed url from cmdline: ""
Sep 11 00:27:52.770134 ignition[907]: no config URL provided
Sep 11 00:27:52.770138 ignition[907]: reading system config file "/usr/lib/ignition/user.ign"
Sep 11 00:27:52.770143 ignition[907]: no config at "/usr/lib/ignition/user.ign"
Sep 11 00:27:52.770184 ignition[907]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Sep 11 00:27:52.832282 ignition[907]: GET result: OK
Sep 11 00:27:52.832356 ignition[907]: config has been read from IMDS userdata
Sep 11 00:27:52.832399 ignition[907]: parsing config with SHA512: af9e5c1ee44c471880c92bb981d5690e0dcb4131f29f51098fea382b69f57bca97db21eec1d92be8e996eb225189a561b2913008762dcb6aecd2b2ccdf07228c
Sep 11 00:27:52.838831 unknown[907]: fetched base config from "system"
Sep 11 00:27:52.839055 ignition[907]: fetch: fetch complete
Sep 11 00:27:52.838840 unknown[907]: fetched base config from "system"
Sep 11 00:27:52.839059 ignition[907]: fetch: fetch passed
Sep 11 00:27:52.838845 unknown[907]: fetched user config from "azure"
Sep 11 00:27:52.839084 ignition[907]: Ignition finished successfully
Sep 11 00:27:52.841557 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 11 00:27:52.847079 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 11 00:27:52.873410 ignition[914]: Ignition 2.21.0
Sep 11 00:27:52.873422 ignition[914]: Stage: kargs
Sep 11 00:27:52.873599 ignition[914]: no configs at "/usr/lib/ignition/base.d"
Sep 11 00:27:52.876122 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 11 00:27:52.873607 ignition[914]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 11 00:27:52.879155 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 11 00:27:52.874320 ignition[914]: kargs: kargs passed
Sep 11 00:27:52.874352 ignition[914]: Ignition finished successfully
Sep 11 00:27:52.895988 ignition[920]: Ignition 2.21.0
Sep 11 00:27:52.895998 ignition[920]: Stage: disks
Sep 11 00:27:52.896186 ignition[920]: no configs at "/usr/lib/ignition/base.d"
Sep 11 00:27:52.900209 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 11 00:27:52.896194 ignition[920]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 11 00:27:52.903806 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 11 00:27:52.899542 ignition[920]: disks: disks passed
Sep 11 00:27:52.906982 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 11 00:27:52.899600 ignition[920]: Ignition finished successfully
Sep 11 00:27:52.912480 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 11 00:27:52.917438 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 11 00:27:52.920416 systemd[1]: Reached target basic.target - Basic System.
Sep 11 00:27:52.925150 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 11 00:27:53.001963 systemd-fsck[928]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
Sep 11 00:27:53.005567 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 11 00:27:53.010092 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 11 00:27:53.165411 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 6a9ce0af-81d0-4628-9791-e47488ed2744 r/w with ordered data mode. Quota mode: none.
Sep 11 00:27:53.165445 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 11 00:27:53.167119 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 11 00:27:53.173971 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 11 00:27:53.178466 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 11 00:27:53.189304 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 11 00:27:53.196484 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 11 00:27:53.196517 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 11 00:27:53.201409 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (937)
Sep 11 00:27:53.207409 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e
Sep 11 00:27:53.207446 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 11 00:27:53.206377 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 11 00:27:53.210861 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 11 00:27:53.221608 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 11 00:27:53.221643 kernel: BTRFS info (device nvme0n1p6): turning on async discard
Sep 11 00:27:53.223994 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Sep 11 00:27:53.225788 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 11 00:27:53.334112 coreos-metadata[939]: Sep 11 00:27:53.334 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Sep 11 00:27:53.338115 coreos-metadata[939]: Sep 11 00:27:53.338 INFO Fetch successful
Sep 11 00:27:53.339759 coreos-metadata[939]: Sep 11 00:27:53.339 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Sep 11 00:27:53.348934 coreos-metadata[939]: Sep 11 00:27:53.348 INFO Fetch successful
Sep 11 00:27:53.352504 coreos-metadata[939]: Sep 11 00:27:53.352 INFO wrote hostname ci-4372.1.0-n-4da84ffec3 to /sysroot/etc/hostname
Sep 11 00:27:53.354236 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 11 00:27:53.371791 initrd-setup-root[967]: cut: /sysroot/etc/passwd: No such file or directory
Sep 11 00:27:53.385681 initrd-setup-root[974]: cut: /sysroot/etc/group: No such file or directory
Sep 11 00:27:53.390337 initrd-setup-root[981]: cut: /sysroot/etc/shadow: No such file or directory
Sep 11 00:27:53.395050 initrd-setup-root[988]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 11 00:27:53.593576 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 11 00:27:53.596475 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 11 00:27:53.601988 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 11 00:27:53.613968 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 11 00:27:53.618066 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e
Sep 11 00:27:53.635910 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 11 00:27:53.642454 ignition[1056]: INFO : Ignition 2.21.0
Sep 11 00:27:53.642454 ignition[1056]: INFO : Stage: mount
Sep 11 00:27:53.646477 ignition[1056]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 11 00:27:53.646477 ignition[1056]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 11 00:27:53.646477 ignition[1056]: INFO : mount: mount passed
Sep 11 00:27:53.646477 ignition[1056]: INFO : Ignition finished successfully
Sep 11 00:27:53.648291 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 11 00:27:53.653019 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 11 00:27:53.667783 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 11 00:27:53.687461 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1067)
Sep 11 00:27:53.687494 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e
Sep 11 00:27:53.690403 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 11 00:27:53.694894 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 11 00:27:53.694925 kernel: BTRFS info (device nvme0n1p6): turning on async discard
Sep 11 00:27:53.694936 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Sep 11 00:27:53.697775 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 11 00:27:53.718733 ignition[1085]: INFO : Ignition 2.21.0
Sep 11 00:27:53.718733 ignition[1085]: INFO : Stage: files
Sep 11 00:27:53.723177 ignition[1085]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 11 00:27:53.723177 ignition[1085]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 11 00:27:53.723177 ignition[1085]: DEBUG : files: compiled without relabeling support, skipping
Sep 11 00:27:53.732461 ignition[1085]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 11 00:27:53.732461 ignition[1085]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 11 00:27:53.744729 ignition[1085]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 11 00:27:53.748442 ignition[1085]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 11 00:27:53.748442 ignition[1085]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 11 00:27:53.748442 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 11 00:27:53.748442 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Sep 11 00:27:53.745045 unknown[1085]: wrote ssh authorized keys file for user: core
Sep 11 00:27:53.819333 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 11 00:27:54.070019 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 11 00:27:54.070019 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 11 00:27:54.077470 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 11 00:27:54.077470 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 11 00:27:54.077470 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 11 00:27:54.077470 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 11 00:27:54.077470 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 11 00:27:54.077470 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 11 00:27:54.077470 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 11 00:27:54.101416 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 11 00:27:54.101416 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 11 00:27:54.101416 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 11 00:27:54.101416 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 11 00:27:54.101416 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 11 00:27:54.101416 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Sep 11 00:27:54.127487 systemd-networkd[897]: eth0: Gained IPv6LL
Sep 11 00:27:54.491376 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 11 00:27:55.102578 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 11 00:27:55.102578 ignition[1085]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 11 00:27:55.110468 ignition[1085]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 11 00:27:55.117167 ignition[1085]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 11 00:27:55.117167 ignition[1085]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 11 00:27:55.124785 ignition[1085]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 11 00:27:55.124785 ignition[1085]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 11 00:27:55.124785 ignition[1085]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 11 00:27:55.124785 ignition[1085]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 11 00:27:55.124785 ignition[1085]: INFO : files: files passed
Sep 11 00:27:55.124785 ignition[1085]: INFO : Ignition finished successfully
Sep 11 00:27:55.121367 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 11 00:27:55.127600 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 11 00:27:55.137649 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 11 00:27:55.154195 initrd-setup-root-after-ignition[1111]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 11 00:27:55.156948 initrd-setup-root-after-ignition[1111]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 11 00:27:55.160617 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 11 00:27:55.160701 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 11 00:27:55.171472 initrd-setup-root-after-ignition[1116]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 11 00:27:55.165612 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 11 00:27:55.169980 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 11 00:27:55.175534 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 11 00:27:55.211005 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 11 00:27:55.211089 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 11 00:27:55.214195 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 11 00:27:55.217076 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 11 00:27:55.218117 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 11 00:27:55.219477 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 11 00:27:55.240706 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 11 00:27:55.245497 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 11 00:27:55.262924 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 11 00:27:55.266355 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 11 00:27:55.270741 systemd[1]: Stopped target timers.target - Timer Units.
Sep 11 00:27:55.273529 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 11 00:27:55.273651 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 11 00:27:55.274685 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 11 00:27:55.274791 systemd[1]: Stopped target basic.target - Basic System.
Sep 11 00:27:55.282842 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 11 00:27:55.284410 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 11 00:27:55.288531 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 11 00:27:55.293527 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 11 00:27:55.297546 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 11 00:27:55.301523 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 11 00:27:55.304481 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 11 00:27:55.307892 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 11 00:27:55.312536 systemd[1]: Stopped target swap.target - Swaps.
Sep 11 00:27:55.316517 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 11 00:27:55.316651 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 11 00:27:55.318222 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 11 00:27:55.318585 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 11 00:27:55.318858 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 11 00:27:55.322017 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 11 00:27:55.327254 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 11 00:27:55.327359 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 11 00:27:55.340522 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 11 00:27:55.340680 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 11 00:27:55.345553 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 11 00:27:55.345670 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 11 00:27:55.349643 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 11 00:27:55.349758 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 11 00:27:55.355580 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 11 00:27:55.356049 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 11 00:27:55.356190 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 11 00:27:55.359165 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 11 00:27:55.365461 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 11 00:27:55.365607 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 11 00:27:55.378430 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 11 00:27:55.378545 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 11 00:27:55.392294 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 11 00:27:55.392394 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 11 00:27:55.409598 ignition[1137]: INFO : Ignition 2.21.0
Sep 11 00:27:55.409598 ignition[1137]: INFO : Stage: umount
Sep 11 00:27:55.409598 ignition[1137]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 11 00:27:55.409598 ignition[1137]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 11 00:27:55.409598 ignition[1137]: INFO : umount: umount passed
Sep 11 00:27:55.409598 ignition[1137]: INFO : Ignition finished successfully
Sep 11 00:27:55.402063 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 11 00:27:55.402526 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 11 00:27:55.402596 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 11 00:27:55.408988 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 11 00:27:55.409112 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 11 00:27:55.425027 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 11 00:27:55.425083 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 11 00:27:55.425344 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 11 00:27:55.425375 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 11 00:27:55.425661 systemd[1]: Stopped target network.target - Network.
Sep 11 00:27:55.425688 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 11 00:27:55.425719 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 11 00:27:55.426367 systemd[1]: Stopped target paths.target - Path Units.
Sep 11 00:27:55.426409 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 11 00:27:55.426718 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 11 00:27:55.436958 systemd[1]: Stopped target slices.target - Slice Units.
Sep 11 00:27:55.441028 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 11 00:27:55.445865 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 11 00:27:55.445894 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 11 00:27:55.447621 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 11 00:27:55.447643 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 11 00:27:55.453495 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 11 00:27:55.453536 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 11 00:27:55.457407 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 11 00:27:55.457443 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 11 00:27:55.462800 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 11 00:27:55.468275 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 11 00:27:55.485058 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 11 00:27:55.486446 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 11 00:27:55.492161 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 11 00:27:55.492296 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 11 00:27:55.492371 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 11 00:27:55.496231 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 11 00:27:55.497417 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 11 00:27:55.502898 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 11 00:27:55.502944 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 11 00:27:55.506557 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 11 00:27:55.507070 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 11 00:27:55.507112 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 11 00:27:55.507180 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 11 00:27:55.507207 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 11 00:27:55.510783 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 11 00:27:55.510829 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 11 00:27:55.550054 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52fca61a eth0: Data path switched from VF: enP30832s1
Sep 11 00:27:55.518479 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 11 00:27:55.552856 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Sep 11 00:27:55.518531 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 11 00:27:55.526693 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 11 00:27:55.531137 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 11 00:27:55.531231 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 11 00:27:55.543703 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 11 00:27:55.544332 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 11 00:27:55.558269 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 11 00:27:55.558309 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 11 00:27:55.567560 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 11 00:27:55.567592 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 11 00:27:55.570283 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 11 00:27:55.570323 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 11 00:27:55.571766 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 11 00:27:55.571801 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 11 00:27:55.580419 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 11 00:27:55.580462 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 11 00:27:55.589740 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 11 00:27:55.595845 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 11 00:27:55.595894 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 11 00:27:55.607520 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 11 00:27:55.607569 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 11 00:27:55.613031 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 11 00:27:55.613074 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 00:27:55.617979 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 11 00:27:55.618018 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 11 00:27:55.618044 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 11 00:27:55.618254 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 11 00:27:55.618319 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 11 00:27:55.628610 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 11 00:27:55.629133 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 11 00:27:55.848043 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 11 00:27:55.848152 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 11 00:27:55.852699 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 11 00:27:55.854577 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 11 00:27:55.856322 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 11 00:27:55.860164 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 11 00:27:55.872620 systemd[1]: Switching root.
Sep 11 00:27:55.916979 systemd-journald[205]: Journal stopped
Sep 11 00:27:57.732792 systemd-journald[205]: Received SIGTERM from PID 1 (systemd).
Sep 11 00:27:57.732827 kernel: SELinux: policy capability network_peer_controls=1
Sep 11 00:27:57.732840 kernel: SELinux: policy capability open_perms=1
Sep 11 00:27:57.732849 kernel: SELinux: policy capability extended_socket_class=1
Sep 11 00:27:57.732858 kernel: SELinux: policy capability always_check_network=0
Sep 11 00:27:57.732866 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 11 00:27:57.732877 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 11 00:27:57.732886 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 11 00:27:57.732895 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 11 00:27:57.732903 kernel: SELinux: policy capability userspace_initial_context=0
Sep 11 00:27:57.732912 kernel: audit: type=1403 audit(1757550476.597:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 11 00:27:57.732922 systemd[1]: Successfully loaded SELinux policy in 66.261ms.
Sep 11 00:27:57.732933 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.835ms.
Sep 11 00:27:57.732946 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 11 00:27:57.732956 systemd[1]: Detected virtualization microsoft.
Sep 11 00:27:57.732966 systemd[1]: Detected architecture x86-64.
Sep 11 00:27:57.732976 systemd[1]: Detected first boot.
Sep 11 00:27:57.732986 systemd[1]: Hostname set to .
Sep 11 00:27:57.732997 systemd[1]: Initializing machine ID from random generator.
Sep 11 00:27:57.733008 zram_generator::config[1181]: No configuration found.
Sep 11 00:27:57.733018 kernel: Guest personality initialized and is inactive
Sep 11 00:27:57.733027 kernel: VMCI host device registered (name=vmci, major=10, minor=124)
Sep 11 00:27:57.733036 kernel: Initialized host personality
Sep 11 00:27:57.733045 kernel: NET: Registered PF_VSOCK protocol family
Sep 11 00:27:57.733054 systemd[1]: Populated /etc with preset unit settings.
Sep 11 00:27:57.733067 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 11 00:27:57.733076 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 11 00:27:57.733086 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 11 00:27:57.733096 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 11 00:27:57.733106 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 11 00:27:57.733116 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 11 00:27:57.733126 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 11 00:27:57.733137 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 11 00:27:57.733147 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 11 00:27:57.733157 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 11 00:27:57.733167 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 11 00:27:57.733177 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 11 00:27:57.733187 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 11 00:27:57.733197 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 11 00:27:57.733207 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 11 00:27:57.733220 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 11 00:27:57.733232 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 11 00:27:57.733242 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 11 00:27:57.733252 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 11 00:27:57.733262 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 11 00:27:57.733272 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 11 00:27:57.733282 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 11 00:27:57.733292 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 11 00:27:57.733304 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 11 00:27:57.733314 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 11 00:27:57.733324 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 11 00:27:57.733336 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 11 00:27:57.733346 systemd[1]: Reached target slices.target - Slice Units.
Sep 11 00:27:57.733356 systemd[1]: Reached target swap.target - Swaps.
Sep 11 00:27:57.733366 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 11 00:27:57.733376 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 11 00:27:57.733917 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 11 00:27:57.733934 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 11 00:27:57.733946 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 11 00:27:57.733960 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 11 00:27:57.733972 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 11 00:27:57.733987 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 11 00:27:57.734001 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 11 00:27:57.734012 systemd[1]: Mounting media.mount - External Media Directory...
Sep 11 00:27:57.734025 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 11 00:27:57.734037 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 11 00:27:57.734050 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 11 00:27:57.734063 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 11 00:27:57.734079 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 11 00:27:57.734093 systemd[1]: Reached target machines.target - Containers.
Sep 11 00:27:57.734104 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 11 00:27:57.734119 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 11 00:27:57.734134 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 11 00:27:57.734148 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 11 00:27:57.734161 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 11 00:27:57.734175 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 11 00:27:57.734189 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 11 00:27:57.734206 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 11 00:27:57.734221 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 11 00:27:57.734232 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 11 00:27:57.734244 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 11 00:27:57.734258 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 11 00:27:57.734271 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 11 00:27:57.734284 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 11 00:27:57.734299 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 11 00:27:57.734312 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 11 00:27:57.734329 kernel: loop: module loaded
Sep 11 00:27:57.734343 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 11 00:27:57.734356 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 11 00:27:57.734369 kernel: fuse: init (API version 7.41)
Sep 11 00:27:57.734396 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 11 00:27:57.734411 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 11 00:27:57.734424 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 11 00:27:57.734438 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 11 00:27:57.734453 systemd[1]: Stopped verity-setup.service.
Sep 11 00:27:57.734466 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 11 00:27:57.734476 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 11 00:27:57.734488 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 11 00:27:57.734498 systemd[1]: Mounted media.mount - External Media Directory.
Sep 11 00:27:57.734529 systemd-journald[1288]: Collecting audit messages is disabled.
Sep 11 00:27:57.734555 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 11 00:27:57.734565 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 11 00:27:57.734576 kernel: ACPI: bus type drm_connector registered
Sep 11 00:27:57.734586 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 11 00:27:57.734598 systemd-journald[1288]: Journal started
Sep 11 00:27:57.734623 systemd-journald[1288]: Runtime Journal (/run/log/journal/30707e11ee124794b8c5804f43149484) is 8M, max 158.9M, 150.9M free.
Sep 11 00:27:57.305671 systemd[1]: Queued start job for default target multi-user.target.
Sep 11 00:27:57.313771 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Sep 11 00:27:57.314078 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 11 00:27:57.740686 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 11 00:27:57.744161 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 11 00:27:57.748731 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 11 00:27:57.751898 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 11 00:27:57.752118 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 11 00:27:57.756696 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 11 00:27:57.756932 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 11 00:27:57.759722 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 11 00:27:57.759934 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 11 00:27:57.762605 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 11 00:27:57.762814 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 11 00:27:57.765226 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 11 00:27:57.765370 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 11 00:27:57.768180 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 11 00:27:57.768336 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 11 00:27:57.771664 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 11 00:27:57.776716 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 11 00:27:57.780060 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 11 00:27:57.783698 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 11 00:27:57.786665 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 11 00:27:57.795260 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 11 00:27:57.797587 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 11 00:27:57.801465 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 11 00:27:57.805450 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 11 00:27:57.805939 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 11 00:27:57.808124 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 11 00:27:57.812969 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 11 00:27:57.815173 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 11 00:27:57.817500 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 11 00:27:57.836797 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 11 00:27:57.841472 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 11 00:27:57.842093 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 11 00:27:57.846245 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 11 00:27:57.847486 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 11 00:27:57.851471 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 11 00:27:57.857452 systemd-journald[1288]: Time spent on flushing to /var/log/journal/30707e11ee124794b8c5804f43149484 is 63.610ms for 990 entries.
Sep 11 00:27:57.857452 systemd-journald[1288]: System Journal (/var/log/journal/30707e11ee124794b8c5804f43149484) is 11.8M, max 2.6G, 2.6G free.
Sep 11 00:27:58.003326 systemd-journald[1288]: Received client request to flush runtime journal.
Sep 11 00:27:58.003378 kernel: loop0: detected capacity change from 0 to 113872
Sep 11 00:27:58.003406 systemd-journald[1288]: /var/log/journal/30707e11ee124794b8c5804f43149484/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
Sep 11 00:27:58.003425 systemd-journald[1288]: Rotating system journal.
Sep 11 00:27:57.856636 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 11 00:27:57.861113 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 11 00:27:57.863618 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 11 00:27:57.885884 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 11 00:27:57.888354 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 11 00:27:57.894969 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 11 00:27:57.917608 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 11 00:27:57.952518 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 11 00:27:57.959489 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 11 00:27:57.991694 systemd-tmpfiles[1333]: ACLs are not supported, ignoring.
Sep 11 00:27:57.991709 systemd-tmpfiles[1333]: ACLs are not supported, ignoring.
Sep 11 00:27:57.995103 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 11 00:27:58.004176 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 11 00:27:58.012237 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 11 00:27:58.032580 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 11 00:27:58.049407 kernel: loop1: detected capacity change from 0 to 146240
Sep 11 00:27:58.238404 kernel: loop2: detected capacity change from 0 to 224512
Sep 11 00:27:58.267419 kernel: loop3: detected capacity change from 0 to 28504
Sep 11 00:27:58.315511 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 11 00:27:58.386639 kernel: loop4: detected capacity change from 0 to 113872
Sep 11 00:27:58.400431 kernel: loop5: detected capacity change from 0 to 146240
Sep 11 00:27:58.415403 kernel: loop6: detected capacity change from 0 to 224512
Sep 11 00:27:58.455436 kernel: loop7: detected capacity change from 0 to 28504
Sep 11 00:27:58.468427 (sd-merge)[1346]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Sep 11 00:27:58.468835 (sd-merge)[1346]: Merged extensions into '/usr'.
Sep 11 00:27:58.489019 systemd[1]: Reload requested from client PID 1323 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 11 00:27:58.489034 systemd[1]: Reloading...
Sep 11 00:27:58.545434 zram_generator::config[1376]: No configuration found.
Sep 11 00:27:58.656763 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 11 00:27:58.736354 systemd[1]: Reloading finished in 246 ms.
Sep 11 00:27:58.759305 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 11 00:27:58.760913 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 11 00:27:58.769315 systemd[1]: Starting ensure-sysext.service...
Sep 11 00:27:58.774525 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 11 00:27:58.784933 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 11 00:27:58.801887 systemd[1]: Reload requested from client PID 1431 ('systemctl') (unit ensure-sysext.service)... Sep 11 00:27:58.801901 systemd[1]: Reloading... Sep 11 00:27:58.825194 systemd-tmpfiles[1432]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 11 00:27:58.825233 systemd-tmpfiles[1432]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 11 00:27:58.826307 systemd-tmpfiles[1432]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 11 00:27:58.826555 systemd-tmpfiles[1432]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 11 00:27:58.827244 systemd-tmpfiles[1432]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 11 00:27:58.829847 systemd-tmpfiles[1432]: ACLs are not supported, ignoring. Sep 11 00:27:58.829903 systemd-tmpfiles[1432]: ACLs are not supported, ignoring. Sep 11 00:27:58.831134 systemd-udevd[1433]: Using default interface naming scheme 'v255'. Sep 11 00:27:58.837125 systemd-tmpfiles[1432]: Detected autofs mount point /boot during canonicalization of boot. Sep 11 00:27:58.837140 systemd-tmpfiles[1432]: Skipping /boot Sep 11 00:27:58.850372 systemd-tmpfiles[1432]: Detected autofs mount point /boot during canonicalization of boot. Sep 11 00:27:58.851195 systemd-tmpfiles[1432]: Skipping /boot Sep 11 00:27:58.895409 zram_generator::config[1458]: No configuration found. 
Sep 11 00:27:59.135203 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 11 00:27:59.139424 kernel: mousedev: PS/2 mouse device common for all mice Sep 11 00:27:59.144490 kernel: hv_vmbus: registering driver hv_balloon Sep 11 00:27:59.147412 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Sep 11 00:27:59.159486 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#210 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Sep 11 00:27:59.188342 kernel: hv_vmbus: registering driver hyperv_fb Sep 11 00:27:59.188413 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Sep 11 00:27:59.190471 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Sep 11 00:27:59.190518 kernel: Console: switching to colour dummy device 80x25 Sep 11 00:27:59.194596 kernel: Console: switching to colour frame buffer device 128x48 Sep 11 00:27:59.352418 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 11 00:27:59.352726 systemd[1]: Reloading finished in 550 ms. Sep 11 00:27:59.365648 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 11 00:27:59.382120 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 11 00:27:59.429766 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:27:59.432582 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 11 00:27:59.436478 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 11 00:27:59.439622 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Sep 11 00:27:59.444256 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 11 00:27:59.451270 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 11 00:27:59.454960 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 11 00:27:59.458637 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 11 00:27:59.459116 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 11 00:27:59.464637 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 11 00:27:59.470485 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 11 00:27:59.480670 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 11 00:27:59.485788 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 11 00:27:59.489588 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:27:59.491826 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 11 00:27:59.492320 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 11 00:27:59.505972 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 11 00:27:59.506150 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 11 00:27:59.509992 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 11 00:27:59.510239 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Sep 11 00:27:59.526700 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:27:59.526912 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 11 00:27:59.528966 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 11 00:27:59.530407 ldconfig[1318]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 11 00:27:59.533588 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 11 00:27:59.537693 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 11 00:27:59.539525 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 11 00:27:59.539700 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 11 00:27:59.543461 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 11 00:27:59.545505 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:27:59.548126 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 11 00:27:59.562597 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:27:59.563773 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 11 00:27:59.568724 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Sep 11 00:27:59.572630 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 11 00:27:59.572878 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 11 00:27:59.573245 systemd[1]: Reached target time-set.target - System Time Set. Sep 11 00:27:59.575350 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:27:59.578197 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 11 00:27:59.579626 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 11 00:27:59.585454 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 11 00:27:59.588639 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 11 00:27:59.593263 systemd[1]: Finished ensure-sysext.service. Sep 11 00:27:59.642502 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 11 00:27:59.647506 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 11 00:27:59.650157 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 11 00:27:59.657742 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 11 00:27:59.668051 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 11 00:27:59.668231 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 11 00:27:59.672796 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Sep 11 00:27:59.673637 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 11 00:27:59.676750 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 11 00:27:59.677285 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 11 00:27:59.681296 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 11 00:27:59.682572 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 11 00:27:59.688011 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 11 00:27:59.701909 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 11 00:27:59.719678 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 11 00:27:59.720357 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 00:27:59.725308 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 11 00:27:59.732282 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Sep 11 00:27:59.732958 augenrules[1652]: No rules Sep 11 00:27:59.736789 systemd[1]: audit-rules.service: Deactivated successfully. Sep 11 00:27:59.737456 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 11 00:27:59.744565 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 11 00:27:59.750351 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 11 00:27:59.768186 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 11 00:27:59.768377 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 00:27:59.776490 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 11 00:27:59.777681 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Sep 11 00:27:59.795305 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 11 00:27:59.847619 systemd-resolved[1600]: Positive Trust Anchors: Sep 11 00:27:59.847818 systemd-resolved[1600]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 11 00:27:59.847861 systemd-resolved[1600]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 11 00:27:59.849657 systemd-networkd[1598]: lo: Link UP Sep 11 00:27:59.849663 systemd-networkd[1598]: lo: Gained carrier Sep 11 00:27:59.851505 systemd-networkd[1598]: Enumeration completed Sep 11 00:27:59.851662 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 11 00:27:59.853594 systemd-networkd[1598]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 11 00:27:59.853748 systemd-networkd[1598]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 11 00:27:59.854286 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 11 00:27:59.857496 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 11 00:27:59.861753 systemd-resolved[1600]: Using system hostname 'ci-4372.1.0-n-4da84ffec3'. 
Sep 11 00:27:59.862403 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Sep 11 00:27:59.868402 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Sep 11 00:27:59.868622 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52fca61a eth0: Data path switched to VF: enP30832s1 Sep 11 00:27:59.870323 systemd-networkd[1598]: enP30832s1: Link UP Sep 11 00:27:59.870436 systemd-networkd[1598]: eth0: Link UP Sep 11 00:27:59.870443 systemd-networkd[1598]: eth0: Gained carrier Sep 11 00:27:59.870461 systemd-networkd[1598]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 11 00:27:59.871127 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 11 00:27:59.872777 systemd[1]: Reached target network.target - Network. Sep 11 00:27:59.873350 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 11 00:27:59.877212 systemd-networkd[1598]: enP30832s1: Gained carrier Sep 11 00:27:59.879728 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 11 00:27:59.886417 systemd-networkd[1598]: eth0: DHCPv4 address 10.200.8.15/24, gateway 10.200.8.1 acquired from 168.63.129.16 Sep 11 00:27:59.910900 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 00:27:59.912619 systemd[1]: Reached target sysinit.target - System Initialization. Sep 11 00:27:59.916529 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 11 00:27:59.919454 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 11 00:27:59.921004 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 11 00:27:59.924538 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
Sep 11 00:27:59.927501 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 11 00:27:59.930442 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 11 00:27:59.931885 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 11 00:27:59.931913 systemd[1]: Reached target paths.target - Path Units. Sep 11 00:27:59.933087 systemd[1]: Reached target timers.target - Timer Units. Sep 11 00:27:59.936126 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 11 00:27:59.938749 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 11 00:27:59.943218 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 11 00:27:59.946552 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 11 00:27:59.949460 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 11 00:27:59.953751 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 11 00:27:59.955434 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 11 00:27:59.958931 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 11 00:27:59.962089 systemd[1]: Reached target sockets.target - Socket Units. Sep 11 00:27:59.963325 systemd[1]: Reached target basic.target - Basic System. Sep 11 00:27:59.964415 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 11 00:27:59.964441 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 11 00:27:59.966213 systemd[1]: Starting chronyd.service - NTP client/server... Sep 11 00:27:59.968229 systemd[1]: Starting containerd.service - containerd container runtime... 
Sep 11 00:27:59.973223 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 11 00:27:59.979840 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 11 00:27:59.983495 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 11 00:27:59.987579 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 11 00:27:59.991284 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 11 00:27:59.993355 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 11 00:27:59.996425 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 11 00:27:59.999503 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). Sep 11 00:28:00.004133 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Sep 11 00:28:00.004673 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Sep 11 00:28:00.005983 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 11 00:28:00.016294 jq[1682]: false Sep 11 00:28:00.017525 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 11 00:28:00.023292 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 11 00:28:00.028527 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 11 00:28:00.032140 KVP[1687]: KVP starting; pid is:1687 Sep 11 00:28:00.038499 systemd[1]: Starting systemd-logind.service - User Login Management... 
Sep 11 00:28:00.039745 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 11 00:28:00.040166 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 11 00:28:00.041307 google_oslogin_nss_cache[1684]: oslogin_cache_refresh[1684]: Refreshing passwd entry cache Sep 11 00:28:00.045891 kernel: hv_utils: KVP IC version 4.0 Sep 11 00:28:00.044118 systemd[1]: Starting update-engine.service - Update Engine... Sep 11 00:28:00.048421 oslogin_cache_refresh[1684]: Refreshing passwd entry cache Sep 11 00:28:00.050733 KVP[1687]: KVP LIC Version: 3.1 Sep 11 00:28:00.058249 extend-filesystems[1683]: Found /dev/nvme0n1p6 Sep 11 00:28:00.051044 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 11 00:28:00.067888 extend-filesystems[1683]: Found /dev/nvme0n1p9 Sep 11 00:28:00.070440 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 11 00:28:00.073165 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 11 00:28:00.073346 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 11 00:28:00.074897 extend-filesystems[1683]: Checking size of /dev/nvme0n1p9 Sep 11 00:28:00.074809 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 11 00:28:00.076518 oslogin_cache_refresh[1684]: Failure getting users, quitting Sep 11 00:28:00.078772 google_oslogin_nss_cache[1684]: oslogin_cache_refresh[1684]: Failure getting users, quitting Sep 11 00:28:00.078772 google_oslogin_nss_cache[1684]: oslogin_cache_refresh[1684]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Sep 11 00:28:00.078772 google_oslogin_nss_cache[1684]: oslogin_cache_refresh[1684]: Refreshing group entry cache Sep 11 00:28:00.076533 oslogin_cache_refresh[1684]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 11 00:28:00.076567 oslogin_cache_refresh[1684]: Refreshing group entry cache Sep 11 00:28:00.083294 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 11 00:28:00.085966 systemd[1]: motdgen.service: Deactivated successfully. Sep 11 00:28:00.086202 google_oslogin_nss_cache[1684]: oslogin_cache_refresh[1684]: Failure getting groups, quitting Sep 11 00:28:00.086240 google_oslogin_nss_cache[1684]: oslogin_cache_refresh[1684]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 11 00:28:00.086201 oslogin_cache_refresh[1684]: Failure getting groups, quitting Sep 11 00:28:00.086214 oslogin_cache_refresh[1684]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 11 00:28:00.088443 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 11 00:28:00.091620 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 11 00:28:00.091808 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 11 00:28:00.114451 update_engine[1698]: I20250911 00:28:00.110460 1698 main.cc:92] Flatcar Update Engine starting Sep 11 00:28:00.114895 jq[1700]: true Sep 11 00:28:00.118659 extend-filesystems[1683]: Old size kept for /dev/nvme0n1p9 Sep 11 00:28:00.123007 (ntainerd)[1713]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 11 00:28:00.123674 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 11 00:28:00.123877 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Sep 11 00:28:00.146052 (chronyd)[1676]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Sep 11 00:28:00.153985 tar[1708]: linux-amd64/LICENSE Sep 11 00:28:00.154183 tar[1708]: linux-amd64/helm Sep 11 00:28:00.166064 jq[1727]: true Sep 11 00:28:00.174189 chronyd[1732]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Sep 11 00:28:00.184468 chronyd[1732]: Timezone right/UTC failed leap second check, ignoring Sep 11 00:28:00.184621 chronyd[1732]: Loaded seccomp filter (level 2) Sep 11 00:28:00.188528 systemd[1]: Started chronyd.service - NTP client/server. Sep 11 00:28:00.208202 dbus-daemon[1679]: [system] SELinux support is enabled Sep 11 00:28:00.208326 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 11 00:28:00.216545 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 11 00:28:00.216577 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 11 00:28:00.219829 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 11 00:28:00.219850 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 11 00:28:00.224255 update_engine[1698]: I20250911 00:28:00.224102 1698 update_check_scheduler.cc:74] Next update check in 2m0s Sep 11 00:28:00.231530 systemd[1]: Started update-engine.service - Update Engine. Sep 11 00:28:00.240767 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 11 00:28:00.328182 systemd-logind[1697]: New seat seat0. 
Sep 11 00:28:00.330322 systemd-logind[1697]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 11 00:28:00.337569 systemd[1]: Started systemd-logind.service - User Login Management. Sep 11 00:28:00.359554 locksmithd[1737]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 11 00:28:00.390535 coreos-metadata[1678]: Sep 11 00:28:00.386 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 11 00:28:00.390535 coreos-metadata[1678]: Sep 11 00:28:00.390 INFO Fetch successful Sep 11 00:28:00.390535 coreos-metadata[1678]: Sep 11 00:28:00.390 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Sep 11 00:28:00.391118 bash[1761]: Updated "/home/core/.ssh/authorized_keys" Sep 11 00:28:00.391198 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 11 00:28:00.393823 coreos-metadata[1678]: Sep 11 00:28:00.393 INFO Fetch successful Sep 11 00:28:00.393823 coreos-metadata[1678]: Sep 11 00:28:00.393 INFO Fetching http://168.63.129.16/machine/8f336833-d5ce-4164-878f-51fa5dcbe498/ef12d5b3%2D5767%2D40bb%2Db42e%2D2d4456770b40.%5Fci%2D4372.1.0%2Dn%2D4da84ffec3?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Sep 11 00:28:00.395184 coreos-metadata[1678]: Sep 11 00:28:00.394 INFO Fetch successful Sep 11 00:28:00.395184 coreos-metadata[1678]: Sep 11 00:28:00.395 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Sep 11 00:28:00.395287 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 11 00:28:00.404398 coreos-metadata[1678]: Sep 11 00:28:00.403 INFO Fetch successful Sep 11 00:28:00.441915 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 11 00:28:00.445314 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Sep 11 00:28:00.538343 sshd_keygen[1712]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 11 00:28:00.565857 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 11 00:28:00.569984 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 11 00:28:00.588456 systemd[1]: issuegen.service: Deactivated successfully. Sep 11 00:28:00.588647 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 11 00:28:00.596731 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 11 00:28:00.616305 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 11 00:28:00.623635 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 11 00:28:00.628016 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 11 00:28:00.631639 systemd[1]: Reached target getty.target - Login Prompts. Sep 11 00:28:00.756923 containerd[1713]: time="2025-09-11T00:28:00Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 11 00:28:00.757557 containerd[1713]: time="2025-09-11T00:28:00.757526664Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Sep 11 00:28:00.768276 containerd[1713]: time="2025-09-11T00:28:00.768244095Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.142µs" Sep 11 00:28:00.768276 containerd[1713]: time="2025-09-11T00:28:00.768274619Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 11 00:28:00.768351 containerd[1713]: time="2025-09-11T00:28:00.768291929Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 11 00:28:00.768443 containerd[1713]: time="2025-09-11T00:28:00.768429286Z" level=info 
msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 11 00:28:00.768471 containerd[1713]: time="2025-09-11T00:28:00.768448105Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 11 00:28:00.768492 containerd[1713]: time="2025-09-11T00:28:00.768469733Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 11 00:28:00.768529 containerd[1713]: time="2025-09-11T00:28:00.768516714Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 11 00:28:00.768550 containerd[1713]: time="2025-09-11T00:28:00.768530123Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 11 00:28:00.768748 containerd[1713]: time="2025-09-11T00:28:00.768731452Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 11 00:28:00.768771 containerd[1713]: time="2025-09-11T00:28:00.768749012Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 11 00:28:00.768771 containerd[1713]: time="2025-09-11T00:28:00.768760743Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 11 00:28:00.768813 containerd[1713]: time="2025-09-11T00:28:00.768769540Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 11 00:28:00.768963 containerd[1713]: time="2025-09-11T00:28:00.768833199Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs 
type=io.containerd.snapshotter.v1 Sep 11 00:28:00.768988 containerd[1713]: time="2025-09-11T00:28:00.768980965Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 11 00:28:00.769045 containerd[1713]: time="2025-09-11T00:28:00.769029908Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 11 00:28:00.769067 containerd[1713]: time="2025-09-11T00:28:00.769044973Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 11 00:28:00.769086 containerd[1713]: time="2025-09-11T00:28:00.769071392Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 11 00:28:00.770117 containerd[1713]: time="2025-09-11T00:28:00.769850843Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 11 00:28:00.770117 containerd[1713]: time="2025-09-11T00:28:00.769931339Z" level=info msg="metadata content store policy set" policy=shared Sep 11 00:28:00.800030 containerd[1713]: time="2025-09-11T00:28:00.800000856Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 11 00:28:00.800096 containerd[1713]: time="2025-09-11T00:28:00.800061947Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 11 00:28:00.800096 containerd[1713]: time="2025-09-11T00:28:00.800080114Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 11 00:28:00.800096 containerd[1713]: time="2025-09-11T00:28:00.800093316Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 11 00:28:00.800166 containerd[1713]: time="2025-09-11T00:28:00.800107170Z" level=info msg="loading 
plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 11 00:28:00.800166 containerd[1713]: time="2025-09-11T00:28:00.800119011Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 11 00:28:00.800166 containerd[1713]: time="2025-09-11T00:28:00.800132134Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 11 00:28:00.800166 containerd[1713]: time="2025-09-11T00:28:00.800144761Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 11 00:28:00.800166 containerd[1713]: time="2025-09-11T00:28:00.800156854Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 11 00:28:00.800263 containerd[1713]: time="2025-09-11T00:28:00.800167128Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 11 00:28:00.800263 containerd[1713]: time="2025-09-11T00:28:00.800177655Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 11 00:28:00.800263 containerd[1713]: time="2025-09-11T00:28:00.800190243Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 11 00:28:00.800318 containerd[1713]: time="2025-09-11T00:28:00.800288176Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 11 00:28:00.800318 containerd[1713]: time="2025-09-11T00:28:00.800306884Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 11 00:28:00.800357 containerd[1713]: time="2025-09-11T00:28:00.800337037Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 11 00:28:00.800357 containerd[1713]: time="2025-09-11T00:28:00.800348989Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 11 00:28:00.800412 containerd[1713]: time="2025-09-11T00:28:00.800360341Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 11 00:28:00.800412 containerd[1713]: time="2025-09-11T00:28:00.800373794Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 11 00:28:00.800412 containerd[1713]: time="2025-09-11T00:28:00.800403034Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 11 00:28:00.800478 containerd[1713]: time="2025-09-11T00:28:00.800414116Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 11 00:28:00.800478 containerd[1713]: time="2025-09-11T00:28:00.800426674Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 11 00:28:00.800478 containerd[1713]: time="2025-09-11T00:28:00.800438712Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 11 00:28:00.800478 containerd[1713]: time="2025-09-11T00:28:00.800450155Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 11 00:28:00.800556 containerd[1713]: time="2025-09-11T00:28:00.800515994Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 11 00:28:00.800556 containerd[1713]: time="2025-09-11T00:28:00.800528945Z" level=info msg="Start snapshots syncer" Sep 11 00:28:00.800556 containerd[1713]: time="2025-09-11T00:28:00.800548217Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 11 00:28:00.801398 containerd[1713]: time="2025-09-11T00:28:00.800783345Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 11 00:28:00.801398 containerd[1713]: time="2025-09-11T00:28:00.800832313Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 11 00:28:00.801557 containerd[1713]: time="2025-09-11T00:28:00.800918058Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 11 00:28:00.801557 containerd[1713]: time="2025-09-11T00:28:00.801001721Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 11 00:28:00.801557 containerd[1713]: time="2025-09-11T00:28:00.801020271Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 11 00:28:00.801557 containerd[1713]: time="2025-09-11T00:28:00.801030614Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 11 00:28:00.801557 containerd[1713]: time="2025-09-11T00:28:00.801040827Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 11 00:28:00.801557 containerd[1713]: time="2025-09-11T00:28:00.801052810Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 11 00:28:00.801557 containerd[1713]: time="2025-09-11T00:28:00.801063341Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 11 00:28:00.801557 containerd[1713]: time="2025-09-11T00:28:00.801074174Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 11 00:28:00.801557 containerd[1713]: time="2025-09-11T00:28:00.801101286Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 11 00:28:00.801557 containerd[1713]: time="2025-09-11T00:28:00.801113386Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 11 00:28:00.801557 containerd[1713]: time="2025-09-11T00:28:00.801124027Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 11 00:28:00.801557 containerd[1713]: time="2025-09-11T00:28:00.801151181Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 11 00:28:00.801557 containerd[1713]: time="2025-09-11T00:28:00.801163270Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 11 00:28:00.801557 containerd[1713]: time="2025-09-11T00:28:00.801172254Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 11 00:28:00.801830 containerd[1713]: time="2025-09-11T00:28:00.801182016Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 11 00:28:00.801830 containerd[1713]: time="2025-09-11T00:28:00.801189370Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 11 00:28:00.801830 containerd[1713]: time="2025-09-11T00:28:00.801198596Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 11 00:28:00.801830 containerd[1713]: time="2025-09-11T00:28:00.801208861Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 11 00:28:00.801830 containerd[1713]: time="2025-09-11T00:28:00.801223009Z" level=info msg="runtime interface created" Sep 11 00:28:00.801830 containerd[1713]: time="2025-09-11T00:28:00.801228206Z" level=info msg="created NRI interface" Sep 11 00:28:00.801830 containerd[1713]: time="2025-09-11T00:28:00.801235700Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 11 00:28:00.801830 containerd[1713]: time="2025-09-11T00:28:00.801246391Z" level=info msg="Connect containerd service" Sep 11 00:28:00.801830 containerd[1713]: time="2025-09-11T00:28:00.801273626Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 11 00:28:00.801998 
containerd[1713]: time="2025-09-11T00:28:00.801888815Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 11 00:28:00.889299 tar[1708]: linux-amd64/README.md Sep 11 00:28:00.906653 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 11 00:28:01.074081 containerd[1713]: time="2025-09-11T00:28:01.072523364Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 11 00:28:01.074081 containerd[1713]: time="2025-09-11T00:28:01.072574619Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 11 00:28:01.074081 containerd[1713]: time="2025-09-11T00:28:01.072595041Z" level=info msg="Start subscribing containerd event" Sep 11 00:28:01.074081 containerd[1713]: time="2025-09-11T00:28:01.072619399Z" level=info msg="Start recovering state" Sep 11 00:28:01.074081 containerd[1713]: time="2025-09-11T00:28:01.072698405Z" level=info msg="Start event monitor" Sep 11 00:28:01.074081 containerd[1713]: time="2025-09-11T00:28:01.072708907Z" level=info msg="Start cni network conf syncer for default" Sep 11 00:28:01.074081 containerd[1713]: time="2025-09-11T00:28:01.072716175Z" level=info msg="Start streaming server" Sep 11 00:28:01.074081 containerd[1713]: time="2025-09-11T00:28:01.072724447Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 11 00:28:01.074081 containerd[1713]: time="2025-09-11T00:28:01.072732027Z" level=info msg="runtime interface starting up..." Sep 11 00:28:01.074081 containerd[1713]: time="2025-09-11T00:28:01.072738488Z" level=info msg="starting plugins..." 
Sep 11 00:28:01.074081 containerd[1713]: time="2025-09-11T00:28:01.072749030Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 11 00:28:01.074081 containerd[1713]: time="2025-09-11T00:28:01.072837561Z" level=info msg="containerd successfully booted in 0.316292s" Sep 11 00:28:01.073494 systemd[1]: Started containerd.service - containerd container runtime. Sep 11 00:28:01.487535 systemd-networkd[1598]: eth0: Gained IPv6LL Sep 11 00:28:01.489904 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 11 00:28:01.493892 systemd[1]: Reached target network-online.target - Network is Online. Sep 11 00:28:01.497029 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:28:01.500590 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 11 00:28:01.510890 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Sep 11 00:28:01.543775 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 11 00:28:01.546487 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. 
Sep 11 00:28:02.209170 waagent[1828]: 2025-09-11T00:28:02.209097Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Sep 11 00:28:02.212955 waagent[1828]: 2025-09-11T00:28:02.212454Z INFO Daemon Daemon OS: flatcar 4372.1.0 Sep 11 00:28:02.214519 waagent[1828]: 2025-09-11T00:28:02.214476Z INFO Daemon Daemon Python: 3.11.12 Sep 11 00:28:02.216673 waagent[1828]: 2025-09-11T00:28:02.216238Z INFO Daemon Daemon Run daemon Sep 11 00:28:02.218716 waagent[1828]: 2025-09-11T00:28:02.217640Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4372.1.0' Sep 11 00:28:02.220231 waagent[1828]: 2025-09-11T00:28:02.220192Z INFO Daemon Daemon Using waagent for provisioning Sep 11 00:28:02.225955 waagent[1828]: 2025-09-11T00:28:02.223609Z INFO Daemon Daemon Activate resource disk Sep 11 00:28:02.225955 waagent[1828]: 2025-09-11T00:28:02.225334Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Sep 11 00:28:02.230654 waagent[1828]: 2025-09-11T00:28:02.230020Z INFO Daemon Daemon Found device: None Sep 11 00:28:02.231981 waagent[1828]: 2025-09-11T00:28:02.231944Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Sep 11 00:28:02.234179 waagent[1828]: 2025-09-11T00:28:02.234147Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Sep 11 00:28:02.238964 waagent[1828]: 2025-09-11T00:28:02.238930Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 11 00:28:02.240504 waagent[1828]: 2025-09-11T00:28:02.240471Z INFO Daemon Daemon Running default provisioning handler Sep 11 00:28:02.251694 waagent[1828]: 2025-09-11T00:28:02.251641Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Sep 11 00:28:02.256633 waagent[1828]: 2025-09-11T00:28:02.256045Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 11 00:28:02.260633 waagent[1828]: 2025-09-11T00:28:02.260439Z INFO Daemon Daemon cloud-init is enabled: False Sep 11 00:28:02.263446 waagent[1828]: 2025-09-11T00:28:02.262533Z INFO Daemon Daemon Copying ovf-env.xml Sep 11 00:28:02.289739 waagent[1828]: 2025-09-11T00:28:02.289553Z INFO Daemon Daemon Successfully mounted dvd Sep 11 00:28:02.311498 waagent[1828]: 2025-09-11T00:28:02.311460Z INFO Daemon Daemon Detect protocol endpoint Sep 11 00:28:02.313091 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Sep 11 00:28:02.314152 waagent[1828]: 2025-09-11T00:28:02.313609Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 11 00:28:02.316355 waagent[1828]: 2025-09-11T00:28:02.315834Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Sep 11 00:28:02.319140 waagent[1828]: 2025-09-11T00:28:02.318636Z INFO Daemon Daemon Test for route to 168.63.129.16 Sep 11 00:28:02.320621 waagent[1828]: 2025-09-11T00:28:02.320571Z INFO Daemon Daemon Route to 168.63.129.16 exists Sep 11 00:28:02.323963 waagent[1828]: 2025-09-11T00:28:02.322528Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Sep 11 00:28:02.338279 waagent[1828]: 2025-09-11T00:28:02.338229Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Sep 11 00:28:02.341205 waagent[1828]: 2025-09-11T00:28:02.341179Z INFO Daemon Daemon Wire protocol version:2012-11-30 Sep 11 00:28:02.344074 waagent[1828]: 2025-09-11T00:28:02.343118Z INFO Daemon Daemon Server preferred version:2015-04-05 Sep 11 00:28:02.404497 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:28:02.407639 systemd[1]: Reached target multi-user.target - Multi-User System. 
Sep 11 00:28:02.411801 systemd[1]: Startup finished in 3.301s (kernel) + 7.099s (initrd) + 5.878s (userspace) = 16.279s. Sep 11 00:28:02.421700 (kubelet)[1845]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:28:02.432676 waagent[1828]: 2025-09-11T00:28:02.432079Z INFO Daemon Daemon Initializing goal state during protocol detection Sep 11 00:28:02.433927 waagent[1828]: 2025-09-11T00:28:02.433889Z INFO Daemon Daemon Forcing an update of the goal state. Sep 11 00:28:02.446076 waagent[1828]: 2025-09-11T00:28:02.443343Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 11 00:28:02.462293 waagent[1828]: 2025-09-11T00:28:02.462233Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Sep 11 00:28:02.463062 waagent[1828]: 2025-09-11T00:28:02.463031Z INFO Daemon Sep 11 00:28:02.463459 waagent[1828]: 2025-09-11T00:28:02.463437Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 5ec5d1ec-4a8a-495c-a926-f65bb652dd59 eTag: 11900816572156301464 source: Fabric] Sep 11 00:28:02.463949 waagent[1828]: 2025-09-11T00:28:02.463925Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Sep 11 00:28:02.464330 waagent[1828]: 2025-09-11T00:28:02.464310Z INFO Daemon Sep 11 00:28:02.464550 waagent[1828]: 2025-09-11T00:28:02.464531Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Sep 11 00:28:02.475370 waagent[1828]: 2025-09-11T00:28:02.475338Z INFO Daemon Daemon Downloading artifacts profile blob Sep 11 00:28:02.513087 login[1791]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 11 00:28:02.514775 login[1792]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 11 00:28:02.521738 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Sep 11 00:28:02.523629 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 11 00:28:02.535131 systemd-logind[1697]: New session 2 of user core. Sep 11 00:28:02.543433 systemd-logind[1697]: New session 1 of user core. Sep 11 00:28:02.553804 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 11 00:28:02.557196 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 11 00:28:02.566565 (systemd)[1854]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 11 00:28:02.570142 systemd-logind[1697]: New session c1 of user core. Sep 11 00:28:02.580435 waagent[1828]: 2025-09-11T00:28:02.580370Z INFO Daemon Downloaded certificate {'thumbprint': '0756EA025C77677AC77196927342CD2406EF400A', 'hasPrivateKey': True} Sep 11 00:28:02.585772 waagent[1828]: 2025-09-11T00:28:02.585741Z INFO Daemon Fetch goal state completed Sep 11 00:28:02.592531 waagent[1828]: 2025-09-11T00:28:02.592505Z INFO Daemon Daemon Starting provisioning Sep 11 00:28:02.593033 waagent[1828]: 2025-09-11T00:28:02.593007Z INFO Daemon Daemon Handle ovf-env.xml. Sep 11 00:28:02.593242 waagent[1828]: 2025-09-11T00:28:02.593226Z INFO Daemon Daemon Set hostname [ci-4372.1.0-n-4da84ffec3] Sep 11 00:28:02.599542 waagent[1828]: 2025-09-11T00:28:02.599279Z INFO Daemon Daemon Publish hostname [ci-4372.1.0-n-4da84ffec3] Sep 11 00:28:02.600053 waagent[1828]: 2025-09-11T00:28:02.600021Z INFO Daemon Daemon Examine /proc/net/route for primary interface Sep 11 00:28:02.600330 waagent[1828]: 2025-09-11T00:28:02.600308Z INFO Daemon Daemon Primary interface is [eth0] Sep 11 00:28:02.613932 systemd-networkd[1598]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 11 00:28:02.615460 systemd-networkd[1598]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 11 00:28:02.615588 systemd-networkd[1598]: eth0: DHCP lease lost Sep 11 00:28:02.616239 waagent[1828]: 2025-09-11T00:28:02.616202Z INFO Daemon Daemon Create user account if not exists Sep 11 00:28:02.617001 waagent[1828]: 2025-09-11T00:28:02.616968Z INFO Daemon Daemon User core already exists, skip useradd Sep 11 00:28:02.617342 waagent[1828]: 2025-09-11T00:28:02.617323Z INFO Daemon Daemon Configure sudoer Sep 11 00:28:02.621305 waagent[1828]: 2025-09-11T00:28:02.621256Z INFO Daemon Daemon Configure sshd Sep 11 00:28:02.625714 waagent[1828]: 2025-09-11T00:28:02.625665Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Sep 11 00:28:02.627394 waagent[1828]: 2025-09-11T00:28:02.626282Z INFO Daemon Daemon Deploy ssh public key. Sep 11 00:28:02.650435 systemd-networkd[1598]: eth0: DHCPv4 address 10.200.8.15/24, gateway 10.200.8.1 acquired from 168.63.129.16 Sep 11 00:28:02.764349 systemd[1854]: Queued start job for default target default.target. Sep 11 00:28:02.770187 systemd[1854]: Created slice app.slice - User Application Slice. Sep 11 00:28:02.770217 systemd[1854]: Reached target paths.target - Paths. Sep 11 00:28:02.770248 systemd[1854]: Reached target timers.target - Timers. Sep 11 00:28:02.773488 systemd[1854]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 11 00:28:02.788624 systemd[1854]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 11 00:28:02.788713 systemd[1854]: Reached target sockets.target - Sockets. Sep 11 00:28:02.788749 systemd[1854]: Reached target basic.target - Basic System. Sep 11 00:28:02.788811 systemd[1854]: Reached target default.target - Main User Target. Sep 11 00:28:02.788833 systemd[1854]: Startup finished in 208ms. Sep 11 00:28:02.788922 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 11 00:28:02.793529 systemd[1]: Started session-1.scope - Session 1 of User core. 
Sep 11 00:28:02.794222 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 11 00:28:03.042521 kubelet[1845]: E0911 00:28:03.042424 1845 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:28:03.044273 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:28:03.044416 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:28:03.044704 systemd[1]: kubelet.service: Consumed 950ms CPU time, 262.4M memory peak. Sep 11 00:28:03.701250 waagent[1828]: 2025-09-11T00:28:03.701182Z INFO Daemon Daemon Provisioning complete Sep 11 00:28:03.712841 waagent[1828]: 2025-09-11T00:28:03.712807Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Sep 11 00:28:03.714446 waagent[1828]: 2025-09-11T00:28:03.714415Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Sep 11 00:28:03.716730 waagent[1828]: 2025-09-11T00:28:03.716702Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Sep 11 00:28:03.819662 waagent[1900]: 2025-09-11T00:28:03.819596Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Sep 11 00:28:03.819990 waagent[1900]: 2025-09-11T00:28:03.819698Z INFO ExtHandler ExtHandler OS: flatcar 4372.1.0 Sep 11 00:28:03.819990 waagent[1900]: 2025-09-11T00:28:03.819740Z INFO ExtHandler ExtHandler Python: 3.11.12 Sep 11 00:28:03.819990 waagent[1900]: 2025-09-11T00:28:03.819779Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Sep 11 00:28:03.833122 waagent[1900]: 2025-09-11T00:28:03.833073Z INFO ExtHandler ExtHandler Distro: flatcar-4372.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Sep 11 00:28:03.833254 waagent[1900]: 2025-09-11T00:28:03.833229Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 11 00:28:03.833316 waagent[1900]: 2025-09-11T00:28:03.833281Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 11 00:28:03.839580 waagent[1900]: 2025-09-11T00:28:03.839532Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 11 00:28:03.844576 waagent[1900]: 2025-09-11T00:28:03.844544Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Sep 11 00:28:03.844897 waagent[1900]: 2025-09-11T00:28:03.844865Z INFO ExtHandler Sep 11 00:28:03.844947 waagent[1900]: 2025-09-11T00:28:03.844919Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 2eb405c9-e963-43d5-8632-d8419b67bf2a eTag: 11900816572156301464 source: Fabric] Sep 11 00:28:03.845154 waagent[1900]: 2025-09-11T00:28:03.845125Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Sep 11 00:28:03.845515 waagent[1900]: 2025-09-11T00:28:03.845485Z INFO ExtHandler Sep 11 00:28:03.845564 waagent[1900]: 2025-09-11T00:28:03.845529Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Sep 11 00:28:03.852308 waagent[1900]: 2025-09-11T00:28:03.852276Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Sep 11 00:28:03.933192 waagent[1900]: 2025-09-11T00:28:03.933144Z INFO ExtHandler Downloaded certificate {'thumbprint': '0756EA025C77677AC77196927342CD2406EF400A', 'hasPrivateKey': True} Sep 11 00:28:03.933557 waagent[1900]: 2025-09-11T00:28:03.933528Z INFO ExtHandler Fetch goal state completed Sep 11 00:28:03.949330 waagent[1900]: 2025-09-11T00:28:03.949285Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025) Sep 11 00:28:03.953565 waagent[1900]: 2025-09-11T00:28:03.953492Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1900 Sep 11 00:28:03.953641 waagent[1900]: 2025-09-11T00:28:03.953612Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Sep 11 00:28:03.953877 waagent[1900]: 2025-09-11T00:28:03.953854Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Sep 11 00:28:03.954901 waagent[1900]: 2025-09-11T00:28:03.954867Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4372.1.0', '', 'Flatcar Container Linux by Kinvolk'] Sep 11 00:28:03.955189 waagent[1900]: 2025-09-11T00:28:03.955163Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4372.1.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Sep 11 00:28:03.955299 waagent[1900]: 2025-09-11T00:28:03.955278Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Sep 11 00:28:03.955730 waagent[1900]: 2025-09-11T00:28:03.955697Z INFO ExtHandler ExtHandler 
Starting setup for Persistent firewall rules Sep 11 00:28:03.962478 waagent[1900]: 2025-09-11T00:28:03.962455Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 11 00:28:03.962597 waagent[1900]: 2025-09-11T00:28:03.962577Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 11 00:28:03.968238 waagent[1900]: 2025-09-11T00:28:03.967883Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Sep 11 00:28:03.973378 systemd[1]: Reload requested from client PID 1915 ('systemctl') (unit waagent.service)... Sep 11 00:28:03.973610 systemd[1]: Reloading... Sep 11 00:28:04.056464 zram_generator::config[1959]: No configuration found. Sep 11 00:28:04.136161 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 11 00:28:04.234970 systemd[1]: Reloading finished in 261 ms. Sep 11 00:28:04.252426 waagent[1900]: 2025-09-11T00:28:04.250593Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Sep 11 00:28:04.252426 waagent[1900]: 2025-09-11T00:28:04.250734Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Sep 11 00:28:04.372412 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#250 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Sep 11 00:28:04.465865 waagent[1900]: 2025-09-11T00:28:04.465799Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Sep 11 00:28:04.466127 waagent[1900]: 2025-09-11T00:28:04.466102Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. 
cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Sep 11 00:28:04.466875 waagent[1900]: 2025-09-11T00:28:04.466829Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 11 00:28:04.467271 waagent[1900]: 2025-09-11T00:28:04.467116Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 11 00:28:04.467271 waagent[1900]: 2025-09-11T00:28:04.467211Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 11 00:28:04.467332 waagent[1900]: 2025-09-11T00:28:04.467276Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Sep 11 00:28:04.467734 waagent[1900]: 2025-09-11T00:28:04.467705Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Sep 11 00:28:04.467895 waagent[1900]: 2025-09-11T00:28:04.467868Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 11 00:28:04.468047 waagent[1900]: 2025-09-11T00:28:04.468027Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 11 00:28:04.468137 waagent[1900]: 2025-09-11T00:28:04.468116Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Sep 11 00:28:04.468279 waagent[1900]: 2025-09-11T00:28:04.468259Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 11 00:28:04.468279 waagent[1900]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 11 00:28:04.468279 waagent[1900]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Sep 11 00:28:04.468279 waagent[1900]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 11 00:28:04.468279 waagent[1900]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 11 00:28:04.468279 waagent[1900]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 11 00:28:04.468279 waagent[1900]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 11 00:28:04.468916 waagent[1900]: 2025-09-11T00:28:04.468893Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 11 00:28:04.469257 waagent[1900]: 2025-09-11T00:28:04.469213Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 11 00:28:04.469468 waagent[1900]: 2025-09-11T00:28:04.469440Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Sep 11 00:28:04.469593 waagent[1900]: 2025-09-11T00:28:04.469571Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Sep 11 00:28:04.469792 waagent[1900]: 2025-09-11T00:28:04.469770Z INFO EnvHandler ExtHandler Configure routes
Sep 11 00:28:04.470138 waagent[1900]: 2025-09-11T00:28:04.470097Z INFO EnvHandler ExtHandler Gateway:None
Sep 11 00:28:04.470869 waagent[1900]: 2025-09-11T00:28:04.470675Z INFO EnvHandler ExtHandler Routes:None
Sep 11 00:28:04.482708 waagent[1900]: 2025-09-11T00:28:04.482666Z INFO MonitorHandler ExtHandler Network interfaces:
Sep 11 00:28:04.482708 waagent[1900]: Executing ['ip', '-a', '-o', 'link']:
Sep 11 00:28:04.482708 waagent[1900]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Sep 11 00:28:04.482708 waagent[1900]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:fc:a6:1a brd ff:ff:ff:ff:ff:ff\ alias Network Device
Sep 11 00:28:04.482708 waagent[1900]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:fc:a6:1a brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0
Sep 11 00:28:04.482708 waagent[1900]: Executing ['ip', '-4', '-a', '-o', 'address']:
Sep 11 00:28:04.482708 waagent[1900]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Sep 11 00:28:04.482708 waagent[1900]: 2: eth0 inet 10.200.8.15/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Sep 11 00:28:04.482708 waagent[1900]: Executing ['ip', '-6', '-a', '-o', 'address']:
Sep 11 00:28:04.482708 waagent[1900]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Sep 11 00:28:04.482708 waagent[1900]: 2: eth0 inet6 fe80::7e1e:52ff:fefc:a61a/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Sep 11 00:28:04.512697 waagent[1900]: 2025-09-11T00:28:04.512634Z INFO ExtHandler ExtHandler
Sep 11 00:28:04.512833 waagent[1900]: 2025-09-11T00:28:04.512804Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 8af4a27e-5d03-451a-b1f4-1375b32ed0f2 correlation 4384dfff-ef2d-48f3-bf4e-b76b4453c689 created: 2025-09-11T00:27:32.156494Z]
Sep 11 00:28:04.513218 waagent[1900]: 2025-09-11T00:28:04.513192Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Sep 11 00:28:04.513823 waagent[1900]: 2025-09-11T00:28:04.513789Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms]
Sep 11 00:28:04.519686 waagent[1900]: 2025-09-11T00:28:04.519643Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric:
Sep 11 00:28:04.519686 waagent[1900]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Sep 11 00:28:04.519686 waagent[1900]: pkts bytes target prot opt in out source destination
Sep 11 00:28:04.519686 waagent[1900]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Sep 11 00:28:04.519686 waagent[1900]: pkts bytes target prot opt in out source destination
Sep 11 00:28:04.519686 waagent[1900]: Chain OUTPUT (policy ACCEPT 6 packets, 1144 bytes)
Sep 11 00:28:04.519686 waagent[1900]: pkts bytes target prot opt in out source destination
Sep 11 00:28:04.519686 waagent[1900]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Sep 11 00:28:04.519686 waagent[1900]: 16 1394 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Sep 11 00:28:04.519686 waagent[1900]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Sep 11 00:28:04.522646 waagent[1900]: 2025-09-11T00:28:04.522610Z INFO EnvHandler ExtHandler Current Firewall rules:
Sep 11 00:28:04.522646 waagent[1900]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Sep 11 00:28:04.522646 waagent[1900]: pkts bytes target prot opt in out source destination
Sep 11 00:28:04.522646 waagent[1900]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Sep 11 00:28:04.522646 waagent[1900]: pkts bytes target prot opt in out source destination
Sep 11 00:28:04.522646 waagent[1900]: Chain OUTPUT (policy ACCEPT 6 packets, 1144 bytes)
Sep 11 00:28:04.522646 waagent[1900]: pkts bytes target prot opt in out source destination
Sep 11 00:28:04.522646 waagent[1900]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Sep 11 00:28:04.522646 waagent[1900]: 19 1929 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Sep 11 00:28:04.522646 waagent[1900]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Sep 11 00:28:04.544622 waagent[1900]: 2025-09-11T00:28:04.544580Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command
Sep 11 00:28:04.544622 waagent[1900]: Try `iptables -h' or 'iptables --help' for more information.)
Sep 11 00:28:04.544907 waagent[1900]: 2025-09-11T00:28:04.544876Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: AB35C46C-4991-4576-A87E-B2D32BFBC4E0;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;]
Sep 11 00:28:13.295163 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 11 00:28:13.296666 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 11 00:28:13.779438 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 11 00:28:13.784669 (kubelet)[2051]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 11 00:28:13.826490 kubelet[2051]: E0911 00:28:13.826454 2051 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 11 00:28:13.829561 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 11 00:28:13.829692 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 11 00:28:13.830036 systemd[1]: kubelet.service: Consumed 136ms CPU time, 110.5M memory peak.
Sep 11 00:28:18.714098 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 11 00:28:18.715287 systemd[1]: Started sshd@0-10.200.8.15:22-10.200.16.10:54474.service - OpenSSH per-connection server daemon (10.200.16.10:54474).
Sep 11 00:28:19.377440 sshd[2059]: Accepted publickey for core from 10.200.16.10 port 54474 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA
Sep 11 00:28:19.378597 sshd-session[2059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:28:19.382462 systemd-logind[1697]: New session 3 of user core.
Sep 11 00:28:19.393557 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 11 00:28:19.943535 systemd[1]: Started sshd@1-10.200.8.15:22-10.200.16.10:58554.service - OpenSSH per-connection server daemon (10.200.16.10:58554).
Sep 11 00:28:20.586444 sshd[2064]: Accepted publickey for core from 10.200.16.10 port 58554 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA
Sep 11 00:28:20.587531 sshd-session[2064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:28:20.591228 systemd-logind[1697]: New session 4 of user core.
Sep 11 00:28:20.597540 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 11 00:28:21.034597 sshd[2066]: Connection closed by 10.200.16.10 port 58554
Sep 11 00:28:21.035104 sshd-session[2064]: pam_unix(sshd:session): session closed for user core
Sep 11 00:28:21.038217 systemd[1]: sshd@1-10.200.8.15:22-10.200.16.10:58554.service: Deactivated successfully.
Sep 11 00:28:21.039784 systemd[1]: session-4.scope: Deactivated successfully.
Sep 11 00:28:21.040531 systemd-logind[1697]: Session 4 logged out. Waiting for processes to exit.
Sep 11 00:28:21.041749 systemd-logind[1697]: Removed session 4.
Sep 11 00:28:21.164003 systemd[1]: Started sshd@2-10.200.8.15:22-10.200.16.10:58570.service - OpenSSH per-connection server daemon (10.200.16.10:58570).
Sep 11 00:28:21.809782 sshd[2072]: Accepted publickey for core from 10.200.16.10 port 58570 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA
Sep 11 00:28:21.810863 sshd-session[2072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:28:21.815142 systemd-logind[1697]: New session 5 of user core.
Sep 11 00:28:21.821533 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 11 00:28:22.259772 sshd[2074]: Connection closed by 10.200.16.10 port 58570
Sep 11 00:28:22.260627 sshd-session[2072]: pam_unix(sshd:session): session closed for user core
Sep 11 00:28:22.263810 systemd[1]: sshd@2-10.200.8.15:22-10.200.16.10:58570.service: Deactivated successfully.
Sep 11 00:28:22.265180 systemd[1]: session-5.scope: Deactivated successfully.
Sep 11 00:28:22.265874 systemd-logind[1697]: Session 5 logged out. Waiting for processes to exit.
Sep 11 00:28:22.267280 systemd-logind[1697]: Removed session 5.
Sep 11 00:28:22.373930 systemd[1]: Started sshd@3-10.200.8.15:22-10.200.16.10:58572.service - OpenSSH per-connection server daemon (10.200.16.10:58572).
Sep 11 00:28:23.027611 sshd[2080]: Accepted publickey for core from 10.200.16.10 port 58572 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA
Sep 11 00:28:23.028674 sshd-session[2080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:28:23.032464 systemd-logind[1697]: New session 6 of user core.
Sep 11 00:28:23.039517 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 11 00:28:23.478581 sshd[2082]: Connection closed by 10.200.16.10 port 58572
Sep 11 00:28:23.479056 sshd-session[2080]: pam_unix(sshd:session): session closed for user core
Sep 11 00:28:23.481624 systemd[1]: sshd@3-10.200.8.15:22-10.200.16.10:58572.service: Deactivated successfully.
Sep 11 00:28:23.483140 systemd[1]: session-6.scope: Deactivated successfully.
Sep 11 00:28:23.483825 systemd-logind[1697]: Session 6 logged out. Waiting for processes to exit.
Sep 11 00:28:23.485595 systemd-logind[1697]: Removed session 6.
Sep 11 00:28:23.594939 systemd[1]: Started sshd@4-10.200.8.15:22-10.200.16.10:58588.service - OpenSSH per-connection server daemon (10.200.16.10:58588).
Sep 11 00:28:23.971489 chronyd[1732]: Selected source PHC0
Sep 11 00:28:24.080153 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 11 00:28:24.081660 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 11 00:28:24.240130 sshd[2088]: Accepted publickey for core from 10.200.16.10 port 58588 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA
Sep 11 00:28:24.241204 sshd-session[2088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:28:24.245822 systemd-logind[1697]: New session 7 of user core.
Sep 11 00:28:24.250963 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 11 00:28:24.676376 sudo[2094]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 11 00:28:24.676617 sudo[2094]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 11 00:28:24.693360 sudo[2094]: pam_unix(sudo:session): session closed for user root
Sep 11 00:28:24.797289 sshd[2093]: Connection closed by 10.200.16.10 port 58588
Sep 11 00:28:24.797956 sshd-session[2088]: pam_unix(sshd:session): session closed for user core
Sep 11 00:28:24.800750 systemd[1]: sshd@4-10.200.8.15:22-10.200.16.10:58588.service: Deactivated successfully.
Sep 11 00:28:24.802356 systemd[1]: session-7.scope: Deactivated successfully.
Sep 11 00:28:24.804185 systemd-logind[1697]: Session 7 logged out. Waiting for processes to exit.
Sep 11 00:28:24.805009 systemd-logind[1697]: Removed session 7.
Sep 11 00:28:24.910061 systemd[1]: Started sshd@5-10.200.8.15:22-10.200.16.10:58590.service - OpenSSH per-connection server daemon (10.200.16.10:58590).
Sep 11 00:28:25.199348 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 11 00:28:25.202529 (kubelet)[2107]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 11 00:28:25.236435 kubelet[2107]: E0911 00:28:25.236378 2107 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 11 00:28:25.238123 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 11 00:28:25.238252 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 11 00:28:25.238574 systemd[1]: kubelet.service: Consumed 126ms CPU time, 110.1M memory peak.
Sep 11 00:28:25.555726 sshd[2100]: Accepted publickey for core from 10.200.16.10 port 58590 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA
Sep 11 00:28:25.556911 sshd-session[2100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:28:25.561748 systemd-logind[1697]: New session 8 of user core.
Sep 11 00:28:25.567528 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 11 00:28:25.904566 sudo[2116]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 11 00:28:25.904796 sudo[2116]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 11 00:28:25.911911 sudo[2116]: pam_unix(sudo:session): session closed for user root
Sep 11 00:28:25.915968 sudo[2115]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 11 00:28:25.916186 sudo[2115]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 11 00:28:25.923900 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 11 00:28:25.956987 augenrules[2138]: No rules
Sep 11 00:28:25.957978 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 11 00:28:25.958143 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 11 00:28:25.959073 sudo[2115]: pam_unix(sudo:session): session closed for user root
Sep 11 00:28:26.065154 sshd[2114]: Connection closed by 10.200.16.10 port 58590
Sep 11 00:28:26.065584 sshd-session[2100]: pam_unix(sshd:session): session closed for user core
Sep 11 00:28:26.067886 systemd[1]: sshd@5-10.200.8.15:22-10.200.16.10:58590.service: Deactivated successfully.
Sep 11 00:28:26.069448 systemd[1]: session-8.scope: Deactivated successfully.
Sep 11 00:28:26.071037 systemd-logind[1697]: Session 8 logged out. Waiting for processes to exit.
Sep 11 00:28:26.072012 systemd-logind[1697]: Removed session 8.
Sep 11 00:28:26.180967 systemd[1]: Started sshd@6-10.200.8.15:22-10.200.16.10:58604.service - OpenSSH per-connection server daemon (10.200.16.10:58604).
Sep 11 00:28:26.829427 sshd[2147]: Accepted publickey for core from 10.200.16.10 port 58604 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA
Sep 11 00:28:26.830525 sshd-session[2147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:28:26.834459 systemd-logind[1697]: New session 9 of user core.
Sep 11 00:28:26.841502 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 11 00:28:27.177467 sudo[2150]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 11 00:28:27.177701 sudo[2150]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 11 00:28:27.564361 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 11 00:28:27.575618 (dockerd)[2168]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 11 00:28:27.880285 dockerd[2168]: time="2025-09-11T00:28:27.880167840Z" level=info msg="Starting up"
Sep 11 00:28:27.881574 dockerd[2168]: time="2025-09-11T00:28:27.881529517Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 11 00:28:27.930773 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3353685315-merged.mount: Deactivated successfully.
Sep 11 00:28:28.033212 dockerd[2168]: time="2025-09-11T00:28:28.033048427Z" level=info msg="Loading containers: start."
Sep 11 00:28:28.048432 kernel: Initializing XFRM netlink socket
Sep 11 00:28:28.227014 systemd-networkd[1598]: docker0: Link UP
Sep 11 00:28:28.244224 dockerd[2168]: time="2025-09-11T00:28:28.244188227Z" level=info msg="Loading containers: done."
Sep 11 00:28:28.255310 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4000478741-merged.mount: Deactivated successfully.
Sep 11 00:28:28.268753 dockerd[2168]: time="2025-09-11T00:28:28.268723314Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 11 00:28:28.268877 dockerd[2168]: time="2025-09-11T00:28:28.268795623Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Sep 11 00:28:28.268905 dockerd[2168]: time="2025-09-11T00:28:28.268884692Z" level=info msg="Initializing buildkit"
Sep 11 00:28:28.312080 dockerd[2168]: time="2025-09-11T00:28:28.312040817Z" level=info msg="Completed buildkit initialization"
Sep 11 00:28:28.318007 dockerd[2168]: time="2025-09-11T00:28:28.317973842Z" level=info msg="Daemon has completed initialization"
Sep 11 00:28:28.318422 dockerd[2168]: time="2025-09-11T00:28:28.318110173Z" level=info msg="API listen on /run/docker.sock"
Sep 11 00:28:28.318172 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 11 00:28:29.452424 containerd[1713]: time="2025-09-11T00:28:29.452358808Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Sep 11 00:28:30.392564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3867988704.mount: Deactivated successfully.
Sep 11 00:28:31.755100 containerd[1713]: time="2025-09-11T00:28:31.755052037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:28:31.758431 containerd[1713]: time="2025-09-11T00:28:31.758393715Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837924"
Sep 11 00:28:31.761880 containerd[1713]: time="2025-09-11T00:28:31.761834445Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:28:31.765415 containerd[1713]: time="2025-09-11T00:28:31.765361003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:28:31.766559 containerd[1713]: time="2025-09-11T00:28:31.766065997Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 2.313643403s"
Sep 11 00:28:31.766559 containerd[1713]: time="2025-09-11T00:28:31.766102900Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\""
Sep 11 00:28:31.766755 containerd[1713]: time="2025-09-11T00:28:31.766726330Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Sep 11 00:28:33.183656 containerd[1713]: time="2025-09-11T00:28:33.183608533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:28:33.188783 containerd[1713]: time="2025-09-11T00:28:33.188743515Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787035"
Sep 11 00:28:33.190309 containerd[1713]: time="2025-09-11T00:28:33.190263575Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:28:33.193972 containerd[1713]: time="2025-09-11T00:28:33.193929786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:28:33.194692 containerd[1713]: time="2025-09-11T00:28:33.194544662Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.427790006s"
Sep 11 00:28:33.194692 containerd[1713]: time="2025-09-11T00:28:33.194576796Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\""
Sep 11 00:28:33.195119 containerd[1713]: time="2025-09-11T00:28:33.195090466Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Sep 11 00:28:34.333626 containerd[1713]: time="2025-09-11T00:28:34.333576431Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:28:34.336026 containerd[1713]: time="2025-09-11T00:28:34.335990701Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176297"
Sep 11 00:28:34.339012 containerd[1713]: time="2025-09-11T00:28:34.338924372Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:28:34.343993 containerd[1713]: time="2025-09-11T00:28:34.343964247Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:28:34.344799 containerd[1713]: time="2025-09-11T00:28:34.344659434Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.149538907s"
Sep 11 00:28:34.344799 containerd[1713]: time="2025-09-11T00:28:34.344689043Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\""
Sep 11 00:28:34.345280 containerd[1713]: time="2025-09-11T00:28:34.345248847Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Sep 11 00:28:35.313482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4020877469.mount: Deactivated successfully.
Sep 11 00:28:35.315154 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 11 00:28:35.319564 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 11 00:28:35.869572 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 11 00:28:35.874753 (kubelet)[2449]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 11 00:28:35.901449 containerd[1713]: time="2025-09-11T00:28:35.901061506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:28:35.903547 containerd[1713]: time="2025-09-11T00:28:35.903517027Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924214"
Sep 11 00:28:35.906085 containerd[1713]: time="2025-09-11T00:28:35.906043443Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:28:35.910858 containerd[1713]: time="2025-09-11T00:28:35.909928956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:28:35.910858 containerd[1713]: time="2025-09-11T00:28:35.910568398Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.565277092s"
Sep 11 00:28:35.910858 containerd[1713]: time="2025-09-11T00:28:35.910600647Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\""
Sep 11 00:28:35.911623 containerd[1713]: time="2025-09-11T00:28:35.911605595Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 11 00:28:35.911954 kubelet[2449]: E0911 00:28:35.911926 2449 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 11 00:28:35.913721 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 11 00:28:35.913861 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 11 00:28:35.914232 systemd[1]: kubelet.service: Consumed 144ms CPU time, 110.4M memory peak.
Sep 11 00:28:36.523583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1981889704.mount: Deactivated successfully.
Sep 11 00:28:37.476466 containerd[1713]: time="2025-09-11T00:28:37.476421193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:28:37.479371 containerd[1713]: time="2025-09-11T00:28:37.479233983Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249"
Sep 11 00:28:37.482461 containerd[1713]: time="2025-09-11T00:28:37.482436650Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:28:37.486449 containerd[1713]: time="2025-09-11T00:28:37.486418774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:28:37.487156 containerd[1713]: time="2025-09-11T00:28:37.487131657Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.574994461s"
Sep 11 00:28:37.487241 containerd[1713]: time="2025-09-11T00:28:37.487228890Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 11 00:28:37.487777 containerd[1713]: time="2025-09-11T00:28:37.487740613Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 11 00:28:37.949916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount353749670.mount: Deactivated successfully.
Sep 11 00:28:37.966842 containerd[1713]: time="2025-09-11T00:28:37.966796729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 11 00:28:37.969218 containerd[1713]: time="2025-09-11T00:28:37.969185883Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Sep 11 00:28:37.972115 containerd[1713]: time="2025-09-11T00:28:37.972077287Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 11 00:28:37.975867 containerd[1713]: time="2025-09-11T00:28:37.975829516Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 11 00:28:37.976456 containerd[1713]: time="2025-09-11T00:28:37.976290051Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 488.51654ms"
Sep 11 00:28:37.976456 containerd[1713]: time="2025-09-11T00:28:37.976319196Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 11 00:28:37.976908 containerd[1713]: time="2025-09-11T00:28:37.976871830Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Sep 11 00:28:38.498765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2706948910.mount: Deactivated successfully.
Sep 11 00:28:40.189603 containerd[1713]: time="2025-09-11T00:28:40.189556388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:28:40.191883 containerd[1713]: time="2025-09-11T00:28:40.191843267Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682064"
Sep 11 00:28:40.194516 containerd[1713]: time="2025-09-11T00:28:40.194461201Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:28:40.198226 containerd[1713]: time="2025-09-11T00:28:40.198193365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:28:40.199406 containerd[1713]: time="2025-09-11T00:28:40.198985861Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.221876699s"
Sep 11 00:28:40.199406 containerd[1713]: time="2025-09-11T00:28:40.199018945Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Sep 11 00:28:42.483876 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 11 00:28:42.484352 systemd[1]: kubelet.service: Consumed 144ms CPU time, 110.4M memory peak.
Sep 11 00:28:42.486328 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 11 00:28:42.507793 systemd[1]: Reload requested from client PID 2594 ('systemctl') (unit session-9.scope)...
Sep 11 00:28:42.507805 systemd[1]: Reloading...
Sep 11 00:28:42.592469 zram_generator::config[2636]: No configuration found.
Sep 11 00:28:42.689264 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 11 00:28:42.795369 systemd[1]: Reloading finished in 287 ms.
Sep 11 00:28:42.893898 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 11 00:28:42.893996 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 11 00:28:42.894324 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 11 00:28:42.894398 systemd[1]: kubelet.service: Consumed 70ms CPU time, 69.9M memory peak.
Sep 11 00:28:42.895943 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 11 00:28:43.429636 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 11 00:28:43.438638 (kubelet)[2707]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 11 00:28:43.476644 kubelet[2707]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 00:28:43.476644 kubelet[2707]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 11 00:28:43.476644 kubelet[2707]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 00:28:43.476917 kubelet[2707]: I0911 00:28:43.476712 2707 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 11 00:28:43.779082 kubelet[2707]: I0911 00:28:43.778977 2707 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 11 00:28:43.779082 kubelet[2707]: I0911 00:28:43.779006 2707 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 11 00:28:43.779627 kubelet[2707]: I0911 00:28:43.779267 2707 server.go:954] "Client rotation is on, will bootstrap in background" Sep 11 00:28:43.810992 kubelet[2707]: E0911 00:28:43.810525 2707 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.15:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:28:43.810992 kubelet[2707]: I0911 
00:28:43.810738 2707 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 11 00:28:43.818309 kubelet[2707]: I0911 00:28:43.818287 2707 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 11 00:28:43.821150 kubelet[2707]: I0911 00:28:43.821130 2707 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 11 00:28:43.821358 kubelet[2707]: I0911 00:28:43.821327 2707 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 11 00:28:43.821549 kubelet[2707]: I0911 00:28:43.821357 2707 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.1.0-n-4da84ffec3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"Topolo
gyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 11 00:28:43.821668 kubelet[2707]: I0911 00:28:43.821557 2707 topology_manager.go:138] "Creating topology manager with none policy" Sep 11 00:28:43.821668 kubelet[2707]: I0911 00:28:43.821568 2707 container_manager_linux.go:304] "Creating device plugin manager" Sep 11 00:28:43.821717 kubelet[2707]: I0911 00:28:43.821674 2707 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:28:43.824583 kubelet[2707]: I0911 00:28:43.824567 2707 kubelet.go:446] "Attempting to sync node with API server" Sep 11 00:28:43.824651 kubelet[2707]: I0911 00:28:43.824593 2707 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 11 00:28:43.824651 kubelet[2707]: I0911 00:28:43.824615 2707 kubelet.go:352] "Adding apiserver pod source" Sep 11 00:28:43.824651 kubelet[2707]: I0911 00:28:43.824628 2707 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 11 00:28:43.829443 kubelet[2707]: W0911 00:28:43.828577 2707 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.1.0-n-4da84ffec3&limit=500&resourceVersion=0": dial tcp 10.200.8.15:6443: connect: connection refused Sep 11 00:28:43.829443 kubelet[2707]: E0911 00:28:43.828625 2707 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.1.0-n-4da84ffec3&limit=500&resourceVersion=0\": dial tcp 10.200.8.15:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:28:43.829443 kubelet[2707]: W0911 
00:28:43.828683 2707 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.15:6443: connect: connection refused Sep 11 00:28:43.829443 kubelet[2707]: E0911 00:28:43.828711 2707 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.15:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:28:43.829667 kubelet[2707]: I0911 00:28:43.829652 2707 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 11 00:28:43.830055 kubelet[2707]: I0911 00:28:43.830042 2707 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 11 00:28:43.830942 kubelet[2707]: W0911 00:28:43.830919 2707 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 11 00:28:43.834455 kubelet[2707]: I0911 00:28:43.834435 2707 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 11 00:28:43.834528 kubelet[2707]: I0911 00:28:43.834476 2707 server.go:1287] "Started kubelet" Sep 11 00:28:43.838109 kubelet[2707]: I0911 00:28:43.837965 2707 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 11 00:28:43.842494 kubelet[2707]: E0911 00:28:43.842479 2707 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 11 00:28:43.845305 kubelet[2707]: I0911 00:28:43.845279 2707 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 11 00:28:43.846208 kubelet[2707]: I0911 00:28:43.846191 2707 server.go:479] "Adding debug handlers to kubelet server" Sep 11 00:28:43.848534 kubelet[2707]: I0911 00:28:43.848063 2707 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 11 00:28:43.848534 kubelet[2707]: I0911 00:28:43.848162 2707 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 11 00:28:43.848534 kubelet[2707]: I0911 00:28:43.848358 2707 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 11 00:28:43.849363 kubelet[2707]: I0911 00:28:43.849343 2707 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 11 00:28:43.849590 kubelet[2707]: E0911 00:28:43.849573 2707 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.1.0-n-4da84ffec3\" not found" Sep 11 00:28:43.850001 kubelet[2707]: I0911 00:28:43.849982 2707 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 11 00:28:43.850054 kubelet[2707]: I0911 00:28:43.850025 2707 reconciler.go:26] "Reconciler: start to sync state" Sep 11 00:28:43.851120 kubelet[2707]: I0911 00:28:43.851102 2707 factory.go:221] Registration of the systemd container factory successfully Sep 11 00:28:43.851184 kubelet[2707]: I0911 00:28:43.851172 2707 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 11 00:28:43.851594 kubelet[2707]: W0911 00:28:43.851561 2707 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.15:6443: connect: connection refused Sep 11 00:28:43.851654 kubelet[2707]: E0911 00:28:43.851606 2707 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.15:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:28:43.851682 kubelet[2707]: E0911 00:28:43.851659 2707 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.1.0-n-4da84ffec3?timeout=10s\": dial tcp 10.200.8.15:6443: connect: connection refused" interval="200ms" Sep 11 00:28:43.853237 kubelet[2707]: E0911 00:28:43.851707 2707 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.15:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.15:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372.1.0-n-4da84ffec3.186412e3b7f44351 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372.1.0-n-4da84ffec3,UID:ci-4372.1.0-n-4da84ffec3,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372.1.0-n-4da84ffec3,},FirstTimestamp:2025-09-11 00:28:43.834450769 +0000 UTC m=+0.392329719,LastTimestamp:2025-09-11 00:28:43.834450769 +0000 UTC m=+0.392329719,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372.1.0-n-4da84ffec3,}" Sep 11 00:28:43.853913 kubelet[2707]: I0911 00:28:43.853900 2707 factory.go:221] Registration 
of the containerd container factory successfully Sep 11 00:28:43.863366 kubelet[2707]: I0911 00:28:43.863330 2707 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 11 00:28:43.864744 kubelet[2707]: I0911 00:28:43.864339 2707 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 11 00:28:43.864744 kubelet[2707]: I0911 00:28:43.864359 2707 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 11 00:28:43.864744 kubelet[2707]: I0911 00:28:43.864455 2707 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 11 00:28:43.864744 kubelet[2707]: I0911 00:28:43.864463 2707 kubelet.go:2382] "Starting kubelet main sync loop" Sep 11 00:28:43.864744 kubelet[2707]: E0911 00:28:43.864502 2707 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 11 00:28:43.870641 kubelet[2707]: W0911 00:28:43.870600 2707 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.15:6443: connect: connection refused Sep 11 00:28:43.870699 kubelet[2707]: E0911 00:28:43.870641 2707 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.15:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:28:43.880509 kubelet[2707]: I0911 00:28:43.880491 2707 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 11 00:28:43.880509 kubelet[2707]: I0911 00:28:43.880504 2707 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 11 00:28:43.880619 kubelet[2707]: I0911 
00:28:43.880543 2707 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:28:43.885094 kubelet[2707]: I0911 00:28:43.885079 2707 policy_none.go:49] "None policy: Start" Sep 11 00:28:43.885094 kubelet[2707]: I0911 00:28:43.885095 2707 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 11 00:28:43.885170 kubelet[2707]: I0911 00:28:43.885105 2707 state_mem.go:35] "Initializing new in-memory state store" Sep 11 00:28:43.893701 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 11 00:28:43.905315 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 11 00:28:43.908046 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 11 00:28:43.917936 kubelet[2707]: I0911 00:28:43.917919 2707 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 11 00:28:43.918083 kubelet[2707]: I0911 00:28:43.918071 2707 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 11 00:28:43.918125 kubelet[2707]: I0911 00:28:43.918092 2707 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 11 00:28:43.918528 kubelet[2707]: I0911 00:28:43.918514 2707 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 11 00:28:43.919711 kubelet[2707]: E0911 00:28:43.919683 2707 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 11 00:28:43.919776 kubelet[2707]: E0911 00:28:43.919717 2707 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4372.1.0-n-4da84ffec3\" not found" Sep 11 00:28:43.973674 systemd[1]: Created slice kubepods-burstable-pod9ce96c1e7251f0d1936833c867f79a8f.slice - libcontainer container kubepods-burstable-pod9ce96c1e7251f0d1936833c867f79a8f.slice. Sep 11 00:28:43.980151 kubelet[2707]: E0911 00:28:43.979990 2707 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-4da84ffec3\" not found" node="ci-4372.1.0-n-4da84ffec3" Sep 11 00:28:43.981747 systemd[1]: Created slice kubepods-burstable-pod758f19069e070efbe2e7f15da626a445.slice - libcontainer container kubepods-burstable-pod758f19069e070efbe2e7f15da626a445.slice. Sep 11 00:28:43.983804 kubelet[2707]: E0911 00:28:43.983787 2707 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-4da84ffec3\" not found" node="ci-4372.1.0-n-4da84ffec3" Sep 11 00:28:43.986100 systemd[1]: Created slice kubepods-burstable-pod24dc98ab1bd4f55375492cc8b4442f2c.slice - libcontainer container kubepods-burstable-pod24dc98ab1bd4f55375492cc8b4442f2c.slice. 
Sep 11 00:28:43.987682 kubelet[2707]: E0911 00:28:43.987656 2707 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-4da84ffec3\" not found" node="ci-4372.1.0-n-4da84ffec3" Sep 11 00:28:44.019845 kubelet[2707]: I0911 00:28:44.019829 2707 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.1.0-n-4da84ffec3" Sep 11 00:28:44.020140 kubelet[2707]: E0911 00:28:44.020115 2707 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.15:6443/api/v1/nodes\": dial tcp 10.200.8.15:6443: connect: connection refused" node="ci-4372.1.0-n-4da84ffec3" Sep 11 00:28:44.052765 kubelet[2707]: E0911 00:28:44.052670 2707 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.1.0-n-4da84ffec3?timeout=10s\": dial tcp 10.200.8.15:6443: connect: connection refused" interval="400ms" Sep 11 00:28:44.151112 kubelet[2707]: I0911 00:28:44.151069 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/758f19069e070efbe2e7f15da626a445-ca-certs\") pod \"kube-controller-manager-ci-4372.1.0-n-4da84ffec3\" (UID: \"758f19069e070efbe2e7f15da626a445\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-4da84ffec3" Sep 11 00:28:44.151112 kubelet[2707]: I0911 00:28:44.151105 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/758f19069e070efbe2e7f15da626a445-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.1.0-n-4da84ffec3\" (UID: \"758f19069e070efbe2e7f15da626a445\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-4da84ffec3" Sep 11 00:28:44.151332 kubelet[2707]: I0911 00:28:44.151123 2707 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/24dc98ab1bd4f55375492cc8b4442f2c-kubeconfig\") pod \"kube-scheduler-ci-4372.1.0-n-4da84ffec3\" (UID: \"24dc98ab1bd4f55375492cc8b4442f2c\") " pod="kube-system/kube-scheduler-ci-4372.1.0-n-4da84ffec3" Sep 11 00:28:44.151332 kubelet[2707]: I0911 00:28:44.151138 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9ce96c1e7251f0d1936833c867f79a8f-ca-certs\") pod \"kube-apiserver-ci-4372.1.0-n-4da84ffec3\" (UID: \"9ce96c1e7251f0d1936833c867f79a8f\") " pod="kube-system/kube-apiserver-ci-4372.1.0-n-4da84ffec3" Sep 11 00:28:44.151332 kubelet[2707]: I0911 00:28:44.151156 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9ce96c1e7251f0d1936833c867f79a8f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.1.0-n-4da84ffec3\" (UID: \"9ce96c1e7251f0d1936833c867f79a8f\") " pod="kube-system/kube-apiserver-ci-4372.1.0-n-4da84ffec3" Sep 11 00:28:44.151594 kubelet[2707]: I0911 00:28:44.151563 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/758f19069e070efbe2e7f15da626a445-kubeconfig\") pod \"kube-controller-manager-ci-4372.1.0-n-4da84ffec3\" (UID: \"758f19069e070efbe2e7f15da626a445\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-4da84ffec3" Sep 11 00:28:44.151638 kubelet[2707]: I0911 00:28:44.151605 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/758f19069e070efbe2e7f15da626a445-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.1.0-n-4da84ffec3\" (UID: \"758f19069e070efbe2e7f15da626a445\") " 
pod="kube-system/kube-controller-manager-ci-4372.1.0-n-4da84ffec3" Sep 11 00:28:44.151638 kubelet[2707]: I0911 00:28:44.151623 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9ce96c1e7251f0d1936833c867f79a8f-k8s-certs\") pod \"kube-apiserver-ci-4372.1.0-n-4da84ffec3\" (UID: \"9ce96c1e7251f0d1936833c867f79a8f\") " pod="kube-system/kube-apiserver-ci-4372.1.0-n-4da84ffec3" Sep 11 00:28:44.151719 kubelet[2707]: I0911 00:28:44.151641 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/758f19069e070efbe2e7f15da626a445-k8s-certs\") pod \"kube-controller-manager-ci-4372.1.0-n-4da84ffec3\" (UID: \"758f19069e070efbe2e7f15da626a445\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-4da84ffec3" Sep 11 00:28:44.222733 kubelet[2707]: I0911 00:28:44.222714 2707 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.1.0-n-4da84ffec3" Sep 11 00:28:44.223227 kubelet[2707]: E0911 00:28:44.223155 2707 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.15:6443/api/v1/nodes\": dial tcp 10.200.8.15:6443: connect: connection refused" node="ci-4372.1.0-n-4da84ffec3" Sep 11 00:28:44.281592 containerd[1713]: time="2025-09-11T00:28:44.281554551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.1.0-n-4da84ffec3,Uid:9ce96c1e7251f0d1936833c867f79a8f,Namespace:kube-system,Attempt:0,}" Sep 11 00:28:44.285217 containerd[1713]: time="2025-09-11T00:28:44.284985828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.1.0-n-4da84ffec3,Uid:758f19069e070efbe2e7f15da626a445,Namespace:kube-system,Attempt:0,}" Sep 11 00:28:44.288749 containerd[1713]: time="2025-09-11T00:28:44.288724995Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4372.1.0-n-4da84ffec3,Uid:24dc98ab1bd4f55375492cc8b4442f2c,Namespace:kube-system,Attempt:0,}" Sep 11 00:28:44.348866 containerd[1713]: time="2025-09-11T00:28:44.348691405Z" level=info msg="connecting to shim 6777da17a6e929f6e94af6f4b41efa0031e8012a8da70360ef63754bfcf481b2" address="unix:///run/containerd/s/794bc1699de74f456d12661f35229c48f2b2bcbfbd10e7ed5860fa0b20547b6a" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:28:44.374522 containerd[1713]: time="2025-09-11T00:28:44.374491150Z" level=info msg="connecting to shim f6a1b1d4cfa5daf56e929c791bf675ad1424b2abd3fbc251671d9abfdcb14829" address="unix:///run/containerd/s/3470b1ae2bb0eb8c9d6f0e4e32cabbc7cc4b4537291027c46d245b0216553183" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:28:44.394649 systemd[1]: Started cri-containerd-6777da17a6e929f6e94af6f4b41efa0031e8012a8da70360ef63754bfcf481b2.scope - libcontainer container 6777da17a6e929f6e94af6f4b41efa0031e8012a8da70360ef63754bfcf481b2. Sep 11 00:28:44.395434 containerd[1713]: time="2025-09-11T00:28:44.395002778Z" level=info msg="connecting to shim 015515964c25475c58c75ab96021e2d874d48abd0b8f62b45653399a3cdfac9f" address="unix:///run/containerd/s/607e554fe4b4865abf667a229bee4fc133f0bcd76c505649ad26ea915f24ab4c" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:28:44.404542 systemd[1]: Started cri-containerd-f6a1b1d4cfa5daf56e929c791bf675ad1424b2abd3fbc251671d9abfdcb14829.scope - libcontainer container f6a1b1d4cfa5daf56e929c791bf675ad1424b2abd3fbc251671d9abfdcb14829. Sep 11 00:28:44.431552 systemd[1]: Started cri-containerd-015515964c25475c58c75ab96021e2d874d48abd0b8f62b45653399a3cdfac9f.scope - libcontainer container 015515964c25475c58c75ab96021e2d874d48abd0b8f62b45653399a3cdfac9f. 
Sep 11 00:28:44.455900 kubelet[2707]: E0911 00:28:44.455245 2707 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.1.0-n-4da84ffec3?timeout=10s\": dial tcp 10.200.8.15:6443: connect: connection refused" interval="800ms" Sep 11 00:28:44.475940 containerd[1713]: time="2025-09-11T00:28:44.475905630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.1.0-n-4da84ffec3,Uid:9ce96c1e7251f0d1936833c867f79a8f,Namespace:kube-system,Attempt:0,} returns sandbox id \"6777da17a6e929f6e94af6f4b41efa0031e8012a8da70360ef63754bfcf481b2\"" Sep 11 00:28:44.480641 containerd[1713]: time="2025-09-11T00:28:44.480616308Z" level=info msg="CreateContainer within sandbox \"6777da17a6e929f6e94af6f4b41efa0031e8012a8da70360ef63754bfcf481b2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 11 00:28:44.494134 containerd[1713]: time="2025-09-11T00:28:44.494107785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.1.0-n-4da84ffec3,Uid:758f19069e070efbe2e7f15da626a445,Namespace:kube-system,Attempt:0,} returns sandbox id \"f6a1b1d4cfa5daf56e929c791bf675ad1424b2abd3fbc251671d9abfdcb14829\"" Sep 11 00:28:44.498813 containerd[1713]: time="2025-09-11T00:28:44.498780983Z" level=info msg="CreateContainer within sandbox \"f6a1b1d4cfa5daf56e929c791bf675ad1424b2abd3fbc251671d9abfdcb14829\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 11 00:28:44.502373 containerd[1713]: time="2025-09-11T00:28:44.502348545Z" level=info msg="Container bff80ba3f3aa0506fbc0682c924f4e134ace4699ec5227e8ff31fbcd0c41bd62: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:28:44.504707 containerd[1713]: time="2025-09-11T00:28:44.504682181Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4372.1.0-n-4da84ffec3,Uid:24dc98ab1bd4f55375492cc8b4442f2c,Namespace:kube-system,Attempt:0,} returns sandbox id \"015515964c25475c58c75ab96021e2d874d48abd0b8f62b45653399a3cdfac9f\"" Sep 11 00:28:44.506149 containerd[1713]: time="2025-09-11T00:28:44.506118108Z" level=info msg="CreateContainer within sandbox \"015515964c25475c58c75ab96021e2d874d48abd0b8f62b45653399a3cdfac9f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 11 00:28:44.534583 containerd[1713]: time="2025-09-11T00:28:44.534560386Z" level=info msg="Container abeb36433cadb1ead28b89b3c2b1b6a8cca6d42fb800132de3c5c7343f564933: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:28:44.546277 containerd[1713]: time="2025-09-11T00:28:44.546251407Z" level=info msg="CreateContainer within sandbox \"6777da17a6e929f6e94af6f4b41efa0031e8012a8da70360ef63754bfcf481b2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bff80ba3f3aa0506fbc0682c924f4e134ace4699ec5227e8ff31fbcd0c41bd62\"" Sep 11 00:28:44.546749 containerd[1713]: time="2025-09-11T00:28:44.546726713Z" level=info msg="StartContainer for \"bff80ba3f3aa0506fbc0682c924f4e134ace4699ec5227e8ff31fbcd0c41bd62\"" Sep 11 00:28:44.547610 containerd[1713]: time="2025-09-11T00:28:44.547582810Z" level=info msg="connecting to shim bff80ba3f3aa0506fbc0682c924f4e134ace4699ec5227e8ff31fbcd0c41bd62" address="unix:///run/containerd/s/794bc1699de74f456d12661f35229c48f2b2bcbfbd10e7ed5860fa0b20547b6a" protocol=ttrpc version=3 Sep 11 00:28:44.558046 containerd[1713]: time="2025-09-11T00:28:44.557977074Z" level=info msg="CreateContainer within sandbox \"f6a1b1d4cfa5daf56e929c791bf675ad1424b2abd3fbc251671d9abfdcb14829\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"abeb36433cadb1ead28b89b3c2b1b6a8cca6d42fb800132de3c5c7343f564933\"" Sep 11 00:28:44.558349 containerd[1713]: time="2025-09-11T00:28:44.558323283Z" level=info msg="StartContainer for 
\"abeb36433cadb1ead28b89b3c2b1b6a8cca6d42fb800132de3c5c7343f564933\"" Sep 11 00:28:44.560759 containerd[1713]: time="2025-09-11T00:28:44.560721435Z" level=info msg="connecting to shim abeb36433cadb1ead28b89b3c2b1b6a8cca6d42fb800132de3c5c7343f564933" address="unix:///run/containerd/s/3470b1ae2bb0eb8c9d6f0e4e32cabbc7cc4b4537291027c46d245b0216553183" protocol=ttrpc version=3 Sep 11 00:28:44.564030 containerd[1713]: time="2025-09-11T00:28:44.563649809Z" level=info msg="Container 4e0e52ddad886c7b88ac81dc503e51cbc747e8b1417a52e79436032f0c595567: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:28:44.566632 systemd[1]: Started cri-containerd-bff80ba3f3aa0506fbc0682c924f4e134ace4699ec5227e8ff31fbcd0c41bd62.scope - libcontainer container bff80ba3f3aa0506fbc0682c924f4e134ace4699ec5227e8ff31fbcd0c41bd62. Sep 11 00:28:44.579705 containerd[1713]: time="2025-09-11T00:28:44.579677848Z" level=info msg="CreateContainer within sandbox \"015515964c25475c58c75ab96021e2d874d48abd0b8f62b45653399a3cdfac9f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4e0e52ddad886c7b88ac81dc503e51cbc747e8b1417a52e79436032f0c595567\"" Sep 11 00:28:44.580412 containerd[1713]: time="2025-09-11T00:28:44.580029435Z" level=info msg="StartContainer for \"4e0e52ddad886c7b88ac81dc503e51cbc747e8b1417a52e79436032f0c595567\"" Sep 11 00:28:44.580831 containerd[1713]: time="2025-09-11T00:28:44.580795738Z" level=info msg="connecting to shim 4e0e52ddad886c7b88ac81dc503e51cbc747e8b1417a52e79436032f0c595567" address="unix:///run/containerd/s/607e554fe4b4865abf667a229bee4fc133f0bcd76c505649ad26ea915f24ab4c" protocol=ttrpc version=3 Sep 11 00:28:44.581681 systemd[1]: Started cri-containerd-abeb36433cadb1ead28b89b3c2b1b6a8cca6d42fb800132de3c5c7343f564933.scope - libcontainer container abeb36433cadb1ead28b89b3c2b1b6a8cca6d42fb800132de3c5c7343f564933. 
Sep 11 00:28:44.604512 systemd[1]: Started cri-containerd-4e0e52ddad886c7b88ac81dc503e51cbc747e8b1417a52e79436032f0c595567.scope - libcontainer container 4e0e52ddad886c7b88ac81dc503e51cbc747e8b1417a52e79436032f0c595567. Sep 11 00:28:44.630104 kubelet[2707]: I0911 00:28:44.629749 2707 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.1.0-n-4da84ffec3" Sep 11 00:28:44.630104 kubelet[2707]: E0911 00:28:44.630045 2707 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.15:6443/api/v1/nodes\": dial tcp 10.200.8.15:6443: connect: connection refused" node="ci-4372.1.0-n-4da84ffec3" Sep 11 00:28:44.650989 containerd[1713]: time="2025-09-11T00:28:44.650968100Z" level=info msg="StartContainer for \"bff80ba3f3aa0506fbc0682c924f4e134ace4699ec5227e8ff31fbcd0c41bd62\" returns successfully" Sep 11 00:28:44.665272 containerd[1713]: time="2025-09-11T00:28:44.665251111Z" level=info msg="StartContainer for \"abeb36433cadb1ead28b89b3c2b1b6a8cca6d42fb800132de3c5c7343f564933\" returns successfully" Sep 11 00:28:44.711150 containerd[1713]: time="2025-09-11T00:28:44.711129965Z" level=info msg="StartContainer for \"4e0e52ddad886c7b88ac81dc503e51cbc747e8b1417a52e79436032f0c595567\" returns successfully" Sep 11 00:28:44.889485 kubelet[2707]: E0911 00:28:44.888493 2707 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-4da84ffec3\" not found" node="ci-4372.1.0-n-4da84ffec3" Sep 11 00:28:44.899102 kubelet[2707]: E0911 00:28:44.899001 2707 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-4da84ffec3\" not found" node="ci-4372.1.0-n-4da84ffec3" Sep 11 00:28:44.903301 kubelet[2707]: E0911 00:28:44.903279 2707 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-4da84ffec3\" not found" node="ci-4372.1.0-n-4da84ffec3" 
Sep 11 00:28:45.132470 update_engine[1698]: I20250911 00:28:45.132411 1698 update_attempter.cc:509] Updating boot flags...
Sep 11 00:28:45.435912 kubelet[2707]: I0911 00:28:45.435515 2707 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.1.0-n-4da84ffec3"
Sep 11 00:28:45.906938 kubelet[2707]: E0911 00:28:45.906630 2707 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-4da84ffec3\" not found" node="ci-4372.1.0-n-4da84ffec3"
Sep 11 00:28:45.907892 kubelet[2707]: E0911 00:28:45.907871 2707 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-4da84ffec3\" not found" node="ci-4372.1.0-n-4da84ffec3"
Sep 11 00:28:46.570567 kubelet[2707]: E0911 00:28:46.570534 2707 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4372.1.0-n-4da84ffec3\" not found" node="ci-4372.1.0-n-4da84ffec3"
Sep 11 00:28:46.602932 kubelet[2707]: I0911 00:28:46.602907 2707 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372.1.0-n-4da84ffec3"
Sep 11 00:28:46.650108 kubelet[2707]: I0911 00:28:46.650083 2707 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.1.0-n-4da84ffec3"
Sep 11 00:28:46.708324 kubelet[2707]: E0911 00:28:46.708276 2707 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.1.0-n-4da84ffec3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4372.1.0-n-4da84ffec3"
Sep 11 00:28:46.708324 kubelet[2707]: I0911 00:28:46.708310 2707 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.1.0-n-4da84ffec3"
Sep 11 00:28:46.709659 kubelet[2707]: E0911 00:28:46.709634 2707 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4372.1.0-n-4da84ffec3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4372.1.0-n-4da84ffec3"
Sep 11 00:28:46.709659 kubelet[2707]: I0911 00:28:46.709655 2707 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.1.0-n-4da84ffec3"
Sep 11 00:28:46.711039 kubelet[2707]: E0911 00:28:46.710945 2707 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.1.0-n-4da84ffec3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4372.1.0-n-4da84ffec3"
Sep 11 00:28:46.828258 kubelet[2707]: I0911 00:28:46.827960 2707 apiserver.go:52] "Watching apiserver"
Sep 11 00:28:46.850060 kubelet[2707]: I0911 00:28:46.850032 2707 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 11 00:28:46.905466 kubelet[2707]: I0911 00:28:46.905437 2707 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.1.0-n-4da84ffec3"
Sep 11 00:28:46.906799 kubelet[2707]: E0911 00:28:46.906772 2707 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.1.0-n-4da84ffec3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4372.1.0-n-4da84ffec3"
Sep 11 00:28:47.292219 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Sep 11 00:28:48.493770 kubelet[2707]: I0911 00:28:48.493742 2707 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.1.0-n-4da84ffec3"
Sep 11 00:28:48.500313 kubelet[2707]: W0911 00:28:48.500258 2707 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 11 00:28:48.888508 systemd[1]: Reload requested from client PID 3022 ('systemctl') (unit session-9.scope)...
Sep 11 00:28:48.888522 systemd[1]: Reloading...
Sep 11 00:28:48.985427 zram_generator::config[3068]: No configuration found.
Sep 11 00:28:49.075797 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 11 00:28:49.183190 systemd[1]: Reloading finished in 294 ms.
Sep 11 00:28:49.207281 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 11 00:28:49.216919 systemd[1]: kubelet.service: Deactivated successfully.
Sep 11 00:28:49.217114 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 11 00:28:49.217155 systemd[1]: kubelet.service: Consumed 738ms CPU time, 131.2M memory peak.
Sep 11 00:28:49.219320 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 11 00:28:49.694297 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 11 00:28:49.707159 (kubelet)[3134]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 11 00:28:49.767919 kubelet[3134]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 11 00:28:49.789633 kubelet[3134]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 11 00:28:49.789633 kubelet[3134]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 11 00:28:49.789633 kubelet[3134]: I0911 00:28:49.768528 3134 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 11 00:28:49.789633 kubelet[3134]: I0911 00:28:49.777926 3134 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 11 00:28:49.789633 kubelet[3134]: I0911 00:28:49.777943 3134 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 11 00:28:49.789633 kubelet[3134]: I0911 00:28:49.778152 3134 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 11 00:28:49.790191 kubelet[3134]: I0911 00:28:49.790174 3134 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 11 00:28:49.793483 kubelet[3134]: I0911 00:28:49.793427 3134 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 11 00:28:49.799456 kubelet[3134]: I0911 00:28:49.799439 3134 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 11 00:28:49.805400 kubelet[3134]: I0911 00:28:49.804795 3134 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 11 00:28:49.805400 kubelet[3134]: I0911 00:28:49.804962 3134 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 11 00:28:49.805400 kubelet[3134]: I0911 00:28:49.804995 3134 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.1.0-n-4da84ffec3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 11 00:28:49.805400 kubelet[3134]: I0911 00:28:49.805264 3134 topology_manager.go:138] "Creating topology manager with none policy"
Sep 11 00:28:49.805619 kubelet[3134]: I0911 00:28:49.805274 3134 container_manager_linux.go:304] "Creating device plugin manager"
Sep 11 00:28:49.805619 kubelet[3134]: I0911 00:28:49.805318 3134 state_mem.go:36] "Initialized new in-memory state store"
Sep 11 00:28:49.805775 kubelet[3134]: I0911 00:28:49.805766 3134 kubelet.go:446] "Attempting to sync node with API server"
Sep 11 00:28:49.805865 kubelet[3134]: I0911 00:28:49.805857 3134 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 11 00:28:49.806314 kubelet[3134]: I0911 00:28:49.806292 3134 kubelet.go:352] "Adding apiserver pod source"
Sep 11 00:28:49.806415 kubelet[3134]: I0911 00:28:49.806408 3134 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 11 00:28:49.810146 kubelet[3134]: I0911 00:28:49.810121 3134 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Sep 11 00:28:49.810695 kubelet[3134]: I0911 00:28:49.810676 3134 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 11 00:28:49.811185 kubelet[3134]: I0911 00:28:49.811177 3134 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 11 00:28:49.811265 kubelet[3134]: I0911 00:28:49.811259 3134 server.go:1287] "Started kubelet"
Sep 11 00:28:49.813267 kubelet[3134]: I0911 00:28:49.813248 3134 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 11 00:28:49.819784 kubelet[3134]: I0911 00:28:49.819755 3134 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 11 00:28:49.820323 kubelet[3134]: I0911 00:28:49.820288 3134 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 11 00:28:49.822251 kubelet[3134]: I0911 00:28:49.821092 3134 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 11 00:28:49.822430 kubelet[3134]: I0911 00:28:49.822416 3134 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 11 00:28:49.823736 kubelet[3134]: I0911 00:28:49.823718 3134 server.go:479] "Adding debug handlers to kubelet server"
Sep 11 00:28:49.826232 kubelet[3134]: E0911 00:28:49.826011 3134 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.1.0-n-4da84ffec3\" not found"
Sep 11 00:28:49.826436 kubelet[3134]: I0911 00:28:49.826127 3134 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 11 00:28:49.826498 kubelet[3134]: I0911 00:28:49.826135 3134 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 11 00:28:49.826646 kubelet[3134]: I0911 00:28:49.826570 3134 reconciler.go:26] "Reconciler: start to sync state"
Sep 11 00:28:49.830747 kubelet[3134]: I0911 00:28:49.830732 3134 factory.go:221] Registration of the systemd container factory successfully
Sep 11 00:28:49.830902 kubelet[3134]: I0911 00:28:49.830845 3134 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 11 00:28:49.831932 kubelet[3134]: E0911 00:28:49.831914 3134 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 11 00:28:49.833699 kubelet[3134]: I0911 00:28:49.833675 3134 factory.go:221] Registration of the containerd container factory successfully
Sep 11 00:28:49.839053 kubelet[3134]: I0911 00:28:49.839034 3134 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 11 00:28:49.842847 kubelet[3134]: I0911 00:28:49.842826 3134 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 11 00:28:49.842919 kubelet[3134]: I0911 00:28:49.842854 3134 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 11 00:28:49.842919 kubelet[3134]: I0911 00:28:49.842868 3134 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 11 00:28:49.842919 kubelet[3134]: I0911 00:28:49.842876 3134 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 11 00:28:49.842991 kubelet[3134]: E0911 00:28:49.842918 3134 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 11 00:28:49.876018 kubelet[3134]: I0911 00:28:49.876003 3134 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 11 00:28:49.876018 kubelet[3134]: I0911 00:28:49.876017 3134 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 11 00:28:49.876117 kubelet[3134]: I0911 00:28:49.876032 3134 state_mem.go:36] "Initialized new in-memory state store"
Sep 11 00:28:49.876264 kubelet[3134]: I0911 00:28:49.876254 3134 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 11 00:28:49.876298 kubelet[3134]: I0911 00:28:49.876267 3134 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 11 00:28:49.876298 kubelet[3134]: I0911 00:28:49.876284 3134 policy_none.go:49] "None policy: Start"
Sep 11 00:28:49.876298 kubelet[3134]: I0911 00:28:49.876294 3134 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 11 00:28:49.876371 kubelet[3134]: I0911 00:28:49.876304 3134 state_mem.go:35] "Initializing new in-memory state store"
Sep 11 00:28:49.876431 kubelet[3134]: I0911 00:28:49.876424 3134 state_mem.go:75] "Updated machine memory state"
Sep 11 00:28:49.880578 kubelet[3134]: I0911 00:28:49.880223 3134 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 11 00:28:49.880578 kubelet[3134]: I0911 00:28:49.880349 3134 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 11 00:28:49.880578 kubelet[3134]: I0911 00:28:49.880358 3134 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 11 00:28:49.880578 kubelet[3134]: I0911 00:28:49.880534 3134 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 11 00:28:49.883984 kubelet[3134]: E0911 00:28:49.883814 3134 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 11 00:28:49.944257 kubelet[3134]: I0911 00:28:49.944230 3134 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.1.0-n-4da84ffec3"
Sep 11 00:28:49.944455 kubelet[3134]: I0911 00:28:49.944442 3134 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.1.0-n-4da84ffec3"
Sep 11 00:28:49.945166 kubelet[3134]: I0911 00:28:49.945131 3134 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.1.0-n-4da84ffec3"
Sep 11 00:28:49.954186 kubelet[3134]: W0911 00:28:49.953314 3134 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 11 00:28:49.958177 kubelet[3134]: W0911 00:28:49.957705 3134 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 11 00:28:49.958435 kubelet[3134]: W0911 00:28:49.958420 3134 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 11 00:28:49.958499 kubelet[3134]: E0911 00:28:49.958472 3134 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.1.0-n-4da84ffec3\" already exists" pod="kube-system/kube-scheduler-ci-4372.1.0-n-4da84ffec3"
Sep 11 00:28:49.984479 kubelet[3134]: I0911 00:28:49.984306 3134 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.1.0-n-4da84ffec3"
Sep 11 00:28:49.996603 kubelet[3134]: I0911 00:28:49.995452 3134 kubelet_node_status.go:124] "Node was previously registered" node="ci-4372.1.0-n-4da84ffec3"
Sep 11 00:28:49.996603 kubelet[3134]: I0911 00:28:49.995501 3134 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372.1.0-n-4da84ffec3"
Sep 11 00:28:50.026744 kubelet[3134]: I0911 00:28:50.026716 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9ce96c1e7251f0d1936833c867f79a8f-ca-certs\") pod \"kube-apiserver-ci-4372.1.0-n-4da84ffec3\" (UID: \"9ce96c1e7251f0d1936833c867f79a8f\") " pod="kube-system/kube-apiserver-ci-4372.1.0-n-4da84ffec3"
Sep 11 00:28:50.026744 kubelet[3134]: I0911 00:28:50.026746 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/24dc98ab1bd4f55375492cc8b4442f2c-kubeconfig\") pod \"kube-scheduler-ci-4372.1.0-n-4da84ffec3\" (UID: \"24dc98ab1bd4f55375492cc8b4442f2c\") " pod="kube-system/kube-scheduler-ci-4372.1.0-n-4da84ffec3"
Sep 11 00:28:50.026847 kubelet[3134]: I0911 00:28:50.026763 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/758f19069e070efbe2e7f15da626a445-ca-certs\") pod \"kube-controller-manager-ci-4372.1.0-n-4da84ffec3\" (UID: \"758f19069e070efbe2e7f15da626a445\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-4da84ffec3"
Sep 11 00:28:50.026847 kubelet[3134]: I0911 00:28:50.026782 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/758f19069e070efbe2e7f15da626a445-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.1.0-n-4da84ffec3\" (UID: \"758f19069e070efbe2e7f15da626a445\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-4da84ffec3"
Sep 11 00:28:50.026847 kubelet[3134]: I0911 00:28:50.026800 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/758f19069e070efbe2e7f15da626a445-k8s-certs\") pod \"kube-controller-manager-ci-4372.1.0-n-4da84ffec3\" (UID: \"758f19069e070efbe2e7f15da626a445\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-4da84ffec3"
Sep 11 00:28:50.026847 kubelet[3134]: I0911 00:28:50.026817 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/758f19069e070efbe2e7f15da626a445-kubeconfig\") pod \"kube-controller-manager-ci-4372.1.0-n-4da84ffec3\" (UID: \"758f19069e070efbe2e7f15da626a445\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-4da84ffec3"
Sep 11 00:28:50.026847 kubelet[3134]: I0911 00:28:50.026837 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/758f19069e070efbe2e7f15da626a445-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.1.0-n-4da84ffec3\" (UID: \"758f19069e070efbe2e7f15da626a445\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-4da84ffec3"
Sep 11 00:28:50.026965 kubelet[3134]: I0911 00:28:50.026854 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9ce96c1e7251f0d1936833c867f79a8f-k8s-certs\") pod \"kube-apiserver-ci-4372.1.0-n-4da84ffec3\" (UID: \"9ce96c1e7251f0d1936833c867f79a8f\") " pod="kube-system/kube-apiserver-ci-4372.1.0-n-4da84ffec3"
Sep 11 00:28:50.026965 kubelet[3134]: I0911 00:28:50.026874 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9ce96c1e7251f0d1936833c867f79a8f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.1.0-n-4da84ffec3\" (UID: \"9ce96c1e7251f0d1936833c867f79a8f\") " pod="kube-system/kube-apiserver-ci-4372.1.0-n-4da84ffec3"
Sep 11 00:28:50.809837 kubelet[3134]: I0911 00:28:50.809803 3134 apiserver.go:52] "Watching apiserver"
Sep 11 00:28:50.826990 kubelet[3134]: I0911 00:28:50.826962 3134 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 11 00:28:50.863436 kubelet[3134]: I0911 00:28:50.862915 3134 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.1.0-n-4da84ffec3"
Sep 11 00:28:50.863436 kubelet[3134]: I0911 00:28:50.863141 3134 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.1.0-n-4da84ffec3"
Sep 11 00:28:50.874405 kubelet[3134]: W0911 00:28:50.874274 3134 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 11 00:28:50.874405 kubelet[3134]: E0911 00:28:50.874327 3134 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.1.0-n-4da84ffec3\" already exists" pod="kube-system/kube-apiserver-ci-4372.1.0-n-4da84ffec3"
Sep 11 00:28:50.876406 kubelet[3134]: W0911 00:28:50.876371 3134 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 11 00:28:50.876497 kubelet[3134]: E0911 00:28:50.876427 3134 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.1.0-n-4da84ffec3\" already exists" pod="kube-system/kube-scheduler-ci-4372.1.0-n-4da84ffec3"
Sep 11 00:28:50.890590 kubelet[3134]: I0911 00:28:50.890535 3134 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4372.1.0-n-4da84ffec3" podStartSLOduration=2.89051989 podStartE2EDuration="2.89051989s" podCreationTimestamp="2025-09-11 00:28:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:28:50.882055017 +0000 UTC m=+1.170742606" watchObservedRunningTime="2025-09-11 00:28:50.89051989 +0000 UTC m=+1.179207504"
Sep 11 00:28:50.898600 kubelet[3134]: I0911 00:28:50.898553 3134 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4372.1.0-n-4da84ffec3" podStartSLOduration=1.898541898 podStartE2EDuration="1.898541898s" podCreationTimestamp="2025-09-11 00:28:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:28:50.890681669 +0000 UTC m=+1.179369256" watchObservedRunningTime="2025-09-11 00:28:50.898541898 +0000 UTC m=+1.187229482"
Sep 11 00:28:50.906877 kubelet[3134]: I0911 00:28:50.906704 3134 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4372.1.0-n-4da84ffec3" podStartSLOduration=1.9066930690000001 podStartE2EDuration="1.906693069s" podCreationTimestamp="2025-09-11 00:28:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:28:50.89879895 +0000 UTC m=+1.187486660" watchObservedRunningTime="2025-09-11 00:28:50.906693069 +0000 UTC m=+1.195380666"
Sep 11 00:28:54.305654 kubelet[3134]: I0911 00:28:54.305586 3134 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 11 00:28:54.306358 containerd[1713]: time="2025-09-11T00:28:54.306326905Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 11 00:28:54.306883 kubelet[3134]: I0911 00:28:54.306508 3134 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 11 00:28:55.002769 systemd[1]: Created slice kubepods-besteffort-pod64ea58d0_1203_4722_a5ca_2fa7254e5bcf.slice - libcontainer container kubepods-besteffort-pod64ea58d0_1203_4722_a5ca_2fa7254e5bcf.slice.
Sep 11 00:28:55.296842 kubelet[3134]: I0911 00:28:55.054817 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnh68\" (UniqueName: \"kubernetes.io/projected/64ea58d0-1203-4722-a5ca-2fa7254e5bcf-kube-api-access-dnh68\") pod \"kube-proxy-bkxcp\" (UID: \"64ea58d0-1203-4722-a5ca-2fa7254e5bcf\") " pod="kube-system/kube-proxy-bkxcp"
Sep 11 00:28:55.296842 kubelet[3134]: I0911 00:28:55.054850 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/64ea58d0-1203-4722-a5ca-2fa7254e5bcf-kube-proxy\") pod \"kube-proxy-bkxcp\" (UID: \"64ea58d0-1203-4722-a5ca-2fa7254e5bcf\") " pod="kube-system/kube-proxy-bkxcp"
Sep 11 00:28:55.296842 kubelet[3134]: I0911 00:28:55.054873 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/64ea58d0-1203-4722-a5ca-2fa7254e5bcf-xtables-lock\") pod \"kube-proxy-bkxcp\" (UID: \"64ea58d0-1203-4722-a5ca-2fa7254e5bcf\") " pod="kube-system/kube-proxy-bkxcp"
Sep 11 00:28:55.296842 kubelet[3134]: I0911 00:28:55.054890 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/64ea58d0-1203-4722-a5ca-2fa7254e5bcf-lib-modules\") pod \"kube-proxy-bkxcp\" (UID: \"64ea58d0-1203-4722-a5ca-2fa7254e5bcf\") " pod="kube-system/kube-proxy-bkxcp"
Sep 11 00:28:55.423429 systemd[1]: Created slice kubepods-besteffort-pod673fa3c6_4d92_43f7_b73f_83081f94c6fa.slice - libcontainer container kubepods-besteffort-pod673fa3c6_4d92_43f7_b73f_83081f94c6fa.slice.
Sep 11 00:28:55.457548 kubelet[3134]: I0911 00:28:55.457512 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/673fa3c6-4d92-43f7-b73f-83081f94c6fa-var-lib-calico\") pod \"tigera-operator-755d956888-xzqxg\" (UID: \"673fa3c6-4d92-43f7-b73f-83081f94c6fa\") " pod="tigera-operator/tigera-operator-755d956888-xzqxg"
Sep 11 00:28:55.457548 kubelet[3134]: I0911 00:28:55.457545 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h72z2\" (UniqueName: \"kubernetes.io/projected/673fa3c6-4d92-43f7-b73f-83081f94c6fa-kube-api-access-h72z2\") pod \"tigera-operator-755d956888-xzqxg\" (UID: \"673fa3c6-4d92-43f7-b73f-83081f94c6fa\") " pod="tigera-operator/tigera-operator-755d956888-xzqxg"
Sep 11 00:28:55.597510 containerd[1713]: time="2025-09-11T00:28:55.597409758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bkxcp,Uid:64ea58d0-1203-4722-a5ca-2fa7254e5bcf,Namespace:kube-system,Attempt:0,}"
Sep 11 00:28:55.728990 containerd[1713]: time="2025-09-11T00:28:55.728934530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-xzqxg,Uid:673fa3c6-4d92-43f7-b73f-83081f94c6fa,Namespace:tigera-operator,Attempt:0,}"
Sep 11 00:28:56.002108 containerd[1713]: time="2025-09-11T00:28:56.002060633Z" level=info msg="connecting to shim b6379c2f464ce2c8ca250bd2a3f891cb2a6c5faeeab8998a4a125a2df1daa6cf" address="unix:///run/containerd/s/87188ae287f9126846447fba8b6aa9ef1e6fee67e0d2b255f7eff2ffba9d9e1f" namespace=k8s.io protocol=ttrpc version=3
Sep 11 00:28:56.024533 systemd[1]: Started cri-containerd-b6379c2f464ce2c8ca250bd2a3f891cb2a6c5faeeab8998a4a125a2df1daa6cf.scope - libcontainer container b6379c2f464ce2c8ca250bd2a3f891cb2a6c5faeeab8998a4a125a2df1daa6cf.
Sep 11 00:28:56.135868 containerd[1713]: time="2025-09-11T00:28:56.135835145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bkxcp,Uid:64ea58d0-1203-4722-a5ca-2fa7254e5bcf,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6379c2f464ce2c8ca250bd2a3f891cb2a6c5faeeab8998a4a125a2df1daa6cf\""
Sep 11 00:28:56.138195 containerd[1713]: time="2025-09-11T00:28:56.138157392Z" level=info msg="CreateContainer within sandbox \"b6379c2f464ce2c8ca250bd2a3f891cb2a6c5faeeab8998a4a125a2df1daa6cf\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 11 00:28:56.257226 containerd[1713]: time="2025-09-11T00:28:56.257076860Z" level=info msg="connecting to shim e7a79934d4f60dd713c488a9bbfdad761d6397f98f9ee345fc2a2eae3e9e2c1a" address="unix:///run/containerd/s/8259689ba939dbde8e746c4409ab4dedeb0c92185c62ea672c550c657667a51f" namespace=k8s.io protocol=ttrpc version=3
Sep 11 00:28:56.281522 systemd[1]: Started cri-containerd-e7a79934d4f60dd713c488a9bbfdad761d6397f98f9ee345fc2a2eae3e9e2c1a.scope - libcontainer container e7a79934d4f60dd713c488a9bbfdad761d6397f98f9ee345fc2a2eae3e9e2c1a.
Sep 11 00:28:56.387885 containerd[1713]: time="2025-09-11T00:28:56.387856445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-xzqxg,Uid:673fa3c6-4d92-43f7-b73f-83081f94c6fa,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e7a79934d4f60dd713c488a9bbfdad761d6397f98f9ee345fc2a2eae3e9e2c1a\""
Sep 11 00:28:56.389230 containerd[1713]: time="2025-09-11T00:28:56.389105178Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 11 00:28:56.442861 containerd[1713]: time="2025-09-11T00:28:56.442833961Z" level=info msg="Container 60dca2ce8f12e8ab44bc343893020454c0ed64432897d94aca5270ddb676cd81: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:28:56.646918 containerd[1713]: time="2025-09-11T00:28:56.646838045Z" level=info msg="CreateContainer within sandbox \"b6379c2f464ce2c8ca250bd2a3f891cb2a6c5faeeab8998a4a125a2df1daa6cf\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"60dca2ce8f12e8ab44bc343893020454c0ed64432897d94aca5270ddb676cd81\""
Sep 11 00:28:56.647724 containerd[1713]: time="2025-09-11T00:28:56.647664570Z" level=info msg="StartContainer for \"60dca2ce8f12e8ab44bc343893020454c0ed64432897d94aca5270ddb676cd81\""
Sep 11 00:28:56.649279 containerd[1713]: time="2025-09-11T00:28:56.649253329Z" level=info msg="connecting to shim 60dca2ce8f12e8ab44bc343893020454c0ed64432897d94aca5270ddb676cd81" address="unix:///run/containerd/s/87188ae287f9126846447fba8b6aa9ef1e6fee67e0d2b255f7eff2ffba9d9e1f" protocol=ttrpc version=3
Sep 11 00:28:56.669672 systemd[1]: Started cri-containerd-60dca2ce8f12e8ab44bc343893020454c0ed64432897d94aca5270ddb676cd81.scope - libcontainer container 60dca2ce8f12e8ab44bc343893020454c0ed64432897d94aca5270ddb676cd81.
Sep 11 00:28:56.704236 containerd[1713]: time="2025-09-11T00:28:56.704213549Z" level=info msg="StartContainer for \"60dca2ce8f12e8ab44bc343893020454c0ed64432897d94aca5270ddb676cd81\" returns successfully"
Sep 11 00:28:59.768601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1499963227.mount: Deactivated successfully.
Sep 11 00:29:00.699955 containerd[1713]: time="2025-09-11T00:29:00.699901790Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:29:00.745734 containerd[1713]: time="2025-09-11T00:29:00.745698620Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609"
Sep 11 00:29:00.748653 containerd[1713]: time="2025-09-11T00:29:00.748610477Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:29:00.793103 containerd[1713]: time="2025-09-11T00:29:00.793050796Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:29:00.793778 containerd[1713]: time="2025-09-11T00:29:00.793687912Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 4.404552444s"
Sep 11 00:29:00.793778 containerd[1713]: time="2025-09-11T00:29:00.793718962Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\""
Sep 11 00:29:00.795747 containerd[1713]: time="2025-09-11T00:29:00.795713870Z" level=info msg="CreateContainer within sandbox \"e7a79934d4f60dd713c488a9bbfdad761d6397f98f9ee345fc2a2eae3e9e2c1a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 11 00:29:00.943291 containerd[1713]: time="2025-09-11T00:29:00.942505629Z" level=info msg="Container 560584c06fb1e7dbbe19244fbc03db1ae5c5623afc03a9f6b12380b21a2df12c: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:29:01.095482 containerd[1713]: time="2025-09-11T00:29:01.095147767Z" level=info msg="CreateContainer within sandbox \"e7a79934d4f60dd713c488a9bbfdad761d6397f98f9ee345fc2a2eae3e9e2c1a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"560584c06fb1e7dbbe19244fbc03db1ae5c5623afc03a9f6b12380b21a2df12c\""
Sep 11 00:29:01.095587 containerd[1713]: time="2025-09-11T00:29:01.095523822Z" level=info msg="StartContainer for \"560584c06fb1e7dbbe19244fbc03db1ae5c5623afc03a9f6b12380b21a2df12c\""
Sep 11 00:29:01.096464 containerd[1713]: time="2025-09-11T00:29:01.096434968Z" level=info msg="connecting to shim 560584c06fb1e7dbbe19244fbc03db1ae5c5623afc03a9f6b12380b21a2df12c" address="unix:///run/containerd/s/8259689ba939dbde8e746c4409ab4dedeb0c92185c62ea672c550c657667a51f" protocol=ttrpc version=3
Sep 11 00:29:01.116567 systemd[1]: Started cri-containerd-560584c06fb1e7dbbe19244fbc03db1ae5c5623afc03a9f6b12380b21a2df12c.scope - libcontainer container 560584c06fb1e7dbbe19244fbc03db1ae5c5623afc03a9f6b12380b21a2df12c.
Sep 11 00:29:01.142882 containerd[1713]: time="2025-09-11T00:29:01.142780792Z" level=info msg="StartContainer for \"560584c06fb1e7dbbe19244fbc03db1ae5c5623afc03a9f6b12380b21a2df12c\" returns successfully" Sep 11 00:29:01.888482 kubelet[3134]: I0911 00:29:01.888420 3134 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bkxcp" podStartSLOduration=7.888067725 podStartE2EDuration="7.888067725s" podCreationTimestamp="2025-09-11 00:28:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:28:56.890274968 +0000 UTC m=+7.178962555" watchObservedRunningTime="2025-09-11 00:29:01.888067725 +0000 UTC m=+12.176755311" Sep 11 00:29:01.889358 kubelet[3134]: I0911 00:29:01.888899 3134 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-xzqxg" podStartSLOduration=2.483159042 podStartE2EDuration="6.888886389s" podCreationTimestamp="2025-09-11 00:28:55 +0000 UTC" firstStartedPulling="2025-09-11 00:28:56.388703593 +0000 UTC m=+6.677391176" lastFinishedPulling="2025-09-11 00:29:00.794430942 +0000 UTC m=+11.083118523" observedRunningTime="2025-09-11 00:29:01.888869599 +0000 UTC m=+12.177557258" watchObservedRunningTime="2025-09-11 00:29:01.888886389 +0000 UTC m=+12.177573974" Sep 11 00:29:06.919371 sudo[2150]: pam_unix(sudo:session): session closed for user root Sep 11 00:29:07.022402 sshd[2149]: Connection closed by 10.200.16.10 port 58604 Sep 11 00:29:07.022904 sshd-session[2147]: pam_unix(sshd:session): session closed for user core Sep 11 00:29:07.027451 systemd-logind[1697]: Session 9 logged out. Waiting for processes to exit. Sep 11 00:29:07.028583 systemd[1]: sshd@6-10.200.8.15:22-10.200.16.10:58604.service: Deactivated successfully. Sep 11 00:29:07.033802 systemd[1]: session-9.scope: Deactivated successfully. 
Sep 11 00:29:07.034258 systemd[1]: session-9.scope: Consumed 3.631s CPU time, 230.3M memory peak. Sep 11 00:29:07.040794 systemd-logind[1697]: Removed session 9. Sep 11 00:29:10.451769 systemd[1]: Created slice kubepods-besteffort-poda3994475_2733_4fcf_8a38_d1758ff18320.slice - libcontainer container kubepods-besteffort-poda3994475_2733_4fcf_8a38_d1758ff18320.slice. Sep 11 00:29:10.454362 kubelet[3134]: I0911 00:29:10.454252 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4xxl\" (UniqueName: \"kubernetes.io/projected/a3994475-2733-4fcf-8a38-d1758ff18320-kube-api-access-l4xxl\") pod \"calico-typha-cf99bdc96-hfm5q\" (UID: \"a3994475-2733-4fcf-8a38-d1758ff18320\") " pod="calico-system/calico-typha-cf99bdc96-hfm5q" Sep 11 00:29:10.455189 kubelet[3134]: I0911 00:29:10.455159 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3994475-2733-4fcf-8a38-d1758ff18320-tigera-ca-bundle\") pod \"calico-typha-cf99bdc96-hfm5q\" (UID: \"a3994475-2733-4fcf-8a38-d1758ff18320\") " pod="calico-system/calico-typha-cf99bdc96-hfm5q" Sep 11 00:29:10.455278 kubelet[3134]: I0911 00:29:10.455205 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a3994475-2733-4fcf-8a38-d1758ff18320-typha-certs\") pod \"calico-typha-cf99bdc96-hfm5q\" (UID: \"a3994475-2733-4fcf-8a38-d1758ff18320\") " pod="calico-system/calico-typha-cf99bdc96-hfm5q" Sep 11 00:29:10.755982 containerd[1713]: time="2025-09-11T00:29:10.755873526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cf99bdc96-hfm5q,Uid:a3994475-2733-4fcf-8a38-d1758ff18320,Namespace:calico-system,Attempt:0,}" Sep 11 00:29:10.830977 containerd[1713]: time="2025-09-11T00:29:10.830445022Z" level=info msg="connecting to shim 
eb1aa4b7d055086b384b85d7acdf1ce71b08af9fecc8799223e9693b3001cd21" address="unix:///run/containerd/s/24964116e7afed12fc972af897828df1542f0f48b00ee4251c4f3e8b791bc0df" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:29:10.835223 systemd[1]: Created slice kubepods-besteffort-pod8120899d_6a8e_4934_8bae_5bc6978e3f63.slice - libcontainer container kubepods-besteffort-pod8120899d_6a8e_4934_8bae_5bc6978e3f63.slice. Sep 11 00:29:10.858417 kubelet[3134]: I0911 00:29:10.858366 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlmxz\" (UniqueName: \"kubernetes.io/projected/8120899d-6a8e-4934-8bae-5bc6978e3f63-kube-api-access-nlmxz\") pod \"calico-node-czwjg\" (UID: \"8120899d-6a8e-4934-8bae-5bc6978e3f63\") " pod="calico-system/calico-node-czwjg" Sep 11 00:29:10.858500 kubelet[3134]: I0911 00:29:10.858433 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8120899d-6a8e-4934-8bae-5bc6978e3f63-cni-bin-dir\") pod \"calico-node-czwjg\" (UID: \"8120899d-6a8e-4934-8bae-5bc6978e3f63\") " pod="calico-system/calico-node-czwjg" Sep 11 00:29:10.858500 kubelet[3134]: I0911 00:29:10.858454 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8120899d-6a8e-4934-8bae-5bc6978e3f63-cni-net-dir\") pod \"calico-node-czwjg\" (UID: \"8120899d-6a8e-4934-8bae-5bc6978e3f63\") " pod="calico-system/calico-node-czwjg" Sep 11 00:29:10.858500 kubelet[3134]: I0911 00:29:10.858471 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8120899d-6a8e-4934-8bae-5bc6978e3f63-node-certs\") pod \"calico-node-czwjg\" (UID: \"8120899d-6a8e-4934-8bae-5bc6978e3f63\") " pod="calico-system/calico-node-czwjg" Sep 11 00:29:10.858500 kubelet[3134]: I0911 
00:29:10.858487 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8120899d-6a8e-4934-8bae-5bc6978e3f63-tigera-ca-bundle\") pod \"calico-node-czwjg\" (UID: \"8120899d-6a8e-4934-8bae-5bc6978e3f63\") " pod="calico-system/calico-node-czwjg" Sep 11 00:29:10.858602 kubelet[3134]: I0911 00:29:10.858504 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8120899d-6a8e-4934-8bae-5bc6978e3f63-xtables-lock\") pod \"calico-node-czwjg\" (UID: \"8120899d-6a8e-4934-8bae-5bc6978e3f63\") " pod="calico-system/calico-node-czwjg" Sep 11 00:29:10.858602 kubelet[3134]: I0911 00:29:10.858524 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8120899d-6a8e-4934-8bae-5bc6978e3f63-var-run-calico\") pod \"calico-node-czwjg\" (UID: \"8120899d-6a8e-4934-8bae-5bc6978e3f63\") " pod="calico-system/calico-node-czwjg" Sep 11 00:29:10.858602 kubelet[3134]: I0911 00:29:10.858546 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8120899d-6a8e-4934-8bae-5bc6978e3f63-cni-log-dir\") pod \"calico-node-czwjg\" (UID: \"8120899d-6a8e-4934-8bae-5bc6978e3f63\") " pod="calico-system/calico-node-czwjg" Sep 11 00:29:10.858602 kubelet[3134]: I0911 00:29:10.858565 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8120899d-6a8e-4934-8bae-5bc6978e3f63-flexvol-driver-host\") pod \"calico-node-czwjg\" (UID: \"8120899d-6a8e-4934-8bae-5bc6978e3f63\") " pod="calico-system/calico-node-czwjg" Sep 11 00:29:10.858602 kubelet[3134]: I0911 00:29:10.858583 3134 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8120899d-6a8e-4934-8bae-5bc6978e3f63-lib-modules\") pod \"calico-node-czwjg\" (UID: \"8120899d-6a8e-4934-8bae-5bc6978e3f63\") " pod="calico-system/calico-node-czwjg" Sep 11 00:29:10.858721 kubelet[3134]: I0911 00:29:10.858603 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8120899d-6a8e-4934-8bae-5bc6978e3f63-policysync\") pod \"calico-node-czwjg\" (UID: \"8120899d-6a8e-4934-8bae-5bc6978e3f63\") " pod="calico-system/calico-node-czwjg" Sep 11 00:29:10.858721 kubelet[3134]: I0911 00:29:10.858622 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8120899d-6a8e-4934-8bae-5bc6978e3f63-var-lib-calico\") pod \"calico-node-czwjg\" (UID: \"8120899d-6a8e-4934-8bae-5bc6978e3f63\") " pod="calico-system/calico-node-czwjg" Sep 11 00:29:10.862536 systemd[1]: Started cri-containerd-eb1aa4b7d055086b384b85d7acdf1ce71b08af9fecc8799223e9693b3001cd21.scope - libcontainer container eb1aa4b7d055086b384b85d7acdf1ce71b08af9fecc8799223e9693b3001cd21. Sep 11 00:29:10.961206 kubelet[3134]: E0911 00:29:10.961108 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:10.961206 kubelet[3134]: W0911 00:29:10.961127 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:10.961206 kubelet[3134]: E0911 00:29:10.961154 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:10.961661 kubelet[3134]: E0911 00:29:10.961548 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:10.961661 kubelet[3134]: W0911 00:29:10.961561 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:10.961661 kubelet[3134]: E0911 00:29:10.961574 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:10.963081 kubelet[3134]: E0911 00:29:10.963063 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:10.963318 kubelet[3134]: W0911 00:29:10.963161 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:10.963318 kubelet[3134]: E0911 00:29:10.963180 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:10.963464 kubelet[3134]: E0911 00:29:10.963456 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:10.963516 kubelet[3134]: W0911 00:29:10.963509 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:10.963554 kubelet[3134]: E0911 00:29:10.963547 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:10.969615 kubelet[3134]: E0911 00:29:10.969599 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:10.970021 kubelet[3134]: W0911 00:29:10.970006 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:10.970155 kubelet[3134]: E0911 00:29:10.970096 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:10.979439 kubelet[3134]: E0911 00:29:10.979427 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:10.979642 kubelet[3134]: W0911 00:29:10.979576 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:10.979642 kubelet[3134]: E0911 00:29:10.979591 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:11.010312 containerd[1713]: time="2025-09-11T00:29:11.010167114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cf99bdc96-hfm5q,Uid:a3994475-2733-4fcf-8a38-d1758ff18320,Namespace:calico-system,Attempt:0,} returns sandbox id \"eb1aa4b7d055086b384b85d7acdf1ce71b08af9fecc8799223e9693b3001cd21\"" Sep 11 00:29:11.011721 containerd[1713]: time="2025-09-11T00:29:11.011534204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 11 00:29:11.126217 kubelet[3134]: E0911 00:29:11.126156 3134 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cbtm5" podUID="b7af8a9b-1045-4cb1-926d-47a54f3633ef" Sep 11 00:29:11.145163 kubelet[3134]: E0911 00:29:11.145140 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.145163 kubelet[3134]: W0911 00:29:11.145161 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not 
found in $PATH, output: "" Sep 11 00:29:11.145375 kubelet[3134]: E0911 00:29:11.145180 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:11.145539 kubelet[3134]: E0911 00:29:11.145465 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.145539 kubelet[3134]: W0911 00:29:11.145479 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.145539 kubelet[3134]: E0911 00:29:11.145495 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:11.145782 kubelet[3134]: E0911 00:29:11.145772 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.145820 kubelet[3134]: W0911 00:29:11.145783 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.145820 kubelet[3134]: E0911 00:29:11.145795 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:11.146169 kubelet[3134]: E0911 00:29:11.146155 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.146246 kubelet[3134]: W0911 00:29:11.146169 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.146246 kubelet[3134]: E0911 00:29:11.146182 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:11.146429 kubelet[3134]: E0911 00:29:11.146403 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.146429 kubelet[3134]: W0911 00:29:11.146416 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.146429 kubelet[3134]: E0911 00:29:11.146427 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:11.146566 kubelet[3134]: E0911 00:29:11.146530 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.146566 kubelet[3134]: W0911 00:29:11.146537 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.146566 kubelet[3134]: E0911 00:29:11.146544 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:11.146715 kubelet[3134]: E0911 00:29:11.146673 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.146715 kubelet[3134]: W0911 00:29:11.146679 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.146715 kubelet[3134]: E0911 00:29:11.146686 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:11.146839 kubelet[3134]: E0911 00:29:11.146791 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.146839 kubelet[3134]: W0911 00:29:11.146796 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.146839 kubelet[3134]: E0911 00:29:11.146803 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:11.147139 kubelet[3134]: E0911 00:29:11.147125 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.147190 kubelet[3134]: W0911 00:29:11.147137 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.147190 kubelet[3134]: E0911 00:29:11.147157 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:11.147403 kubelet[3134]: E0911 00:29:11.147365 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.147485 kubelet[3134]: W0911 00:29:11.147375 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.147555 kubelet[3134]: E0911 00:29:11.147502 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:11.147879 kubelet[3134]: E0911 00:29:11.147816 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.147879 kubelet[3134]: W0911 00:29:11.147829 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.147879 kubelet[3134]: E0911 00:29:11.147841 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:11.148080 kubelet[3134]: E0911 00:29:11.148069 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.148080 kubelet[3134]: W0911 00:29:11.148077 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.148140 kubelet[3134]: E0911 00:29:11.148086 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:11.148262 kubelet[3134]: E0911 00:29:11.148236 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.148309 kubelet[3134]: W0911 00:29:11.148262 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.148309 kubelet[3134]: E0911 00:29:11.148272 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:11.148442 kubelet[3134]: E0911 00:29:11.148421 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.148442 kubelet[3134]: W0911 00:29:11.148440 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.148526 kubelet[3134]: E0911 00:29:11.148448 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:11.148560 kubelet[3134]: E0911 00:29:11.148541 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.148560 kubelet[3134]: W0911 00:29:11.148546 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.148560 kubelet[3134]: E0911 00:29:11.148553 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:11.148694 kubelet[3134]: E0911 00:29:11.148638 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.148694 kubelet[3134]: W0911 00:29:11.148643 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.148694 kubelet[3134]: E0911 00:29:11.148650 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:11.148767 kubelet[3134]: E0911 00:29:11.148755 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.148767 kubelet[3134]: W0911 00:29:11.148760 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.148767 kubelet[3134]: E0911 00:29:11.148765 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:11.148874 kubelet[3134]: E0911 00:29:11.148855 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.148874 kubelet[3134]: W0911 00:29:11.148869 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.148970 kubelet[3134]: E0911 00:29:11.148875 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:11.148997 kubelet[3134]: E0911 00:29:11.148973 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.148997 kubelet[3134]: W0911 00:29:11.148979 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.148997 kubelet[3134]: E0911 00:29:11.148985 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:11.149108 kubelet[3134]: E0911 00:29:11.149105 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.149131 kubelet[3134]: W0911 00:29:11.149110 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.149131 kubelet[3134]: E0911 00:29:11.149116 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:11.150659 containerd[1713]: time="2025-09-11T00:29:11.150628138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-czwjg,Uid:8120899d-6a8e-4934-8bae-5bc6978e3f63,Namespace:calico-system,Attempt:0,}" Sep 11 00:29:11.160343 kubelet[3134]: E0911 00:29:11.160322 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.160343 kubelet[3134]: W0911 00:29:11.160338 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.160343 kubelet[3134]: E0911 00:29:11.160351 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:11.160557 kubelet[3134]: I0911 00:29:11.160479 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b7af8a9b-1045-4cb1-926d-47a54f3633ef-registration-dir\") pod \"csi-node-driver-cbtm5\" (UID: \"b7af8a9b-1045-4cb1-926d-47a54f3633ef\") " pod="calico-system/csi-node-driver-cbtm5" Sep 11 00:29:11.160557 kubelet[3134]: E0911 00:29:11.160522 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.160557 kubelet[3134]: W0911 00:29:11.160529 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.160557 kubelet[3134]: E0911 00:29:11.160538 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:11.160725 kubelet[3134]: E0911 00:29:11.160651 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.160725 kubelet[3134]: W0911 00:29:11.160657 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.160725 kubelet[3134]: E0911 00:29:11.160669 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:11.160838 kubelet[3134]: E0911 00:29:11.160773 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.160838 kubelet[3134]: W0911 00:29:11.160778 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.160838 kubelet[3134]: E0911 00:29:11.160786 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:11.160838 kubelet[3134]: I0911 00:29:11.160813 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b7af8a9b-1045-4cb1-926d-47a54f3633ef-kubelet-dir\") pod \"csi-node-driver-cbtm5\" (UID: \"b7af8a9b-1045-4cb1-926d-47a54f3633ef\") " pod="calico-system/csi-node-driver-cbtm5" Sep 11 00:29:11.161004 kubelet[3134]: E0911 00:29:11.160925 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.161004 kubelet[3134]: W0911 00:29:11.160932 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.161004 kubelet[3134]: E0911 00:29:11.160946 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:11.161004 kubelet[3134]: I0911 00:29:11.160960 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b7af8a9b-1045-4cb1-926d-47a54f3633ef-varrun\") pod \"csi-node-driver-cbtm5\" (UID: \"b7af8a9b-1045-4cb1-926d-47a54f3633ef\") " pod="calico-system/csi-node-driver-cbtm5" Sep 11 00:29:11.161243 kubelet[3134]: E0911 00:29:11.161089 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.161243 kubelet[3134]: W0911 00:29:11.161095 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.161243 kubelet[3134]: E0911 00:29:11.161106 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:11.161243 kubelet[3134]: I0911 00:29:11.161120 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b7af8a9b-1045-4cb1-926d-47a54f3633ef-socket-dir\") pod \"csi-node-driver-cbtm5\" (UID: \"b7af8a9b-1045-4cb1-926d-47a54f3633ef\") " pod="calico-system/csi-node-driver-cbtm5" Sep 11 00:29:11.161243 kubelet[3134]: E0911 00:29:11.161242 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.161370 kubelet[3134]: W0911 00:29:11.161248 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.161370 kubelet[3134]: E0911 00:29:11.161263 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:11.161370 kubelet[3134]: I0911 00:29:11.161278 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxt8x\" (UniqueName: \"kubernetes.io/projected/b7af8a9b-1045-4cb1-926d-47a54f3633ef-kube-api-access-mxt8x\") pod \"csi-node-driver-cbtm5\" (UID: \"b7af8a9b-1045-4cb1-926d-47a54f3633ef\") " pod="calico-system/csi-node-driver-cbtm5" Sep 11 00:29:11.161555 kubelet[3134]: E0911 00:29:11.161540 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.161555 kubelet[3134]: W0911 00:29:11.161551 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.161616 kubelet[3134]: E0911 00:29:11.161570 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:11.161701 kubelet[3134]: E0911 00:29:11.161690 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.161731 kubelet[3134]: W0911 00:29:11.161706 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.161731 kubelet[3134]: E0911 00:29:11.161720 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:11.161864 kubelet[3134]: E0911 00:29:11.161845 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.161864 kubelet[3134]: W0911 00:29:11.161862 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.161962 kubelet[3134]: E0911 00:29:11.161876 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:11.162016 kubelet[3134]: E0911 00:29:11.161976 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.162016 kubelet[3134]: W0911 00:29:11.161981 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.162016 kubelet[3134]: E0911 00:29:11.161991 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:11.162141 kubelet[3134]: E0911 00:29:11.162115 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.162141 kubelet[3134]: W0911 00:29:11.162120 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.162141 kubelet[3134]: E0911 00:29:11.162138 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:11.162275 kubelet[3134]: E0911 00:29:11.162237 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.162275 kubelet[3134]: W0911 00:29:11.162242 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.162275 kubelet[3134]: E0911 00:29:11.162254 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:11.162418 kubelet[3134]: E0911 00:29:11.162356 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.162418 kubelet[3134]: W0911 00:29:11.162361 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.162418 kubelet[3134]: E0911 00:29:11.162367 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:11.162523 kubelet[3134]: E0911 00:29:11.162475 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.162523 kubelet[3134]: W0911 00:29:11.162481 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.162523 kubelet[3134]: E0911 00:29:11.162487 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:11.182372 containerd[1713]: time="2025-09-11T00:29:11.181786898Z" level=info msg="connecting to shim 4a9b8ad2d9a3e82e765230605d1656f592c432c6c8083f19fa1954d2521547fe" address="unix:///run/containerd/s/9fbc8a46f281011ceb1024aa604d6fc3eb71cea1239ff768107063d7174e8fcd" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:29:11.211587 systemd[1]: Started cri-containerd-4a9b8ad2d9a3e82e765230605d1656f592c432c6c8083f19fa1954d2521547fe.scope - libcontainer container 4a9b8ad2d9a3e82e765230605d1656f592c432c6c8083f19fa1954d2521547fe. 
Sep 11 00:29:11.247993 containerd[1713]: time="2025-09-11T00:29:11.247884870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-czwjg,Uid:8120899d-6a8e-4934-8bae-5bc6978e3f63,Namespace:calico-system,Attempt:0,} returns sandbox id \"4a9b8ad2d9a3e82e765230605d1656f592c432c6c8083f19fa1954d2521547fe\"" Sep 11 00:29:11.261983 kubelet[3134]: E0911 00:29:11.261846 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.261983 kubelet[3134]: W0911 00:29:11.261860 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.261983 kubelet[3134]: E0911 00:29:11.261871 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:11.263581 kubelet[3134]: E0911 00:29:11.263565 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.263581 kubelet[3134]: W0911 00:29:11.263581 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.263658 kubelet[3134]: E0911 00:29:11.263598 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:11.263755 kubelet[3134]: E0911 00:29:11.263743 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.263786 kubelet[3134]: W0911 00:29:11.263773 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.263855 kubelet[3134]: E0911 00:29:11.263842 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:11.263939 kubelet[3134]: E0911 00:29:11.263932 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.263966 kubelet[3134]: W0911 00:29:11.263940 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.264039 kubelet[3134]: E0911 00:29:11.264029 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:11.264098 kubelet[3134]: E0911 00:29:11.264089 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.264124 kubelet[3134]: W0911 00:29:11.264106 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.264124 kubelet[3134]: E0911 00:29:11.264120 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:11.264230 kubelet[3134]: E0911 00:29:11.264224 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.264271 kubelet[3134]: W0911 00:29:11.264265 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.264305 kubelet[3134]: E0911 00:29:11.264298 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:11.264538 kubelet[3134]: E0911 00:29:11.264429 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.264538 kubelet[3134]: W0911 00:29:11.264435 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.264538 kubelet[3134]: E0911 00:29:11.264444 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:11.264969 kubelet[3134]: E0911 00:29:11.264921 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.265270 kubelet[3134]: W0911 00:29:11.265089 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.265270 kubelet[3134]: E0911 00:29:11.265105 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:11.265467 kubelet[3134]: E0911 00:29:11.265455 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.265500 kubelet[3134]: W0911 00:29:11.265468 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.265500 kubelet[3134]: E0911 00:29:11.265481 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:11.265644 kubelet[3134]: E0911 00:29:11.265632 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.265682 kubelet[3134]: W0911 00:29:11.265653 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.265741 kubelet[3134]: E0911 00:29:11.265728 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:11.265807 kubelet[3134]: E0911 00:29:11.265798 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.265807 kubelet[3134]: W0911 00:29:11.265805 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.265879 kubelet[3134]: E0911 00:29:11.265869 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:11.265933 kubelet[3134]: E0911 00:29:11.265923 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.265933 kubelet[3134]: W0911 00:29:11.265930 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.266015 kubelet[3134]: E0911 00:29:11.266004 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:11.266067 kubelet[3134]: E0911 00:29:11.266058 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.266067 kubelet[3134]: W0911 00:29:11.266064 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.266115 kubelet[3134]: E0911 00:29:11.266079 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:11.266188 kubelet[3134]: E0911 00:29:11.266179 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.266188 kubelet[3134]: W0911 00:29:11.266186 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.266240 kubelet[3134]: E0911 00:29:11.266199 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:11.266310 kubelet[3134]: E0911 00:29:11.266301 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.266310 kubelet[3134]: W0911 00:29:11.266308 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.266356 kubelet[3134]: E0911 00:29:11.266321 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:11.266483 kubelet[3134]: E0911 00:29:11.266473 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.266483 kubelet[3134]: W0911 00:29:11.266481 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.266533 kubelet[3134]: E0911 00:29:11.266494 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:11.266818 kubelet[3134]: E0911 00:29:11.266733 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.266818 kubelet[3134]: W0911 00:29:11.266743 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.266818 kubelet[3134]: E0911 00:29:11.266755 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:11.267057 kubelet[3134]: E0911 00:29:11.267046 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.267094 kubelet[3134]: W0911 00:29:11.267058 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.267094 kubelet[3134]: E0911 00:29:11.267085 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:11.267237 kubelet[3134]: E0911 00:29:11.267199 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.267237 kubelet[3134]: W0911 00:29:11.267205 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.267372 kubelet[3134]: E0911 00:29:11.267292 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:11.267372 kubelet[3134]: E0911 00:29:11.267323 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.267372 kubelet[3134]: W0911 00:29:11.267328 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.267601 kubelet[3134]: E0911 00:29:11.267403 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:11.267601 kubelet[3134]: E0911 00:29:11.267453 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.267601 kubelet[3134]: W0911 00:29:11.267458 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.267601 kubelet[3134]: E0911 00:29:11.267465 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:11.267693 kubelet[3134]: E0911 00:29:11.267608 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.267693 kubelet[3134]: W0911 00:29:11.267614 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.267693 kubelet[3134]: E0911 00:29:11.267622 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:11.267767 kubelet[3134]: E0911 00:29:11.267749 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.267767 kubelet[3134]: W0911 00:29:11.267755 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.267767 kubelet[3134]: E0911 00:29:11.267765 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:11.268227 kubelet[3134]: E0911 00:29:11.268203 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.268227 kubelet[3134]: W0911 00:29:11.268218 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.268227 kubelet[3134]: E0911 00:29:11.268233 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:11.268444 kubelet[3134]: E0911 00:29:11.268431 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.268444 kubelet[3134]: W0911 00:29:11.268440 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.268493 kubelet[3134]: E0911 00:29:11.268452 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:11.273378 kubelet[3134]: E0911 00:29:11.273319 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:11.273378 kubelet[3134]: W0911 00:29:11.273333 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:11.273378 kubelet[3134]: E0911 00:29:11.273346 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:12.273660 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2360510921.mount: Deactivated successfully. 
Sep 11 00:29:12.843932 kubelet[3134]: E0911 00:29:12.843891 3134 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cbtm5" podUID="b7af8a9b-1045-4cb1-926d-47a54f3633ef" Sep 11 00:29:12.946173 containerd[1713]: time="2025-09-11T00:29:12.946136909Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:12.950185 containerd[1713]: time="2025-09-11T00:29:12.950157752Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 11 00:29:12.952848 containerd[1713]: time="2025-09-11T00:29:12.952803294Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:12.962778 containerd[1713]: time="2025-09-11T00:29:12.962731390Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:12.963130 containerd[1713]: time="2025-09-11T00:29:12.963029663Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 1.951463806s" Sep 11 00:29:12.963130 containerd[1713]: time="2025-09-11T00:29:12.963057217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference 
\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 11 00:29:12.964039 containerd[1713]: time="2025-09-11T00:29:12.963861468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 11 00:29:12.978327 containerd[1713]: time="2025-09-11T00:29:12.978002144Z" level=info msg="CreateContainer within sandbox \"eb1aa4b7d055086b384b85d7acdf1ce71b08af9fecc8799223e9693b3001cd21\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 11 00:29:12.997501 containerd[1713]: time="2025-09-11T00:29:12.997475592Z" level=info msg="Container 1a4f1a0615c3ee92e39f864951cfdf5cbfc5c3eec589ca75ff20da0d3b9c21bf: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:29:13.011478 containerd[1713]: time="2025-09-11T00:29:13.011455032Z" level=info msg="CreateContainer within sandbox \"eb1aa4b7d055086b384b85d7acdf1ce71b08af9fecc8799223e9693b3001cd21\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1a4f1a0615c3ee92e39f864951cfdf5cbfc5c3eec589ca75ff20da0d3b9c21bf\"" Sep 11 00:29:13.011825 containerd[1713]: time="2025-09-11T00:29:13.011783462Z" level=info msg="StartContainer for \"1a4f1a0615c3ee92e39f864951cfdf5cbfc5c3eec589ca75ff20da0d3b9c21bf\"" Sep 11 00:29:13.013410 containerd[1713]: time="2025-09-11T00:29:13.013369970Z" level=info msg="connecting to shim 1a4f1a0615c3ee92e39f864951cfdf5cbfc5c3eec589ca75ff20da0d3b9c21bf" address="unix:///run/containerd/s/24964116e7afed12fc972af897828df1542f0f48b00ee4251c4f3e8b791bc0df" protocol=ttrpc version=3 Sep 11 00:29:13.037582 systemd[1]: Started cri-containerd-1a4f1a0615c3ee92e39f864951cfdf5cbfc5c3eec589ca75ff20da0d3b9c21bf.scope - libcontainer container 1a4f1a0615c3ee92e39f864951cfdf5cbfc5c3eec589ca75ff20da0d3b9c21bf. 
Sep 11 00:29:13.086679 containerd[1713]: time="2025-09-11T00:29:13.086634577Z" level=info msg="StartContainer for \"1a4f1a0615c3ee92e39f864951cfdf5cbfc5c3eec589ca75ff20da0d3b9c21bf\" returns successfully" Sep 11 00:29:13.921615 kubelet[3134]: I0911 00:29:13.920940 3134 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-cf99bdc96-hfm5q" podStartSLOduration=1.9684711529999999 podStartE2EDuration="3.920921535s" podCreationTimestamp="2025-09-11 00:29:10 +0000 UTC" firstStartedPulling="2025-09-11 00:29:11.011302329 +0000 UTC m=+21.299989908" lastFinishedPulling="2025-09-11 00:29:12.963752705 +0000 UTC m=+23.252440290" observedRunningTime="2025-09-11 00:29:13.920811939 +0000 UTC m=+24.209499531" watchObservedRunningTime="2025-09-11 00:29:13.920921535 +0000 UTC m=+24.209609122" Sep 11 00:29:13.968052 kubelet[3134]: E0911 00:29:13.968028 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:13.968052 kubelet[3134]: W0911 00:29:13.968045 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:13.968052 kubelet[3134]: E0911 00:29:13.968062 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:13.968287 kubelet[3134]: E0911 00:29:13.968171 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:13.968287 kubelet[3134]: W0911 00:29:13.968177 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:13.968287 kubelet[3134]: E0911 00:29:13.968186 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:13.968287 kubelet[3134]: E0911 00:29:13.968278 3134 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:13.968287 kubelet[3134]: W0911 00:29:13.968283 3134 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:13.968475 kubelet[3134]: E0911 00:29:13.968290 3134 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:14.453929 containerd[1713]: time="2025-09-11T00:29:14.453891903Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:14.456316 containerd[1713]: time="2025-09-11T00:29:14.456277308Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 11 00:29:14.458985 containerd[1713]: time="2025-09-11T00:29:14.458932940Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:14.463269 containerd[1713]: time="2025-09-11T00:29:14.462749056Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:14.463269 containerd[1713]: time="2025-09-11T00:29:14.463169115Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.499278832s" Sep 11 00:29:14.463269 containerd[1713]: time="2025-09-11T00:29:14.463196332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 11 00:29:14.465602 containerd[1713]: time="2025-09-11T00:29:14.465572586Z" level=info msg="CreateContainer within sandbox \"4a9b8ad2d9a3e82e765230605d1656f592c432c6c8083f19fa1954d2521547fe\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 11 00:29:14.488419 containerd[1713]: time="2025-09-11T00:29:14.487709928Z" level=info msg="Container 35f28a22ca3f146232459370400b8a3a775e4edd2042851db2656556cbcda5c9: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:29:14.512639 containerd[1713]: time="2025-09-11T00:29:14.512609166Z" level=info msg="CreateContainer within sandbox \"4a9b8ad2d9a3e82e765230605d1656f592c432c6c8083f19fa1954d2521547fe\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"35f28a22ca3f146232459370400b8a3a775e4edd2042851db2656556cbcda5c9\"" Sep 11 00:29:14.513135 containerd[1713]: time="2025-09-11T00:29:14.513110904Z" level=info msg="StartContainer for \"35f28a22ca3f146232459370400b8a3a775e4edd2042851db2656556cbcda5c9\"" Sep 11 00:29:14.514646 containerd[1713]: time="2025-09-11T00:29:14.514606450Z" level=info msg="connecting to shim 35f28a22ca3f146232459370400b8a3a775e4edd2042851db2656556cbcda5c9" address="unix:///run/containerd/s/9fbc8a46f281011ceb1024aa604d6fc3eb71cea1239ff768107063d7174e8fcd" protocol=ttrpc version=3 Sep 11 00:29:14.539521 systemd[1]: Started cri-containerd-35f28a22ca3f146232459370400b8a3a775e4edd2042851db2656556cbcda5c9.scope - libcontainer container 35f28a22ca3f146232459370400b8a3a775e4edd2042851db2656556cbcda5c9. Sep 11 00:29:14.577663 containerd[1713]: time="2025-09-11T00:29:14.577638687Z" level=info msg="StartContainer for \"35f28a22ca3f146232459370400b8a3a775e4edd2042851db2656556cbcda5c9\" returns successfully" Sep 11 00:29:14.582878 systemd[1]: cri-containerd-35f28a22ca3f146232459370400b8a3a775e4edd2042851db2656556cbcda5c9.scope: Deactivated successfully. 
Sep 11 00:29:14.586963 containerd[1713]: time="2025-09-11T00:29:14.586899426Z" level=info msg="TaskExit event in podsandbox handler container_id:\"35f28a22ca3f146232459370400b8a3a775e4edd2042851db2656556cbcda5c9\" id:\"35f28a22ca3f146232459370400b8a3a775e4edd2042851db2656556cbcda5c9\" pid:3806 exited_at:{seconds:1757550554 nanos:586137974}" Sep 11 00:29:14.587107 containerd[1713]: time="2025-09-11T00:29:14.586948037Z" level=info msg="received exit event container_id:\"35f28a22ca3f146232459370400b8a3a775e4edd2042851db2656556cbcda5c9\" id:\"35f28a22ca3f146232459370400b8a3a775e4edd2042851db2656556cbcda5c9\" pid:3806 exited_at:{seconds:1757550554 nanos:586137974}" Sep 11 00:29:14.605390 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35f28a22ca3f146232459370400b8a3a775e4edd2042851db2656556cbcda5c9-rootfs.mount: Deactivated successfully. Sep 11 00:29:14.843790 kubelet[3134]: E0911 00:29:14.843685 3134 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cbtm5" podUID="b7af8a9b-1045-4cb1-926d-47a54f3633ef" Sep 11 00:29:16.843849 kubelet[3134]: E0911 00:29:16.843788 3134 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cbtm5" podUID="b7af8a9b-1045-4cb1-926d-47a54f3633ef" Sep 11 00:29:17.919135 containerd[1713]: time="2025-09-11T00:29:17.919085799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 11 00:29:18.843642 kubelet[3134]: E0911 00:29:18.843599 3134 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cbtm5" podUID="b7af8a9b-1045-4cb1-926d-47a54f3633ef" Sep 11 00:29:20.689695 containerd[1713]: time="2025-09-11T00:29:20.689585001Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:20.693026 containerd[1713]: time="2025-09-11T00:29:20.692893280Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 11 00:29:20.696009 containerd[1713]: time="2025-09-11T00:29:20.695983938Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:20.700298 containerd[1713]: time="2025-09-11T00:29:20.699777304Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:20.700298 containerd[1713]: time="2025-09-11T00:29:20.700191194Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 2.780272277s" Sep 11 00:29:20.700298 containerd[1713]: time="2025-09-11T00:29:20.700230411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 11 00:29:20.703085 containerd[1713]: time="2025-09-11T00:29:20.703057974Z" level=info msg="CreateContainer within sandbox \"4a9b8ad2d9a3e82e765230605d1656f592c432c6c8083f19fa1954d2521547fe\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 11 00:29:20.724251 containerd[1713]: time="2025-09-11T00:29:20.724022101Z" level=info msg="Container e1c2b28254d2a679b64feaaa117e9a266a4efbe2edece338ff432d205b57f19a: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:29:20.746184 containerd[1713]: time="2025-09-11T00:29:20.746158049Z" level=info msg="CreateContainer within sandbox \"4a9b8ad2d9a3e82e765230605d1656f592c432c6c8083f19fa1954d2521547fe\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e1c2b28254d2a679b64feaaa117e9a266a4efbe2edece338ff432d205b57f19a\"" Sep 11 00:29:20.746904 containerd[1713]: time="2025-09-11T00:29:20.746497508Z" level=info msg="StartContainer for \"e1c2b28254d2a679b64feaaa117e9a266a4efbe2edece338ff432d205b57f19a\"" Sep 11 00:29:20.748118 containerd[1713]: time="2025-09-11T00:29:20.748094182Z" level=info msg="connecting to shim e1c2b28254d2a679b64feaaa117e9a266a4efbe2edece338ff432d205b57f19a" address="unix:///run/containerd/s/9fbc8a46f281011ceb1024aa604d6fc3eb71cea1239ff768107063d7174e8fcd" protocol=ttrpc version=3 Sep 11 00:29:20.770521 systemd[1]: Started cri-containerd-e1c2b28254d2a679b64feaaa117e9a266a4efbe2edece338ff432d205b57f19a.scope - libcontainer container e1c2b28254d2a679b64feaaa117e9a266a4efbe2edece338ff432d205b57f19a. 
Sep 11 00:29:20.806663 containerd[1713]: time="2025-09-11T00:29:20.806640601Z" level=info msg="StartContainer for \"e1c2b28254d2a679b64feaaa117e9a266a4efbe2edece338ff432d205b57f19a\" returns successfully" Sep 11 00:29:20.844398 kubelet[3134]: E0911 00:29:20.844008 3134 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cbtm5" podUID="b7af8a9b-1045-4cb1-926d-47a54f3633ef" Sep 11 00:29:22.024631 systemd[1]: cri-containerd-e1c2b28254d2a679b64feaaa117e9a266a4efbe2edece338ff432d205b57f19a.scope: Deactivated successfully. Sep 11 00:29:22.024904 systemd[1]: cri-containerd-e1c2b28254d2a679b64feaaa117e9a266a4efbe2edece338ff432d205b57f19a.scope: Consumed 404ms CPU time, 193M memory peak, 171.3M written to disk. Sep 11 00:29:22.026424 containerd[1713]: time="2025-09-11T00:29:22.026276211Z" level=info msg="received exit event container_id:\"e1c2b28254d2a679b64feaaa117e9a266a4efbe2edece338ff432d205b57f19a\" id:\"e1c2b28254d2a679b64feaaa117e9a266a4efbe2edece338ff432d205b57f19a\" pid:3865 exited_at:{seconds:1757550562 nanos:26024219}" Sep 11 00:29:22.026424 containerd[1713]: time="2025-09-11T00:29:22.026329696Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e1c2b28254d2a679b64feaaa117e9a266a4efbe2edece338ff432d205b57f19a\" id:\"e1c2b28254d2a679b64feaaa117e9a266a4efbe2edece338ff432d205b57f19a\" pid:3865 exited_at:{seconds:1757550562 nanos:26024219}" Sep 11 00:29:22.046119 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1c2b28254d2a679b64feaaa117e9a266a4efbe2edece338ff432d205b57f19a-rootfs.mount: Deactivated successfully. 
Sep 11 00:29:22.058290 kubelet[3134]: I0911 00:29:22.058270 3134 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 11 00:29:22.094148 systemd[1]: Created slice kubepods-burstable-pod175c79fb_099e_4954_8929_ac0445a05992.slice - libcontainer container kubepods-burstable-pod175c79fb_099e_4954_8929_ac0445a05992.slice. Sep 11 00:29:22.111439 systemd[1]: Created slice kubepods-burstable-pod390a3f06_83fb_443b_bd56_822dbd89138c.slice - libcontainer container kubepods-burstable-pod390a3f06_83fb_443b_bd56_822dbd89138c.slice. Sep 11 00:29:22.132605 systemd[1]: Created slice kubepods-besteffort-pod1f68b9e3_b7a9_4e5a_95d3_b579e5278782.slice - libcontainer container kubepods-besteffort-pod1f68b9e3_b7a9_4e5a_95d3_b579e5278782.slice. Sep 11 00:29:22.139166 systemd[1]: Created slice kubepods-besteffort-poddfc938aa_75cb_4bd5_92f2_c0079a9800da.slice - libcontainer container kubepods-besteffort-poddfc938aa_75cb_4bd5_92f2_c0079a9800da.slice. Sep 11 00:29:22.144587 kubelet[3134]: I0911 00:29:22.144142 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d766bed5-827d-43f6-8bff-229467c4b9d1-calico-apiserver-certs\") pod \"calico-apiserver-5fbf5c6d8-z67wz\" (UID: \"d766bed5-827d-43f6-8bff-229467c4b9d1\") " pod="calico-apiserver/calico-apiserver-5fbf5c6d8-z67wz" Sep 11 00:29:22.145397 kubelet[3134]: I0911 00:29:22.144717 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/390a3f06-83fb-443b-bd56-822dbd89138c-config-volume\") pod \"coredns-668d6bf9bc-xnj7p\" (UID: \"390a3f06-83fb-443b-bd56-822dbd89138c\") " pod="kube-system/coredns-668d6bf9bc-xnj7p" Sep 11 00:29:22.145397 kubelet[3134]: I0911 00:29:22.144746 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6de28527-91a4-4436-8fe3-69f9a4c814c5-config\") pod \"goldmane-54d579b49d-r824m\" (UID: \"6de28527-91a4-4436-8fe3-69f9a4c814c5\") " pod="calico-system/goldmane-54d579b49d-r824m" Sep 11 00:29:22.145397 kubelet[3134]: I0911 00:29:22.144767 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dfc938aa-75cb-4bd5-92f2-c0079a9800da-whisker-backend-key-pair\") pod \"whisker-79ccc8689b-59t9c\" (UID: \"dfc938aa-75cb-4bd5-92f2-c0079a9800da\") " pod="calico-system/whisker-79ccc8689b-59t9c" Sep 11 00:29:22.145397 kubelet[3134]: I0911 00:29:22.144799 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/6de28527-91a4-4436-8fe3-69f9a4c814c5-goldmane-key-pair\") pod \"goldmane-54d579b49d-r824m\" (UID: \"6de28527-91a4-4436-8fe3-69f9a4c814c5\") " pod="calico-system/goldmane-54d579b49d-r824m" Sep 11 00:29:22.145397 kubelet[3134]: I0911 00:29:22.144824 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1f68b9e3-b7a9-4e5a-95d3-b579e5278782-calico-apiserver-certs\") pod \"calico-apiserver-5fbf5c6d8-9hpwn\" (UID: \"1f68b9e3-b7a9-4e5a-95d3-b579e5278782\") " pod="calico-apiserver/calico-apiserver-5fbf5c6d8-9hpwn" Sep 11 00:29:22.145621 kubelet[3134]: I0911 00:29:22.144847 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnwlp\" (UniqueName: \"kubernetes.io/projected/175c79fb-099e-4954-8929-ac0445a05992-kube-api-access-qnwlp\") pod \"coredns-668d6bf9bc-8ksdt\" (UID: \"175c79fb-099e-4954-8929-ac0445a05992\") " pod="kube-system/coredns-668d6bf9bc-8ksdt" Sep 11 00:29:22.145621 kubelet[3134]: I0911 00:29:22.144868 3134 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5drm\" (UniqueName: \"kubernetes.io/projected/d766bed5-827d-43f6-8bff-229467c4b9d1-kube-api-access-f5drm\") pod \"calico-apiserver-5fbf5c6d8-z67wz\" (UID: \"d766bed5-827d-43f6-8bff-229467c4b9d1\") " pod="calico-apiserver/calico-apiserver-5fbf5c6d8-z67wz" Sep 11 00:29:22.145621 kubelet[3134]: I0911 00:29:22.144886 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/175c79fb-099e-4954-8929-ac0445a05992-config-volume\") pod \"coredns-668d6bf9bc-8ksdt\" (UID: \"175c79fb-099e-4954-8929-ac0445a05992\") " pod="kube-system/coredns-668d6bf9bc-8ksdt" Sep 11 00:29:22.145621 kubelet[3134]: I0911 00:29:22.144907 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wntj9\" (UniqueName: \"kubernetes.io/projected/390a3f06-83fb-443b-bd56-822dbd89138c-kube-api-access-wntj9\") pod \"coredns-668d6bf9bc-xnj7p\" (UID: \"390a3f06-83fb-443b-bd56-822dbd89138c\") " pod="kube-system/coredns-668d6bf9bc-xnj7p" Sep 11 00:29:22.145621 kubelet[3134]: I0911 00:29:22.144927 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dfc938aa-75cb-4bd5-92f2-c0079a9800da-whisker-ca-bundle\") pod \"whisker-79ccc8689b-59t9c\" (UID: \"dfc938aa-75cb-4bd5-92f2-c0079a9800da\") " pod="calico-system/whisker-79ccc8689b-59t9c" Sep 11 00:29:22.145746 kubelet[3134]: I0911 00:29:22.144944 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq84n\" (UniqueName: \"kubernetes.io/projected/6de28527-91a4-4436-8fe3-69f9a4c814c5-kube-api-access-nq84n\") pod \"goldmane-54d579b49d-r824m\" (UID: \"6de28527-91a4-4436-8fe3-69f9a4c814c5\") " pod="calico-system/goldmane-54d579b49d-r824m" Sep 11 00:29:22.145746 
kubelet[3134]: I0911 00:29:22.144968 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25f80610-42d7-46ea-b9f4-52f2ac7254cc-tigera-ca-bundle\") pod \"calico-kube-controllers-79475b87fb-9zlnb\" (UID: \"25f80610-42d7-46ea-b9f4-52f2ac7254cc\") " pod="calico-system/calico-kube-controllers-79475b87fb-9zlnb" Sep 11 00:29:22.145746 kubelet[3134]: I0911 00:29:22.144986 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbr6d\" (UniqueName: \"kubernetes.io/projected/25f80610-42d7-46ea-b9f4-52f2ac7254cc-kube-api-access-hbr6d\") pod \"calico-kube-controllers-79475b87fb-9zlnb\" (UID: \"25f80610-42d7-46ea-b9f4-52f2ac7254cc\") " pod="calico-system/calico-kube-controllers-79475b87fb-9zlnb" Sep 11 00:29:22.145746 kubelet[3134]: I0911 00:29:22.145008 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6de28527-91a4-4436-8fe3-69f9a4c814c5-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-r824m\" (UID: \"6de28527-91a4-4436-8fe3-69f9a4c814c5\") " pod="calico-system/goldmane-54d579b49d-r824m" Sep 11 00:29:22.145746 kubelet[3134]: I0911 00:29:22.145029 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpqk6\" (UniqueName: \"kubernetes.io/projected/dfc938aa-75cb-4bd5-92f2-c0079a9800da-kube-api-access-gpqk6\") pod \"whisker-79ccc8689b-59t9c\" (UID: \"dfc938aa-75cb-4bd5-92f2-c0079a9800da\") " pod="calico-system/whisker-79ccc8689b-59t9c" Sep 11 00:29:22.145879 kubelet[3134]: I0911 00:29:22.145051 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5c8c\" (UniqueName: \"kubernetes.io/projected/1f68b9e3-b7a9-4e5a-95d3-b579e5278782-kube-api-access-v5c8c\") pod 
\"calico-apiserver-5fbf5c6d8-9hpwn\" (UID: \"1f68b9e3-b7a9-4e5a-95d3-b579e5278782\") " pod="calico-apiserver/calico-apiserver-5fbf5c6d8-9hpwn" Sep 11 00:29:22.147369 systemd[1]: Created slice kubepods-besteffort-pod25f80610_42d7_46ea_b9f4_52f2ac7254cc.slice - libcontainer container kubepods-besteffort-pod25f80610_42d7_46ea_b9f4_52f2ac7254cc.slice. Sep 11 00:29:22.155171 systemd[1]: Created slice kubepods-besteffort-podd766bed5_827d_43f6_8bff_229467c4b9d1.slice - libcontainer container kubepods-besteffort-podd766bed5_827d_43f6_8bff_229467c4b9d1.slice. Sep 11 00:29:22.159111 systemd[1]: Created slice kubepods-besteffort-pod6de28527_91a4_4436_8fe3_69f9a4c814c5.slice - libcontainer container kubepods-besteffort-pod6de28527_91a4_4436_8fe3_69f9a4c814c5.slice. Sep 11 00:29:22.401339 containerd[1713]: time="2025-09-11T00:29:22.401229143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8ksdt,Uid:175c79fb-099e-4954-8929-ac0445a05992,Namespace:kube-system,Attempt:0,}" Sep 11 00:29:22.416871 containerd[1713]: time="2025-09-11T00:29:22.416827251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xnj7p,Uid:390a3f06-83fb-443b-bd56-822dbd89138c,Namespace:kube-system,Attempt:0,}" Sep 11 00:29:22.437659 containerd[1713]: time="2025-09-11T00:29:22.437622116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fbf5c6d8-9hpwn,Uid:1f68b9e3-b7a9-4e5a-95d3-b579e5278782,Namespace:calico-apiserver,Attempt:0,}" Sep 11 00:29:22.443238 containerd[1713]: time="2025-09-11T00:29:22.443141802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79ccc8689b-59t9c,Uid:dfc938aa-75cb-4bd5-92f2-c0079a9800da,Namespace:calico-system,Attempt:0,}" Sep 11 00:29:22.450897 containerd[1713]: time="2025-09-11T00:29:22.450871262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79475b87fb-9zlnb,Uid:25f80610-42d7-46ea-b9f4-52f2ac7254cc,Namespace:calico-system,Attempt:0,}" Sep 11 
00:29:22.459346 containerd[1713]: time="2025-09-11T00:29:22.459315075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fbf5c6d8-z67wz,Uid:d766bed5-827d-43f6-8bff-229467c4b9d1,Namespace:calico-apiserver,Attempt:0,}" Sep 11 00:29:22.464168 containerd[1713]: time="2025-09-11T00:29:22.464076155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-r824m,Uid:6de28527-91a4-4436-8fe3-69f9a4c814c5,Namespace:calico-system,Attempt:0,}" Sep 11 00:29:22.850070 systemd[1]: Created slice kubepods-besteffort-podb7af8a9b_1045_4cb1_926d_47a54f3633ef.slice - libcontainer container kubepods-besteffort-podb7af8a9b_1045_4cb1_926d_47a54f3633ef.slice. Sep 11 00:29:22.852229 containerd[1713]: time="2025-09-11T00:29:22.852192785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cbtm5,Uid:b7af8a9b-1045-4cb1-926d-47a54f3633ef,Namespace:calico-system,Attempt:0,}" Sep 11 00:29:23.110173 containerd[1713]: time="2025-09-11T00:29:23.109730165Z" level=error msg="Failed to destroy network for sandbox \"0209ed0bbc2b6bcbda52fa5e9538a24911dac4ba58c26e8b32cec2aeee8727c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:29:23.113103 systemd[1]: run-netns-cni\x2d7e43fd2c\x2d4478\x2d5715\x2dfcb7\x2d1724b3676410.mount: Deactivated successfully. 
Sep 11 00:29:23.115374 containerd[1713]: time="2025-09-11T00:29:23.115273348Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8ksdt,Uid:175c79fb-099e-4954-8929-ac0445a05992,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0209ed0bbc2b6bcbda52fa5e9538a24911dac4ba58c26e8b32cec2aeee8727c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:29:23.115623 kubelet[3134]: E0911 00:29:23.115510 3134 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0209ed0bbc2b6bcbda52fa5e9538a24911dac4ba58c26e8b32cec2aeee8727c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:29:23.115623 kubelet[3134]: E0911 00:29:23.115585 3134 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0209ed0bbc2b6bcbda52fa5e9538a24911dac4ba58c26e8b32cec2aeee8727c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8ksdt" Sep 11 00:29:23.115623 kubelet[3134]: E0911 00:29:23.115608 3134 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0209ed0bbc2b6bcbda52fa5e9538a24911dac4ba58c26e8b32cec2aeee8727c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-8ksdt" Sep 11 00:29:23.116253 kubelet[3134]: E0911 00:29:23.115651 3134 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-8ksdt_kube-system(175c79fb-099e-4954-8929-ac0445a05992)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-8ksdt_kube-system(175c79fb-099e-4954-8929-ac0445a05992)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0209ed0bbc2b6bcbda52fa5e9538a24911dac4ba58c26e8b32cec2aeee8727c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-8ksdt" podUID="175c79fb-099e-4954-8929-ac0445a05992" Sep 11 00:29:23.134341 containerd[1713]: time="2025-09-11T00:29:23.134309075Z" level=error msg="Failed to destroy network for sandbox \"e69d6ae67673c4ec9e8eb7a596cbe902a3b961616152ee69244d7b629436d8f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:29:23.137308 systemd[1]: run-netns-cni\x2de8c534ad\x2d3d51\x2dee20\x2db847\x2d5d57db61905d.mount: Deactivated successfully. 
Sep 11 00:29:23.139641 containerd[1713]: time="2025-09-11T00:29:23.139572438Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xnj7p,Uid:390a3f06-83fb-443b-bd56-822dbd89138c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e69d6ae67673c4ec9e8eb7a596cbe902a3b961616152ee69244d7b629436d8f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:29:23.139978 kubelet[3134]: E0911 00:29:23.139935 3134 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e69d6ae67673c4ec9e8eb7a596cbe902a3b961616152ee69244d7b629436d8f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:29:23.140086 kubelet[3134]: E0911 00:29:23.139998 3134 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e69d6ae67673c4ec9e8eb7a596cbe902a3b961616152ee69244d7b629436d8f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xnj7p" Sep 11 00:29:23.140086 kubelet[3134]: E0911 00:29:23.140021 3134 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e69d6ae67673c4ec9e8eb7a596cbe902a3b961616152ee69244d7b629436d8f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-xnj7p" Sep 11 00:29:23.140086 kubelet[3134]: E0911 00:29:23.140063 3134 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-xnj7p_kube-system(390a3f06-83fb-443b-bd56-822dbd89138c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-xnj7p_kube-system(390a3f06-83fb-443b-bd56-822dbd89138c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e69d6ae67673c4ec9e8eb7a596cbe902a3b961616152ee69244d7b629436d8f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-xnj7p" podUID="390a3f06-83fb-443b-bd56-822dbd89138c" Sep 11 00:29:23.160251 containerd[1713]: time="2025-09-11T00:29:23.160194832Z" level=error msg="Failed to destroy network for sandbox \"974479f671d0fa96534c54f6e83208b7742c10b12fbd5844d00dc5b4a89f8eae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:29:23.163554 systemd[1]: run-netns-cni\x2d951482dc\x2d7458\x2de446\x2db0f7\x2d8592ae67a894.mount: Deactivated successfully. 
Sep 11 00:29:23.167157 containerd[1713]: time="2025-09-11T00:29:23.167121996Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79475b87fb-9zlnb,Uid:25f80610-42d7-46ea-b9f4-52f2ac7254cc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"974479f671d0fa96534c54f6e83208b7742c10b12fbd5844d00dc5b4a89f8eae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:29:23.167584 kubelet[3134]: E0911 00:29:23.167437 3134 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"974479f671d0fa96534c54f6e83208b7742c10b12fbd5844d00dc5b4a89f8eae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:29:23.167584 kubelet[3134]: E0911 00:29:23.167488 3134 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"974479f671d0fa96534c54f6e83208b7742c10b12fbd5844d00dc5b4a89f8eae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79475b87fb-9zlnb" Sep 11 00:29:23.167584 kubelet[3134]: E0911 00:29:23.167509 3134 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"974479f671d0fa96534c54f6e83208b7742c10b12fbd5844d00dc5b4a89f8eae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-79475b87fb-9zlnb" Sep 11 00:29:23.167703 kubelet[3134]: E0911 00:29:23.167551 3134 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-79475b87fb-9zlnb_calico-system(25f80610-42d7-46ea-b9f4-52f2ac7254cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-79475b87fb-9zlnb_calico-system(25f80610-42d7-46ea-b9f4-52f2ac7254cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"974479f671d0fa96534c54f6e83208b7742c10b12fbd5844d00dc5b4a89f8eae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79475b87fb-9zlnb" podUID="25f80610-42d7-46ea-b9f4-52f2ac7254cc" Sep 11 00:29:23.172686 containerd[1713]: time="2025-09-11T00:29:23.172654192Z" level=error msg="Failed to destroy network for sandbox \"f4573e0ce894d92da6f66db740eff9a1a7fb05f4e3eb15760568b4838ef4ed6b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:29:23.176674 systemd[1]: run-netns-cni\x2d741e46c0\x2dc4db\x2dfe0d\x2d03b3\x2d410035c2e228.mount: Deactivated successfully. 
Sep 11 00:29:23.179184 containerd[1713]: time="2025-09-11T00:29:23.179145962Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cbtm5,Uid:b7af8a9b-1045-4cb1-926d-47a54f3633ef,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4573e0ce894d92da6f66db740eff9a1a7fb05f4e3eb15760568b4838ef4ed6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:29:23.179367 kubelet[3134]: E0911 00:29:23.179322 3134 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4573e0ce894d92da6f66db740eff9a1a7fb05f4e3eb15760568b4838ef4ed6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:29:23.179437 kubelet[3134]: E0911 00:29:23.179363 3134 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4573e0ce894d92da6f66db740eff9a1a7fb05f4e3eb15760568b4838ef4ed6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cbtm5" Sep 11 00:29:23.179984 kubelet[3134]: E0911 00:29:23.179954 3134 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4573e0ce894d92da6f66db740eff9a1a7fb05f4e3eb15760568b4838ef4ed6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cbtm5" 
Sep 11 00:29:23.181146 kubelet[3134]: E0911 00:29:23.180225 3134 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cbtm5_calico-system(b7af8a9b-1045-4cb1-926d-47a54f3633ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cbtm5_calico-system(b7af8a9b-1045-4cb1-926d-47a54f3633ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f4573e0ce894d92da6f66db740eff9a1a7fb05f4e3eb15760568b4838ef4ed6b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cbtm5" podUID="b7af8a9b-1045-4cb1-926d-47a54f3633ef" Sep 11 00:29:23.214027 containerd[1713]: time="2025-09-11T00:29:23.213912820Z" level=error msg="Failed to destroy network for sandbox \"95d628a99cd0d7af4b8bf4381a69892c784a2a19878f6b409acac5a487c4c1de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:29:23.214443 containerd[1713]: time="2025-09-11T00:29:23.214416833Z" level=error msg="Failed to destroy network for sandbox \"8f16bd7d7c0c2e75d190702f15c83684d3fc2d4c54a6b68c2c67612522e68645\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:29:23.217177 containerd[1713]: time="2025-09-11T00:29:23.217004154Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fbf5c6d8-9hpwn,Uid:1f68b9e3-b7a9-4e5a-95d3-b579e5278782,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"95d628a99cd0d7af4b8bf4381a69892c784a2a19878f6b409acac5a487c4c1de\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:29:23.217814 kubelet[3134]: E0911 00:29:23.217353 3134 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95d628a99cd0d7af4b8bf4381a69892c784a2a19878f6b409acac5a487c4c1de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:29:23.217814 kubelet[3134]: E0911 00:29:23.217418 3134 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95d628a99cd0d7af4b8bf4381a69892c784a2a19878f6b409acac5a487c4c1de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fbf5c6d8-9hpwn" Sep 11 00:29:23.217814 kubelet[3134]: E0911 00:29:23.217437 3134 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95d628a99cd0d7af4b8bf4381a69892c784a2a19878f6b409acac5a487c4c1de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fbf5c6d8-9hpwn" Sep 11 00:29:23.218001 kubelet[3134]: E0911 00:29:23.217475 3134 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5fbf5c6d8-9hpwn_calico-apiserver(1f68b9e3-b7a9-4e5a-95d3-b579e5278782)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-5fbf5c6d8-9hpwn_calico-apiserver(1f68b9e3-b7a9-4e5a-95d3-b579e5278782)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"95d628a99cd0d7af4b8bf4381a69892c784a2a19878f6b409acac5a487c4c1de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5fbf5c6d8-9hpwn" podUID="1f68b9e3-b7a9-4e5a-95d3-b579e5278782" Sep 11 00:29:23.220706 containerd[1713]: time="2025-09-11T00:29:23.220474860Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79ccc8689b-59t9c,Uid:dfc938aa-75cb-4bd5-92f2-c0079a9800da,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f16bd7d7c0c2e75d190702f15c83684d3fc2d4c54a6b68c2c67612522e68645\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:29:23.221097 kubelet[3134]: E0911 00:29:23.221053 3134 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f16bd7d7c0c2e75d190702f15c83684d3fc2d4c54a6b68c2c67612522e68645\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:29:23.221097 kubelet[3134]: E0911 00:29:23.221092 3134 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f16bd7d7c0c2e75d190702f15c83684d3fc2d4c54a6b68c2c67612522e68645\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-79ccc8689b-59t9c" Sep 11 00:29:23.221198 kubelet[3134]: E0911 00:29:23.221108 3134 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f16bd7d7c0c2e75d190702f15c83684d3fc2d4c54a6b68c2c67612522e68645\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-79ccc8689b-59t9c" Sep 11 00:29:23.221198 kubelet[3134]: E0911 00:29:23.221149 3134 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-79ccc8689b-59t9c_calico-system(dfc938aa-75cb-4bd5-92f2-c0079a9800da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-79ccc8689b-59t9c_calico-system(dfc938aa-75cb-4bd5-92f2-c0079a9800da)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8f16bd7d7c0c2e75d190702f15c83684d3fc2d4c54a6b68c2c67612522e68645\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-79ccc8689b-59t9c" podUID="dfc938aa-75cb-4bd5-92f2-c0079a9800da" Sep 11 00:29:23.232013 containerd[1713]: time="2025-09-11T00:29:23.231984447Z" level=error msg="Failed to destroy network for sandbox \"b0d8752ab980bd1deff68da9d8d1305d50a055f8bd7ad422702f1ecbca1d297c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:29:23.232121 containerd[1713]: time="2025-09-11T00:29:23.231984599Z" level=error msg="Failed to destroy network for sandbox \"91e2633d068448b5940f01ac2a74a11e9f990e78285af5d8a96fc1486a2afa3e\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:29:23.237332 containerd[1713]: time="2025-09-11T00:29:23.237246663Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fbf5c6d8-z67wz,Uid:d766bed5-827d-43f6-8bff-229467c4b9d1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0d8752ab980bd1deff68da9d8d1305d50a055f8bd7ad422702f1ecbca1d297c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:29:23.237489 kubelet[3134]: E0911 00:29:23.237423 3134 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0d8752ab980bd1deff68da9d8d1305d50a055f8bd7ad422702f1ecbca1d297c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:29:23.237489 kubelet[3134]: E0911 00:29:23.237465 3134 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0d8752ab980bd1deff68da9d8d1305d50a055f8bd7ad422702f1ecbca1d297c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fbf5c6d8-z67wz" Sep 11 00:29:23.237489 kubelet[3134]: E0911 00:29:23.237484 3134 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0d8752ab980bd1deff68da9d8d1305d50a055f8bd7ad422702f1ecbca1d297c\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fbf5c6d8-z67wz" Sep 11 00:29:23.237620 kubelet[3134]: E0911 00:29:23.237560 3134 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5fbf5c6d8-z67wz_calico-apiserver(d766bed5-827d-43f6-8bff-229467c4b9d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5fbf5c6d8-z67wz_calico-apiserver(d766bed5-827d-43f6-8bff-229467c4b9d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b0d8752ab980bd1deff68da9d8d1305d50a055f8bd7ad422702f1ecbca1d297c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5fbf5c6d8-z67wz" podUID="d766bed5-827d-43f6-8bff-229467c4b9d1" Sep 11 00:29:23.240539 containerd[1713]: time="2025-09-11T00:29:23.240428644Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-r824m,Uid:6de28527-91a4-4436-8fe3-69f9a4c814c5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"91e2633d068448b5940f01ac2a74a11e9f990e78285af5d8a96fc1486a2afa3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:29:23.240694 kubelet[3134]: E0911 00:29:23.240670 3134 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91e2633d068448b5940f01ac2a74a11e9f990e78285af5d8a96fc1486a2afa3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Sep 11 00:29:23.240739 kubelet[3134]: E0911 00:29:23.240715 3134 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91e2633d068448b5940f01ac2a74a11e9f990e78285af5d8a96fc1486a2afa3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-r824m" Sep 11 00:29:23.240772 kubelet[3134]: E0911 00:29:23.240733 3134 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91e2633d068448b5940f01ac2a74a11e9f990e78285af5d8a96fc1486a2afa3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-r824m" Sep 11 00:29:23.240798 kubelet[3134]: E0911 00:29:23.240767 3134 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-r824m_calico-system(6de28527-91a4-4436-8fe3-69f9a4c814c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-r824m_calico-system(6de28527-91a4-4436-8fe3-69f9a4c814c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"91e2633d068448b5940f01ac2a74a11e9f990e78285af5d8a96fc1486a2afa3e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-r824m" podUID="6de28527-91a4-4436-8fe3-69f9a4c814c5" Sep 11 00:29:23.934410 containerd[1713]: time="2025-09-11T00:29:23.934318538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 
11 00:29:24.046643 systemd[1]: run-netns-cni\x2d01a5964f\x2d38fa\x2db196\x2d5e07\x2dbc58a7d01b12.mount: Deactivated successfully. Sep 11 00:29:24.046729 systemd[1]: run-netns-cni\x2d58093182\x2d4dfe\x2d7084\x2ddf80\x2d8985cc77ce84.mount: Deactivated successfully. Sep 11 00:29:24.046779 systemd[1]: run-netns-cni\x2d5421f88c\x2dc0de\x2d70ee\x2dd412\x2d8d8b1cf9584f.mount: Deactivated successfully. Sep 11 00:29:24.046829 systemd[1]: run-netns-cni\x2d541cf60b\x2df902\x2d9674\x2dbde8\x2d1a1fd0a75e96.mount: Deactivated successfully. Sep 11 00:29:28.671976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount202002069.mount: Deactivated successfully. Sep 11 00:29:28.699501 containerd[1713]: time="2025-09-11T00:29:28.699457205Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:28.702052 containerd[1713]: time="2025-09-11T00:29:28.702018376Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 11 00:29:28.705262 containerd[1713]: time="2025-09-11T00:29:28.705192027Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:28.709445 containerd[1713]: time="2025-09-11T00:29:28.709399901Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:28.709919 containerd[1713]: time="2025-09-11T00:29:28.709695692Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 4.775327393s" Sep 11 00:29:28.709919 containerd[1713]: time="2025-09-11T00:29:28.709727302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 11 00:29:28.724016 containerd[1713]: time="2025-09-11T00:29:28.723988292Z" level=info msg="CreateContainer within sandbox \"4a9b8ad2d9a3e82e765230605d1656f592c432c6c8083f19fa1954d2521547fe\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 11 00:29:28.747414 containerd[1713]: time="2025-09-11T00:29:28.746451376Z" level=info msg="Container 7d92f42a5ee2aacc73039347d07b3054ffadae24ffbdaf2a0b5356e74dc33b6d: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:29:28.763266 containerd[1713]: time="2025-09-11T00:29:28.763238249Z" level=info msg="CreateContainer within sandbox \"4a9b8ad2d9a3e82e765230605d1656f592c432c6c8083f19fa1954d2521547fe\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7d92f42a5ee2aacc73039347d07b3054ffadae24ffbdaf2a0b5356e74dc33b6d\"" Sep 11 00:29:28.763674 containerd[1713]: time="2025-09-11T00:29:28.763654765Z" level=info msg="StartContainer for \"7d92f42a5ee2aacc73039347d07b3054ffadae24ffbdaf2a0b5356e74dc33b6d\"" Sep 11 00:29:28.765412 containerd[1713]: time="2025-09-11T00:29:28.765287696Z" level=info msg="connecting to shim 7d92f42a5ee2aacc73039347d07b3054ffadae24ffbdaf2a0b5356e74dc33b6d" address="unix:///run/containerd/s/9fbc8a46f281011ceb1024aa604d6fc3eb71cea1239ff768107063d7174e8fcd" protocol=ttrpc version=3 Sep 11 00:29:28.783521 systemd[1]: Started cri-containerd-7d92f42a5ee2aacc73039347d07b3054ffadae24ffbdaf2a0b5356e74dc33b6d.scope - libcontainer container 7d92f42a5ee2aacc73039347d07b3054ffadae24ffbdaf2a0b5356e74dc33b6d. 
Sep 11 00:29:28.818609 containerd[1713]: time="2025-09-11T00:29:28.818581246Z" level=info msg="StartContainer for \"7d92f42a5ee2aacc73039347d07b3054ffadae24ffbdaf2a0b5356e74dc33b6d\" returns successfully" Sep 11 00:29:28.965555 kubelet[3134]: I0911 00:29:28.965492 3134 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-czwjg" podStartSLOduration=1.50461125 podStartE2EDuration="18.965473094s" podCreationTimestamp="2025-09-11 00:29:10 +0000 UTC" firstStartedPulling="2025-09-11 00:29:11.249635048 +0000 UTC m=+21.538322638" lastFinishedPulling="2025-09-11 00:29:28.710496909 +0000 UTC m=+38.999184482" observedRunningTime="2025-09-11 00:29:28.964171845 +0000 UTC m=+39.252859436" watchObservedRunningTime="2025-09-11 00:29:28.965473094 +0000 UTC m=+39.254160680" Sep 11 00:29:29.066416 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 11 00:29:29.066532 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 11 00:29:29.289568 kubelet[3134]: I0911 00:29:29.289172 3134 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dfc938aa-75cb-4bd5-92f2-c0079a9800da-whisker-backend-key-pair\") pod \"dfc938aa-75cb-4bd5-92f2-c0079a9800da\" (UID: \"dfc938aa-75cb-4bd5-92f2-c0079a9800da\") " Sep 11 00:29:29.289568 kubelet[3134]: I0911 00:29:29.289229 3134 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpqk6\" (UniqueName: \"kubernetes.io/projected/dfc938aa-75cb-4bd5-92f2-c0079a9800da-kube-api-access-gpqk6\") pod \"dfc938aa-75cb-4bd5-92f2-c0079a9800da\" (UID: \"dfc938aa-75cb-4bd5-92f2-c0079a9800da\") " Sep 11 00:29:29.289568 kubelet[3134]: I0911 00:29:29.289250 3134 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dfc938aa-75cb-4bd5-92f2-c0079a9800da-whisker-ca-bundle\") pod \"dfc938aa-75cb-4bd5-92f2-c0079a9800da\" (UID: \"dfc938aa-75cb-4bd5-92f2-c0079a9800da\") " Sep 11 00:29:29.289768 kubelet[3134]: I0911 00:29:29.289627 3134 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfc938aa-75cb-4bd5-92f2-c0079a9800da-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "dfc938aa-75cb-4bd5-92f2-c0079a9800da" (UID: "dfc938aa-75cb-4bd5-92f2-c0079a9800da"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 11 00:29:29.293492 kubelet[3134]: I0911 00:29:29.293331 3134 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfc938aa-75cb-4bd5-92f2-c0079a9800da-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "dfc938aa-75cb-4bd5-92f2-c0079a9800da" (UID: "dfc938aa-75cb-4bd5-92f2-c0079a9800da"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 11 00:29:29.294720 kubelet[3134]: I0911 00:29:29.294691 3134 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfc938aa-75cb-4bd5-92f2-c0079a9800da-kube-api-access-gpqk6" (OuterVolumeSpecName: "kube-api-access-gpqk6") pod "dfc938aa-75cb-4bd5-92f2-c0079a9800da" (UID: "dfc938aa-75cb-4bd5-92f2-c0079a9800da"). InnerVolumeSpecName "kube-api-access-gpqk6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 11 00:29:29.390013 kubelet[3134]: I0911 00:29:29.389965 3134 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dfc938aa-75cb-4bd5-92f2-c0079a9800da-whisker-backend-key-pair\") on node \"ci-4372.1.0-n-4da84ffec3\" DevicePath \"\"" Sep 11 00:29:29.390013 kubelet[3134]: I0911 00:29:29.390006 3134 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gpqk6\" (UniqueName: \"kubernetes.io/projected/dfc938aa-75cb-4bd5-92f2-c0079a9800da-kube-api-access-gpqk6\") on node \"ci-4372.1.0-n-4da84ffec3\" DevicePath \"\"" Sep 11 00:29:29.390013 kubelet[3134]: I0911 00:29:29.390015 3134 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dfc938aa-75cb-4bd5-92f2-c0079a9800da-whisker-ca-bundle\") on node \"ci-4372.1.0-n-4da84ffec3\" DevicePath \"\"" Sep 11 00:29:29.672374 systemd[1]: var-lib-kubelet-pods-dfc938aa\x2d75cb\x2d4bd5\x2d92f2\x2dc0079a9800da-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgpqk6.mount: Deactivated successfully. Sep 11 00:29:29.672496 systemd[1]: var-lib-kubelet-pods-dfc938aa\x2d75cb\x2d4bd5\x2d92f2\x2dc0079a9800da-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 11 00:29:29.848898 systemd[1]: Removed slice kubepods-besteffort-poddfc938aa_75cb_4bd5_92f2_c0079a9800da.slice - libcontainer container kubepods-besteffort-poddfc938aa_75cb_4bd5_92f2_c0079a9800da.slice. Sep 11 00:29:30.026579 systemd[1]: Created slice kubepods-besteffort-podbad9ce28_db05_4446_ab0f_edef2165bd2e.slice - libcontainer container kubepods-besteffort-podbad9ce28_db05_4446_ab0f_edef2165bd2e.slice. Sep 11 00:29:30.097025 kubelet[3134]: I0911 00:29:30.096979 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8885q\" (UniqueName: \"kubernetes.io/projected/bad9ce28-db05-4446-ab0f-edef2165bd2e-kube-api-access-8885q\") pod \"whisker-cff77ff5f-89884\" (UID: \"bad9ce28-db05-4446-ab0f-edef2165bd2e\") " pod="calico-system/whisker-cff77ff5f-89884" Sep 11 00:29:30.097025 kubelet[3134]: I0911 00:29:30.097030 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bad9ce28-db05-4446-ab0f-edef2165bd2e-whisker-ca-bundle\") pod \"whisker-cff77ff5f-89884\" (UID: \"bad9ce28-db05-4446-ab0f-edef2165bd2e\") " pod="calico-system/whisker-cff77ff5f-89884" Sep 11 00:29:30.097356 kubelet[3134]: I0911 00:29:30.097051 3134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bad9ce28-db05-4446-ab0f-edef2165bd2e-whisker-backend-key-pair\") pod \"whisker-cff77ff5f-89884\" (UID: \"bad9ce28-db05-4446-ab0f-edef2165bd2e\") " pod="calico-system/whisker-cff77ff5f-89884" Sep 11 00:29:30.331893 containerd[1713]: time="2025-09-11T00:29:30.331778924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cff77ff5f-89884,Uid:bad9ce28-db05-4446-ab0f-edef2165bd2e,Namespace:calico-system,Attempt:0,}" Sep 11 00:29:30.515718 systemd-networkd[1598]: califd6152ae5c8: Link UP Sep 11 00:29:30.516866 
systemd-networkd[1598]: califd6152ae5c8: Gained carrier Sep 11 00:29:30.537961 containerd[1713]: 2025-09-11 00:29:30.384 [INFO][4209] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 11 00:29:30.537961 containerd[1713]: 2025-09-11 00:29:30.398 [INFO][4209] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--n--4da84ffec3-k8s-whisker--cff77ff5f--89884-eth0 whisker-cff77ff5f- calico-system bad9ce28-db05-4446-ab0f-edef2165bd2e 906 0 2025-09-11 00:29:29 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:cff77ff5f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4372.1.0-n-4da84ffec3 whisker-cff77ff5f-89884 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] califd6152ae5c8 [] [] }} ContainerID="d2774446ba9b36fd93f41702907c821740168775f36b8877569d7e6b5893dd67" Namespace="calico-system" Pod="whisker-cff77ff5f-89884" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-whisker--cff77ff5f--89884-" Sep 11 00:29:30.537961 containerd[1713]: 2025-09-11 00:29:30.398 [INFO][4209] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d2774446ba9b36fd93f41702907c821740168775f36b8877569d7e6b5893dd67" Namespace="calico-system" Pod="whisker-cff77ff5f-89884" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-whisker--cff77ff5f--89884-eth0" Sep 11 00:29:30.537961 containerd[1713]: 2025-09-11 00:29:30.451 [INFO][4270] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d2774446ba9b36fd93f41702907c821740168775f36b8877569d7e6b5893dd67" HandleID="k8s-pod-network.d2774446ba9b36fd93f41702907c821740168775f36b8877569d7e6b5893dd67" Workload="ci--4372.1.0--n--4da84ffec3-k8s-whisker--cff77ff5f--89884-eth0" Sep 11 00:29:30.538182 containerd[1713]: 2025-09-11 00:29:30.451 [INFO][4270] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="d2774446ba9b36fd93f41702907c821740168775f36b8877569d7e6b5893dd67" HandleID="k8s-pod-network.d2774446ba9b36fd93f41702907c821740168775f36b8877569d7e6b5893dd67" Workload="ci--4372.1.0--n--4da84ffec3-k8s-whisker--cff77ff5f--89884-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003337b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.1.0-n-4da84ffec3", "pod":"whisker-cff77ff5f-89884", "timestamp":"2025-09-11 00:29:30.45150678 +0000 UTC"}, Hostname:"ci-4372.1.0-n-4da84ffec3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:29:30.538182 containerd[1713]: 2025-09-11 00:29:30.451 [INFO][4270] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:29:30.538182 containerd[1713]: 2025-09-11 00:29:30.451 [INFO][4270] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:29:30.538182 containerd[1713]: 2025-09-11 00:29:30.451 [INFO][4270] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-n-4da84ffec3' Sep 11 00:29:30.538182 containerd[1713]: 2025-09-11 00:29:30.462 [INFO][4270] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d2774446ba9b36fd93f41702907c821740168775f36b8877569d7e6b5893dd67" host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:30.538182 containerd[1713]: 2025-09-11 00:29:30.466 [INFO][4270] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:30.538182 containerd[1713]: 2025-09-11 00:29:30.471 [INFO][4270] ipam/ipam.go 511: Trying affinity for 192.168.85.0/26 host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:30.538182 containerd[1713]: 2025-09-11 00:29:30.473 [INFO][4270] ipam/ipam.go 158: Attempting to load block cidr=192.168.85.0/26 host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:30.538182 containerd[1713]: 2025-09-11 00:29:30.475 [INFO][4270] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.85.0/26 host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:30.539223 containerd[1713]: 2025-09-11 00:29:30.475 [INFO][4270] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.85.0/26 handle="k8s-pod-network.d2774446ba9b36fd93f41702907c821740168775f36b8877569d7e6b5893dd67" host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:30.539223 containerd[1713]: 2025-09-11 00:29:30.477 [INFO][4270] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d2774446ba9b36fd93f41702907c821740168775f36b8877569d7e6b5893dd67 Sep 11 00:29:30.539223 containerd[1713]: 2025-09-11 00:29:30.484 [INFO][4270] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.85.0/26 handle="k8s-pod-network.d2774446ba9b36fd93f41702907c821740168775f36b8877569d7e6b5893dd67" host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:30.539223 containerd[1713]: 2025-09-11 00:29:30.494 [INFO][4270] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.85.1/26] block=192.168.85.0/26 handle="k8s-pod-network.d2774446ba9b36fd93f41702907c821740168775f36b8877569d7e6b5893dd67" host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:30.539223 containerd[1713]: 2025-09-11 00:29:30.494 [INFO][4270] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.85.1/26] handle="k8s-pod-network.d2774446ba9b36fd93f41702907c821740168775f36b8877569d7e6b5893dd67" host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:30.539223 containerd[1713]: 2025-09-11 00:29:30.494 [INFO][4270] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:29:30.539223 containerd[1713]: 2025-09-11 00:29:30.494 [INFO][4270] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.85.1/26] IPv6=[] ContainerID="d2774446ba9b36fd93f41702907c821740168775f36b8877569d7e6b5893dd67" HandleID="k8s-pod-network.d2774446ba9b36fd93f41702907c821740168775f36b8877569d7e6b5893dd67" Workload="ci--4372.1.0--n--4da84ffec3-k8s-whisker--cff77ff5f--89884-eth0" Sep 11 00:29:30.541312 containerd[1713]: 2025-09-11 00:29:30.500 [INFO][4209] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d2774446ba9b36fd93f41702907c821740168775f36b8877569d7e6b5893dd67" Namespace="calico-system" Pod="whisker-cff77ff5f-89884" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-whisker--cff77ff5f--89884-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--4da84ffec3-k8s-whisker--cff77ff5f--89884-eth0", GenerateName:"whisker-cff77ff5f-", Namespace:"calico-system", SelfLink:"", UID:"bad9ce28-db05-4446-ab0f-edef2165bd2e", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 29, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"cff77ff5f", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-4da84ffec3", ContainerID:"", Pod:"whisker-cff77ff5f-89884", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.85.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califd6152ae5c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:29:30.541312 containerd[1713]: 2025-09-11 00:29:30.501 [INFO][4209] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.85.1/32] ContainerID="d2774446ba9b36fd93f41702907c821740168775f36b8877569d7e6b5893dd67" Namespace="calico-system" Pod="whisker-cff77ff5f-89884" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-whisker--cff77ff5f--89884-eth0" Sep 11 00:29:30.541479 containerd[1713]: 2025-09-11 00:29:30.501 [INFO][4209] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califd6152ae5c8 ContainerID="d2774446ba9b36fd93f41702907c821740168775f36b8877569d7e6b5893dd67" Namespace="calico-system" Pod="whisker-cff77ff5f-89884" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-whisker--cff77ff5f--89884-eth0" Sep 11 00:29:30.541479 containerd[1713]: 2025-09-11 00:29:30.516 [INFO][4209] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d2774446ba9b36fd93f41702907c821740168775f36b8877569d7e6b5893dd67" Namespace="calico-system" Pod="whisker-cff77ff5f-89884" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-whisker--cff77ff5f--89884-eth0" Sep 11 00:29:30.541531 containerd[1713]: 2025-09-11 00:29:30.516 [INFO][4209] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="d2774446ba9b36fd93f41702907c821740168775f36b8877569d7e6b5893dd67" Namespace="calico-system" Pod="whisker-cff77ff5f-89884" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-whisker--cff77ff5f--89884-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--4da84ffec3-k8s-whisker--cff77ff5f--89884-eth0", GenerateName:"whisker-cff77ff5f-", Namespace:"calico-system", SelfLink:"", UID:"bad9ce28-db05-4446-ab0f-edef2165bd2e", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 29, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"cff77ff5f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-4da84ffec3", ContainerID:"d2774446ba9b36fd93f41702907c821740168775f36b8877569d7e6b5893dd67", Pod:"whisker-cff77ff5f-89884", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.85.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califd6152ae5c8", MAC:"b2:fd:7e:42:b1:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:29:30.541589 containerd[1713]: 2025-09-11 00:29:30.532 [INFO][4209] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d2774446ba9b36fd93f41702907c821740168775f36b8877569d7e6b5893dd67" Namespace="calico-system" Pod="whisker-cff77ff5f-89884" 
WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-whisker--cff77ff5f--89884-eth0" Sep 11 00:29:30.588182 containerd[1713]: time="2025-09-11T00:29:30.588092085Z" level=info msg="connecting to shim d2774446ba9b36fd93f41702907c821740168775f36b8877569d7e6b5893dd67" address="unix:///run/containerd/s/1291a91d3816c5ea3b92abd58140a40e5703d8a204c2e81931aa1f1d7a490223" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:29:30.621821 systemd[1]: Started cri-containerd-d2774446ba9b36fd93f41702907c821740168775f36b8877569d7e6b5893dd67.scope - libcontainer container d2774446ba9b36fd93f41702907c821740168775f36b8877569d7e6b5893dd67. Sep 11 00:29:30.712378 containerd[1713]: time="2025-09-11T00:29:30.712281119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cff77ff5f-89884,Uid:bad9ce28-db05-4446-ab0f-edef2165bd2e,Namespace:calico-system,Attempt:0,} returns sandbox id \"d2774446ba9b36fd93f41702907c821740168775f36b8877569d7e6b5893dd67\"" Sep 11 00:29:30.715396 containerd[1713]: time="2025-09-11T00:29:30.715279425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 11 00:29:31.026528 systemd-networkd[1598]: vxlan.calico: Link UP Sep 11 00:29:31.026540 systemd-networkd[1598]: vxlan.calico: Gained carrier Sep 11 00:29:31.845550 kubelet[3134]: I0911 00:29:31.845507 3134 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfc938aa-75cb-4bd5-92f2-c0079a9800da" path="/var/lib/kubelet/pods/dfc938aa-75cb-4bd5-92f2-c0079a9800da/volumes" Sep 11 00:29:31.954166 containerd[1713]: time="2025-09-11T00:29:31.954124381Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:31.956662 containerd[1713]: time="2025-09-11T00:29:31.956570057Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 11 00:29:31.959503 containerd[1713]: time="2025-09-11T00:29:31.959469101Z" level=info 
msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:31.963472 containerd[1713]: time="2025-09-11T00:29:31.963272629Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:31.963832 containerd[1713]: time="2025-09-11T00:29:31.963809591Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.248500628s" Sep 11 00:29:31.963886 containerd[1713]: time="2025-09-11T00:29:31.963840974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 11 00:29:31.966511 containerd[1713]: time="2025-09-11T00:29:31.966470836Z" level=info msg="CreateContainer within sandbox \"d2774446ba9b36fd93f41702907c821740168775f36b8877569d7e6b5893dd67\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 11 00:29:31.988818 containerd[1713]: time="2025-09-11T00:29:31.987527446Z" level=info msg="Container e8d0faf893a9df9befb6ba9aeeb6794f38ec212debbf2dfb6127dd665fe51837: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:29:32.009690 containerd[1713]: time="2025-09-11T00:29:32.009662848Z" level=info msg="CreateContainer within sandbox \"d2774446ba9b36fd93f41702907c821740168775f36b8877569d7e6b5893dd67\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"e8d0faf893a9df9befb6ba9aeeb6794f38ec212debbf2dfb6127dd665fe51837\"" Sep 11 00:29:32.010244 containerd[1713]: 
time="2025-09-11T00:29:32.010220006Z" level=info msg="StartContainer for \"e8d0faf893a9df9befb6ba9aeeb6794f38ec212debbf2dfb6127dd665fe51837\"" Sep 11 00:29:32.011157 containerd[1713]: time="2025-09-11T00:29:32.011118889Z" level=info msg="connecting to shim e8d0faf893a9df9befb6ba9aeeb6794f38ec212debbf2dfb6127dd665fe51837" address="unix:///run/containerd/s/1291a91d3816c5ea3b92abd58140a40e5703d8a204c2e81931aa1f1d7a490223" protocol=ttrpc version=3 Sep 11 00:29:32.034529 systemd[1]: Started cri-containerd-e8d0faf893a9df9befb6ba9aeeb6794f38ec212debbf2dfb6127dd665fe51837.scope - libcontainer container e8d0faf893a9df9befb6ba9aeeb6794f38ec212debbf2dfb6127dd665fe51837. Sep 11 00:29:32.047502 systemd-networkd[1598]: califd6152ae5c8: Gained IPv6LL Sep 11 00:29:32.078569 containerd[1713]: time="2025-09-11T00:29:32.078530719Z" level=info msg="StartContainer for \"e8d0faf893a9df9befb6ba9aeeb6794f38ec212debbf2dfb6127dd665fe51837\" returns successfully" Sep 11 00:29:32.080802 containerd[1713]: time="2025-09-11T00:29:32.080778704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 11 00:29:32.431551 systemd-networkd[1598]: vxlan.calico: Gained IPv6LL Sep 11 00:29:33.845257 containerd[1713]: time="2025-09-11T00:29:33.845193553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xnj7p,Uid:390a3f06-83fb-443b-bd56-822dbd89138c,Namespace:kube-system,Attempt:0,}" Sep 11 00:29:33.846107 containerd[1713]: time="2025-09-11T00:29:33.845535113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cbtm5,Uid:b7af8a9b-1045-4cb1-926d-47a54f3633ef,Namespace:calico-system,Attempt:0,}" Sep 11 00:29:34.019764 systemd-networkd[1598]: calic4a1f6badba: Link UP Sep 11 00:29:34.020691 systemd-networkd[1598]: calic4a1f6badba: Gained carrier Sep 11 00:29:34.043042 containerd[1713]: 2025-09-11 00:29:33.923 [INFO][4499] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4372.1.0--n--4da84ffec3-k8s-coredns--668d6bf9bc--xnj7p-eth0 coredns-668d6bf9bc- kube-system 390a3f06-83fb-443b-bd56-822dbd89138c 842 0 2025-09-11 00:28:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372.1.0-n-4da84ffec3 coredns-668d6bf9bc-xnj7p eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic4a1f6badba [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f1c05ce9277dbcdde2410be8317c9ed2bb562e3c9f1d02fe237169a09d8a28c1" Namespace="kube-system" Pod="coredns-668d6bf9bc-xnj7p" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-coredns--668d6bf9bc--xnj7p-" Sep 11 00:29:34.043042 containerd[1713]: 2025-09-11 00:29:33.923 [INFO][4499] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f1c05ce9277dbcdde2410be8317c9ed2bb562e3c9f1d02fe237169a09d8a28c1" Namespace="kube-system" Pod="coredns-668d6bf9bc-xnj7p" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-coredns--668d6bf9bc--xnj7p-eth0" Sep 11 00:29:34.043042 containerd[1713]: 2025-09-11 00:29:33.967 [INFO][4524] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f1c05ce9277dbcdde2410be8317c9ed2bb562e3c9f1d02fe237169a09d8a28c1" HandleID="k8s-pod-network.f1c05ce9277dbcdde2410be8317c9ed2bb562e3c9f1d02fe237169a09d8a28c1" Workload="ci--4372.1.0--n--4da84ffec3-k8s-coredns--668d6bf9bc--xnj7p-eth0" Sep 11 00:29:34.043249 containerd[1713]: 2025-09-11 00:29:33.969 [INFO][4524] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f1c05ce9277dbcdde2410be8317c9ed2bb562e3c9f1d02fe237169a09d8a28c1" HandleID="k8s-pod-network.f1c05ce9277dbcdde2410be8317c9ed2bb562e3c9f1d02fe237169a09d8a28c1" Workload="ci--4372.1.0--n--4da84ffec3-k8s-coredns--668d6bf9bc--xnj7p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002598d0), 
Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372.1.0-n-4da84ffec3", "pod":"coredns-668d6bf9bc-xnj7p", "timestamp":"2025-09-11 00:29:33.967534149 +0000 UTC"}, Hostname:"ci-4372.1.0-n-4da84ffec3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:29:34.043249 containerd[1713]: 2025-09-11 00:29:33.969 [INFO][4524] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:29:34.043249 containerd[1713]: 2025-09-11 00:29:33.969 [INFO][4524] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:29:34.043249 containerd[1713]: 2025-09-11 00:29:33.969 [INFO][4524] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-n-4da84ffec3' Sep 11 00:29:34.043249 containerd[1713]: 2025-09-11 00:29:33.977 [INFO][4524] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f1c05ce9277dbcdde2410be8317c9ed2bb562e3c9f1d02fe237169a09d8a28c1" host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:34.043249 containerd[1713]: 2025-09-11 00:29:33.981 [INFO][4524] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:34.043249 containerd[1713]: 2025-09-11 00:29:33.985 [INFO][4524] ipam/ipam.go 511: Trying affinity for 192.168.85.0/26 host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:34.043249 containerd[1713]: 2025-09-11 00:29:33.987 [INFO][4524] ipam/ipam.go 158: Attempting to load block cidr=192.168.85.0/26 host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:34.043249 containerd[1713]: 2025-09-11 00:29:33.990 [INFO][4524] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.85.0/26 host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:34.043663 containerd[1713]: 2025-09-11 00:29:33.990 [INFO][4524] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.85.0/26 
handle="k8s-pod-network.f1c05ce9277dbcdde2410be8317c9ed2bb562e3c9f1d02fe237169a09d8a28c1" host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:34.043663 containerd[1713]: 2025-09-11 00:29:33.992 [INFO][4524] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f1c05ce9277dbcdde2410be8317c9ed2bb562e3c9f1d02fe237169a09d8a28c1 Sep 11 00:29:34.043663 containerd[1713]: 2025-09-11 00:29:33.998 [INFO][4524] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.85.0/26 handle="k8s-pod-network.f1c05ce9277dbcdde2410be8317c9ed2bb562e3c9f1d02fe237169a09d8a28c1" host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:34.043663 containerd[1713]: 2025-09-11 00:29:34.008 [INFO][4524] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.85.2/26] block=192.168.85.0/26 handle="k8s-pod-network.f1c05ce9277dbcdde2410be8317c9ed2bb562e3c9f1d02fe237169a09d8a28c1" host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:34.043663 containerd[1713]: 2025-09-11 00:29:34.008 [INFO][4524] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.85.2/26] handle="k8s-pod-network.f1c05ce9277dbcdde2410be8317c9ed2bb562e3c9f1d02fe237169a09d8a28c1" host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:34.043663 containerd[1713]: 2025-09-11 00:29:34.008 [INFO][4524] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 11 00:29:34.043663 containerd[1713]: 2025-09-11 00:29:34.008 [INFO][4524] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.85.2/26] IPv6=[] ContainerID="f1c05ce9277dbcdde2410be8317c9ed2bb562e3c9f1d02fe237169a09d8a28c1" HandleID="k8s-pod-network.f1c05ce9277dbcdde2410be8317c9ed2bb562e3c9f1d02fe237169a09d8a28c1" Workload="ci--4372.1.0--n--4da84ffec3-k8s-coredns--668d6bf9bc--xnj7p-eth0" Sep 11 00:29:34.043928 containerd[1713]: 2025-09-11 00:29:34.013 [INFO][4499] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f1c05ce9277dbcdde2410be8317c9ed2bb562e3c9f1d02fe237169a09d8a28c1" Namespace="kube-system" Pod="coredns-668d6bf9bc-xnj7p" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-coredns--668d6bf9bc--xnj7p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--4da84ffec3-k8s-coredns--668d6bf9bc--xnj7p-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"390a3f06-83fb-443b-bd56-822dbd89138c", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 28, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-4da84ffec3", ContainerID:"", Pod:"coredns-668d6bf9bc-xnj7p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.85.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"calic4a1f6badba", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:29:34.043928 containerd[1713]: 2025-09-11 00:29:34.013 [INFO][4499] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.85.2/32] ContainerID="f1c05ce9277dbcdde2410be8317c9ed2bb562e3c9f1d02fe237169a09d8a28c1" Namespace="kube-system" Pod="coredns-668d6bf9bc-xnj7p" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-coredns--668d6bf9bc--xnj7p-eth0" Sep 11 00:29:34.043928 containerd[1713]: 2025-09-11 00:29:34.013 [INFO][4499] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic4a1f6badba ContainerID="f1c05ce9277dbcdde2410be8317c9ed2bb562e3c9f1d02fe237169a09d8a28c1" Namespace="kube-system" Pod="coredns-668d6bf9bc-xnj7p" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-coredns--668d6bf9bc--xnj7p-eth0" Sep 11 00:29:34.043928 containerd[1713]: 2025-09-11 00:29:34.021 [INFO][4499] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f1c05ce9277dbcdde2410be8317c9ed2bb562e3c9f1d02fe237169a09d8a28c1" Namespace="kube-system" Pod="coredns-668d6bf9bc-xnj7p" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-coredns--668d6bf9bc--xnj7p-eth0" Sep 11 00:29:34.043928 containerd[1713]: 2025-09-11 00:29:34.022 [INFO][4499] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f1c05ce9277dbcdde2410be8317c9ed2bb562e3c9f1d02fe237169a09d8a28c1" Namespace="kube-system" Pod="coredns-668d6bf9bc-xnj7p" 
WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-coredns--668d6bf9bc--xnj7p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--4da84ffec3-k8s-coredns--668d6bf9bc--xnj7p-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"390a3f06-83fb-443b-bd56-822dbd89138c", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 28, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-4da84ffec3", ContainerID:"f1c05ce9277dbcdde2410be8317c9ed2bb562e3c9f1d02fe237169a09d8a28c1", Pod:"coredns-668d6bf9bc-xnj7p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.85.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic4a1f6badba", MAC:"fe:15:df:ab:b6:e8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:29:34.043928 
containerd[1713]: 2025-09-11 00:29:34.040 [INFO][4499] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f1c05ce9277dbcdde2410be8317c9ed2bb562e3c9f1d02fe237169a09d8a28c1" Namespace="kube-system" Pod="coredns-668d6bf9bc-xnj7p" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-coredns--668d6bf9bc--xnj7p-eth0" Sep 11 00:29:34.099600 containerd[1713]: time="2025-09-11T00:29:34.099376491Z" level=info msg="connecting to shim f1c05ce9277dbcdde2410be8317c9ed2bb562e3c9f1d02fe237169a09d8a28c1" address="unix:///run/containerd/s/d27324be756632c5e1ef850205de6e014bdc9d61cefde671dbe43e5ce542fd9d" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:29:34.140563 systemd-networkd[1598]: cali3dacf90b341: Link UP Sep 11 00:29:34.142738 systemd-networkd[1598]: cali3dacf90b341: Gained carrier Sep 11 00:29:34.156697 systemd[1]: Started cri-containerd-f1c05ce9277dbcdde2410be8317c9ed2bb562e3c9f1d02fe237169a09d8a28c1.scope - libcontainer container f1c05ce9277dbcdde2410be8317c9ed2bb562e3c9f1d02fe237169a09d8a28c1. 
Sep 11 00:29:34.180672 containerd[1713]: 2025-09-11 00:29:33.923 [INFO][4508] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--n--4da84ffec3-k8s-csi--node--driver--cbtm5-eth0 csi-node-driver- calico-system b7af8a9b-1045-4cb1-926d-47a54f3633ef 725 0 2025-09-11 00:29:11 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4372.1.0-n-4da84ffec3 csi-node-driver-cbtm5 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3dacf90b341 [] [] }} ContainerID="f1521d73d8227d6fde3f3934aa8263a381b13e54ffcb668dca762efd8c8d19bd" Namespace="calico-system" Pod="csi-node-driver-cbtm5" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-csi--node--driver--cbtm5-" Sep 11 00:29:34.180672 containerd[1713]: 2025-09-11 00:29:33.923 [INFO][4508] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f1521d73d8227d6fde3f3934aa8263a381b13e54ffcb668dca762efd8c8d19bd" Namespace="calico-system" Pod="csi-node-driver-cbtm5" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-csi--node--driver--cbtm5-eth0" Sep 11 00:29:34.180672 containerd[1713]: 2025-09-11 00:29:33.975 [INFO][4523] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f1521d73d8227d6fde3f3934aa8263a381b13e54ffcb668dca762efd8c8d19bd" HandleID="k8s-pod-network.f1521d73d8227d6fde3f3934aa8263a381b13e54ffcb668dca762efd8c8d19bd" Workload="ci--4372.1.0--n--4da84ffec3-k8s-csi--node--driver--cbtm5-eth0" Sep 11 00:29:34.180672 containerd[1713]: 2025-09-11 00:29:33.975 [INFO][4523] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f1521d73d8227d6fde3f3934aa8263a381b13e54ffcb668dca762efd8c8d19bd" 
HandleID="k8s-pod-network.f1521d73d8227d6fde3f3934aa8263a381b13e54ffcb668dca762efd8c8d19bd" Workload="ci--4372.1.0--n--4da84ffec3-k8s-csi--node--driver--cbtm5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5600), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.1.0-n-4da84ffec3", "pod":"csi-node-driver-cbtm5", "timestamp":"2025-09-11 00:29:33.975020995 +0000 UTC"}, Hostname:"ci-4372.1.0-n-4da84ffec3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:29:34.180672 containerd[1713]: 2025-09-11 00:29:33.976 [INFO][4523] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:29:34.180672 containerd[1713]: 2025-09-11 00:29:34.008 [INFO][4523] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:29:34.180672 containerd[1713]: 2025-09-11 00:29:34.008 [INFO][4523] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-n-4da84ffec3' Sep 11 00:29:34.180672 containerd[1713]: 2025-09-11 00:29:34.084 [INFO][4523] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f1521d73d8227d6fde3f3934aa8263a381b13e54ffcb668dca762efd8c8d19bd" host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:34.180672 containerd[1713]: 2025-09-11 00:29:34.092 [INFO][4523] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:34.180672 containerd[1713]: 2025-09-11 00:29:34.099 [INFO][4523] ipam/ipam.go 511: Trying affinity for 192.168.85.0/26 host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:34.180672 containerd[1713]: 2025-09-11 00:29:34.102 [INFO][4523] ipam/ipam.go 158: Attempting to load block cidr=192.168.85.0/26 host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:34.180672 containerd[1713]: 2025-09-11 00:29:34.105 [INFO][4523] ipam/ipam.go 235: Affinity is confirmed and block has been 
loaded cidr=192.168.85.0/26 host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:34.180672 containerd[1713]: 2025-09-11 00:29:34.106 [INFO][4523] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.85.0/26 handle="k8s-pod-network.f1521d73d8227d6fde3f3934aa8263a381b13e54ffcb668dca762efd8c8d19bd" host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:34.180672 containerd[1713]: 2025-09-11 00:29:34.108 [INFO][4523] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f1521d73d8227d6fde3f3934aa8263a381b13e54ffcb668dca762efd8c8d19bd Sep 11 00:29:34.180672 containerd[1713]: 2025-09-11 00:29:34.121 [INFO][4523] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.85.0/26 handle="k8s-pod-network.f1521d73d8227d6fde3f3934aa8263a381b13e54ffcb668dca762efd8c8d19bd" host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:34.180672 containerd[1713]: 2025-09-11 00:29:34.131 [INFO][4523] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.85.3/26] block=192.168.85.0/26 handle="k8s-pod-network.f1521d73d8227d6fde3f3934aa8263a381b13e54ffcb668dca762efd8c8d19bd" host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:34.180672 containerd[1713]: 2025-09-11 00:29:34.131 [INFO][4523] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.85.3/26] handle="k8s-pod-network.f1521d73d8227d6fde3f3934aa8263a381b13e54ffcb668dca762efd8c8d19bd" host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:34.180672 containerd[1713]: 2025-09-11 00:29:34.131 [INFO][4523] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 11 00:29:34.180672 containerd[1713]: 2025-09-11 00:29:34.131 [INFO][4523] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.85.3/26] IPv6=[] ContainerID="f1521d73d8227d6fde3f3934aa8263a381b13e54ffcb668dca762efd8c8d19bd" HandleID="k8s-pod-network.f1521d73d8227d6fde3f3934aa8263a381b13e54ffcb668dca762efd8c8d19bd" Workload="ci--4372.1.0--n--4da84ffec3-k8s-csi--node--driver--cbtm5-eth0" Sep 11 00:29:34.181086 containerd[1713]: 2025-09-11 00:29:34.133 [INFO][4508] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f1521d73d8227d6fde3f3934aa8263a381b13e54ffcb668dca762efd8c8d19bd" Namespace="calico-system" Pod="csi-node-driver-cbtm5" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-csi--node--driver--cbtm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--4da84ffec3-k8s-csi--node--driver--cbtm5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b7af8a9b-1045-4cb1-926d-47a54f3633ef", ResourceVersion:"725", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 29, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-4da84ffec3", ContainerID:"", Pod:"csi-node-driver-cbtm5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.85.3/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3dacf90b341", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:29:34.181086 containerd[1713]: 2025-09-11 00:29:34.134 [INFO][4508] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.85.3/32] ContainerID="f1521d73d8227d6fde3f3934aa8263a381b13e54ffcb668dca762efd8c8d19bd" Namespace="calico-system" Pod="csi-node-driver-cbtm5" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-csi--node--driver--cbtm5-eth0" Sep 11 00:29:34.181086 containerd[1713]: 2025-09-11 00:29:34.134 [INFO][4508] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3dacf90b341 ContainerID="f1521d73d8227d6fde3f3934aa8263a381b13e54ffcb668dca762efd8c8d19bd" Namespace="calico-system" Pod="csi-node-driver-cbtm5" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-csi--node--driver--cbtm5-eth0" Sep 11 00:29:34.181086 containerd[1713]: 2025-09-11 00:29:34.150 [INFO][4508] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f1521d73d8227d6fde3f3934aa8263a381b13e54ffcb668dca762efd8c8d19bd" Namespace="calico-system" Pod="csi-node-driver-cbtm5" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-csi--node--driver--cbtm5-eth0" Sep 11 00:29:34.181086 containerd[1713]: 2025-09-11 00:29:34.152 [INFO][4508] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f1521d73d8227d6fde3f3934aa8263a381b13e54ffcb668dca762efd8c8d19bd" Namespace="calico-system" Pod="csi-node-driver-cbtm5" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-csi--node--driver--cbtm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--4da84ffec3-k8s-csi--node--driver--cbtm5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", 
SelfLink:"", UID:"b7af8a9b-1045-4cb1-926d-47a54f3633ef", ResourceVersion:"725", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 29, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-4da84ffec3", ContainerID:"f1521d73d8227d6fde3f3934aa8263a381b13e54ffcb668dca762efd8c8d19bd", Pod:"csi-node-driver-cbtm5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.85.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3dacf90b341", MAC:"ae:4d:58:5b:35:5a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:29:34.181086 containerd[1713]: 2025-09-11 00:29:34.175 [INFO][4508] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f1521d73d8227d6fde3f3934aa8263a381b13e54ffcb668dca762efd8c8d19bd" Namespace="calico-system" Pod="csi-node-driver-cbtm5" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-csi--node--driver--cbtm5-eth0" Sep 11 00:29:34.227373 containerd[1713]: time="2025-09-11T00:29:34.227341019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xnj7p,Uid:390a3f06-83fb-443b-bd56-822dbd89138c,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"f1c05ce9277dbcdde2410be8317c9ed2bb562e3c9f1d02fe237169a09d8a28c1\"" Sep 11 00:29:34.230508 containerd[1713]: time="2025-09-11T00:29:34.230477175Z" level=info msg="CreateContainer within sandbox \"f1c05ce9277dbcdde2410be8317c9ed2bb562e3c9f1d02fe237169a09d8a28c1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 11 00:29:34.257677 containerd[1713]: time="2025-09-11T00:29:34.257647939Z" level=info msg="connecting to shim f1521d73d8227d6fde3f3934aa8263a381b13e54ffcb668dca762efd8c8d19bd" address="unix:///run/containerd/s/5dc083565bbe89a390268f0e6872a63bb6fa35207104bb09d7b30742573113c6" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:29:34.272524 containerd[1713]: time="2025-09-11T00:29:34.272498365Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:34.274923 containerd[1713]: time="2025-09-11T00:29:34.274899964Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 11 00:29:34.275533 systemd[1]: Started cri-containerd-f1521d73d8227d6fde3f3934aa8263a381b13e54ffcb668dca762efd8c8d19bd.scope - libcontainer container f1521d73d8227d6fde3f3934aa8263a381b13e54ffcb668dca762efd8c8d19bd. 
Sep 11 00:29:34.281042 containerd[1713]: time="2025-09-11T00:29:34.280800366Z" level=info msg="Container 1b141ad3a3ebe8ba8d0ef0c413f8c4bffe6b530dc23e8083582b15d092435508: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:29:34.284570 containerd[1713]: time="2025-09-11T00:29:34.284533705Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:34.296889 containerd[1713]: time="2025-09-11T00:29:34.296858005Z" level=info msg="CreateContainer within sandbox \"f1c05ce9277dbcdde2410be8317c9ed2bb562e3c9f1d02fe237169a09d8a28c1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1b141ad3a3ebe8ba8d0ef0c413f8c4bffe6b530dc23e8083582b15d092435508\"" Sep 11 00:29:34.297642 containerd[1713]: time="2025-09-11T00:29:34.297506336Z" level=info msg="StartContainer for \"1b141ad3a3ebe8ba8d0ef0c413f8c4bffe6b530dc23e8083582b15d092435508\"" Sep 11 00:29:34.298797 containerd[1713]: time="2025-09-11T00:29:34.298767380Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:34.299551 containerd[1713]: time="2025-09-11T00:29:34.299479015Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.21866877s" Sep 11 00:29:34.299551 containerd[1713]: time="2025-09-11T00:29:34.299513303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference 
\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 11 00:29:34.302209 containerd[1713]: time="2025-09-11T00:29:34.302021011Z" level=info msg="connecting to shim 1b141ad3a3ebe8ba8d0ef0c413f8c4bffe6b530dc23e8083582b15d092435508" address="unix:///run/containerd/s/d27324be756632c5e1ef850205de6e014bdc9d61cefde671dbe43e5ce542fd9d" protocol=ttrpc version=3 Sep 11 00:29:34.303332 containerd[1713]: time="2025-09-11T00:29:34.303188784Z" level=info msg="CreateContainer within sandbox \"d2774446ba9b36fd93f41702907c821740168775f36b8877569d7e6b5893dd67\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 11 00:29:34.307723 containerd[1713]: time="2025-09-11T00:29:34.307698397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cbtm5,Uid:b7af8a9b-1045-4cb1-926d-47a54f3633ef,Namespace:calico-system,Attempt:0,} returns sandbox id \"f1521d73d8227d6fde3f3934aa8263a381b13e54ffcb668dca762efd8c8d19bd\"" Sep 11 00:29:34.310457 containerd[1713]: time="2025-09-11T00:29:34.309268242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 11 00:29:34.322517 systemd[1]: Started cri-containerd-1b141ad3a3ebe8ba8d0ef0c413f8c4bffe6b530dc23e8083582b15d092435508.scope - libcontainer container 1b141ad3a3ebe8ba8d0ef0c413f8c4bffe6b530dc23e8083582b15d092435508. 
Sep 11 00:29:34.329073 containerd[1713]: time="2025-09-11T00:29:34.329048352Z" level=info msg="Container aa98417914cadccd9b9fad60e046747da1f731c6756ea3981eaac9ee5e99014d: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:29:34.354614 containerd[1713]: time="2025-09-11T00:29:34.353451851Z" level=info msg="CreateContainer within sandbox \"d2774446ba9b36fd93f41702907c821740168775f36b8877569d7e6b5893dd67\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"aa98417914cadccd9b9fad60e046747da1f731c6756ea3981eaac9ee5e99014d\"" Sep 11 00:29:34.355862 containerd[1713]: time="2025-09-11T00:29:34.354671470Z" level=info msg="StartContainer for \"aa98417914cadccd9b9fad60e046747da1f731c6756ea3981eaac9ee5e99014d\"" Sep 11 00:29:34.356953 containerd[1713]: time="2025-09-11T00:29:34.356924325Z" level=info msg="connecting to shim aa98417914cadccd9b9fad60e046747da1f731c6756ea3981eaac9ee5e99014d" address="unix:///run/containerd/s/1291a91d3816c5ea3b92abd58140a40e5703d8a204c2e81931aa1f1d7a490223" protocol=ttrpc version=3 Sep 11 00:29:34.361405 containerd[1713]: time="2025-09-11T00:29:34.360080933Z" level=info msg="StartContainer for \"1b141ad3a3ebe8ba8d0ef0c413f8c4bffe6b530dc23e8083582b15d092435508\" returns successfully" Sep 11 00:29:34.379557 systemd[1]: Started cri-containerd-aa98417914cadccd9b9fad60e046747da1f731c6756ea3981eaac9ee5e99014d.scope - libcontainer container aa98417914cadccd9b9fad60e046747da1f731c6756ea3981eaac9ee5e99014d. Sep 11 00:29:34.436073 containerd[1713]: time="2025-09-11T00:29:34.436042687Z" level=info msg="StartContainer for \"aa98417914cadccd9b9fad60e046747da1f731c6756ea3981eaac9ee5e99014d\" returns successfully" Sep 11 00:29:34.617651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1171132237.mount: Deactivated successfully. 
Sep 11 00:29:34.844229 containerd[1713]: time="2025-09-11T00:29:34.844166797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-r824m,Uid:6de28527-91a4-4436-8fe3-69f9a4c814c5,Namespace:calico-system,Attempt:0,}" Sep 11 00:29:34.844841 containerd[1713]: time="2025-09-11T00:29:34.844166874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fbf5c6d8-9hpwn,Uid:1f68b9e3-b7a9-4e5a-95d3-b579e5278782,Namespace:calico-apiserver,Attempt:0,}" Sep 11 00:29:34.960932 systemd-networkd[1598]: calid5b4b2fb033: Link UP Sep 11 00:29:34.961122 systemd-networkd[1598]: calid5b4b2fb033: Gained carrier Sep 11 00:29:34.984179 containerd[1713]: 2025-09-11 00:29:34.898 [INFO][4719] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--n--4da84ffec3-k8s-calico--apiserver--5fbf5c6d8--9hpwn-eth0 calico-apiserver-5fbf5c6d8- calico-apiserver 1f68b9e3-b7a9-4e5a-95d3-b579e5278782 843 0 2025-09-11 00:29:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5fbf5c6d8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372.1.0-n-4da84ffec3 calico-apiserver-5fbf5c6d8-9hpwn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid5b4b2fb033 [] [] }} ContainerID="c2f22d1304fccda31eab6fb61c573f194bd936593616cf56a5cb8c63bdb2112a" Namespace="calico-apiserver" Pod="calico-apiserver-5fbf5c6d8-9hpwn" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-calico--apiserver--5fbf5c6d8--9hpwn-" Sep 11 00:29:34.984179 containerd[1713]: 2025-09-11 00:29:34.899 [INFO][4719] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c2f22d1304fccda31eab6fb61c573f194bd936593616cf56a5cb8c63bdb2112a" Namespace="calico-apiserver" Pod="calico-apiserver-5fbf5c6d8-9hpwn" 
WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-calico--apiserver--5fbf5c6d8--9hpwn-eth0" Sep 11 00:29:34.984179 containerd[1713]: 2025-09-11 00:29:34.927 [INFO][4745] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c2f22d1304fccda31eab6fb61c573f194bd936593616cf56a5cb8c63bdb2112a" HandleID="k8s-pod-network.c2f22d1304fccda31eab6fb61c573f194bd936593616cf56a5cb8c63bdb2112a" Workload="ci--4372.1.0--n--4da84ffec3-k8s-calico--apiserver--5fbf5c6d8--9hpwn-eth0" Sep 11 00:29:34.984179 containerd[1713]: 2025-09-11 00:29:34.927 [INFO][4745] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c2f22d1304fccda31eab6fb61c573f194bd936593616cf56a5cb8c63bdb2112a" HandleID="k8s-pod-network.c2f22d1304fccda31eab6fb61c573f194bd936593616cf56a5cb8c63bdb2112a" Workload="ci--4372.1.0--n--4da84ffec3-k8s-calico--apiserver--5fbf5c6d8--9hpwn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f720), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372.1.0-n-4da84ffec3", "pod":"calico-apiserver-5fbf5c6d8-9hpwn", "timestamp":"2025-09-11 00:29:34.927411699 +0000 UTC"}, Hostname:"ci-4372.1.0-n-4da84ffec3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:29:34.984179 containerd[1713]: 2025-09-11 00:29:34.927 [INFO][4745] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:29:34.984179 containerd[1713]: 2025-09-11 00:29:34.927 [INFO][4745] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:29:34.984179 containerd[1713]: 2025-09-11 00:29:34.927 [INFO][4745] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-n-4da84ffec3' Sep 11 00:29:34.984179 containerd[1713]: 2025-09-11 00:29:34.932 [INFO][4745] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c2f22d1304fccda31eab6fb61c573f194bd936593616cf56a5cb8c63bdb2112a" host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:34.984179 containerd[1713]: 2025-09-11 00:29:34.935 [INFO][4745] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:34.984179 containerd[1713]: 2025-09-11 00:29:34.938 [INFO][4745] ipam/ipam.go 511: Trying affinity for 192.168.85.0/26 host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:34.984179 containerd[1713]: 2025-09-11 00:29:34.939 [INFO][4745] ipam/ipam.go 158: Attempting to load block cidr=192.168.85.0/26 host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:34.984179 containerd[1713]: 2025-09-11 00:29:34.941 [INFO][4745] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.85.0/26 host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:34.984179 containerd[1713]: 2025-09-11 00:29:34.941 [INFO][4745] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.85.0/26 handle="k8s-pod-network.c2f22d1304fccda31eab6fb61c573f194bd936593616cf56a5cb8c63bdb2112a" host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:34.984179 containerd[1713]: 2025-09-11 00:29:34.942 [INFO][4745] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c2f22d1304fccda31eab6fb61c573f194bd936593616cf56a5cb8c63bdb2112a Sep 11 00:29:34.984179 containerd[1713]: 2025-09-11 00:29:34.946 [INFO][4745] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.85.0/26 handle="k8s-pod-network.c2f22d1304fccda31eab6fb61c573f194bd936593616cf56a5cb8c63bdb2112a" host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:34.984179 containerd[1713]: 2025-09-11 00:29:34.955 [INFO][4745] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.85.4/26] block=192.168.85.0/26 handle="k8s-pod-network.c2f22d1304fccda31eab6fb61c573f194bd936593616cf56a5cb8c63bdb2112a" host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:34.984179 containerd[1713]: 2025-09-11 00:29:34.955 [INFO][4745] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.85.4/26] handle="k8s-pod-network.c2f22d1304fccda31eab6fb61c573f194bd936593616cf56a5cb8c63bdb2112a" host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:34.984179 containerd[1713]: 2025-09-11 00:29:34.955 [INFO][4745] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:29:34.984179 containerd[1713]: 2025-09-11 00:29:34.955 [INFO][4745] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.85.4/26] IPv6=[] ContainerID="c2f22d1304fccda31eab6fb61c573f194bd936593616cf56a5cb8c63bdb2112a" HandleID="k8s-pod-network.c2f22d1304fccda31eab6fb61c573f194bd936593616cf56a5cb8c63bdb2112a" Workload="ci--4372.1.0--n--4da84ffec3-k8s-calico--apiserver--5fbf5c6d8--9hpwn-eth0" Sep 11 00:29:34.984983 containerd[1713]: 2025-09-11 00:29:34.956 [INFO][4719] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c2f22d1304fccda31eab6fb61c573f194bd936593616cf56a5cb8c63bdb2112a" Namespace="calico-apiserver" Pod="calico-apiserver-5fbf5c6d8-9hpwn" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-calico--apiserver--5fbf5c6d8--9hpwn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--4da84ffec3-k8s-calico--apiserver--5fbf5c6d8--9hpwn-eth0", GenerateName:"calico-apiserver-5fbf5c6d8-", Namespace:"calico-apiserver", SelfLink:"", UID:"1f68b9e3-b7a9-4e5a-95d3-b579e5278782", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 29, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"5fbf5c6d8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-4da84ffec3", ContainerID:"", Pod:"calico-apiserver-5fbf5c6d8-9hpwn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.85.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid5b4b2fb033", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:29:34.984983 containerd[1713]: 2025-09-11 00:29:34.957 [INFO][4719] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.85.4/32] ContainerID="c2f22d1304fccda31eab6fb61c573f194bd936593616cf56a5cb8c63bdb2112a" Namespace="calico-apiserver" Pod="calico-apiserver-5fbf5c6d8-9hpwn" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-calico--apiserver--5fbf5c6d8--9hpwn-eth0" Sep 11 00:29:34.984983 containerd[1713]: 2025-09-11 00:29:34.957 [INFO][4719] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid5b4b2fb033 ContainerID="c2f22d1304fccda31eab6fb61c573f194bd936593616cf56a5cb8c63bdb2112a" Namespace="calico-apiserver" Pod="calico-apiserver-5fbf5c6d8-9hpwn" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-calico--apiserver--5fbf5c6d8--9hpwn-eth0" Sep 11 00:29:34.984983 containerd[1713]: 2025-09-11 00:29:34.959 [INFO][4719] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c2f22d1304fccda31eab6fb61c573f194bd936593616cf56a5cb8c63bdb2112a" Namespace="calico-apiserver" Pod="calico-apiserver-5fbf5c6d8-9hpwn" 
WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-calico--apiserver--5fbf5c6d8--9hpwn-eth0" Sep 11 00:29:34.984983 containerd[1713]: 2025-09-11 00:29:34.959 [INFO][4719] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c2f22d1304fccda31eab6fb61c573f194bd936593616cf56a5cb8c63bdb2112a" Namespace="calico-apiserver" Pod="calico-apiserver-5fbf5c6d8-9hpwn" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-calico--apiserver--5fbf5c6d8--9hpwn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--4da84ffec3-k8s-calico--apiserver--5fbf5c6d8--9hpwn-eth0", GenerateName:"calico-apiserver-5fbf5c6d8-", Namespace:"calico-apiserver", SelfLink:"", UID:"1f68b9e3-b7a9-4e5a-95d3-b579e5278782", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 29, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fbf5c6d8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-4da84ffec3", ContainerID:"c2f22d1304fccda31eab6fb61c573f194bd936593616cf56a5cb8c63bdb2112a", Pod:"calico-apiserver-5fbf5c6d8-9hpwn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.85.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid5b4b2fb033", MAC:"66:6c:32:5d:01:57", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:29:34.984983 containerd[1713]: 2025-09-11 00:29:34.976 [INFO][4719] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c2f22d1304fccda31eab6fb61c573f194bd936593616cf56a5cb8c63bdb2112a" Namespace="calico-apiserver" Pod="calico-apiserver-5fbf5c6d8-9hpwn" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-calico--apiserver--5fbf5c6d8--9hpwn-eth0" Sep 11 00:29:35.020024 kubelet[3134]: I0911 00:29:35.019975 3134 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-xnj7p" podStartSLOduration=40.019961718 podStartE2EDuration="40.019961718s" podCreationTimestamp="2025-09-11 00:28:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:29:34.993444666 +0000 UTC m=+45.282132256" watchObservedRunningTime="2025-09-11 00:29:35.019961718 +0000 UTC m=+45.308649302" Sep 11 00:29:35.039447 kubelet[3134]: I0911 00:29:35.039261 3134 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-cff77ff5f-89884" podStartSLOduration=2.452381625 podStartE2EDuration="6.03924708s" podCreationTimestamp="2025-09-11 00:29:29 +0000 UTC" firstStartedPulling="2025-09-11 00:29:30.714752188 +0000 UTC m=+41.003439773" lastFinishedPulling="2025-09-11 00:29:34.301617638 +0000 UTC m=+44.590305228" observedRunningTime="2025-09-11 00:29:35.038946733 +0000 UTC m=+45.327634319" watchObservedRunningTime="2025-09-11 00:29:35.03924708 +0000 UTC m=+45.327934668" Sep 11 00:29:35.066984 containerd[1713]: time="2025-09-11T00:29:35.066942707Z" level=info msg="connecting to shim c2f22d1304fccda31eab6fb61c573f194bd936593616cf56a5cb8c63bdb2112a" address="unix:///run/containerd/s/6b3c81361ed788f9e706bf36f5bf57fb113a8848ee570a69cbd04434f682c08d" namespace=k8s.io protocol=ttrpc version=3 Sep 11 
00:29:35.104669 systemd[1]: Started cri-containerd-c2f22d1304fccda31eab6fb61c573f194bd936593616cf56a5cb8c63bdb2112a.scope - libcontainer container c2f22d1304fccda31eab6fb61c573f194bd936593616cf56a5cb8c63bdb2112a. Sep 11 00:29:35.118487 systemd-networkd[1598]: cali1dd2ab3f211: Link UP Sep 11 00:29:35.118858 systemd-networkd[1598]: cali1dd2ab3f211: Gained carrier Sep 11 00:29:35.142046 containerd[1713]: 2025-09-11 00:29:34.897 [INFO][4727] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--n--4da84ffec3-k8s-goldmane--54d579b49d--r824m-eth0 goldmane-54d579b49d- calico-system 6de28527-91a4-4436-8fe3-69f9a4c814c5 840 0 2025-09-11 00:29:10 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4372.1.0-n-4da84ffec3 goldmane-54d579b49d-r824m eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1dd2ab3f211 [] [] }} ContainerID="42fe1583ae84594fe8bd81677589d9a27997b06c5f3fb29576d61f330d8e8bb6" Namespace="calico-system" Pod="goldmane-54d579b49d-r824m" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-goldmane--54d579b49d--r824m-" Sep 11 00:29:35.142046 containerd[1713]: 2025-09-11 00:29:34.897 [INFO][4727] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="42fe1583ae84594fe8bd81677589d9a27997b06c5f3fb29576d61f330d8e8bb6" Namespace="calico-system" Pod="goldmane-54d579b49d-r824m" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-goldmane--54d579b49d--r824m-eth0" Sep 11 00:29:35.142046 containerd[1713]: 2025-09-11 00:29:34.929 [INFO][4743] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="42fe1583ae84594fe8bd81677589d9a27997b06c5f3fb29576d61f330d8e8bb6" HandleID="k8s-pod-network.42fe1583ae84594fe8bd81677589d9a27997b06c5f3fb29576d61f330d8e8bb6" 
Workload="ci--4372.1.0--n--4da84ffec3-k8s-goldmane--54d579b49d--r824m-eth0" Sep 11 00:29:35.142046 containerd[1713]: 2025-09-11 00:29:34.929 [INFO][4743] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="42fe1583ae84594fe8bd81677589d9a27997b06c5f3fb29576d61f330d8e8bb6" HandleID="k8s-pod-network.42fe1583ae84594fe8bd81677589d9a27997b06c5f3fb29576d61f330d8e8bb6" Workload="ci--4372.1.0--n--4da84ffec3-k8s-goldmane--54d579b49d--r824m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f2b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.1.0-n-4da84ffec3", "pod":"goldmane-54d579b49d-r824m", "timestamp":"2025-09-11 00:29:34.929198784 +0000 UTC"}, Hostname:"ci-4372.1.0-n-4da84ffec3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:29:35.142046 containerd[1713]: 2025-09-11 00:29:34.929 [INFO][4743] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:29:35.142046 containerd[1713]: 2025-09-11 00:29:34.955 [INFO][4743] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:29:35.142046 containerd[1713]: 2025-09-11 00:29:34.955 [INFO][4743] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-n-4da84ffec3' Sep 11 00:29:35.142046 containerd[1713]: 2025-09-11 00:29:35.033 [INFO][4743] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.42fe1583ae84594fe8bd81677589d9a27997b06c5f3fb29576d61f330d8e8bb6" host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:35.142046 containerd[1713]: 2025-09-11 00:29:35.056 [INFO][4743] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:35.142046 containerd[1713]: 2025-09-11 00:29:35.073 [INFO][4743] ipam/ipam.go 511: Trying affinity for 192.168.85.0/26 host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:35.142046 containerd[1713]: 2025-09-11 00:29:35.083 [INFO][4743] ipam/ipam.go 158: Attempting to load block cidr=192.168.85.0/26 host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:35.142046 containerd[1713]: 2025-09-11 00:29:35.087 [INFO][4743] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.85.0/26 host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:35.142046 containerd[1713]: 2025-09-11 00:29:35.087 [INFO][4743] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.85.0/26 handle="k8s-pod-network.42fe1583ae84594fe8bd81677589d9a27997b06c5f3fb29576d61f330d8e8bb6" host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:35.142046 containerd[1713]: 2025-09-11 00:29:35.091 [INFO][4743] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.42fe1583ae84594fe8bd81677589d9a27997b06c5f3fb29576d61f330d8e8bb6 Sep 11 00:29:35.142046 containerd[1713]: 2025-09-11 00:29:35.103 [INFO][4743] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.85.0/26 handle="k8s-pod-network.42fe1583ae84594fe8bd81677589d9a27997b06c5f3fb29576d61f330d8e8bb6" host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:35.142046 containerd[1713]: 2025-09-11 00:29:35.110 [INFO][4743] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.85.5/26] block=192.168.85.0/26 handle="k8s-pod-network.42fe1583ae84594fe8bd81677589d9a27997b06c5f3fb29576d61f330d8e8bb6" host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:35.142046 containerd[1713]: 2025-09-11 00:29:35.110 [INFO][4743] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.85.5/26] handle="k8s-pod-network.42fe1583ae84594fe8bd81677589d9a27997b06c5f3fb29576d61f330d8e8bb6" host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:35.142046 containerd[1713]: 2025-09-11 00:29:35.110 [INFO][4743] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:29:35.142046 containerd[1713]: 2025-09-11 00:29:35.110 [INFO][4743] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.85.5/26] IPv6=[] ContainerID="42fe1583ae84594fe8bd81677589d9a27997b06c5f3fb29576d61f330d8e8bb6" HandleID="k8s-pod-network.42fe1583ae84594fe8bd81677589d9a27997b06c5f3fb29576d61f330d8e8bb6" Workload="ci--4372.1.0--n--4da84ffec3-k8s-goldmane--54d579b49d--r824m-eth0" Sep 11 00:29:35.142617 containerd[1713]: 2025-09-11 00:29:35.112 [INFO][4727] cni-plugin/k8s.go 418: Populated endpoint ContainerID="42fe1583ae84594fe8bd81677589d9a27997b06c5f3fb29576d61f330d8e8bb6" Namespace="calico-system" Pod="goldmane-54d579b49d-r824m" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-goldmane--54d579b49d--r824m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--4da84ffec3-k8s-goldmane--54d579b49d--r824m-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"6de28527-91a4-4436-8fe3-69f9a4c814c5", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 29, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-4da84ffec3", ContainerID:"", Pod:"goldmane-54d579b49d-r824m", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.85.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1dd2ab3f211", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:29:35.142617 containerd[1713]: 2025-09-11 00:29:35.112 [INFO][4727] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.85.5/32] ContainerID="42fe1583ae84594fe8bd81677589d9a27997b06c5f3fb29576d61f330d8e8bb6" Namespace="calico-system" Pod="goldmane-54d579b49d-r824m" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-goldmane--54d579b49d--r824m-eth0" Sep 11 00:29:35.142617 containerd[1713]: 2025-09-11 00:29:35.112 [INFO][4727] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1dd2ab3f211 ContainerID="42fe1583ae84594fe8bd81677589d9a27997b06c5f3fb29576d61f330d8e8bb6" Namespace="calico-system" Pod="goldmane-54d579b49d-r824m" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-goldmane--54d579b49d--r824m-eth0" Sep 11 00:29:35.142617 containerd[1713]: 2025-09-11 00:29:35.120 [INFO][4727] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="42fe1583ae84594fe8bd81677589d9a27997b06c5f3fb29576d61f330d8e8bb6" Namespace="calico-system" Pod="goldmane-54d579b49d-r824m" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-goldmane--54d579b49d--r824m-eth0" Sep 11 00:29:35.142617 containerd[1713]: 2025-09-11 00:29:35.120 [INFO][4727] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="42fe1583ae84594fe8bd81677589d9a27997b06c5f3fb29576d61f330d8e8bb6" Namespace="calico-system" Pod="goldmane-54d579b49d-r824m" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-goldmane--54d579b49d--r824m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--4da84ffec3-k8s-goldmane--54d579b49d--r824m-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"6de28527-91a4-4436-8fe3-69f9a4c814c5", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 29, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-4da84ffec3", ContainerID:"42fe1583ae84594fe8bd81677589d9a27997b06c5f3fb29576d61f330d8e8bb6", Pod:"goldmane-54d579b49d-r824m", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.85.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1dd2ab3f211", MAC:"fe:fc:fb:5a:02:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:29:35.142617 containerd[1713]: 2025-09-11 00:29:35.140 [INFO][4727] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="42fe1583ae84594fe8bd81677589d9a27997b06c5f3fb29576d61f330d8e8bb6" Namespace="calico-system" Pod="goldmane-54d579b49d-r824m" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-goldmane--54d579b49d--r824m-eth0" Sep 11 00:29:35.173306 containerd[1713]: time="2025-09-11T00:29:35.173272638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fbf5c6d8-9hpwn,Uid:1f68b9e3-b7a9-4e5a-95d3-b579e5278782,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c2f22d1304fccda31eab6fb61c573f194bd936593616cf56a5cb8c63bdb2112a\"" Sep 11 00:29:35.200352 containerd[1713]: time="2025-09-11T00:29:35.200326722Z" level=info msg="connecting to shim 42fe1583ae84594fe8bd81677589d9a27997b06c5f3fb29576d61f330d8e8bb6" address="unix:///run/containerd/s/fe511ad45f4829453033eef306f9a4485a2ad7ecbec5f6e9cfe07462ea3e8dbc" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:29:35.215536 systemd[1]: Started cri-containerd-42fe1583ae84594fe8bd81677589d9a27997b06c5f3fb29576d61f330d8e8bb6.scope - libcontainer container 42fe1583ae84594fe8bd81677589d9a27997b06c5f3fb29576d61f330d8e8bb6. 
Sep 11 00:29:35.272459 containerd[1713]: time="2025-09-11T00:29:35.272357092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-r824m,Uid:6de28527-91a4-4436-8fe3-69f9a4c814c5,Namespace:calico-system,Attempt:0,} returns sandbox id \"42fe1583ae84594fe8bd81677589d9a27997b06c5f3fb29576d61f330d8e8bb6\"" Sep 11 00:29:35.439686 systemd-networkd[1598]: calic4a1f6badba: Gained IPv6LL Sep 11 00:29:35.614990 containerd[1713]: time="2025-09-11T00:29:35.614896113Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:35.617332 containerd[1713]: time="2025-09-11T00:29:35.617305878Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 11 00:29:35.620074 containerd[1713]: time="2025-09-11T00:29:35.620021853Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:35.625223 containerd[1713]: time="2025-09-11T00:29:35.624690974Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:35.625223 containerd[1713]: time="2025-09-11T00:29:35.625046660Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.315750931s" Sep 11 00:29:35.625223 containerd[1713]: time="2025-09-11T00:29:35.625074694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference 
\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 11 00:29:35.626677 containerd[1713]: time="2025-09-11T00:29:35.626648305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 11 00:29:35.628411 containerd[1713]: time="2025-09-11T00:29:35.628056550Z" level=info msg="CreateContainer within sandbox \"f1521d73d8227d6fde3f3934aa8263a381b13e54ffcb668dca762efd8c8d19bd\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 11 00:29:35.631846 systemd-networkd[1598]: cali3dacf90b341: Gained IPv6LL Sep 11 00:29:35.652651 containerd[1713]: time="2025-09-11T00:29:35.652623501Z" level=info msg="Container 105b7dfe904d32672e5f610b78ebc842edc96d6fa2beb90eca23bfdc71a4ae28: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:29:35.678339 containerd[1713]: time="2025-09-11T00:29:35.678316693Z" level=info msg="CreateContainer within sandbox \"f1521d73d8227d6fde3f3934aa8263a381b13e54ffcb668dca762efd8c8d19bd\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"105b7dfe904d32672e5f610b78ebc842edc96d6fa2beb90eca23bfdc71a4ae28\"" Sep 11 00:29:35.679533 containerd[1713]: time="2025-09-11T00:29:35.678714864Z" level=info msg="StartContainer for \"105b7dfe904d32672e5f610b78ebc842edc96d6fa2beb90eca23bfdc71a4ae28\"" Sep 11 00:29:35.680888 containerd[1713]: time="2025-09-11T00:29:35.680852250Z" level=info msg="connecting to shim 105b7dfe904d32672e5f610b78ebc842edc96d6fa2beb90eca23bfdc71a4ae28" address="unix:///run/containerd/s/5dc083565bbe89a390268f0e6872a63bb6fa35207104bb09d7b30742573113c6" protocol=ttrpc version=3 Sep 11 00:29:35.700572 systemd[1]: Started cri-containerd-105b7dfe904d32672e5f610b78ebc842edc96d6fa2beb90eca23bfdc71a4ae28.scope - libcontainer container 105b7dfe904d32672e5f610b78ebc842edc96d6fa2beb90eca23bfdc71a4ae28. 
Sep 11 00:29:35.732066 containerd[1713]: time="2025-09-11T00:29:35.732048575Z" level=info msg="StartContainer for \"105b7dfe904d32672e5f610b78ebc842edc96d6fa2beb90eca23bfdc71a4ae28\" returns successfully" Sep 11 00:29:35.845268 containerd[1713]: time="2025-09-11T00:29:35.843852159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fbf5c6d8-z67wz,Uid:d766bed5-827d-43f6-8bff-229467c4b9d1,Namespace:calico-apiserver,Attempt:0,}" Sep 11 00:29:35.845372 containerd[1713]: time="2025-09-11T00:29:35.844955444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79475b87fb-9zlnb,Uid:25f80610-42d7-46ea-b9f4-52f2ac7254cc,Namespace:calico-system,Attempt:0,}" Sep 11 00:29:35.845531 containerd[1713]: time="2025-09-11T00:29:35.844999370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8ksdt,Uid:175c79fb-099e-4954-8929-ac0445a05992,Namespace:kube-system,Attempt:0,}" Sep 11 00:29:36.012259 systemd-networkd[1598]: calid05dc232c8a: Link UP Sep 11 00:29:36.013615 systemd-networkd[1598]: calid05dc232c8a: Gained carrier Sep 11 00:29:36.015570 systemd-networkd[1598]: calid5b4b2fb033: Gained IPv6LL Sep 11 00:29:36.033479 containerd[1713]: 2025-09-11 00:29:35.897 [INFO][4905] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--n--4da84ffec3-k8s-calico--apiserver--5fbf5c6d8--z67wz-eth0 calico-apiserver-5fbf5c6d8- calico-apiserver d766bed5-827d-43f6-8bff-229467c4b9d1 839 0 2025-09-11 00:29:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5fbf5c6d8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372.1.0-n-4da84ffec3 calico-apiserver-5fbf5c6d8-z67wz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid05dc232c8a [] [] }} 
ContainerID="47e3aca92ffdcc5cfbd209d3064d0de4c21ccd365eac573d4cd6cbf4a9b96908" Namespace="calico-apiserver" Pod="calico-apiserver-5fbf5c6d8-z67wz" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-calico--apiserver--5fbf5c6d8--z67wz-" Sep 11 00:29:36.033479 containerd[1713]: 2025-09-11 00:29:35.897 [INFO][4905] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="47e3aca92ffdcc5cfbd209d3064d0de4c21ccd365eac573d4cd6cbf4a9b96908" Namespace="calico-apiserver" Pod="calico-apiserver-5fbf5c6d8-z67wz" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-calico--apiserver--5fbf5c6d8--z67wz-eth0" Sep 11 00:29:36.033479 containerd[1713]: 2025-09-11 00:29:35.942 [INFO][4938] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="47e3aca92ffdcc5cfbd209d3064d0de4c21ccd365eac573d4cd6cbf4a9b96908" HandleID="k8s-pod-network.47e3aca92ffdcc5cfbd209d3064d0de4c21ccd365eac573d4cd6cbf4a9b96908" Workload="ci--4372.1.0--n--4da84ffec3-k8s-calico--apiserver--5fbf5c6d8--z67wz-eth0" Sep 11 00:29:36.033479 containerd[1713]: 2025-09-11 00:29:35.942 [INFO][4938] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="47e3aca92ffdcc5cfbd209d3064d0de4c21ccd365eac573d4cd6cbf4a9b96908" HandleID="k8s-pod-network.47e3aca92ffdcc5cfbd209d3064d0de4c21ccd365eac573d4cd6cbf4a9b96908" Workload="ci--4372.1.0--n--4da84ffec3-k8s-calico--apiserver--5fbf5c6d8--z67wz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f0a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372.1.0-n-4da84ffec3", "pod":"calico-apiserver-5fbf5c6d8-z67wz", "timestamp":"2025-09-11 00:29:35.941179458 +0000 UTC"}, Hostname:"ci-4372.1.0-n-4da84ffec3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:29:36.033479 containerd[1713]: 2025-09-11 00:29:35.942 [INFO][4938] ipam/ipam_plugin.go 353: 
About to acquire host-wide IPAM lock. Sep 11 00:29:36.033479 containerd[1713]: 2025-09-11 00:29:35.942 [INFO][4938] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:29:36.033479 containerd[1713]: 2025-09-11 00:29:35.942 [INFO][4938] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-n-4da84ffec3' Sep 11 00:29:36.033479 containerd[1713]: 2025-09-11 00:29:35.956 [INFO][4938] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.47e3aca92ffdcc5cfbd209d3064d0de4c21ccd365eac573d4cd6cbf4a9b96908" host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:36.033479 containerd[1713]: 2025-09-11 00:29:35.963 [INFO][4938] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:36.033479 containerd[1713]: 2025-09-11 00:29:35.976 [INFO][4938] ipam/ipam.go 511: Trying affinity for 192.168.85.0/26 host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:36.033479 containerd[1713]: 2025-09-11 00:29:35.979 [INFO][4938] ipam/ipam.go 158: Attempting to load block cidr=192.168.85.0/26 host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:36.033479 containerd[1713]: 2025-09-11 00:29:35.981 [INFO][4938] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.85.0/26 host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:36.033479 containerd[1713]: 2025-09-11 00:29:35.981 [INFO][4938] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.85.0/26 handle="k8s-pod-network.47e3aca92ffdcc5cfbd209d3064d0de4c21ccd365eac573d4cd6cbf4a9b96908" host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:36.033479 containerd[1713]: 2025-09-11 00:29:35.983 [INFO][4938] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.47e3aca92ffdcc5cfbd209d3064d0de4c21ccd365eac573d4cd6cbf4a9b96908 Sep 11 00:29:36.033479 containerd[1713]: 2025-09-11 00:29:35.988 [INFO][4938] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.85.0/26 
handle="k8s-pod-network.47e3aca92ffdcc5cfbd209d3064d0de4c21ccd365eac573d4cd6cbf4a9b96908" host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:36.033479 containerd[1713]: 2025-09-11 00:29:35.999 [INFO][4938] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.85.6/26] block=192.168.85.0/26 handle="k8s-pod-network.47e3aca92ffdcc5cfbd209d3064d0de4c21ccd365eac573d4cd6cbf4a9b96908" host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:36.033479 containerd[1713]: 2025-09-11 00:29:35.999 [INFO][4938] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.85.6/26] handle="k8s-pod-network.47e3aca92ffdcc5cfbd209d3064d0de4c21ccd365eac573d4cd6cbf4a9b96908" host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:36.033479 containerd[1713]: 2025-09-11 00:29:35.999 [INFO][4938] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:29:36.033479 containerd[1713]: 2025-09-11 00:29:35.999 [INFO][4938] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.85.6/26] IPv6=[] ContainerID="47e3aca92ffdcc5cfbd209d3064d0de4c21ccd365eac573d4cd6cbf4a9b96908" HandleID="k8s-pod-network.47e3aca92ffdcc5cfbd209d3064d0de4c21ccd365eac573d4cd6cbf4a9b96908" Workload="ci--4372.1.0--n--4da84ffec3-k8s-calico--apiserver--5fbf5c6d8--z67wz-eth0" Sep 11 00:29:36.034233 containerd[1713]: 2025-09-11 00:29:36.007 [INFO][4905] cni-plugin/k8s.go 418: Populated endpoint ContainerID="47e3aca92ffdcc5cfbd209d3064d0de4c21ccd365eac573d4cd6cbf4a9b96908" Namespace="calico-apiserver" Pod="calico-apiserver-5fbf5c6d8-z67wz" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-calico--apiserver--5fbf5c6d8--z67wz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--4da84ffec3-k8s-calico--apiserver--5fbf5c6d8--z67wz-eth0", GenerateName:"calico-apiserver-5fbf5c6d8-", Namespace:"calico-apiserver", SelfLink:"", UID:"d766bed5-827d-43f6-8bff-229467c4b9d1", ResourceVersion:"839", Generation:0, 
CreationTimestamp:time.Date(2025, time.September, 11, 0, 29, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fbf5c6d8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-4da84ffec3", ContainerID:"", Pod:"calico-apiserver-5fbf5c6d8-z67wz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.85.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid05dc232c8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:29:36.034233 containerd[1713]: 2025-09-11 00:29:36.008 [INFO][4905] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.85.6/32] ContainerID="47e3aca92ffdcc5cfbd209d3064d0de4c21ccd365eac573d4cd6cbf4a9b96908" Namespace="calico-apiserver" Pod="calico-apiserver-5fbf5c6d8-z67wz" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-calico--apiserver--5fbf5c6d8--z67wz-eth0" Sep 11 00:29:36.034233 containerd[1713]: 2025-09-11 00:29:36.008 [INFO][4905] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid05dc232c8a ContainerID="47e3aca92ffdcc5cfbd209d3064d0de4c21ccd365eac573d4cd6cbf4a9b96908" Namespace="calico-apiserver" Pod="calico-apiserver-5fbf5c6d8-z67wz" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-calico--apiserver--5fbf5c6d8--z67wz-eth0" Sep 11 00:29:36.034233 containerd[1713]: 2025-09-11 00:29:36.014 [INFO][4905] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="47e3aca92ffdcc5cfbd209d3064d0de4c21ccd365eac573d4cd6cbf4a9b96908" Namespace="calico-apiserver" Pod="calico-apiserver-5fbf5c6d8-z67wz" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-calico--apiserver--5fbf5c6d8--z67wz-eth0" Sep 11 00:29:36.034233 containerd[1713]: 2025-09-11 00:29:36.014 [INFO][4905] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="47e3aca92ffdcc5cfbd209d3064d0de4c21ccd365eac573d4cd6cbf4a9b96908" Namespace="calico-apiserver" Pod="calico-apiserver-5fbf5c6d8-z67wz" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-calico--apiserver--5fbf5c6d8--z67wz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--4da84ffec3-k8s-calico--apiserver--5fbf5c6d8--z67wz-eth0", GenerateName:"calico-apiserver-5fbf5c6d8-", Namespace:"calico-apiserver", SelfLink:"", UID:"d766bed5-827d-43f6-8bff-229467c4b9d1", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 29, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fbf5c6d8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-4da84ffec3", ContainerID:"47e3aca92ffdcc5cfbd209d3064d0de4c21ccd365eac573d4cd6cbf4a9b96908", Pod:"calico-apiserver-5fbf5c6d8-z67wz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.85.6/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid05dc232c8a", MAC:"36:3d:d6:6a:53:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:29:36.034233 containerd[1713]: 2025-09-11 00:29:36.028 [INFO][4905] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="47e3aca92ffdcc5cfbd209d3064d0de4c21ccd365eac573d4cd6cbf4a9b96908" Namespace="calico-apiserver" Pod="calico-apiserver-5fbf5c6d8-z67wz" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-calico--apiserver--5fbf5c6d8--z67wz-eth0" Sep 11 00:29:36.093028 systemd-networkd[1598]: cali7990d9e7fdc: Link UP Sep 11 00:29:36.097057 systemd-networkd[1598]: cali7990d9e7fdc: Gained carrier Sep 11 00:29:36.102553 containerd[1713]: time="2025-09-11T00:29:36.102216312Z" level=info msg="connecting to shim 47e3aca92ffdcc5cfbd209d3064d0de4c21ccd365eac573d4cd6cbf4a9b96908" address="unix:///run/containerd/s/78054e032969ef37754a504a2f5fe1ac61879b330d2a2662a0ea97ab53adf82f" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:29:36.118593 containerd[1713]: 2025-09-11 00:29:35.927 [INFO][4914] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--n--4da84ffec3-k8s-calico--kube--controllers--79475b87fb--9zlnb-eth0 calico-kube-controllers-79475b87fb- calico-system 25f80610-42d7-46ea-b9f4-52f2ac7254cc 841 0 2025-09-11 00:29:11 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:79475b87fb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4372.1.0-n-4da84ffec3 calico-kube-controllers-79475b87fb-9zlnb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7990d9e7fdc 
[] [] }} ContainerID="e6c679373c22a220bdafc3fe0e2193593af4b6f10ae5129b4dcef3cbd862156f" Namespace="calico-system" Pod="calico-kube-controllers-79475b87fb-9zlnb" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-calico--kube--controllers--79475b87fb--9zlnb-" Sep 11 00:29:36.118593 containerd[1713]: 2025-09-11 00:29:35.927 [INFO][4914] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e6c679373c22a220bdafc3fe0e2193593af4b6f10ae5129b4dcef3cbd862156f" Namespace="calico-system" Pod="calico-kube-controllers-79475b87fb-9zlnb" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-calico--kube--controllers--79475b87fb--9zlnb-eth0" Sep 11 00:29:36.118593 containerd[1713]: 2025-09-11 00:29:35.977 [INFO][4948] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e6c679373c22a220bdafc3fe0e2193593af4b6f10ae5129b4dcef3cbd862156f" HandleID="k8s-pod-network.e6c679373c22a220bdafc3fe0e2193593af4b6f10ae5129b4dcef3cbd862156f" Workload="ci--4372.1.0--n--4da84ffec3-k8s-calico--kube--controllers--79475b87fb--9zlnb-eth0" Sep 11 00:29:36.118593 containerd[1713]: 2025-09-11 00:29:35.978 [INFO][4948] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e6c679373c22a220bdafc3fe0e2193593af4b6f10ae5129b4dcef3cbd862156f" HandleID="k8s-pod-network.e6c679373c22a220bdafc3fe0e2193593af4b6f10ae5129b4dcef3cbd862156f" Workload="ci--4372.1.0--n--4da84ffec3-k8s-calico--kube--controllers--79475b87fb--9zlnb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f5a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.1.0-n-4da84ffec3", "pod":"calico-kube-controllers-79475b87fb-9zlnb", "timestamp":"2025-09-11 00:29:35.977282125 +0000 UTC"}, Hostname:"ci-4372.1.0-n-4da84ffec3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:29:36.118593 containerd[1713]: 
2025-09-11 00:29:35.978 [INFO][4948] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:29:36.118593 containerd[1713]: 2025-09-11 00:29:35.999 [INFO][4948] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:29:36.118593 containerd[1713]: 2025-09-11 00:29:35.999 [INFO][4948] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-n-4da84ffec3' Sep 11 00:29:36.118593 containerd[1713]: 2025-09-11 00:29:36.056 [INFO][4948] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e6c679373c22a220bdafc3fe0e2193593af4b6f10ae5129b4dcef3cbd862156f" host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:36.118593 containerd[1713]: 2025-09-11 00:29:36.061 [INFO][4948] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:36.118593 containerd[1713]: 2025-09-11 00:29:36.070 [INFO][4948] ipam/ipam.go 511: Trying affinity for 192.168.85.0/26 host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:36.118593 containerd[1713]: 2025-09-11 00:29:36.072 [INFO][4948] ipam/ipam.go 158: Attempting to load block cidr=192.168.85.0/26 host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:36.118593 containerd[1713]: 2025-09-11 00:29:36.073 [INFO][4948] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.85.0/26 host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:36.118593 containerd[1713]: 2025-09-11 00:29:36.073 [INFO][4948] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.85.0/26 handle="k8s-pod-network.e6c679373c22a220bdafc3fe0e2193593af4b6f10ae5129b4dcef3cbd862156f" host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:36.118593 containerd[1713]: 2025-09-11 00:29:36.074 [INFO][4948] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e6c679373c22a220bdafc3fe0e2193593af4b6f10ae5129b4dcef3cbd862156f Sep 11 00:29:36.118593 containerd[1713]: 2025-09-11 00:29:36.078 [INFO][4948] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.85.0/26 
handle="k8s-pod-network.e6c679373c22a220bdafc3fe0e2193593af4b6f10ae5129b4dcef3cbd862156f" host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:36.118593 containerd[1713]: 2025-09-11 00:29:36.088 [INFO][4948] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.85.7/26] block=192.168.85.0/26 handle="k8s-pod-network.e6c679373c22a220bdafc3fe0e2193593af4b6f10ae5129b4dcef3cbd862156f" host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:36.118593 containerd[1713]: 2025-09-11 00:29:36.088 [INFO][4948] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.85.7/26] handle="k8s-pod-network.e6c679373c22a220bdafc3fe0e2193593af4b6f10ae5129b4dcef3cbd862156f" host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:36.118593 containerd[1713]: 2025-09-11 00:29:36.088 [INFO][4948] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:29:36.118593 containerd[1713]: 2025-09-11 00:29:36.088 [INFO][4948] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.85.7/26] IPv6=[] ContainerID="e6c679373c22a220bdafc3fe0e2193593af4b6f10ae5129b4dcef3cbd862156f" HandleID="k8s-pod-network.e6c679373c22a220bdafc3fe0e2193593af4b6f10ae5129b4dcef3cbd862156f" Workload="ci--4372.1.0--n--4da84ffec3-k8s-calico--kube--controllers--79475b87fb--9zlnb-eth0" Sep 11 00:29:36.119137 containerd[1713]: 2025-09-11 00:29:36.089 [INFO][4914] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e6c679373c22a220bdafc3fe0e2193593af4b6f10ae5129b4dcef3cbd862156f" Namespace="calico-system" Pod="calico-kube-controllers-79475b87fb-9zlnb" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-calico--kube--controllers--79475b87fb--9zlnb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--4da84ffec3-k8s-calico--kube--controllers--79475b87fb--9zlnb-eth0", GenerateName:"calico-kube-controllers-79475b87fb-", Namespace:"calico-system", SelfLink:"", UID:"25f80610-42d7-46ea-b9f4-52f2ac7254cc", ResourceVersion:"841", 
Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 29, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79475b87fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-4da84ffec3", ContainerID:"", Pod:"calico-kube-controllers-79475b87fb-9zlnb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.85.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7990d9e7fdc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:29:36.119137 containerd[1713]: 2025-09-11 00:29:36.089 [INFO][4914] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.85.7/32] ContainerID="e6c679373c22a220bdafc3fe0e2193593af4b6f10ae5129b4dcef3cbd862156f" Namespace="calico-system" Pod="calico-kube-controllers-79475b87fb-9zlnb" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-calico--kube--controllers--79475b87fb--9zlnb-eth0" Sep 11 00:29:36.119137 containerd[1713]: 2025-09-11 00:29:36.089 [INFO][4914] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7990d9e7fdc ContainerID="e6c679373c22a220bdafc3fe0e2193593af4b6f10ae5129b4dcef3cbd862156f" Namespace="calico-system" Pod="calico-kube-controllers-79475b87fb-9zlnb" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-calico--kube--controllers--79475b87fb--9zlnb-eth0" Sep 11 00:29:36.119137 
containerd[1713]: 2025-09-11 00:29:36.101 [INFO][4914] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e6c679373c22a220bdafc3fe0e2193593af4b6f10ae5129b4dcef3cbd862156f" Namespace="calico-system" Pod="calico-kube-controllers-79475b87fb-9zlnb" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-calico--kube--controllers--79475b87fb--9zlnb-eth0" Sep 11 00:29:36.119137 containerd[1713]: 2025-09-11 00:29:36.102 [INFO][4914] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e6c679373c22a220bdafc3fe0e2193593af4b6f10ae5129b4dcef3cbd862156f" Namespace="calico-system" Pod="calico-kube-controllers-79475b87fb-9zlnb" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-calico--kube--controllers--79475b87fb--9zlnb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--4da84ffec3-k8s-calico--kube--controllers--79475b87fb--9zlnb-eth0", GenerateName:"calico-kube-controllers-79475b87fb-", Namespace:"calico-system", SelfLink:"", UID:"25f80610-42d7-46ea-b9f4-52f2ac7254cc", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 29, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79475b87fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-4da84ffec3", ContainerID:"e6c679373c22a220bdafc3fe0e2193593af4b6f10ae5129b4dcef3cbd862156f", Pod:"calico-kube-controllers-79475b87fb-9zlnb", 
Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.85.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7990d9e7fdc", MAC:"3e:90:1b:45:84:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:29:36.119137 containerd[1713]: 2025-09-11 00:29:36.116 [INFO][4914] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e6c679373c22a220bdafc3fe0e2193593af4b6f10ae5129b4dcef3cbd862156f" Namespace="calico-system" Pod="calico-kube-controllers-79475b87fb-9zlnb" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-calico--kube--controllers--79475b87fb--9zlnb-eth0" Sep 11 00:29:36.134511 systemd[1]: Started cri-containerd-47e3aca92ffdcc5cfbd209d3064d0de4c21ccd365eac573d4cd6cbf4a9b96908.scope - libcontainer container 47e3aca92ffdcc5cfbd209d3064d0de4c21ccd365eac573d4cd6cbf4a9b96908. Sep 11 00:29:36.170491 containerd[1713]: time="2025-09-11T00:29:36.170459704Z" level=info msg="connecting to shim e6c679373c22a220bdafc3fe0e2193593af4b6f10ae5129b4dcef3cbd862156f" address="unix:///run/containerd/s/06b3ac136646f21fdaaa4b5c2dce475563d7ac6be7bda77738a0ef6a61f57810" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:29:36.199418 systemd[1]: Started cri-containerd-e6c679373c22a220bdafc3fe0e2193593af4b6f10ae5129b4dcef3cbd862156f.scope - libcontainer container e6c679373c22a220bdafc3fe0e2193593af4b6f10ae5129b4dcef3cbd862156f. 
Sep 11 00:29:36.206118 containerd[1713]: time="2025-09-11T00:29:36.205116762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fbf5c6d8-z67wz,Uid:d766bed5-827d-43f6-8bff-229467c4b9d1,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"47e3aca92ffdcc5cfbd209d3064d0de4c21ccd365eac573d4cd6cbf4a9b96908\"" Sep 11 00:29:36.220126 systemd-networkd[1598]: calice96d790fa9: Link UP Sep 11 00:29:36.220933 systemd-networkd[1598]: calice96d790fa9: Gained carrier Sep 11 00:29:36.242912 containerd[1713]: 2025-09-11 00:29:35.935 [INFO][4927] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--n--4da84ffec3-k8s-coredns--668d6bf9bc--8ksdt-eth0 coredns-668d6bf9bc- kube-system 175c79fb-099e-4954-8929-ac0445a05992 831 0 2025-09-11 00:28:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372.1.0-n-4da84ffec3 coredns-668d6bf9bc-8ksdt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calice96d790fa9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ea8249376bc052e6e9f1aef25e2f6639d688e3f8f5d72458adc94f8b840284a7" Namespace="kube-system" Pod="coredns-668d6bf9bc-8ksdt" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-coredns--668d6bf9bc--8ksdt-" Sep 11 00:29:36.242912 containerd[1713]: 2025-09-11 00:29:35.935 [INFO][4927] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ea8249376bc052e6e9f1aef25e2f6639d688e3f8f5d72458adc94f8b840284a7" Namespace="kube-system" Pod="coredns-668d6bf9bc-8ksdt" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-coredns--668d6bf9bc--8ksdt-eth0" Sep 11 00:29:36.242912 containerd[1713]: 2025-09-11 00:29:35.986 [INFO][4954] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="ea8249376bc052e6e9f1aef25e2f6639d688e3f8f5d72458adc94f8b840284a7" HandleID="k8s-pod-network.ea8249376bc052e6e9f1aef25e2f6639d688e3f8f5d72458adc94f8b840284a7" Workload="ci--4372.1.0--n--4da84ffec3-k8s-coredns--668d6bf9bc--8ksdt-eth0" Sep 11 00:29:36.242912 containerd[1713]: 2025-09-11 00:29:35.986 [INFO][4954] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ea8249376bc052e6e9f1aef25e2f6639d688e3f8f5d72458adc94f8b840284a7" HandleID="k8s-pod-network.ea8249376bc052e6e9f1aef25e2f6639d688e3f8f5d72458adc94f8b840284a7" Workload="ci--4372.1.0--n--4da84ffec3-k8s-coredns--668d6bf9bc--8ksdt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5d80), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372.1.0-n-4da84ffec3", "pod":"coredns-668d6bf9bc-8ksdt", "timestamp":"2025-09-11 00:29:35.986295744 +0000 UTC"}, Hostname:"ci-4372.1.0-n-4da84ffec3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:29:36.242912 containerd[1713]: 2025-09-11 00:29:35.991 [INFO][4954] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:29:36.242912 containerd[1713]: 2025-09-11 00:29:36.088 [INFO][4954] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:29:36.242912 containerd[1713]: 2025-09-11 00:29:36.088 [INFO][4954] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-n-4da84ffec3' Sep 11 00:29:36.242912 containerd[1713]: 2025-09-11 00:29:36.162 [INFO][4954] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ea8249376bc052e6e9f1aef25e2f6639d688e3f8f5d72458adc94f8b840284a7" host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:36.242912 containerd[1713]: 2025-09-11 00:29:36.171 [INFO][4954] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:36.242912 containerd[1713]: 2025-09-11 00:29:36.178 [INFO][4954] ipam/ipam.go 511: Trying affinity for 192.168.85.0/26 host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:36.242912 containerd[1713]: 2025-09-11 00:29:36.181 [INFO][4954] ipam/ipam.go 158: Attempting to load block cidr=192.168.85.0/26 host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:36.242912 containerd[1713]: 2025-09-11 00:29:36.185 [INFO][4954] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.85.0/26 host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:36.242912 containerd[1713]: 2025-09-11 00:29:36.185 [INFO][4954] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.85.0/26 handle="k8s-pod-network.ea8249376bc052e6e9f1aef25e2f6639d688e3f8f5d72458adc94f8b840284a7" host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:36.242912 containerd[1713]: 2025-09-11 00:29:36.187 [INFO][4954] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ea8249376bc052e6e9f1aef25e2f6639d688e3f8f5d72458adc94f8b840284a7 Sep 11 00:29:36.242912 containerd[1713]: 2025-09-11 00:29:36.196 [INFO][4954] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.85.0/26 handle="k8s-pod-network.ea8249376bc052e6e9f1aef25e2f6639d688e3f8f5d72458adc94f8b840284a7" host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:36.242912 containerd[1713]: 2025-09-11 00:29:36.209 [INFO][4954] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.85.8/26] block=192.168.85.0/26 handle="k8s-pod-network.ea8249376bc052e6e9f1aef25e2f6639d688e3f8f5d72458adc94f8b840284a7" host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:36.242912 containerd[1713]: 2025-09-11 00:29:36.210 [INFO][4954] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.85.8/26] handle="k8s-pod-network.ea8249376bc052e6e9f1aef25e2f6639d688e3f8f5d72458adc94f8b840284a7" host="ci-4372.1.0-n-4da84ffec3" Sep 11 00:29:36.242912 containerd[1713]: 2025-09-11 00:29:36.210 [INFO][4954] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:29:36.242912 containerd[1713]: 2025-09-11 00:29:36.211 [INFO][4954] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.85.8/26] IPv6=[] ContainerID="ea8249376bc052e6e9f1aef25e2f6639d688e3f8f5d72458adc94f8b840284a7" HandleID="k8s-pod-network.ea8249376bc052e6e9f1aef25e2f6639d688e3f8f5d72458adc94f8b840284a7" Workload="ci--4372.1.0--n--4da84ffec3-k8s-coredns--668d6bf9bc--8ksdt-eth0" Sep 11 00:29:36.243693 containerd[1713]: 2025-09-11 00:29:36.214 [INFO][4927] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ea8249376bc052e6e9f1aef25e2f6639d688e3f8f5d72458adc94f8b840284a7" Namespace="kube-system" Pod="coredns-668d6bf9bc-8ksdt" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-coredns--668d6bf9bc--8ksdt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--4da84ffec3-k8s-coredns--668d6bf9bc--8ksdt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"175c79fb-099e-4954-8929-ac0445a05992", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 28, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-4da84ffec3", ContainerID:"", Pod:"coredns-668d6bf9bc-8ksdt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.85.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calice96d790fa9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:29:36.243693 containerd[1713]: 2025-09-11 00:29:36.215 [INFO][4927] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.85.8/32] ContainerID="ea8249376bc052e6e9f1aef25e2f6639d688e3f8f5d72458adc94f8b840284a7" Namespace="kube-system" Pod="coredns-668d6bf9bc-8ksdt" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-coredns--668d6bf9bc--8ksdt-eth0" Sep 11 00:29:36.243693 containerd[1713]: 2025-09-11 00:29:36.215 [INFO][4927] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calice96d790fa9 ContainerID="ea8249376bc052e6e9f1aef25e2f6639d688e3f8f5d72458adc94f8b840284a7" Namespace="kube-system" Pod="coredns-668d6bf9bc-8ksdt" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-coredns--668d6bf9bc--8ksdt-eth0" Sep 11 00:29:36.243693 containerd[1713]: 2025-09-11 00:29:36.222 [INFO][4927] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ea8249376bc052e6e9f1aef25e2f6639d688e3f8f5d72458adc94f8b840284a7" Namespace="kube-system" Pod="coredns-668d6bf9bc-8ksdt" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-coredns--668d6bf9bc--8ksdt-eth0" Sep 11 00:29:36.243693 containerd[1713]: 2025-09-11 00:29:36.223 [INFO][4927] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ea8249376bc052e6e9f1aef25e2f6639d688e3f8f5d72458adc94f8b840284a7" Namespace="kube-system" Pod="coredns-668d6bf9bc-8ksdt" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-coredns--668d6bf9bc--8ksdt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--4da84ffec3-k8s-coredns--668d6bf9bc--8ksdt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"175c79fb-099e-4954-8929-ac0445a05992", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 28, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-4da84ffec3", ContainerID:"ea8249376bc052e6e9f1aef25e2f6639d688e3f8f5d72458adc94f8b840284a7", Pod:"coredns-668d6bf9bc-8ksdt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.85.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calice96d790fa9", 
MAC:"82:d5:a2:aa:a9:5e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:29:36.243693 containerd[1713]: 2025-09-11 00:29:36.241 [INFO][4927] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ea8249376bc052e6e9f1aef25e2f6639d688e3f8f5d72458adc94f8b840284a7" Namespace="kube-system" Pod="coredns-668d6bf9bc-8ksdt" WorkloadEndpoint="ci--4372.1.0--n--4da84ffec3-k8s-coredns--668d6bf9bc--8ksdt-eth0" Sep 11 00:29:36.276267 containerd[1713]: time="2025-09-11T00:29:36.276169658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79475b87fb-9zlnb,Uid:25f80610-42d7-46ea-b9f4-52f2ac7254cc,Namespace:calico-system,Attempt:0,} returns sandbox id \"e6c679373c22a220bdafc3fe0e2193593af4b6f10ae5129b4dcef3cbd862156f\"" Sep 11 00:29:36.297259 containerd[1713]: time="2025-09-11T00:29:36.297199348Z" level=info msg="connecting to shim ea8249376bc052e6e9f1aef25e2f6639d688e3f8f5d72458adc94f8b840284a7" address="unix:///run/containerd/s/7837fbe09d9b60ebed58491b370d97039f2e3a7436c1fe4f2e26b84f1040b27c" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:29:36.316560 systemd[1]: Started cri-containerd-ea8249376bc052e6e9f1aef25e2f6639d688e3f8f5d72458adc94f8b840284a7.scope - libcontainer container ea8249376bc052e6e9f1aef25e2f6639d688e3f8f5d72458adc94f8b840284a7. 
Sep 11 00:29:36.359403 containerd[1713]: time="2025-09-11T00:29:36.359366148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8ksdt,Uid:175c79fb-099e-4954-8929-ac0445a05992,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea8249376bc052e6e9f1aef25e2f6639d688e3f8f5d72458adc94f8b840284a7\"" Sep 11 00:29:36.362136 containerd[1713]: time="2025-09-11T00:29:36.362117592Z" level=info msg="CreateContainer within sandbox \"ea8249376bc052e6e9f1aef25e2f6639d688e3f8f5d72458adc94f8b840284a7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 11 00:29:36.389991 containerd[1713]: time="2025-09-11T00:29:36.389967624Z" level=info msg="Container 360f4dd94e63de2c39e7522f5d313e01d9f5098a7f767824704bb085b76657a2: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:29:36.401095 containerd[1713]: time="2025-09-11T00:29:36.401069179Z" level=info msg="CreateContainer within sandbox \"ea8249376bc052e6e9f1aef25e2f6639d688e3f8f5d72458adc94f8b840284a7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"360f4dd94e63de2c39e7522f5d313e01d9f5098a7f767824704bb085b76657a2\"" Sep 11 00:29:36.401574 containerd[1713]: time="2025-09-11T00:29:36.401462897Z" level=info msg="StartContainer for \"360f4dd94e63de2c39e7522f5d313e01d9f5098a7f767824704bb085b76657a2\"" Sep 11 00:29:36.402251 containerd[1713]: time="2025-09-11T00:29:36.402222579Z" level=info msg="connecting to shim 360f4dd94e63de2c39e7522f5d313e01d9f5098a7f767824704bb085b76657a2" address="unix:///run/containerd/s/7837fbe09d9b60ebed58491b370d97039f2e3a7436c1fe4f2e26b84f1040b27c" protocol=ttrpc version=3 Sep 11 00:29:36.420532 systemd[1]: Started cri-containerd-360f4dd94e63de2c39e7522f5d313e01d9f5098a7f767824704bb085b76657a2.scope - libcontainer container 360f4dd94e63de2c39e7522f5d313e01d9f5098a7f767824704bb085b76657a2. 
Sep 11 00:29:36.445886 containerd[1713]: time="2025-09-11T00:29:36.445861446Z" level=info msg="StartContainer for \"360f4dd94e63de2c39e7522f5d313e01d9f5098a7f767824704bb085b76657a2\" returns successfully"
Sep 11 00:29:36.525168 kubelet[3134]: I0911 00:29:36.525080 3134 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 11 00:29:36.527554 systemd-networkd[1598]: cali1dd2ab3f211: Gained IPv6LL
Sep 11 00:29:36.585990 containerd[1713]: time="2025-09-11T00:29:36.585957762Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7d92f42a5ee2aacc73039347d07b3054ffadae24ffbdaf2a0b5356e74dc33b6d\" id:\"e1348ffa66cfd9c88910035ea5bf9de932057b4b9f293527b419663cfb3473aa\" pid:5184 exited_at:{seconds:1757550576 nanos:585747155}"
Sep 11 00:29:36.672995 containerd[1713]: time="2025-09-11T00:29:36.672914808Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7d92f42a5ee2aacc73039347d07b3054ffadae24ffbdaf2a0b5356e74dc33b6d\" id:\"aff29afad29ad9897afccee941ced00e1e9850cbd36fbf690d897a9bdf1ccb96\" pid:5208 exited_at:{seconds:1757550576 nanos:672754258}"
Sep 11 00:29:37.008588 kubelet[3134]: I0911 00:29:37.008532 3134 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-8ksdt" podStartSLOduration=42.00851164 podStartE2EDuration="42.00851164s" podCreationTimestamp="2025-09-11 00:28:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:29:37.007777617 +0000 UTC m=+47.296465204" watchObservedRunningTime="2025-09-11 00:29:37.00851164 +0000 UTC m=+47.297199230"
Sep 11 00:29:37.103572 systemd-networkd[1598]: calid05dc232c8a: Gained IPv6LL
Sep 11 00:29:37.359511 systemd-networkd[1598]: cali7990d9e7fdc: Gained IPv6LL
Sep 11 00:29:37.788841 containerd[1713]: time="2025-09-11T00:29:37.788798449Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:29:37.792766 containerd[1713]: time="2025-09-11T00:29:37.792653922Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864"
Sep 11 00:29:37.795541 containerd[1713]: time="2025-09-11T00:29:37.795515239Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:29:37.799404 containerd[1713]: time="2025-09-11T00:29:37.799341035Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:29:37.800081 containerd[1713]: time="2025-09-11T00:29:37.799946745Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 2.17326941s"
Sep 11 00:29:37.800081 containerd[1713]: time="2025-09-11T00:29:37.799978280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\""
Sep 11 00:29:37.800993 containerd[1713]: time="2025-09-11T00:29:37.800971963Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\""
Sep 11 00:29:37.802185 containerd[1713]: time="2025-09-11T00:29:37.802161435Z" level=info msg="CreateContainer within sandbox \"c2f22d1304fccda31eab6fb61c573f194bd936593616cf56a5cb8c63bdb2112a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 11 00:29:37.821655 containerd[1713]: time="2025-09-11T00:29:37.821629012Z" level=info msg="Container 9064423fba411ccbbcf5cb01a91d4401890fab4d7ab8831253a7ec60faca6ded: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:29:37.828183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1765815126.mount: Deactivated successfully.
Sep 11 00:29:37.839035 containerd[1713]: time="2025-09-11T00:29:37.839011320Z" level=info msg="CreateContainer within sandbox \"c2f22d1304fccda31eab6fb61c573f194bd936593616cf56a5cb8c63bdb2112a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9064423fba411ccbbcf5cb01a91d4401890fab4d7ab8831253a7ec60faca6ded\""
Sep 11 00:29:37.841750 containerd[1713]: time="2025-09-11T00:29:37.839666762Z" level=info msg="StartContainer for \"9064423fba411ccbbcf5cb01a91d4401890fab4d7ab8831253a7ec60faca6ded\""
Sep 11 00:29:37.841750 containerd[1713]: time="2025-09-11T00:29:37.841058689Z" level=info msg="connecting to shim 9064423fba411ccbbcf5cb01a91d4401890fab4d7ab8831253a7ec60faca6ded" address="unix:///run/containerd/s/6b3c81361ed788f9e706bf36f5bf57fb113a8848ee570a69cbd04434f682c08d" protocol=ttrpc version=3
Sep 11 00:29:37.867497 systemd[1]: Started cri-containerd-9064423fba411ccbbcf5cb01a91d4401890fab4d7ab8831253a7ec60faca6ded.scope - libcontainer container 9064423fba411ccbbcf5cb01a91d4401890fab4d7ab8831253a7ec60faca6ded.
Sep 11 00:29:37.912724 containerd[1713]: time="2025-09-11T00:29:37.912698249Z" level=info msg="StartContainer for \"9064423fba411ccbbcf5cb01a91d4401890fab4d7ab8831253a7ec60faca6ded\" returns successfully"
Sep 11 00:29:38.191499 systemd-networkd[1598]: calice96d790fa9: Gained IPv6LL
Sep 11 00:29:39.001758 kubelet[3134]: I0911 00:29:39.001727 3134 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 11 00:29:40.106460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1254222376.mount: Deactivated successfully.
Sep 11 00:29:40.746183 containerd[1713]: time="2025-09-11T00:29:40.746139415Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:29:40.748532 containerd[1713]: time="2025-09-11T00:29:40.748496246Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526"
Sep 11 00:29:40.751369 containerd[1713]: time="2025-09-11T00:29:40.751329307Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:29:40.755348 containerd[1713]: time="2025-09-11T00:29:40.755144665Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:29:40.755710 containerd[1713]: time="2025-09-11T00:29:40.755685360Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 2.954683681s"
Sep 11 00:29:40.755756 containerd[1713]: time="2025-09-11T00:29:40.755719956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\""
Sep 11 00:29:40.757101 containerd[1713]: time="2025-09-11T00:29:40.757073711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 11 00:29:40.758234 containerd[1713]: time="2025-09-11T00:29:40.758195863Z" level=info msg="CreateContainer within sandbox \"42fe1583ae84594fe8bd81677589d9a27997b06c5f3fb29576d61f330d8e8bb6\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 11 00:29:40.786031 containerd[1713]: time="2025-09-11T00:29:40.783467668Z" level=info msg="Container b94b07646c23490a03c16a5cd6e01214208da867632b2466bb5cdd73e04c97f5: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:29:40.804743 containerd[1713]: time="2025-09-11T00:29:40.804707037Z" level=info msg="CreateContainer within sandbox \"42fe1583ae84594fe8bd81677589d9a27997b06c5f3fb29576d61f330d8e8bb6\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"b94b07646c23490a03c16a5cd6e01214208da867632b2466bb5cdd73e04c97f5\""
Sep 11 00:29:40.806624 containerd[1713]: time="2025-09-11T00:29:40.806600898Z" level=info msg="StartContainer for \"b94b07646c23490a03c16a5cd6e01214208da867632b2466bb5cdd73e04c97f5\""
Sep 11 00:29:40.807619 containerd[1713]: time="2025-09-11T00:29:40.807593745Z" level=info msg="connecting to shim b94b07646c23490a03c16a5cd6e01214208da867632b2466bb5cdd73e04c97f5" address="unix:///run/containerd/s/fe511ad45f4829453033eef306f9a4485a2ad7ecbec5f6e9cfe07462ea3e8dbc" protocol=ttrpc version=3
Sep 11 00:29:40.830514 systemd[1]: Started cri-containerd-b94b07646c23490a03c16a5cd6e01214208da867632b2466bb5cdd73e04c97f5.scope - libcontainer container b94b07646c23490a03c16a5cd6e01214208da867632b2466bb5cdd73e04c97f5.
Sep 11 00:29:40.879861 containerd[1713]: time="2025-09-11T00:29:40.879838623Z" level=info msg="StartContainer for \"b94b07646c23490a03c16a5cd6e01214208da867632b2466bb5cdd73e04c97f5\" returns successfully"
Sep 11 00:29:41.021464 kubelet[3134]: I0911 00:29:41.021350 3134 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5fbf5c6d8-9hpwn" podStartSLOduration=31.39475639 podStartE2EDuration="34.021330447s" podCreationTimestamp="2025-09-11 00:29:07 +0000 UTC" firstStartedPulling="2025-09-11 00:29:35.174223276 +0000 UTC m=+45.462910868" lastFinishedPulling="2025-09-11 00:29:37.800797343 +0000 UTC m=+48.089484925" observedRunningTime="2025-09-11 00:29:38.016488913 +0000 UTC m=+48.305176510" watchObservedRunningTime="2025-09-11 00:29:41.021330447 +0000 UTC m=+51.310018038"
Sep 11 00:29:41.025594 kubelet[3134]: I0911 00:29:41.025539 3134 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-r824m" podStartSLOduration=25.544007355 podStartE2EDuration="31.025523205s" podCreationTimestamp="2025-09-11 00:29:10 +0000 UTC" firstStartedPulling="2025-09-11 00:29:35.275132632 +0000 UTC m=+45.563820222" lastFinishedPulling="2025-09-11 00:29:40.756648488 +0000 UTC m=+51.045336072" observedRunningTime="2025-09-11 00:29:41.023233744 +0000 UTC m=+51.311921340" watchObservedRunningTime="2025-09-11 00:29:41.025523205 +0000 UTC m=+51.314210794"
Sep 11 00:29:41.088969 containerd[1713]: time="2025-09-11T00:29:41.088715914Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b94b07646c23490a03c16a5cd6e01214208da867632b2466bb5cdd73e04c97f5\" id:\"2a7f31a59afc9b9e19eefab1eecea009bc81ca1ccc195ef766dd4f3902b5e86f\" pid:5331 exit_status:1 exited_at:{seconds:1757550581 nanos:88440490}"
Sep 11 00:29:42.312814 containerd[1713]: time="2025-09-11T00:29:42.312770850Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b94b07646c23490a03c16a5cd6e01214208da867632b2466bb5cdd73e04c97f5\" id:\"3a279b6b7641ad0aadf15db2da7203b8b1f7fdde520e907e42e30988184c3709\" pid:5369 exit_status:1 exited_at:{seconds:1757550582 nanos:312252135}"
Sep 11 00:29:42.365842 containerd[1713]: time="2025-09-11T00:29:42.365798074Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:29:42.368845 containerd[1713]: time="2025-09-11T00:29:42.368811531Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 11 00:29:42.371567 containerd[1713]: time="2025-09-11T00:29:42.371518862Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:29:42.376514 containerd[1713]: time="2025-09-11T00:29:42.376448693Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:29:42.377784 containerd[1713]: time="2025-09-11T00:29:42.377670080Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 1.620561752s"
Sep 11 00:29:42.377784 containerd[1713]: time="2025-09-11T00:29:42.377704551Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 11 00:29:42.379588 containerd[1713]: time="2025-09-11T00:29:42.379537522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 11 00:29:42.380666 containerd[1713]: time="2025-09-11T00:29:42.380637236Z" level=info msg="CreateContainer within sandbox \"f1521d73d8227d6fde3f3934aa8263a381b13e54ffcb668dca762efd8c8d19bd\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 11 00:29:42.409173 containerd[1713]: time="2025-09-11T00:29:42.407543040Z" level=info msg="Container 85c8785bb7db242125ca5cfb8789d15758f1e672dcb69c578b0a7533fae4c04d: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:29:42.428589 containerd[1713]: time="2025-09-11T00:29:42.428564484Z" level=info msg="CreateContainer within sandbox \"f1521d73d8227d6fde3f3934aa8263a381b13e54ffcb668dca762efd8c8d19bd\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"85c8785bb7db242125ca5cfb8789d15758f1e672dcb69c578b0a7533fae4c04d\""
Sep 11 00:29:42.429528 containerd[1713]: time="2025-09-11T00:29:42.429508792Z" level=info msg="StartContainer for \"85c8785bb7db242125ca5cfb8789d15758f1e672dcb69c578b0a7533fae4c04d\""
Sep 11 00:29:42.431654 containerd[1713]: time="2025-09-11T00:29:42.431531300Z" level=info msg="connecting to shim 85c8785bb7db242125ca5cfb8789d15758f1e672dcb69c578b0a7533fae4c04d" address="unix:///run/containerd/s/5dc083565bbe89a390268f0e6872a63bb6fa35207104bb09d7b30742573113c6" protocol=ttrpc version=3
Sep 11 00:29:42.459745 systemd[1]: Started cri-containerd-85c8785bb7db242125ca5cfb8789d15758f1e672dcb69c578b0a7533fae4c04d.scope - libcontainer container 85c8785bb7db242125ca5cfb8789d15758f1e672dcb69c578b0a7533fae4c04d.
Sep 11 00:29:42.523074 containerd[1713]: time="2025-09-11T00:29:42.523052784Z" level=info msg="StartContainer for \"85c8785bb7db242125ca5cfb8789d15758f1e672dcb69c578b0a7533fae4c04d\" returns successfully"
Sep 11 00:29:42.754961 containerd[1713]: time="2025-09-11T00:29:42.754817140Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:29:42.763120 containerd[1713]: time="2025-09-11T00:29:42.761864824Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77"
Sep 11 00:29:42.763252 containerd[1713]: time="2025-09-11T00:29:42.763106228Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 383.541821ms"
Sep 11 00:29:42.763312 containerd[1713]: time="2025-09-11T00:29:42.763300512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\""
Sep 11 00:29:42.765845 containerd[1713]: time="2025-09-11T00:29:42.765824294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\""
Sep 11 00:29:42.766913 containerd[1713]: time="2025-09-11T00:29:42.766891301Z" level=info msg="CreateContainer within sandbox \"47e3aca92ffdcc5cfbd209d3064d0de4c21ccd365eac573d4cd6cbf4a9b96908\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 11 00:29:42.785408 containerd[1713]: time="2025-09-11T00:29:42.785373605Z" level=info msg="Container beef23500ad968334487f04ad618938c67b24302ee39bd596eb717b4c6002118: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:29:42.808504 containerd[1713]: time="2025-09-11T00:29:42.808475556Z" level=info msg="CreateContainer within sandbox \"47e3aca92ffdcc5cfbd209d3064d0de4c21ccd365eac573d4cd6cbf4a9b96908\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"beef23500ad968334487f04ad618938c67b24302ee39bd596eb717b4c6002118\""
Sep 11 00:29:42.809184 containerd[1713]: time="2025-09-11T00:29:42.809011220Z" level=info msg="StartContainer for \"beef23500ad968334487f04ad618938c67b24302ee39bd596eb717b4c6002118\""
Sep 11 00:29:42.810442 containerd[1713]: time="2025-09-11T00:29:42.810109419Z" level=info msg="connecting to shim beef23500ad968334487f04ad618938c67b24302ee39bd596eb717b4c6002118" address="unix:///run/containerd/s/78054e032969ef37754a504a2f5fe1ac61879b330d2a2662a0ea97ab53adf82f" protocol=ttrpc version=3
Sep 11 00:29:42.834502 systemd[1]: Started cri-containerd-beef23500ad968334487f04ad618938c67b24302ee39bd596eb717b4c6002118.scope - libcontainer container beef23500ad968334487f04ad618938c67b24302ee39bd596eb717b4c6002118.
Sep 11 00:29:42.915593 kubelet[3134]: I0911 00:29:42.915506 3134 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 11 00:29:42.915593 kubelet[3134]: I0911 00:29:42.915536 3134 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 11 00:29:42.945892 containerd[1713]: time="2025-09-11T00:29:42.945852763Z" level=info msg="StartContainer for \"beef23500ad968334487f04ad618938c67b24302ee39bd596eb717b4c6002118\" returns successfully"
Sep 11 00:29:43.130850 kubelet[3134]: I0911 00:29:43.130552 3134 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-cbtm5" podStartSLOduration=24.060355636 podStartE2EDuration="32.130533425s" podCreationTimestamp="2025-09-11 00:29:11 +0000 UTC" firstStartedPulling="2025-09-11 00:29:34.308591788 +0000 UTC m=+44.597279361" lastFinishedPulling="2025-09-11 00:29:42.378769572 +0000 UTC m=+52.667457150" observedRunningTime="2025-09-11 00:29:43.056339102 +0000 UTC m=+53.345026702" watchObservedRunningTime="2025-09-11 00:29:43.130533425 +0000 UTC m=+53.419221017"
Sep 11 00:29:44.021708 kubelet[3134]: I0911 00:29:44.021634 3134 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 11 00:29:45.787304 containerd[1713]: time="2025-09-11T00:29:45.786668218Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:29:45.792708 containerd[1713]: time="2025-09-11T00:29:45.792522391Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746"
Sep 11 00:29:45.798457 containerd[1713]: time="2025-09-11T00:29:45.798426771Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:29:45.804613 containerd[1713]: time="2025-09-11T00:29:45.804524003Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:29:45.805770 containerd[1713]: time="2025-09-11T00:29:45.805247078Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.039157132s"
Sep 11 00:29:45.805770 containerd[1713]: time="2025-09-11T00:29:45.805278022Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\""
Sep 11 00:29:45.822729 containerd[1713]: time="2025-09-11T00:29:45.822701841Z" level=info msg="CreateContainer within sandbox \"e6c679373c22a220bdafc3fe0e2193593af4b6f10ae5129b4dcef3cbd862156f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 11 00:29:45.857317 containerd[1713]: time="2025-09-11T00:29:45.856705412Z" level=info msg="Container 8dff6d6c166c7f358bc62bc72e45bd8e01f15aa9858b8fd516854c2c1d5a3a12: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:29:45.860009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3476944538.mount: Deactivated successfully.
Sep 11 00:29:45.872818 containerd[1713]: time="2025-09-11T00:29:45.872637119Z" level=info msg="CreateContainer within sandbox \"e6c679373c22a220bdafc3fe0e2193593af4b6f10ae5129b4dcef3cbd862156f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"8dff6d6c166c7f358bc62bc72e45bd8e01f15aa9858b8fd516854c2c1d5a3a12\""
Sep 11 00:29:45.874394 containerd[1713]: time="2025-09-11T00:29:45.874036997Z" level=info msg="StartContainer for \"8dff6d6c166c7f358bc62bc72e45bd8e01f15aa9858b8fd516854c2c1d5a3a12\""
Sep 11 00:29:45.876012 containerd[1713]: time="2025-09-11T00:29:45.875975711Z" level=info msg="connecting to shim 8dff6d6c166c7f358bc62bc72e45bd8e01f15aa9858b8fd516854c2c1d5a3a12" address="unix:///run/containerd/s/06b3ac136646f21fdaaa4b5c2dce475563d7ac6be7bda77738a0ef6a61f57810" protocol=ttrpc version=3
Sep 11 00:29:45.901572 systemd[1]: Started cri-containerd-8dff6d6c166c7f358bc62bc72e45bd8e01f15aa9858b8fd516854c2c1d5a3a12.scope - libcontainer container 8dff6d6c166c7f358bc62bc72e45bd8e01f15aa9858b8fd516854c2c1d5a3a12.
Sep 11 00:29:45.988593 containerd[1713]: time="2025-09-11T00:29:45.988525798Z" level=info msg="StartContainer for \"8dff6d6c166c7f358bc62bc72e45bd8e01f15aa9858b8fd516854c2c1d5a3a12\" returns successfully"
Sep 11 00:29:46.081101 kubelet[3134]: I0911 00:29:46.080978 3134 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-79475b87fb-9zlnb" podStartSLOduration=25.552959687 podStartE2EDuration="35.080957931s" podCreationTimestamp="2025-09-11 00:29:11 +0000 UTC" firstStartedPulling="2025-09-11 00:29:36.278742797 +0000 UTC m=+46.567430379" lastFinishedPulling="2025-09-11 00:29:45.806741038 +0000 UTC m=+56.095428623" observedRunningTime="2025-09-11 00:29:46.079729401 +0000 UTC m=+56.368416993" watchObservedRunningTime="2025-09-11 00:29:46.080957931 +0000 UTC m=+56.369645518"
Sep 11 00:29:46.082150 kubelet[3134]: I0911 00:29:46.082108 3134 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5fbf5c6d8-z67wz" podStartSLOduration=32.524485293 podStartE2EDuration="39.082094735s" podCreationTimestamp="2025-09-11 00:29:07 +0000 UTC" firstStartedPulling="2025-09-11 00:29:36.207342711 +0000 UTC m=+46.496030297" lastFinishedPulling="2025-09-11 00:29:42.764952147 +0000 UTC m=+53.053639739" observedRunningTime="2025-09-11 00:29:43.132317437 +0000 UTC m=+53.421005030" watchObservedRunningTime="2025-09-11 00:29:46.082094735 +0000 UTC m=+56.370782324"
Sep 11 00:29:46.188810 containerd[1713]: time="2025-09-11T00:29:46.188759363Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8dff6d6c166c7f358bc62bc72e45bd8e01f15aa9858b8fd516854c2c1d5a3a12\" id:\"8ef004ec9e72816ba841acbfc632f473e5ee64653d65bcb58c941df6555dba96\" pid:5497 exit_status:1 exited_at:{seconds:1757550586 nanos:186961454}"
Sep 11 00:29:47.114102 containerd[1713]: time="2025-09-11T00:29:47.114059952Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8dff6d6c166c7f358bc62bc72e45bd8e01f15aa9858b8fd516854c2c1d5a3a12\" id:\"0bb9f893a5de1bb654f8551d9fffa592a24b27498fd040aaa958c6e7cc4ce2e1\" pid:5531 exited_at:{seconds:1757550587 nanos:113240471}"
Sep 11 00:29:59.871158 containerd[1713]: time="2025-09-11T00:29:59.871119596Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b94b07646c23490a03c16a5cd6e01214208da867632b2466bb5cdd73e04c97f5\" id:\"dc471da167527210472ee97d13840fb9ca3520bf9ec714a7e861a07454d43fb5\" pid:5565 exited_at:{seconds:1757550599 nanos:870721854}"
Sep 11 00:30:00.146591 update_engine[1698]: I20250911 00:30:00.146266 1698 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Sep 11 00:30:00.146591 update_engine[1698]: I20250911 00:30:00.146306 1698 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Sep 11 00:30:00.146591 update_engine[1698]: I20250911 00:30:00.146466 1698 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Sep 11 00:30:00.146968 update_engine[1698]: I20250911 00:30:00.146883 1698 omaha_request_params.cc:62] Current group set to beta
Sep 11 00:30:00.147027 update_engine[1698]: I20250911 00:30:00.146993 1698 update_attempter.cc:499] Already updated boot flags. Skipping.
Sep 11 00:30:00.147027 update_engine[1698]: I20250911 00:30:00.147001 1698 update_attempter.cc:643] Scheduling an action processor start.
Sep 11 00:30:00.147027 update_engine[1698]: I20250911 00:30:00.147018 1698 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Sep 11 00:30:00.147095 update_engine[1698]: I20250911 00:30:00.147048 1698 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Sep 11 00:30:00.147120 update_engine[1698]: I20250911 00:30:00.147100 1698 omaha_request_action.cc:271] Posting an Omaha request to disabled
Sep 11 00:30:00.147120 update_engine[1698]: I20250911 00:30:00.147105 1698 omaha_request_action.cc:272] Request:
Sep 11 00:30:00.147120 update_engine[1698]:
Sep 11 00:30:00.147120 update_engine[1698]:
Sep 11 00:30:00.147120 update_engine[1698]:
Sep 11 00:30:00.147120 update_engine[1698]:
Sep 11 00:30:00.147120 update_engine[1698]:
Sep 11 00:30:00.147120 update_engine[1698]:
Sep 11 00:30:00.147120 update_engine[1698]:
Sep 11 00:30:00.147120 update_engine[1698]:
Sep 11 00:30:00.147120 update_engine[1698]: I20250911 00:30:00.147111 1698 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 11 00:30:00.147773 locksmithd[1737]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Sep 11 00:30:00.148371 update_engine[1698]: I20250911 00:30:00.148343 1698 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 11 00:30:00.148694 update_engine[1698]: I20250911 00:30:00.148672 1698 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 11 00:30:00.185815 update_engine[1698]: E20250911 00:30:00.185774 1698 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 11 00:30:00.185913 update_engine[1698]: I20250911 00:30:00.185860 1698 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Sep 11 00:30:06.671715 containerd[1713]: time="2025-09-11T00:30:06.671674480Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7d92f42a5ee2aacc73039347d07b3054ffadae24ffbdaf2a0b5356e74dc33b6d\" id:\"225714833bdd23b99462f587f5c155c18ab5fa4c018b903de6df0c06ae9c5853\" pid:5589 exited_at:{seconds:1757550606 nanos:671419482}"
Sep 11 00:30:09.030624 kubelet[3134]: I0911 00:30:09.030527 3134 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 11 00:30:10.141489 update_engine[1698]: I20250911 00:30:10.141424 1698 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 11 00:30:10.141880 update_engine[1698]: I20250911 00:30:10.141660 1698 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 11 00:30:10.141912 update_engine[1698]: I20250911 00:30:10.141901 1698 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 11 00:30:10.185367 update_engine[1698]: E20250911 00:30:10.185322 1698 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 11 00:30:10.185478 update_engine[1698]: I20250911 00:30:10.185413 1698 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Sep 11 00:30:12.104407 containerd[1713]: time="2025-09-11T00:30:12.104349353Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b94b07646c23490a03c16a5cd6e01214208da867632b2466bb5cdd73e04c97f5\" id:\"8064dbaf85eb2543ef48d5ce19cf8f66c3df48981e09ceb8e7d92e015e29e39d\" pid:5620 exited_at:{seconds:1757550612 nanos:104094573}"
Sep 11 00:30:13.009594 systemd[1]: Started sshd@7-10.200.8.15:22-10.200.16.10:54646.service - OpenSSH per-connection server daemon (10.200.16.10:54646).
Sep 11 00:30:13.684003 sshd[5637]: Accepted publickey for core from 10.200.16.10 port 54646 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA
Sep 11 00:30:13.684794 sshd-session[5637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:30:13.690265 systemd-logind[1697]: New session 10 of user core.
Sep 11 00:30:13.694567 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 11 00:30:14.197640 sshd[5639]: Connection closed by 10.200.16.10 port 54646
Sep 11 00:30:14.198222 sshd-session[5637]: pam_unix(sshd:session): session closed for user core
Sep 11 00:30:14.201084 systemd[1]: sshd@7-10.200.8.15:22-10.200.16.10:54646.service: Deactivated successfully.
Sep 11 00:30:14.203036 systemd[1]: session-10.scope: Deactivated successfully.
Sep 11 00:30:14.203840 systemd-logind[1697]: Session 10 logged out. Waiting for processes to exit.
Sep 11 00:30:14.206076 systemd-logind[1697]: Removed session 10.
Sep 11 00:30:17.162863 containerd[1713]: time="2025-09-11T00:30:17.162817889Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8dff6d6c166c7f358bc62bc72e45bd8e01f15aa9858b8fd516854c2c1d5a3a12\" id:\"2aaa21cf55bfb6595078295ab1269487d0476080fcd32eb5ff9fb52e75f4422e\" pid:5666 exited_at:{seconds:1757550617 nanos:162303333}"
Sep 11 00:30:17.182928 containerd[1713]: time="2025-09-11T00:30:17.182894724Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8dff6d6c166c7f358bc62bc72e45bd8e01f15aa9858b8fd516854c2c1d5a3a12\" id:\"2130e37de555384f9e53155032240e412dd17a9697ec65430e2b79d5443e515f\" pid:5684 exited_at:{seconds:1757550617 nanos:182733532}"
Sep 11 00:30:19.312803 systemd[1]: Started sshd@8-10.200.8.15:22-10.200.16.10:54658.service - OpenSSH per-connection server daemon (10.200.16.10:54658).
Sep 11 00:30:19.957469 sshd[5697]: Accepted publickey for core from 10.200.16.10 port 54658 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA
Sep 11 00:30:19.960326 sshd-session[5697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:30:19.968570 systemd-logind[1697]: New session 11 of user core.
Sep 11 00:30:19.973554 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 11 00:30:20.132474 update_engine[1698]: I20250911 00:30:20.132417 1698 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 11 00:30:20.133875 update_engine[1698]: I20250911 00:30:20.133838 1698 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 11 00:30:20.134158 update_engine[1698]: I20250911 00:30:20.134140 1698 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 11 00:30:20.151794 update_engine[1698]: E20250911 00:30:20.151609 1698 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 11 00:30:20.151794 update_engine[1698]: I20250911 00:30:20.151669 1698 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Sep 11 00:30:20.520962 sshd[5699]: Connection closed by 10.200.16.10 port 54658
Sep 11 00:30:20.522507 sshd-session[5697]: pam_unix(sshd:session): session closed for user core
Sep 11 00:30:20.526985 systemd-logind[1697]: Session 11 logged out. Waiting for processes to exit.
Sep 11 00:30:20.529492 systemd[1]: sshd@8-10.200.8.15:22-10.200.16.10:54658.service: Deactivated successfully.
Sep 11 00:30:20.532862 systemd[1]: session-11.scope: Deactivated successfully.
Sep 11 00:30:20.536422 systemd-logind[1697]: Removed session 11.
Sep 11 00:30:22.105942 kubelet[3134]: I0911 00:30:22.105453 3134 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 11 00:30:25.637696 systemd[1]: Started sshd@9-10.200.8.15:22-10.200.16.10:50412.service - OpenSSH per-connection server daemon (10.200.16.10:50412).
Sep 11 00:30:26.289082 sshd[5714]: Accepted publickey for core from 10.200.16.10 port 50412 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA
Sep 11 00:30:26.290238 sshd-session[5714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:30:26.295738 systemd-logind[1697]: New session 12 of user core.
Sep 11 00:30:26.303932 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 11 00:30:26.824698 sshd[5717]: Connection closed by 10.200.16.10 port 50412
Sep 11 00:30:26.825554 sshd-session[5714]: pam_unix(sshd:session): session closed for user core
Sep 11 00:30:26.829956 systemd-logind[1697]: Session 12 logged out. Waiting for processes to exit.
Sep 11 00:30:26.831993 systemd[1]: sshd@9-10.200.8.15:22-10.200.16.10:50412.service: Deactivated successfully.
Sep 11 00:30:26.835332 systemd[1]: session-12.scope: Deactivated successfully.
Sep 11 00:30:26.840464 systemd-logind[1697]: Removed session 12.
Sep 11 00:30:26.937843 systemd[1]: Started sshd@10-10.200.8.15:22-10.200.16.10:50420.service - OpenSSH per-connection server daemon (10.200.16.10:50420).
Sep 11 00:30:27.581610 sshd[5732]: Accepted publickey for core from 10.200.16.10 port 50420 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA
Sep 11 00:30:27.582966 sshd-session[5732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:30:27.591851 systemd-logind[1697]: New session 13 of user core.
Sep 11 00:30:27.596555 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 11 00:30:28.173170 sshd[5734]: Connection closed by 10.200.16.10 port 50420
Sep 11 00:30:28.174655 sshd-session[5732]: pam_unix(sshd:session): session closed for user core
Sep 11 00:30:28.179460 systemd-logind[1697]: Session 13 logged out. Waiting for processes to exit.
Sep 11 00:30:28.180667 systemd[1]: sshd@10-10.200.8.15:22-10.200.16.10:50420.service: Deactivated successfully.
Sep 11 00:30:28.183083 systemd[1]: session-13.scope: Deactivated successfully.
Sep 11 00:30:28.186435 systemd-logind[1697]: Removed session 13.
Sep 11 00:30:28.291431 systemd[1]: Started sshd@11-10.200.8.15:22-10.200.16.10:50428.service - OpenSSH per-connection server daemon (10.200.16.10:50428).
Sep 11 00:30:28.935842 sshd[5744]: Accepted publickey for core from 10.200.16.10 port 50428 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA
Sep 11 00:30:28.936926 sshd-session[5744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:30:28.942528 systemd-logind[1697]: New session 14 of user core.
Sep 11 00:30:28.948616 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 11 00:30:29.471408 sshd[5746]: Connection closed by 10.200.16.10 port 50428
Sep 11 00:30:29.471909 sshd-session[5744]: pam_unix(sshd:session): session closed for user core
Sep 11 00:30:29.477035 systemd-logind[1697]: Session 14 logged out. Waiting for processes to exit.
Sep 11 00:30:29.478094 systemd[1]: sshd@11-10.200.8.15:22-10.200.16.10:50428.service: Deactivated successfully.
Sep 11 00:30:29.482101 systemd[1]: session-14.scope: Deactivated successfully.
Sep 11 00:30:29.484416 systemd-logind[1697]: Removed session 14.
Sep 11 00:30:30.133123 update_engine[1698]: I20250911 00:30:30.133066 1698 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 11 00:30:30.133529 update_engine[1698]: I20250911 00:30:30.133307 1698 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 11 00:30:30.133612 update_engine[1698]: I20250911 00:30:30.133574 1698 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 11 00:30:30.174069 update_engine[1698]: E20250911 00:30:30.173374 1698 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 11 00:30:30.174069 update_engine[1698]: I20250911 00:30:30.173468 1698 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Sep 11 00:30:30.174069 update_engine[1698]: I20250911 00:30:30.173476 1698 omaha_request_action.cc:617] Omaha request response:
Sep 11 00:30:30.174069 update_engine[1698]: E20250911 00:30:30.173552 1698 omaha_request_action.cc:636] Omaha request network transfer failed.
Sep 11 00:30:30.174069 update_engine[1698]: I20250911 00:30:30.173571 1698 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Sep 11 00:30:30.174069 update_engine[1698]: I20250911 00:30:30.173576 1698 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Sep 11 00:30:30.174069 update_engine[1698]: I20250911 00:30:30.173581 1698 update_attempter.cc:306] Processing Done.
Sep 11 00:30:30.174069 update_engine[1698]: E20250911 00:30:30.173597 1698 update_attempter.cc:619] Update failed.
Sep 11 00:30:30.174069 update_engine[1698]: I20250911 00:30:30.173602 1698 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Sep 11 00:30:30.174069 update_engine[1698]: I20250911 00:30:30.173609 1698 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Sep 11 00:30:30.174069 update_engine[1698]: I20250911 00:30:30.173614 1698 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Sep 11 00:30:30.174069 update_engine[1698]: I20250911 00:30:30.173686 1698 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Sep 11 00:30:30.174069 update_engine[1698]: I20250911 00:30:30.173709 1698 omaha_request_action.cc:271] Posting an Omaha request to disabled
Sep 11 00:30:30.174069 update_engine[1698]: I20250911 00:30:30.173713 1698 omaha_request_action.cc:272] Request:
Sep 11 00:30:30.174069 update_engine[1698]:
Sep 11 00:30:30.174069 update_engine[1698]:
Sep 11 00:30:30.174522 update_engine[1698]:
Sep 11 00:30:30.174522 update_engine[1698]:
Sep 11 00:30:30.174522 update_engine[1698]:
Sep 11 00:30:30.174522 update_engine[1698]:
Sep 11 00:30:30.174522 update_engine[1698]: I20250911 00:30:30.173717 1698 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 11 00:30:30.174522 update_engine[1698]: I20250911 00:30:30.173840 1698 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 11 00:30:30.174522 update_engine[1698]: I20250911 00:30:30.174024 1698 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 11 00:30:30.175565 locksmithd[1737]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Sep 11 00:30:30.202726 update_engine[1698]: E20250911 00:30:30.202687 1698 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 11 00:30:30.202811 update_engine[1698]: I20250911 00:30:30.202737 1698 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Sep 11 00:30:30.202811 update_engine[1698]: I20250911 00:30:30.202744 1698 omaha_request_action.cc:617] Omaha request response:
Sep 11 00:30:30.202811 update_engine[1698]: I20250911 00:30:30.202750 1698 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Sep 11 00:30:30.202811 update_engine[1698]: I20250911 00:30:30.202754 1698 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Sep 11 00:30:30.202811 update_engine[1698]: I20250911 00:30:30.202757 1698 update_attempter.cc:306] Processing Done.
Sep 11 00:30:30.202811 update_engine[1698]: I20250911 00:30:30.202762 1698 update_attempter.cc:310] Error event sent.
Sep 11 00:30:30.202811 update_engine[1698]: I20250911 00:30:30.202771 1698 update_check_scheduler.cc:74] Next update check in 40m2s
Sep 11 00:30:30.203115 locksmithd[1737]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Sep 11 00:30:34.603530 systemd[1]: Started sshd@12-10.200.8.15:22-10.200.16.10:45760.service - OpenSSH per-connection server daemon (10.200.16.10:45760).
Sep 11 00:30:35.244881 sshd[5762]: Accepted publickey for core from 10.200.16.10 port 45760 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA
Sep 11 00:30:35.246048 sshd-session[5762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:30:35.250541 systemd-logind[1697]: New session 15 of user core.
Sep 11 00:30:35.254538 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 11 00:30:35.754098 sshd[5764]: Connection closed by 10.200.16.10 port 45760
Sep 11 00:30:35.754633 sshd-session[5762]: pam_unix(sshd:session): session closed for user core
Sep 11 00:30:35.757492 systemd[1]: sshd@12-10.200.8.15:22-10.200.16.10:45760.service: Deactivated successfully.
Sep 11 00:30:35.759340 systemd[1]: session-15.scope: Deactivated successfully.
Sep 11 00:30:35.760563 systemd-logind[1697]: Session 15 logged out. Waiting for processes to exit.
Sep 11 00:30:35.762258 systemd-logind[1697]: Removed session 15.
Sep 11 00:30:35.871491 systemd[1]: Started sshd@13-10.200.8.15:22-10.200.16.10:45772.service - OpenSSH per-connection server daemon (10.200.16.10:45772).
Sep 11 00:30:36.516490 sshd[5776]: Accepted publickey for core from 10.200.16.10 port 45772 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA
Sep 11 00:30:36.517593 sshd-session[5776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:30:36.521465 systemd-logind[1697]: New session 16 of user core.
Sep 11 00:30:36.526536 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 11 00:30:36.655209 containerd[1713]: time="2025-09-11T00:30:36.655173148Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7d92f42a5ee2aacc73039347d07b3054ffadae24ffbdaf2a0b5356e74dc33b6d\" id:\"456e9e20de9ce00adfa357e24f1cc1733a9244869cfcc8ade7360016cb4d6ebe\" pid:5792 exited_at:{seconds:1757550636 nanos:654873374}"
Sep 11 00:30:37.075223 sshd[5778]: Connection closed by 10.200.16.10 port 45772
Sep 11 00:30:37.075938 sshd-session[5776]: pam_unix(sshd:session): session closed for user core
Sep 11 00:30:37.078675 systemd[1]: sshd@13-10.200.8.15:22-10.200.16.10:45772.service: Deactivated successfully.
Sep 11 00:30:37.081239 systemd[1]: session-16.scope: Deactivated successfully.
Sep 11 00:30:37.082163 systemd-logind[1697]: Session 16 logged out. Waiting for processes to exit.
Sep 11 00:30:37.083431 systemd-logind[1697]: Removed session 16.
Sep 11 00:30:37.188511 systemd[1]: Started sshd@14-10.200.8.15:22-10.200.16.10:45776.service - OpenSSH per-connection server daemon (10.200.16.10:45776).
Sep 11 00:30:37.831130 sshd[5812]: Accepted publickey for core from 10.200.16.10 port 45776 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA
Sep 11 00:30:37.832257 sshd-session[5812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:30:37.836832 systemd-logind[1697]: New session 17 of user core.
Sep 11 00:30:37.840529 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 11 00:30:38.670621 sshd[5814]: Connection closed by 10.200.16.10 port 45776
Sep 11 00:30:38.672420 sshd-session[5812]: pam_unix(sshd:session): session closed for user core
Sep 11 00:30:38.677649 systemd[1]: sshd@14-10.200.8.15:22-10.200.16.10:45776.service: Deactivated successfully.
Sep 11 00:30:38.680177 systemd[1]: session-17.scope: Deactivated successfully.
Sep 11 00:30:38.681865 systemd-logind[1697]: Session 17 logged out. Waiting for processes to exit.
Sep 11 00:30:38.682907 systemd-logind[1697]: Removed session 17.
Sep 11 00:30:38.784791 systemd[1]: Started sshd@15-10.200.8.15:22-10.200.16.10:45790.service - OpenSSH per-connection server daemon (10.200.16.10:45790).
Sep 11 00:30:39.425082 sshd[5832]: Accepted publickey for core from 10.200.16.10 port 45790 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA
Sep 11 00:30:39.426231 sshd-session[5832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:30:39.431090 systemd-logind[1697]: New session 18 of user core.
Sep 11 00:30:39.436552 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 11 00:30:40.066804 sshd[5834]: Connection closed by 10.200.16.10 port 45790
Sep 11 00:30:40.067355 sshd-session[5832]: pam_unix(sshd:session): session closed for user core
Sep 11 00:30:40.070667 systemd[1]: sshd@15-10.200.8.15:22-10.200.16.10:45790.service: Deactivated successfully.
Sep 11 00:30:40.072770 systemd[1]: session-18.scope: Deactivated successfully.
Sep 11 00:30:40.073673 systemd-logind[1697]: Session 18 logged out. Waiting for processes to exit.
Sep 11 00:30:40.074959 systemd-logind[1697]: Removed session 18.
Sep 11 00:30:40.179828 systemd[1]: Started sshd@16-10.200.8.15:22-10.200.16.10:54742.service - OpenSSH per-connection server daemon (10.200.16.10:54742).
Sep 11 00:30:40.850275 sshd[5844]: Accepted publickey for core from 10.200.16.10 port 54742 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA
Sep 11 00:30:40.853261 sshd-session[5844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:30:40.859593 systemd-logind[1697]: New session 19 of user core.
Sep 11 00:30:40.865750 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 11 00:30:41.403099 sshd[5846]: Connection closed by 10.200.16.10 port 54742
Sep 11 00:30:41.404591 sshd-session[5844]: pam_unix(sshd:session): session closed for user core
Sep 11 00:30:41.408041 systemd-logind[1697]: Session 19 logged out. Waiting for processes to exit.
Sep 11 00:30:41.409897 systemd[1]: sshd@16-10.200.8.15:22-10.200.16.10:54742.service: Deactivated successfully.
Sep 11 00:30:41.412490 systemd[1]: session-19.scope: Deactivated successfully.
Sep 11 00:30:41.415125 systemd-logind[1697]: Removed session 19.
Sep 11 00:30:42.098658 containerd[1713]: time="2025-09-11T00:30:42.098556348Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b94b07646c23490a03c16a5cd6e01214208da867632b2466bb5cdd73e04c97f5\" id:\"4c7190ade336194468cd5c0bdacde29505d82cf373b735471d05c8f7fed05c6d\" pid:5869 exited_at:{seconds:1757550642 nanos:97798990}"
Sep 11 00:30:46.526615 systemd[1]: Started sshd@17-10.200.8.15:22-10.200.16.10:54746.service - OpenSSH per-connection server daemon (10.200.16.10:54746).
Sep 11 00:30:47.083426 containerd[1713]: time="2025-09-11T00:30:47.083320947Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8dff6d6c166c7f358bc62bc72e45bd8e01f15aa9858b8fd516854c2c1d5a3a12\" id:\"a16ba14e67d1869af1c789a2ce499ac53da2d7b08a6baf49ee287915760093e1\" pid:5895 exited_at:{seconds:1757550647 nanos:82893515}"
Sep 11 00:30:47.182592 sshd[5882]: Accepted publickey for core from 10.200.16.10 port 54746 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA
Sep 11 00:30:47.183987 sshd-session[5882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:30:47.189210 systemd-logind[1697]: New session 20 of user core.
Sep 11 00:30:47.195660 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 11 00:30:47.683351 sshd[5904]: Connection closed by 10.200.16.10 port 54746
Sep 11 00:30:47.683862 sshd-session[5882]: pam_unix(sshd:session): session closed for user core
Sep 11 00:30:47.687054 systemd[1]: sshd@17-10.200.8.15:22-10.200.16.10:54746.service: Deactivated successfully.
Sep 11 00:30:47.688834 systemd[1]: session-20.scope: Deactivated successfully.
Sep 11 00:30:47.690077 systemd-logind[1697]: Session 20 logged out. Waiting for processes to exit.
Sep 11 00:30:47.691846 systemd-logind[1697]: Removed session 20.
Sep 11 00:30:52.801948 systemd[1]: Started sshd@18-10.200.8.15:22-10.200.16.10:59048.service - OpenSSH per-connection server daemon (10.200.16.10:59048).
Sep 11 00:30:53.450150 sshd[5924]: Accepted publickey for core from 10.200.16.10 port 59048 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA
Sep 11 00:30:53.451996 sshd-session[5924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:30:53.456936 systemd-logind[1697]: New session 21 of user core.
Sep 11 00:30:53.463861 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 11 00:30:53.959576 sshd[5926]: Connection closed by 10.200.16.10 port 59048
Sep 11 00:30:53.960565 sshd-session[5924]: pam_unix(sshd:session): session closed for user core
Sep 11 00:30:53.963837 systemd-logind[1697]: Session 21 logged out. Waiting for processes to exit.
Sep 11 00:30:53.963975 systemd[1]: sshd@18-10.200.8.15:22-10.200.16.10:59048.service: Deactivated successfully.
Sep 11 00:30:53.965853 systemd[1]: session-21.scope: Deactivated successfully.
Sep 11 00:30:53.967229 systemd-logind[1697]: Removed session 21.
Sep 11 00:30:59.078211 systemd[1]: Started sshd@19-10.200.8.15:22-10.200.16.10:59064.service - OpenSSH per-connection server daemon (10.200.16.10:59064).
Sep 11 00:30:59.725012 sshd[5942]: Accepted publickey for core from 10.200.16.10 port 59064 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA
Sep 11 00:30:59.726732 sshd-session[5942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:30:59.731887 systemd-logind[1697]: New session 22 of user core.
Sep 11 00:30:59.739678 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 11 00:31:00.020488 containerd[1713]: time="2025-09-11T00:31:00.019699694Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b94b07646c23490a03c16a5cd6e01214208da867632b2466bb5cdd73e04c97f5\" id:\"6d84679ec79aaa842e9e5f8f70ae2962dd6eb768d1525e54ad93b89c0a9009c8\" pid:5958 exited_at:{seconds:1757550660 nanos:18464372}"
Sep 11 00:31:00.245101 sshd[5944]: Connection closed by 10.200.16.10 port 59064
Sep 11 00:31:00.246600 sshd-session[5942]: pam_unix(sshd:session): session closed for user core
Sep 11 00:31:00.250125 systemd-logind[1697]: Session 22 logged out. Waiting for processes to exit.
Sep 11 00:31:00.250413 systemd[1]: sshd@19-10.200.8.15:22-10.200.16.10:59064.service: Deactivated successfully.
Sep 11 00:31:00.252716 systemd[1]: session-22.scope: Deactivated successfully.
Sep 11 00:31:00.255470 systemd-logind[1697]: Removed session 22.
Sep 11 00:31:05.360850 systemd[1]: Started sshd@20-10.200.8.15:22-10.200.16.10:56986.service - OpenSSH per-connection server daemon (10.200.16.10:56986).
Sep 11 00:31:06.001417 sshd[5978]: Accepted publickey for core from 10.200.16.10 port 56986 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA
Sep 11 00:31:06.002581 sshd-session[5978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:31:06.006508 systemd-logind[1697]: New session 23 of user core.
Sep 11 00:31:06.013715 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 11 00:31:06.498311 sshd[5980]: Connection closed by 10.200.16.10 port 56986
Sep 11 00:31:06.499568 sshd-session[5978]: pam_unix(sshd:session): session closed for user core
Sep 11 00:31:06.502774 systemd-logind[1697]: Session 23 logged out. Waiting for processes to exit.
Sep 11 00:31:06.503040 systemd[1]: sshd@20-10.200.8.15:22-10.200.16.10:56986.service: Deactivated successfully.
Sep 11 00:31:06.505170 systemd[1]: session-23.scope: Deactivated successfully.
Sep 11 00:31:06.507022 systemd-logind[1697]: Removed session 23.
Sep 11 00:31:06.653245 containerd[1713]: time="2025-09-11T00:31:06.653161357Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7d92f42a5ee2aacc73039347d07b3054ffadae24ffbdaf2a0b5356e74dc33b6d\" id:\"012a9b0c0e51cf7ff5391377bc5ef3921034df8099312a4e671f976411ef33c5\" pid:6003 exited_at:{seconds:1757550666 nanos:652915212}"
Sep 11 00:31:11.612641 systemd[1]: Started sshd@21-10.200.8.15:22-10.200.16.10:57248.service - OpenSSH per-connection server daemon (10.200.16.10:57248).
Sep 11 00:31:12.075238 containerd[1713]: time="2025-09-11T00:31:12.075189909Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b94b07646c23490a03c16a5cd6e01214208da867632b2466bb5cdd73e04c97f5\" id:\"5a95a691cb94bb6fce91ca7d9342a8bf1a8a2aeff34f7fbe864812323aa4a0a2\" pid:6051 exited_at:{seconds:1757550672 nanos:74942120}"
Sep 11 00:31:12.259234 sshd[6037]: Accepted publickey for core from 10.200.16.10 port 57248 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA
Sep 11 00:31:12.260331 sshd-session[6037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:31:12.264447 systemd-logind[1697]: New session 24 of user core.
Sep 11 00:31:12.273531 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 11 00:31:12.765440 sshd[6061]: Connection closed by 10.200.16.10 port 57248
Sep 11 00:31:12.765972 sshd-session[6037]: pam_unix(sshd:session): session closed for user core
Sep 11 00:31:12.769261 systemd[1]: sshd@21-10.200.8.15:22-10.200.16.10:57248.service: Deactivated successfully.
Sep 11 00:31:12.771184 systemd[1]: session-24.scope: Deactivated successfully.
Sep 11 00:31:12.772097 systemd-logind[1697]: Session 24 logged out. Waiting for processes to exit.
Sep 11 00:31:12.773854 systemd-logind[1697]: Removed session 24.
Sep 11 00:31:17.072823 containerd[1713]: time="2025-09-11T00:31:17.072779952Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8dff6d6c166c7f358bc62bc72e45bd8e01f15aa9858b8fd516854c2c1d5a3a12\" id:\"f94bd053e953780b26027c8ca571a4a0b22cc1160ce0def2615f729c7edcf7d3\" pid:6085 exited_at:{seconds:1757550677 nanos:72517085}"
Sep 11 00:31:17.126472 containerd[1713]: time="2025-09-11T00:31:17.126433626Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8dff6d6c166c7f358bc62bc72e45bd8e01f15aa9858b8fd516854c2c1d5a3a12\" id:\"76b9b051d6d5031f1f9c9a9032793e3e5b8a62bb6c054dc21900f3a32c805c88\" pid:6106 exited_at:{seconds:1757550677 nanos:126188863}"