Sep 16 04:56:45.935039 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Sep 16 03:05:42 -00 2025
Sep 16 04:56:45.935064 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=0b876f86a632750e9937176808a48c2452d5168964273bcfc3c72f2a26140c06
Sep 16 04:56:45.935073 kernel: BIOS-provided physical RAM map:
Sep 16 04:56:45.935080 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 16 04:56:45.935085 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Sep 16 04:56:45.935091 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable
Sep 16 04:56:45.935098 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved
Sep 16 04:56:45.935105 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable
Sep 16 04:56:45.935111 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved
Sep 16 04:56:45.935117 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Sep 16 04:56:45.935123 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Sep 16 04:56:45.935129 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Sep 16 04:56:45.935135 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Sep 16 04:56:45.935141 kernel: printk: legacy bootconsole [earlyser0] enabled
Sep 16 04:56:45.935149 kernel: NX (Execute Disable) protection: active
Sep 16 04:56:45.935156 kernel: APIC: Static calls initialized
Sep 16 04:56:45.935162 kernel: efi: EFI v2.7 by Microsoft
Sep 16 04:56:45.935168 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3ead5518 RNG=0x3ffd2018
Sep 16 04:56:45.935174 kernel: random: crng init done
Sep 16 04:56:45.935181 kernel: secureboot: Secure boot disabled
Sep 16 04:56:45.935187 kernel: SMBIOS 3.1.0 present.
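The mount.usr=/dev/mapper/usr, verity.usr=PARTUUID=..., and verity.usrhash=... parameters in the command line above drive Flatcar's read-only /usr: the initrd opens the USR-A partition as a dm-verity target whose root hash is pinned by verity.usrhash, so offline tampering with /usr surfaces as read errors instead of silently bad data. A minimal sketch of the equivalent manual step, assuming the veritysetup CLI from cryptsetup is available; treating the same partition as both data and hash device is a simplification here (Flatcar stores the hash tree at an offset on the partition, and the offset flag is not visible in this log):

```python
import subprocess

# Device path and root hash taken from the kernel command line above.
DATA_DEV = "/dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132"
ROOT_HASH = "0b876f86a632750e9937176808a48c2452d5168964273bcfc3c72f2a26140c06"

# veritysetup open <data_device> <name> <hash_device> <root_hash>
# creates /dev/mapper/usr; every read is verified against the hash tree
# rooted at ROOT_HASH. Using DATA_DEV as the hash device is an assumption
# for illustration only (a real setup would pass the hash area/offset).
subprocess.run(
    ["veritysetup", "open", DATA_DEV, "usr", DATA_DEV, ROOT_HASH],
    check=True,
)
```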
Sep 16 04:56:45.935194 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/28/2025
Sep 16 04:56:45.935200 kernel: DMI: Memory slots populated: 2/2
Sep 16 04:56:45.935207 kernel: Hypervisor detected: Microsoft Hyper-V
Sep 16 04:56:45.935214 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2
Sep 16 04:56:45.935220 kernel: Hyper-V: Nested features: 0x3e0101
Sep 16 04:56:45.935226 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Sep 16 04:56:45.935233 kernel: Hyper-V: Using hypercall for remote TLB flush
Sep 16 04:56:45.935239 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Sep 16 04:56:45.935245 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Sep 16 04:56:45.935462 kernel: tsc: Detected 2299.999 MHz processor
Sep 16 04:56:45.935469 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 16 04:56:45.935477 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 16 04:56:45.935483 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000
Sep 16 04:56:45.935492 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 16 04:56:45.935499 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 16 04:56:45.935505 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved
Sep 16 04:56:45.935512 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000
Sep 16 04:56:45.935518 kernel: Using GB pages for direct mapping
Sep 16 04:56:45.935525 kernel: ACPI: Early table checksum verification disabled
Sep 16 04:56:45.935535 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Sep 16 04:56:45.935543 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 16 04:56:45.935549 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 16 04:56:45.935556 kernel: ACPI: DSDT 0x000000003FFD6000 01E27A (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Sep 16 04:56:45.935563 kernel: ACPI: FACS 0x000000003FFFE000 000040
Sep 16 04:56:45.935570 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 16 04:56:45.935577 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 16 04:56:45.935585 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 16 04:56:45.935592 kernel: ACPI: APIC 0x000000003FFD5000 000052 (v05 HVLITE HVLITETB 00000000 MSHV 00000000)
Sep 16 04:56:45.935598 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000)
Sep 16 04:56:45.935605 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 16 04:56:45.935612 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Sep 16 04:56:45.935619 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4279]
Sep 16 04:56:45.935625 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Sep 16 04:56:45.935632 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Sep 16 04:56:45.935639 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Sep 16 04:56:45.935647 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Sep 16 04:56:45.935654 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5051]
Sep 16 04:56:45.935660 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f]
Sep 16 04:56:45.935667 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Sep 16 04:56:45.935674 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Sep 16 04:56:45.935680 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff]
Sep 16 04:56:45.935687 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff]
Sep 16 04:56:45.935694 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff]
Sep 16 04:56:45.935701 kernel: Zone ranges:
Sep 16 04:56:45.935708 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 16 04:56:45.935715 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Sep 16 04:56:45.935722 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Sep 16 04:56:45.935729 kernel: Device empty
Sep 16 04:56:45.935735 kernel: Movable zone start for each node
Sep 16 04:56:45.935742 kernel: Early memory node ranges
Sep 16 04:56:45.935749 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 16 04:56:45.935755 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff]
Sep 16 04:56:45.935762 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff]
Sep 16 04:56:45.935770 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Sep 16 04:56:45.935777 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Sep 16 04:56:45.935783 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Sep 16 04:56:45.935790 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 16 04:56:45.935797 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 16 04:56:45.935804 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Sep 16 04:56:45.935811 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Sep 16 04:56:45.935817 kernel: ACPI: PM-Timer IO Port: 0x408
Sep 16 04:56:45.935824 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 16 04:56:45.935833 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 16 04:56:45.935839 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 16 04:56:45.935846 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Sep 16 04:56:45.935852 kernel: TSC deadline timer available
Sep 16 04:56:45.935859 kernel: CPU topo: Max. logical packages: 1
Sep 16 04:56:45.935866 kernel: CPU topo: Max. logical dies: 1
Sep 16 04:56:45.935872 kernel: CPU topo: Max. dies per package: 1
Sep 16 04:56:45.935879 kernel: CPU topo: Max. threads per core: 2
Sep 16 04:56:45.935886 kernel: CPU topo: Num. cores per package: 1
Sep 16 04:56:45.935894 kernel: CPU topo: Num. threads per package: 2
Sep 16 04:56:45.935901 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Sep 16 04:56:45.935907 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Sep 16 04:56:45.935914 kernel: Booting paravirtualized kernel on Hyper-V
Sep 16 04:56:45.935921 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 16 04:56:45.935928 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 16 04:56:45.935934 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Sep 16 04:56:45.935941 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Sep 16 04:56:45.935948 kernel: pcpu-alloc: [0] 0 1
Sep 16 04:56:45.935956 kernel: Hyper-V: PV spinlocks enabled
Sep 16 04:56:45.935963 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 16 04:56:45.935971 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=0b876f86a632750e9937176808a48c2452d5168964273bcfc3c72f2a26140c06
Sep 16 04:56:45.935978 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 16 04:56:45.935984 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Sep 16 04:56:45.935991 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 16 04:56:45.935998 kernel: Fallback order for Node 0: 0
Sep 16 04:56:45.936005 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807
Sep 16 04:56:45.936013 kernel: Policy zone: Normal
Sep 16 04:56:45.936020 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 16 04:56:45.936027 kernel: software IO TLB: area num 2.
Sep 16 04:56:45.936034 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 16 04:56:45.936040 kernel: ftrace: allocating 40125 entries in 157 pages
Sep 16 04:56:45.936047 kernel: ftrace: allocated 157 pages with 5 groups
Sep 16 04:56:45.936054 kernel: Dynamic Preempt: voluntary
Sep 16 04:56:45.936061 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 16 04:56:45.936068 kernel: rcu: RCU event tracing is enabled.
Sep 16 04:56:45.936082 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 16 04:56:45.936089 kernel: Trampoline variant of Tasks RCU enabled.
Sep 16 04:56:45.936097 kernel: Rude variant of Tasks RCU enabled.
Sep 16 04:56:45.936106 kernel: Tracing variant of Tasks RCU enabled.
Sep 16 04:56:45.936113 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 16 04:56:45.936120 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 16 04:56:45.936127 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 16 04:56:45.936135 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 16 04:56:45.936142 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 16 04:56:45.936150 kernel: Using NULL legacy PIC
Sep 16 04:56:45.936159 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Sep 16 04:56:45.936166 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 16 04:56:45.936173 kernel: Console: colour dummy device 80x25
Sep 16 04:56:45.936181 kernel: printk: legacy console [tty1] enabled
Sep 16 04:56:45.936188 kernel: printk: legacy console [ttyS0] enabled
Sep 16 04:56:45.936195 kernel: printk: legacy bootconsole [earlyser0] disabled
Sep 16 04:56:45.936202 kernel: ACPI: Core revision 20240827
Sep 16 04:56:45.936210 kernel: Failed to register legacy timer interrupt
Sep 16 04:56:45.936218 kernel: APIC: Switch to symmetric I/O mode setup
Sep 16 04:56:45.936225 kernel: x2apic enabled
Sep 16 04:56:45.936232 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 16 04:56:45.936239 kernel: Hyper-V: Host Build 10.0.26100.1293-1-0
Sep 16 04:56:45.936247 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Sep 16 04:56:45.936294 kernel: Hyper-V: Disabling IBT because of Hyper-V bug
Sep 16 04:56:45.936301 kernel: Hyper-V: Using IPI hypercalls
Sep 16 04:56:45.936309 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Sep 16 04:56:45.936318 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Sep 16 04:56:45.936326 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Sep 16 04:56:45.936334 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Sep 16 04:56:45.936341 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Sep 16 04:56:45.936349 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Sep 16 04:56:45.936357 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns
Sep 16 04:56:45.936365 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4599.99 BogoMIPS (lpj=2299999)
Sep 16 04:56:45.936373 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 16 04:56:45.936381 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Sep 16 04:56:45.936389 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Sep 16 04:56:45.936397 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 16 04:56:45.936405 kernel: Spectre V2 : Mitigation: Retpolines
Sep 16 04:56:45.936412 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 16 04:56:45.936420 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Sep 16 04:56:45.936428 kernel: RETBleed: Vulnerable
Sep 16 04:56:45.936436 kernel: Speculative Store Bypass: Vulnerable
Sep 16 04:56:45.936443 kernel: active return thunk: its_return_thunk
Sep 16 04:56:45.936451 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 16 04:56:45.936459 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 16 04:56:45.936466 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 16 04:56:45.936475 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 16 04:56:45.936483 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Sep 16 04:56:45.936490 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Sep 16 04:56:45.936497 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Sep 16 04:56:45.936504 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers'
Sep 16 04:56:45.936512 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config'
Sep 16 04:56:45.936519 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data'
Sep 16 04:56:45.936526 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 16 04:56:45.936533 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Sep 16 04:56:45.936540 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Sep 16 04:56:45.936548 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Sep 16 04:56:45.936555 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16
Sep 16 04:56:45.936562 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64
Sep 16 04:56:45.936569 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192
Sep 16 04:56:45.936576 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format.
Sep 16 04:56:45.936584 kernel: Freeing SMP alternatives memory: 32K
Sep 16 04:56:45.936592 kernel: pid_max: default: 32768 minimum: 301
Sep 16 04:56:45.936599 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 16 04:56:45.936607 kernel: landlock: Up and running.
Sep 16 04:56:45.936614 kernel: SELinux: Initializing.
Sep 16 04:56:45.936622 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 16 04:56:45.936630 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 16 04:56:45.936639 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2)
Sep 16 04:56:45.936647 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only.
Sep 16 04:56:45.936655 kernel: signal: max sigframe size: 11952
Sep 16 04:56:45.936662 kernel: rcu: Hierarchical SRCU implementation.
Sep 16 04:56:45.936670 kernel: rcu: Max phase no-delay instances is 400.
Sep 16 04:56:45.936677 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 16 04:56:45.936685 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 16 04:56:45.936693 kernel: smp: Bringing up secondary CPUs ...
Sep 16 04:56:45.936701 kernel: smpboot: x86: Booting SMP configuration:
Sep 16 04:56:45.936710 kernel: .... node #0, CPUs: #1
Sep 16 04:56:45.936718 kernel: smp: Brought up 1 node, 2 CPUs
Sep 16 04:56:45.936726 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Sep 16 04:56:45.936734 kernel: Memory: 8077028K/8383228K available (14336K kernel code, 2432K rwdata, 9992K rodata, 54096K init, 2868K bss, 299992K reserved, 0K cma-reserved)
Sep 16 04:56:45.936742 kernel: devtmpfs: initialized
Sep 16 04:56:45.936750 kernel: x86/mm: Memory block size: 128MB
Sep 16 04:56:45.936758 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Sep 16 04:56:45.936766 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 16 04:56:45.936774 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 16 04:56:45.936783 kernel: pinctrl core: initialized pinctrl subsystem
Sep 16 04:56:45.936791 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 16 04:56:45.936799 kernel: audit: initializing netlink subsys (disabled)
Sep 16 04:56:45.936807 kernel: audit: type=2000 audit(1757998603.028:1): state=initialized audit_enabled=0 res=1
Sep 16 04:56:45.936815 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 16 04:56:45.936823 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 16 04:56:45.936831 kernel: cpuidle: using governor menu
Sep 16 04:56:45.936838 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 16 04:56:45.936846 kernel: dca service started, version 1.12.1
Sep 16 04:56:45.936855 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff]
Sep 16 04:56:45.936863 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Sep 16 04:56:45.936871 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 16 04:56:45.936879 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 16 04:56:45.936887 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 16 04:56:45.936894 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 16 04:56:45.936902 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 16 04:56:45.936910 kernel: ACPI: Added _OSI(Module Device)
Sep 16 04:56:45.936918 kernel: ACPI: Added _OSI(Processor Device)
Sep 16 04:56:45.936927 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 16 04:56:45.936935 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 16 04:56:45.936943 kernel: ACPI: Interpreter enabled
Sep 16 04:56:45.936950 kernel: ACPI: PM: (supports S0 S5)
Sep 16 04:56:45.936958 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 16 04:56:45.936966 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 16 04:56:45.936974 kernel: PCI: Ignoring E820 reservations for host bridge windows
Sep 16 04:56:45.936981 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Sep 16 04:56:45.936989 kernel: iommu: Default domain type: Translated
Sep 16 04:56:45.936998 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 16 04:56:45.937006 kernel: efivars: Registered efivars operations
Sep 16 04:56:45.937013 kernel: PCI: Using ACPI for IRQ routing
Sep 16 04:56:45.937021 kernel: PCI: System does not support PCI
Sep 16 04:56:45.937029 kernel: vgaarb: loaded
Sep 16 04:56:45.937037 kernel: clocksource: Switched to clocksource tsc-early
Sep 16 04:56:45.937045 kernel: VFS: Disk quotas dquot_6.6.0
Sep 16 04:56:45.937053 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 16 04:56:45.937060 kernel: pnp: PnP ACPI init
Sep 16 04:56:45.937070 kernel: pnp: PnP ACPI: found 3 devices
Sep 16 04:56:45.937078 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 16 04:56:45.937086 kernel: NET: Registered PF_INET protocol family
Sep 16 04:56:45.937093 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 16 04:56:45.937101 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Sep 16 04:56:45.937109 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 16 04:56:45.937117 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 16 04:56:45.937125 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Sep 16 04:56:45.937133 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Sep 16 04:56:45.937142 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 16 04:56:45.937150 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 16 04:56:45.937158 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 16 04:56:45.937166 kernel: NET: Registered PF_XDP protocol family
Sep 16 04:56:45.937173 kernel: PCI: CLS 0 bytes, default 64
Sep 16 04:56:45.937181 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Sep 16 04:56:45.937189 kernel: software IO TLB: mapped [mem 0x000000003a9d3000-0x000000003e9d3000] (64MB)
Sep 16 04:56:45.937197 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer
Sep 16 04:56:45.937205 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules
Sep 16 04:56:45.937214 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns
Sep 16 04:56:45.937221 kernel: clocksource: Switched to clocksource tsc
Sep 16 04:56:45.937229 kernel: Initialise system trusted keyrings
Sep 16 04:56:45.937237 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Sep 16 04:56:45.937245 kernel: Key type asymmetric registered
Sep 16 04:56:45.937273 kernel: Asymmetric key parser 'x509' registered
Sep 16 04:56:45.937281 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 16 04:56:45.937289 kernel: io scheduler mq-deadline registered
Sep 16 04:56:45.937297 kernel: io scheduler kyber registered
Sep 16 04:56:45.937306 kernel: io scheduler bfq registered
Sep 16 04:56:45.937314 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 16 04:56:45.937322 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 16 04:56:45.937330 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 16 04:56:45.937337 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Sep 16 04:56:45.937346 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A
Sep 16 04:56:45.937353 kernel: i8042: PNP: No PS/2 controller found.
Sep 16 04:56:45.937474 kernel: rtc_cmos 00:02: registered as rtc0
Sep 16 04:56:45.937548 kernel: rtc_cmos 00:02: setting system clock to 2025-09-16T04:56:45 UTC (1757998605)
Sep 16 04:56:45.937613 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Sep 16 04:56:45.937623 kernel: intel_pstate: Intel P-state driver initializing
Sep 16 04:56:45.937631 kernel: efifb: probing for efifb
Sep 16 04:56:45.937639 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Sep 16 04:56:45.937647 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Sep 16 04:56:45.937655 kernel: efifb: scrolling: redraw
Sep 16 04:56:45.937663 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 16 04:56:45.937672 kernel: Console: switching to colour frame buffer device 128x48
Sep 16 04:56:45.937680 kernel: fb0: EFI VGA frame buffer device
Sep 16 04:56:45.937687 kernel: pstore: Using crash dump compression: deflate
Sep 16 04:56:45.937695 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 16 04:56:45.937703 kernel: NET: Registered PF_INET6 protocol family
Sep 16 04:56:45.937710 kernel: Segment Routing with IPv6
Sep 16 04:56:45.937718 kernel: In-situ OAM (IOAM) with IPv6
Sep 16 04:56:45.937726 kernel: NET: Registered PF_PACKET protocol family
Sep 16 04:56:45.937733 kernel: Key type dns_resolver registered
Sep 16 04:56:45.937741 kernel: IPI shorthand broadcast: enabled
Sep 16 04:56:45.937750 kernel: sched_clock: Marking stable (2774003478, 83642333)->(3147843158, -290197347)
Sep 16 04:56:45.937758 kernel: registered taskstats version 1
Sep 16 04:56:45.937765 kernel: Loading compiled-in X.509 certificates
Sep 16 04:56:45.937773 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: d1d5b0d56b9b23dabf19e645632ff93bf659b3bf'
Sep 16 04:56:45.937781 kernel: Demotion targets for Node 0: null
Sep 16 04:56:45.937788 kernel: Key type .fscrypt registered
Sep 16 04:56:45.937796 kernel: Key type fscrypt-provisioning registered
Sep 16 04:56:45.937804 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 16 04:56:45.937813 kernel: ima: Allocated hash algorithm: sha1
Sep 16 04:56:45.937820 kernel: ima: No architecture policies found
Sep 16 04:56:45.937828 kernel: clk: Disabling unused clocks
Sep 16 04:56:45.937836 kernel: Warning: unable to open an initial console.
Sep 16 04:56:45.937843 kernel: Freeing unused kernel image (initmem) memory: 54096K
Sep 16 04:56:45.937851 kernel: Write protecting the kernel read-only data: 24576k
Sep 16 04:56:45.937859 kernel: Freeing unused kernel image (rodata/data gap) memory: 248K
Sep 16 04:56:45.937866 kernel: Run /init as init process
Sep 16 04:56:45.937874 kernel: with arguments:
Sep 16 04:56:45.937883 kernel: /init
Sep 16 04:56:45.937891 kernel: with environment:
Sep 16 04:56:45.937898 kernel: HOME=/
Sep 16 04:56:45.937906 kernel: TERM=linux
Sep 16 04:56:45.937913 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 16 04:56:45.937922 systemd[1]: Successfully made /usr/ read-only.
Sep 16 04:56:45.937933 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 16 04:56:45.937942 systemd[1]: Detected virtualization microsoft.
Sep 16 04:56:45.937951 systemd[1]: Detected architecture x86-64.
Sep 16 04:56:45.937959 systemd[1]: Running in initrd.
Sep 16 04:56:45.937967 systemd[1]: No hostname configured, using default hostname.
Sep 16 04:56:45.937976 systemd[1]: Hostname set to .
Sep 16 04:56:45.937984 systemd[1]: Initializing machine ID from random generator.
Sep 16 04:56:45.937992 systemd[1]: Queued start job for default target initrd.target.
Sep 16 04:56:45.938000 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 16 04:56:45.938008 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 16 04:56:45.938018 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 16 04:56:45.938026 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 16 04:56:45.938035 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 16 04:56:45.938044 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 16 04:56:45.938053 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 16 04:56:45.938061 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 16 04:56:45.938070 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 16 04:56:45.938079 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 16 04:56:45.938087 systemd[1]: Reached target paths.target - Path Units.
Sep 16 04:56:45.938095 systemd[1]: Reached target slices.target - Slice Units.
Sep 16 04:56:45.938104 systemd[1]: Reached target swap.target - Swaps.
Sep 16 04:56:45.938112 systemd[1]: Reached target timers.target - Timer Units.
Sep 16 04:56:45.938120 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 16 04:56:45.938128 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 16 04:56:45.938136 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 16 04:56:45.938146 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 16 04:56:45.938154 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 16 04:56:45.938163 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 16 04:56:45.938171 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 16 04:56:45.938179 systemd[1]: Reached target sockets.target - Socket Units.
Sep 16 04:56:45.938187 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 16 04:56:45.938196 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 16 04:56:45.938204 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 16 04:56:45.938213 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 16 04:56:45.938223 systemd[1]: Starting systemd-fsck-usr.service...
Sep 16 04:56:45.938231 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 16 04:56:45.938247 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 16 04:56:45.938614 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 16 04:56:45.938642 systemd-journald[205]: Collecting audit messages is disabled.
Sep 16 04:56:45.938666 systemd-journald[205]: Journal started
Sep 16 04:56:45.938687 systemd-journald[205]: Runtime Journal (/run/log/journal/c7e0ceda0c654c4d84119d50dddfe0fe) is 8M, max 158.9M, 150.9M free.
Sep 16 04:56:45.943400 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 16 04:56:45.943754 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 16 04:56:45.948134 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 16 04:56:45.950737 systemd-modules-load[206]: Inserted module 'overlay'
Sep 16 04:56:45.954151 systemd[1]: Finished systemd-fsck-usr.service.
Sep 16 04:56:45.960351 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 16 04:56:45.966414 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 16 04:56:45.980609 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 04:56:45.990266 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 16 04:56:45.988822 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 16 04:56:46.003137 kernel: Bridge firewalling registered
Sep 16 04:56:45.995123 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 16 04:56:45.997368 systemd-modules-load[206]: Inserted module 'br_netfilter'
Sep 16 04:56:45.998474 systemd-tmpfiles[219]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 16 04:56:45.998502 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 16 04:56:46.003264 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 16 04:56:46.010821 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 16 04:56:46.015376 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 16 04:56:46.028554 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 16 04:56:46.033841 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 16 04:56:46.037471 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 16 04:56:46.043349 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 16 04:56:46.047156 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 16 04:56:46.063407 dracut-cmdline[244]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=0b876f86a632750e9937176808a48c2452d5168964273bcfc3c72f2a26140c06
Sep 16 04:56:46.090051 systemd-resolved[245]: Positive Trust Anchors:
Sep 16 04:56:46.090065 systemd-resolved[245]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 16 04:56:46.090097 systemd-resolved[245]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 16 04:56:46.110761 systemd-resolved[245]: Defaulting to hostname 'linux'.
Sep 16 04:56:46.113808 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 16 04:56:46.118994 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 16 04:56:46.131270 kernel: SCSI subsystem initialized
Sep 16 04:56:46.138267 kernel: Loading iSCSI transport class v2.0-870.
Sep 16 04:56:46.146276 kernel: iscsi: registered transport (tcp)
Sep 16 04:56:46.161574 kernel: iscsi: registered transport (qla4xxx)
Sep 16 04:56:46.161614 kernel: QLogic iSCSI HBA Driver
Sep 16 04:56:46.173170 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 16 04:56:46.185172 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 16 04:56:46.186435 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 16 04:56:46.214661 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 16 04:56:46.218354 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 16 04:56:46.257269 kernel: raid6: avx512x4 gen() 46848 MB/s
Sep 16 04:56:46.275264 kernel: raid6: avx512x2 gen() 46750 MB/s
Sep 16 04:56:46.292261 kernel: raid6: avx512x1 gen() 29441 MB/s
Sep 16 04:56:46.310262 kernel: raid6: avx2x4 gen() 41840 MB/s
Sep 16 04:56:46.327263 kernel: raid6: avx2x2 gen() 42276 MB/s
Sep 16 04:56:46.344840 kernel: raid6: avx2x1 gen() 32750 MB/s
Sep 16 04:56:46.344854 kernel: raid6: using algorithm avx512x4 gen() 46848 MB/s
Sep 16 04:56:46.363504 kernel: raid6: .... xor() 7793 MB/s, rmw enabled
Sep 16 04:56:46.363525 kernel: raid6: using avx512x2 recovery algorithm
Sep 16 04:56:46.379265 kernel: xor: automatically using best checksumming function avx
Sep 16 04:56:46.485268 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 16 04:56:46.489027 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 16 04:56:46.491424 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 16 04:56:46.512880 systemd-udevd[454]: Using default interface naming scheme 'v255'.
Sep 16 04:56:46.516502 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 16 04:56:46.525044 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 16 04:56:46.537194 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
Sep 16 04:56:46.552932 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 16 04:56:46.555172 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 16 04:56:46.583928 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 16 04:56:46.590624 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 16 04:56:46.630270 kernel: cryptd: max_cpu_qlen set to 1000
Sep 16 04:56:46.649599 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 16 04:56:46.650034 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 04:56:46.655542 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 16 04:56:46.660220 kernel: hv_vmbus: Vmbus version:5.3
Sep 16 04:56:46.660238 kernel: AES CTR mode by8 optimization enabled
Sep 16 04:56:46.663006 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 16 04:56:46.687330 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 16 04:56:46.689230 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 04:56:46.697704 kernel: hv_vmbus: registering driver hyperv_keyboard
Sep 16 04:56:46.697737 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 16 04:56:46.697749 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 16 04:56:46.707371 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 16 04:56:46.709990 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 16 04:56:46.722295 kernel: hv_vmbus: registering driver hv_pci
Sep 16 04:56:46.724275 kernel: PTP clock support registered
Sep 16 04:56:46.732183 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 04:56:46.741283 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Sep 16 04:56:46.744322 kernel: hv_utils: Registering HyperV Utility Driver
Sep 16 04:56:46.744355 kernel: hv_vmbus: registering driver hv_utils
Sep 16 04:56:46.751081 kernel: hv_vmbus: registering driver hv_netvsc
Sep 16 04:56:46.751115 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004
Sep 16 04:56:46.756458 kernel: hv_vmbus: registering driver hv_storvsc
Sep 16 04:56:46.756492 kernel: hv_utils: Shutdown IC version 3.2
Sep 16 04:56:46.760271 kernel: hv_utils: Heartbeat IC version 3.0
Sep 16 04:56:46.760307 kernel: hv_utils: TimeSync IC version 4.0
Sep 16 04:56:46.862480 kernel: hv_vmbus: registering driver hid_hyperv
Sep 16 04:56:46.862071 systemd-resolved[245]: Clock change detected. Flushing caches.
Sep 16 04:56:46.868231 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00
Sep 16 04:56:46.868830 kernel: scsi host0: storvsc_host_t
Sep 16 04:56:46.871726 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window]
Sep 16 04:56:46.871861 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Sep 16 04:56:46.871872 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff]
Sep 16 04:56:46.878729 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Sep 16 04:56:46.878769 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint
Sep 16 04:56:46.878785 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Sep 16 04:56:46.885910 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]
Sep 16 04:56:46.885957 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d73f401 (unnamed net_device) (uninitialized): VF slot 1 added
Sep 16 04:56:46.906116 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00
Sep 16 04:56:46.908358 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned
Sep 16 04:56:46.911284 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Sep 16 04:56:46.911471 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 16 04:56:46.913102 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Sep 16 04:56:46.928240 kernel: nvme nvme0: pci function c05b:00:00.0
Sep 16 04:56:46.928422 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#282 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Sep 16 04:56:46.929534 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002)
Sep 16 04:56:46.950104 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#256 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Sep 16 04:56:47.085099 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 16 04:56:47.089098 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 16 04:56:47.342103 kernel: nvme nvme0: using unchecked data buffer
Sep 16 04:56:47.564600 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT.
Sep 16 04:56:47.574078 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A.
Sep 16 04:56:47.577393 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A.
Sep 16 04:56:47.591664 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM.
Sep 16 04:56:47.595224 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 16 04:56:47.610245 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
Sep 16 04:56:47.612105 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 16 04:56:47.617132 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 16 04:56:47.617376 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 16 04:56:47.617946 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 16 04:56:47.620181 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 16 04:56:47.651638 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 16 04:56:47.656131 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 16 04:56:47.661108 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 16 04:56:47.907597 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004
Sep 16 04:56:47.907754 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00
Sep 16 04:56:47.910438 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window]
Sep 16 04:56:47.911945 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff]
Sep 16 04:56:47.917167 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint
Sep 16 04:56:47.920111 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]
Sep 16 04:56:47.924153 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]
Sep 16 04:56:47.926288 kernel: pci 7870:00:00.0: enabling Extended Tags
Sep 16 04:56:47.941145 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00
Sep 16 04:56:47.941306 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned
Sep 16 04:56:47.945176 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned
Sep 16 04:56:47.948775 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002)
Sep 16 04:56:47.959100 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1
Sep 16 04:56:47.962491 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d73f401 eth0: VF registering: eth1
Sep 16 04:56:47.963363 kernel: mana 7870:00:00.0 eth1: joined to eth0
Sep 16 04:56:47.965102 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1
Sep 16 04:56:48.662652 disk-uuid[676]: The operation has completed successfully.
Sep 16 04:56:48.664760 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 16 04:56:48.709114 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 16 04:56:48.709191 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 16 04:56:48.738236 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 16 04:56:48.762020 sh[715]: Success
Sep 16 04:56:48.790509 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 16 04:56:48.790555 kernel: device-mapper: uevent: version 1.0.3
Sep 16 04:56:48.791965 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 16 04:56:48.800104 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Sep 16 04:56:49.027323 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 16 04:56:49.031187 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 16 04:56:49.040262 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 16 04:56:49.052121 kernel: BTRFS: device fsid f1b91845-3914-4d21-a370-6d760ee45b2e devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (728)
Sep 16 04:56:49.055140 kernel: BTRFS info (device dm-0): first mount of filesystem f1b91845-3914-4d21-a370-6d760ee45b2e
Sep 16 04:56:49.055284 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 16 04:56:49.383626 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 16 04:56:49.383698 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 16 04:56:49.385544 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 16 04:56:49.425270 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 16 04:56:49.425743 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 16 04:56:49.431948 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 16 04:56:49.435187 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 16 04:56:49.438326 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 16 04:56:49.467392 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (761)
Sep 16 04:56:49.467426 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 8b047ef5-4757-404a-b211-2a505a425364
Sep 16 04:56:49.468507 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 16 04:56:49.512840 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 16 04:56:49.519372 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 16 04:56:49.519390 kernel: BTRFS info (device nvme0n1p6): turning on async discard
Sep 16 04:56:49.519400 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Sep 16 04:56:49.517075 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 16 04:56:49.525098 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 8b047ef5-4757-404a-b211-2a505a425364
Sep 16 04:56:49.525204 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 16 04:56:49.529218 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 16 04:56:49.554158 systemd-networkd[894]: lo: Link UP
Sep 16 04:56:49.558157 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16
Sep 16 04:56:49.554165 systemd-networkd[894]: lo: Gained carrier
Sep 16 04:56:49.555747 systemd-networkd[894]: Enumeration completed
Sep 16 04:56:49.564588 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Sep 16 04:56:49.556065 systemd-networkd[894]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 16 04:56:49.571149 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d73f401 eth0: Data path switched to VF: enP30832s1
Sep 16 04:56:49.556068 systemd-networkd[894]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 16 04:56:49.556288 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 16 04:56:49.561620 systemd[1]: Reached target network.target - Network.
Sep 16 04:56:49.566442 systemd-networkd[894]: enP30832s1: Link UP
Sep 16 04:56:49.566501 systemd-networkd[894]: eth0: Link UP
Sep 16 04:56:49.566623 systemd-networkd[894]: eth0: Gained carrier
Sep 16 04:56:49.566634 systemd-networkd[894]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 16 04:56:49.574074 systemd-networkd[894]: enP30832s1: Gained carrier
Sep 16 04:56:49.581123 systemd-networkd[894]: eth0: DHCPv4 address 10.200.8.38/24, gateway 10.200.8.1 acquired from 168.63.129.16
Sep 16 04:56:50.929215 systemd-networkd[894]: eth0: Gained IPv6LL
Sep 16 04:56:50.946920 ignition[897]: Ignition 2.22.0
Sep 16 04:56:50.946932 ignition[897]: Stage: fetch-offline
Sep 16 04:56:50.949399 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 16 04:56:50.947044 ignition[897]: no configs at "/usr/lib/ignition/base.d"
Sep 16 04:56:50.947051 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 16 04:56:50.947168 ignition[897]: parsed url from cmdline: ""
Sep 16 04:56:50.947171 ignition[897]: no config URL provided
Sep 16 04:56:50.947175 ignition[897]: reading system config file "/usr/lib/ignition/user.ign"
Sep 16 04:56:50.947180 ignition[897]: no config at "/usr/lib/ignition/user.ign"
Sep 16 04:56:50.947185 ignition[897]: failed to fetch config: resource requires networking
Sep 16 04:56:50.947405 ignition[897]: Ignition finished successfully
Sep 16 04:56:50.964992 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 16 04:56:50.991899 ignition[906]: Ignition 2.22.0
Sep 16 04:56:50.991920 ignition[906]: Stage: fetch
Sep 16 04:56:50.992126 ignition[906]: no configs at "/usr/lib/ignition/base.d"
Sep 16 04:56:50.992134 ignition[906]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 16 04:56:50.992318 ignition[906]: parsed url from cmdline: ""
Sep 16 04:56:50.992321 ignition[906]: no config URL provided
Sep 16 04:56:50.992325 ignition[906]: reading system config file "/usr/lib/ignition/user.ign"
Sep 16 04:56:50.992329 ignition[906]: no config at "/usr/lib/ignition/user.ign"
Sep 16 04:56:50.992344 ignition[906]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Sep 16 04:56:51.090759 ignition[906]: GET result: OK
Sep 16 04:56:51.090812 ignition[906]: config has been read from IMDS userdata
Sep 16 04:56:51.090835 ignition[906]: parsing config with SHA512: 6c8de9c9bff46f89f1f207df31f4ebab61e48490740f19517c84e12c3b94c9e6d5d6ef55cf8a40ea3975d99c1948963ceef85b2d9f5722451c089f9261819fc4
Sep 16 04:56:51.096568 unknown[906]: fetched base config from "system"
Sep 16 04:56:51.096800 ignition[906]: fetch: fetch complete
Sep 16 04:56:51.096573 unknown[906]: fetched base config from "system"
Sep 16 04:56:51.096804 ignition[906]: fetch: fetch passed
Sep 16 04:56:51.096578 unknown[906]: fetched user config from "azure"
Sep 16 04:56:51.096836 ignition[906]: Ignition finished successfully
Sep 16 04:56:51.100118 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 16 04:56:51.104275 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 16 04:56:51.128258 ignition[913]: Ignition 2.22.0
Sep 16 04:56:51.128264 ignition[913]: Stage: kargs
Sep 16 04:56:51.130362 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
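The GET against 169.254.169.254 in the fetch stage above is Ignition reading user data from the Azure Instance Metadata Service (IMDS). A minimal sketch of the same request using only Python's standard library; it assumes the documented IMDS behavior that requests must carry a "Metadata: true" header and that the userData endpoint returns a base64-encoded payload:

```python
import base64
import urllib.request

# IMDS is a link-local HTTP service; it rejects requests without the
# "Metadata: true" header (and requests routed through a proxy).
URL = ("http://169.254.169.254/metadata/instance/compute/userData"
       "?api-version=2021-01-01&format=text")

req = urllib.request.Request(URL, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=5) as resp:
    # userData comes back base64-encoded; Ignition decodes it and then
    # parses the result as its JSON config (the SHA512 in the log is
    # computed over that decoded config).
    user_data = base64.b64decode(resp.read())

print(user_data.decode("utf-8", errors="replace"))
```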
Sep 16 04:56:51.128444 ignition[913]: no configs at "/usr/lib/ignition/base.d"
Sep 16 04:56:51.128449 ignition[913]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 16 04:56:51.138201 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 16 04:56:51.129004 ignition[913]: kargs: kargs passed
Sep 16 04:56:51.129024 ignition[913]: Ignition finished successfully
Sep 16 04:56:51.157291 ignition[920]: Ignition 2.22.0
Sep 16 04:56:51.157300 ignition[920]: Stage: disks
Sep 16 04:56:51.160111 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 16 04:56:51.157468 ignition[920]: no configs at "/usr/lib/ignition/base.d"
Sep 16 04:56:51.163297 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 16 04:56:51.157474 ignition[920]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 16 04:56:51.165712 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 16 04:56:51.158613 ignition[920]: disks: disks passed
Sep 16 04:56:51.168127 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 16 04:56:51.158638 ignition[920]: Ignition finished successfully
Sep 16 04:56:51.170500 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 16 04:56:51.173115 systemd[1]: Reached target basic.target - Basic System.
Sep 16 04:56:51.178720 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 16 04:56:51.232841 systemd-fsck[928]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
Sep 16 04:56:51.236370 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 16 04:56:51.240873 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 16 04:56:53.210234 kernel: EXT4-fs (nvme0n1p9): mounted filesystem fb1cb44f-955b-4cd0-8849-33ce3640d547 r/w with ordered data mode. Quota mode: none.
Sep 16 04:56:53.210860 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 16 04:56:53.213623 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 16 04:56:53.246945 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 16 04:56:53.263699 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 16 04:56:53.269812 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 16 04:56:53.274929 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 16 04:56:53.274963 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 16 04:56:53.281101 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (937)
Sep 16 04:56:53.284276 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 16 04:56:53.290204 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 8b047ef5-4757-404a-b211-2a505a425364
Sep 16 04:56:53.290243 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 16 04:56:53.291210 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 16 04:56:53.298342 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 16 04:56:53.298379 kernel: BTRFS info (device nvme0n1p6): turning on async discard
Sep 16 04:56:53.299232 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Sep 16 04:56:53.300585 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 16 04:56:53.847518 coreos-metadata[939]: Sep 16 04:56:53.847 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Sep 16 04:56:53.851828 coreos-metadata[939]: Sep 16 04:56:53.851 INFO Fetch successful
Sep 16 04:56:53.853226 coreos-metadata[939]: Sep 16 04:56:53.851 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Sep 16 04:56:53.862870 coreos-metadata[939]: Sep 16 04:56:53.862 INFO Fetch successful
Sep 16 04:56:53.876271 coreos-metadata[939]: Sep 16 04:56:53.876 INFO wrote hostname ci-4459.0.0-n-140c1315ab to /sysroot/etc/hostname
Sep 16 04:56:53.877749 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 16 04:56:54.107498 initrd-setup-root[968]: cut: /sysroot/etc/passwd: No such file or directory
Sep 16 04:56:54.152586 initrd-setup-root[975]: cut: /sysroot/etc/group: No such file or directory
Sep 16 04:56:54.170494 initrd-setup-root[982]: cut: /sysroot/etc/shadow: No such file or directory
Sep 16 04:56:54.189887 initrd-setup-root[989]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 16 04:56:55.174251 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 16 04:56:55.178479 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 16 04:56:55.185189 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 16 04:56:55.192016 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 16 04:56:55.196813 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 8b047ef5-4757-404a-b211-2a505a425364
Sep 16 04:56:55.216767 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 16 04:56:55.224635 ignition[1056]: INFO : Ignition 2.22.0
Sep 16 04:56:55.224635 ignition[1056]: INFO : Stage: mount
Sep 16 04:56:55.227422 ignition[1056]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 16 04:56:55.227422 ignition[1056]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 16 04:56:55.227422 ignition[1056]: INFO : mount: mount passed
Sep 16 04:56:55.227422 ignition[1056]: INFO : Ignition finished successfully
Sep 16 04:56:55.226726 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 16 04:56:55.229858 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 16 04:56:55.243589 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 16 04:56:55.270103 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1070)
Sep 16 04:56:55.270135 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 8b047ef5-4757-404a-b211-2a505a425364
Sep 16 04:56:55.272102 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 16 04:56:55.275391 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 16 04:56:55.275430 kernel: BTRFS info (device nvme0n1p6): turning on async discard
Sep 16 04:56:55.276143 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Sep 16 04:56:55.277740 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 16 04:56:55.302974 ignition[1087]: INFO : Ignition 2.22.0
Sep 16 04:56:55.302974 ignition[1087]: INFO : Stage: files
Sep 16 04:56:55.309127 ignition[1087]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 16 04:56:55.309127 ignition[1087]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 16 04:56:55.309127 ignition[1087]: DEBUG : files: compiled without relabeling support, skipping
Sep 16 04:56:55.318259 ignition[1087]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 16 04:56:55.318259 ignition[1087]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 16 04:56:55.388973 ignition[1087]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 16 04:56:55.391035 ignition[1087]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 16 04:56:55.391035 ignition[1087]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 16 04:56:55.389245 unknown[1087]: wrote ssh authorized keys file for user: core
Sep 16 04:56:55.460695 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 16 04:56:55.465148 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Sep 16 04:56:55.737294 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 16 04:56:55.780458 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 16 04:56:55.784142 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 16 04:56:55.784142 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 16 04:56:55.784142 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 16 04:56:55.784142 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 16 04:56:55.784142 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 16 04:56:55.784142 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 16 04:56:55.784142 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 16 04:56:55.784142 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 16 04:56:55.809353 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 16 04:56:55.809353 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 16 04:56:55.809353 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 16 04:56:55.809353 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 16 04:56:55.809353 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 16 04:56:55.809353 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Sep 16 04:56:56.365433 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 16 04:56:57.954637 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 16 04:56:57.954637 ignition[1087]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 16 04:56:57.982045 ignition[1087]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 16 04:56:57.995637 ignition[1087]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 16 04:56:57.995637 ignition[1087]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 16 04:56:57.995637 ignition[1087]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 16 04:56:58.002448 ignition[1087]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 16 04:56:58.002448 ignition[1087]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 16 04:56:58.002448 ignition[1087]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 16 04:56:58.002448 ignition[1087]: INFO : files: files passed
Sep 16 04:56:58.002448 ignition[1087]: INFO : Ignition finished successfully
Sep 16 04:56:58.000235 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 16 04:56:58.008893 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 16 04:56:58.024024 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 16 04:56:58.030253 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 16 04:56:58.031776 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 16 04:56:58.080002 initrd-setup-root-after-ignition[1117]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 16 04:56:58.080002 initrd-setup-root-after-ignition[1117]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 16 04:56:58.091179 initrd-setup-root-after-ignition[1121]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 16 04:56:58.084142 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 16 04:56:58.089807 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 16 04:56:58.095119 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 16 04:56:58.128368 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 16 04:56:58.128447 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
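The files-stage operations above (op(2) ssh keys for "core", op(3) the helm tarball, op(9)/op(a) the kubernetes sysext image and its /etc/extensions link, op(b)/op(d) the prepare-helm.service unit) correspond to entries in the Ignition config fetched from userdata. A hypothetical, abridged Ignition v3 config that would produce operations like these — key material and unit text are elided, only the paths and URLs come from the log:

    {
      "ignition": { "version": "3.4.0" },
      "passwd": {
        "users": [
          { "name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... (elided)"] }
        ]
      },
      "storage": {
        "files": [
          {
            "path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
            "contents": { "source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz" }
          },
          {
            "path": "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw",
            "contents": { "source": "https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw" }
          }
        ],
        "links": [
          {
            "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
          }
        ]
      },
      "systemd": {
        "units": [
          { "name": "prepare-helm.service", "enabled": true, "contents": "(unit text elided)" }
        ]
      }
    }

Note that Ignition logs paths under /sysroot because it runs in the initrd; the config itself uses the final root-relative paths.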
Sep 16 04:56:58.133463 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 16 04:56:58.136714 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 16 04:56:58.139181 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 16 04:56:58.141196 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 16 04:56:58.158471 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 16 04:56:58.161427 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 16 04:56:58.183707 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 16 04:56:58.186629 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 16 04:56:58.190906 systemd[1]: Stopped target timers.target - Timer Units.
Sep 16 04:56:58.193523 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 16 04:56:58.194939 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 16 04:56:58.198807 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 16 04:56:58.202249 systemd[1]: Stopped target basic.target - Basic System.
Sep 16 04:56:58.204897 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 16 04:56:58.207694 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 16 04:56:58.212287 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 16 04:56:58.215369 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 16 04:56:58.215958 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 16 04:56:58.221376 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 16 04:56:58.226537 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 16 04:56:58.230227 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 16 04:56:58.233652 systemd[1]: Stopped target swap.target - Swaps.
Sep 16 04:56:58.238401 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 16 04:56:58.238489 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 16 04:56:58.245432 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 16 04:56:58.248565 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 16 04:56:58.249046 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 16 04:56:58.250053 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 16 04:56:58.258183 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 16 04:56:58.258281 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 16 04:56:58.260948 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 16 04:56:58.261035 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 16 04:56:58.262415 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 16 04:56:58.262498 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 16 04:56:58.263073 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 16 04:56:58.263160 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 16 04:56:58.265228 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 16 04:56:58.265663 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 16 04:56:58.265753 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 16 04:56:58.269219 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 16 04:56:58.269511 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 16 04:56:58.269605 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 16 04:56:58.269899 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 16 04:56:58.313458 ignition[1141]: INFO : Ignition 2.22.0
Sep 16 04:56:58.313458 ignition[1141]: INFO : Stage: umount
Sep 16 04:56:58.313458 ignition[1141]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 16 04:56:58.313458 ignition[1141]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 16 04:56:58.313458 ignition[1141]: INFO : umount: umount passed
Sep 16 04:56:58.313458 ignition[1141]: INFO : Ignition finished successfully
Sep 16 04:56:58.269970 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 16 04:56:58.278565 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 16 04:56:58.282626 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 16 04:56:58.310777 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 16 04:56:58.310856 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 16 04:56:58.313673 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 16 04:56:58.313739 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 16 04:56:58.317180 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 16 04:56:58.317214 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 16 04:56:58.324859 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 16 04:56:58.325417 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 16 04:56:58.326706 systemd[1]: Stopped target network.target - Network.
Sep 16 04:56:58.331126 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 16 04:56:58.331164 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 16 04:56:58.331656 systemd[1]: Stopped target paths.target - Path Units.
Sep 16 04:56:58.331675 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 16 04:56:58.336970 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 16 04:56:58.342334 systemd[1]: Stopped target slices.target - Slice Units.
Sep 16 04:56:58.347025 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 16 04:56:58.352151 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 16 04:56:58.352183 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 16 04:56:58.356076 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 16 04:56:58.356110 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 16 04:56:58.359707 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 16 04:56:58.359751 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 16 04:56:58.364766 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 16 04:56:58.364842 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 16 04:56:58.379094 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 16 04:56:58.382543 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 16 04:56:58.392255 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 16 04:56:58.392717 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 16 04:56:58.392796 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 16 04:56:58.400798 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 16 04:56:58.400915 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 16 04:56:58.400981 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 16 04:56:58.406667 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 16 04:56:58.406860 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 16 04:56:58.406939 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 16 04:56:58.412116 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 16 04:56:58.424150 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 16 04:56:58.424186 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 16 04:56:58.428142 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 16 04:56:58.428183 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 16 04:56:58.431334 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 16 04:56:58.437185 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 16 04:56:58.437232 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 16 04:56:58.439930 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 16 04:56:58.439964 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 16 04:56:58.443489 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 16 04:56:58.443520 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 16 04:56:58.452000 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 16 04:56:58.452045 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 16 04:56:58.456366 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 16 04:56:58.460368 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 16 04:56:58.460418 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 16 04:56:58.478015 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d73f401 eth0: Data path switched from VF: enP30832s1
Sep 16 04:56:58.478233 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Sep 16 04:56:58.479182 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 16 04:56:58.480123 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 16 04:56:58.485908 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 16 04:56:58.487200 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 16 04:56:58.491201 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 16 04:56:58.491249 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 16 04:56:58.495178 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 16 04:56:58.495199 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 16 04:56:58.498904 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 16 04:56:58.498972 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 16 04:56:58.508190 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 16 04:56:58.509345 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 16 04:56:58.515202 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 16 04:56:58.516029 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 16 04:56:58.520443 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 16 04:56:58.524902 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 16 04:56:58.524945 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 16 04:56:58.529663 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 16 04:56:58.529706 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 16 04:56:58.536180 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 16 04:56:58.536222 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 04:56:58.544590 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 16 04:56:58.544631 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 16 04:56:58.544658 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 16 04:56:58.544934 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 16 04:56:58.545001 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 16 04:56:58.548254 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 16 04:56:58.553185 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 16 04:56:58.582652 systemd[1]: Switching root.
Sep 16 04:56:58.664876 systemd-journald[205]: Journal stopped
Sep 16 04:57:05.710257 systemd-journald[205]: Received SIGTERM from PID 1 (systemd).
Sep 16 04:57:05.710292 kernel: SELinux: policy capability network_peer_controls=1
Sep 16 04:57:05.710304 kernel: SELinux: policy capability open_perms=1
Sep 16 04:57:05.710313 kernel: SELinux: policy capability extended_socket_class=1
Sep 16 04:57:05.710320 kernel: SELinux: policy capability always_check_network=0
Sep 16 04:57:05.710328 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 16 04:57:05.710337 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 16 04:57:05.710346 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 16 04:57:05.710354 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 16 04:57:05.710362 kernel: SELinux: policy capability userspace_initial_context=0
Sep 16 04:57:05.710371 kernel: audit: type=1403 audit(1757998619.987:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 16 04:57:05.710380 systemd[1]: Successfully loaded SELinux policy in 189.754ms.
Sep 16 04:57:05.710390 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 3.669ms.
Sep 16 04:57:05.710400 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 16 04:57:05.710411 systemd[1]: Detected virtualization microsoft.
Sep 16 04:57:05.710420 systemd[1]: Detected architecture x86-64.
Sep 16 04:57:05.710429 systemd[1]: Detected first boot.
Sep 16 04:57:05.710439 systemd[1]: Hostname set to <ci-4459.0.0-n-140c1315ab>.
Sep 16 04:57:05.710449 systemd[1]: Initializing machine ID from random generator.
Sep 16 04:57:05.710458 zram_generator::config[1184]: No configuration found.
Sep 16 04:57:05.710470 kernel: Guest personality initialized and is inactive
Sep 16 04:57:05.710479 kernel: VMCI host device registered (name=vmci, major=10, minor=124)
Sep 16 04:57:05.710487 kernel: Initialized host personality
Sep 16 04:57:05.710495 kernel: NET: Registered PF_VSOCK protocol family
Sep 16 04:57:05.710504 systemd[1]: Populated /etc with preset unit settings.
Sep 16 04:57:05.710516 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 16 04:57:05.710525 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 16 04:57:05.710534 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 16 04:57:05.710543 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 16 04:57:05.710552 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 16 04:57:05.710562 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 16 04:57:05.710570 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 16 04:57:05.710579 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 16 04:57:05.710590 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 16 04:57:05.710599 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 16 04:57:05.710608 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 16 04:57:05.710617 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 16 04:57:05.710626 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 16 04:57:05.710635 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 16 04:57:05.710644 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 16 04:57:05.710656 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 16 04:57:05.710667 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 16 04:57:05.710677 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 16 04:57:05.710685 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 16 04:57:05.710694 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 16 04:57:05.710704 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 16 04:57:05.710713 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 16 04:57:05.710722 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 16 04:57:05.710732 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 16 04:57:05.710741 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 16 04:57:05.710751 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 16 04:57:05.710759 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 16 04:57:05.710768 systemd[1]: Reached target slices.target - Slice Units.
Sep 16 04:57:05.710778 systemd[1]: Reached target swap.target - Swaps.
Sep 16 04:57:05.710787 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 16 04:57:05.710796 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 16 04:57:05.710807 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 16 04:57:05.710817 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 16 04:57:05.710826 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 16 04:57:05.710836 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 16 04:57:05.710845 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 16 04:57:05.710856 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 16 04:57:05.710866 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 16 04:57:05.710875 systemd[1]: Mounting media.mount - External Media Directory...
Sep 16 04:57:05.710885 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 04:57:05.710894 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 16 04:57:05.710903 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 16 04:57:05.710912 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 16 04:57:05.710921 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 16 04:57:05.710931 systemd[1]: Reached target machines.target - Containers.
Sep 16 04:57:05.710941 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 16 04:57:05.710950 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 16 04:57:05.710959 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 16 04:57:05.710969 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 16 04:57:05.710977 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 16 04:57:05.710986 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 16 04:57:05.710996 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 16 04:57:05.711005 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 16 04:57:05.711015 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 16 04:57:05.711024 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 16 04:57:05.711033 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 16 04:57:05.711042 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 16 04:57:05.711051 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 16 04:57:05.711061 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 16 04:57:05.711070 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 16 04:57:05.711079 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 16 04:57:05.712053 kernel: fuse: init (API version 7.41)
Sep 16 04:57:05.712071 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 16 04:57:05.712082 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 16 04:57:05.712105 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 16 04:57:05.712113 kernel: loop: module loaded
Sep 16 04:57:05.712145 systemd-journald[1267]: Collecting audit messages is disabled.
Sep 16 04:57:05.712168 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 16 04:57:05.712177 systemd-journald[1267]: Journal started
Sep 16 04:57:05.712196 systemd-journald[1267]: Runtime Journal (/run/log/journal/af8b8df7406545e8a5303659cc02ec54) is 8M, max 158.9M, 150.9M free.
Sep 16 04:57:05.712233 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 16 04:57:05.256152 systemd[1]: Queued start job for default target multi-user.target.
Sep 16 04:57:05.260443 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Sep 16 04:57:05.260719 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 16 04:57:05.723211 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 16 04:57:05.723261 systemd[1]: Stopped verity-setup.service.
Sep 16 04:57:05.730107 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 04:57:05.739106 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 16 04:57:05.742723 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 16 04:57:05.746161 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 16 04:57:05.748297 systemd[1]: Mounted media.mount - External Media Directory.
Sep 16 04:57:05.749604 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 16 04:57:05.751760 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 16 04:57:05.753509 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 16 04:57:05.756706 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 16 04:57:05.759310 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 16 04:57:05.759429 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 16 04:57:05.761528 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 16 04:57:05.761663 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 16 04:57:05.763964 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 16 04:57:05.764182 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 16 04:57:05.767205 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 16 04:57:05.767559 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 16 04:57:05.770165 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 16 04:57:05.770305 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 16 04:57:05.771917 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 16 04:57:05.773701 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 16 04:57:05.778421 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 16 04:57:05.786835 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 16 04:57:05.793178 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 16 04:57:05.803356 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 16 04:57:05.805814 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 16 04:57:05.805840 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 16 04:57:05.808705 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 16 04:57:05.814178 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 16 04:57:05.816232 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 16 04:57:05.834045 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 16 04:57:05.837185 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 16 04:57:05.839239 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 16 04:57:05.841244 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 16 04:57:05.844175 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 16 04:57:05.852992 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 16 04:57:05.859250 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 16 04:57:05.863913 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 16 04:57:05.866390 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 16 04:57:05.868694 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 16 04:57:05.873453 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 16 04:57:05.877294 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 16 04:57:05.879952 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 16 04:57:05.884050 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 16 04:57:05.889203 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 16 04:57:05.896117 kernel: ACPI: bus type drm_connector registered
Sep 16 04:57:05.896138 systemd-journald[1267]: Time spent on flushing to /var/log/journal/af8b8df7406545e8a5303659cc02ec54 is 11.600ms for 992 entries.
Sep 16 04:57:05.896138 systemd-journald[1267]: System Journal (/var/log/journal/af8b8df7406545e8a5303659cc02ec54) is 8M, max 2.6G, 2.6G free.
Sep 16 04:57:06.037252 systemd-journald[1267]: Received client request to flush runtime journal.
Sep 16 04:57:06.037287 kernel: loop0: detected capacity change from 0 to 110984
Sep 16 04:57:05.899663 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 16 04:57:05.904749 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 16 04:57:05.904895 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 16 04:57:06.011944 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 16 04:57:06.037904 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 16 04:57:06.126859 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 16 04:57:06.261735 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 16 04:57:06.434751 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 16 04:57:06.494101 kernel: loop1: detected capacity change from 0 to 27936
Sep 16 04:57:06.620854 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 16 04:57:06.623986 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 16 04:57:06.768437 systemd-tmpfiles[1343]: ACLs are not supported, ignoring.
Sep 16 04:57:06.768453 systemd-tmpfiles[1343]: ACLs are not supported, ignoring.
Sep 16 04:57:06.770584 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 16 04:57:06.985102 kernel: loop2: detected capacity change from 0 to 229808
Sep 16 04:57:07.055112 kernel: loop3: detected capacity change from 0 to 128016
Sep 16 04:57:07.547119 kernel: loop4: detected capacity change from 0 to 110984
Sep 16 04:57:07.557116 kernel: loop5: detected capacity change from 0 to 27936
Sep 16 04:57:07.566104 kernel: loop6: detected capacity change from 0 to 229808
Sep 16 04:57:07.578105 kernel: loop7: detected capacity change from 0 to 128016
Sep 16 04:57:07.579583 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 16 04:57:07.584267 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 16 04:57:07.588847 (sd-merge)[1350]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Sep 16 04:57:07.589545 (sd-merge)[1350]: Merged extensions into '/usr'.
Sep 16 04:57:07.593883 systemd[1]: Reload requested from client PID 1322 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 16 04:57:07.593893 systemd[1]: Reloading...
Sep 16 04:57:07.611845 systemd-udevd[1352]: Using default interface naming scheme 'v255'.
Sep 16 04:57:07.646227 zram_generator::config[1376]: No configuration found.
Sep 16 04:57:07.823020 systemd[1]: Reloading finished in 228 ms.
Sep 16 04:57:07.838884 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 16 04:57:07.845825 systemd[1]: Starting ensure-sysext.service...
Sep 16 04:57:07.849061 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
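The (sd-merge) lines show systemd-sysext overlaying the extension images into /usr; the loopN capacity changes above correspond to those images being attached. The kubernetes image was placed under /opt/extensions and symlinked from /etc/extensions by Ignition earlier, while containerd-flatcar, docker-flatcar, and oem-azure ship with the OS image. A rough Python sketch of the discovery step only, assuming the standard systemd-sysext search directories:

    from pathlib import Path

    # systemd-sysext looks for *.raw extension images in these directories
    # (assumption: the standard search path; /etc/extensions is where
    # Ignition placed the kubernetes.raw symlink above).
    for d in ("/etc/extensions", "/run/extensions", "/var/lib/extensions"):
        base = Path(d)
        if base.is_dir():
            for image in sorted(base.glob("*.raw")):
                # resolve() follows the symlink to the real image file
                print(image, "->", image.resolve())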
Sep 16 04:57:07.890704 systemd[1]: Reload requested from client PID 1435 ('systemctl') (unit ensure-sysext.service)...
Sep 16 04:57:07.890715 systemd[1]: Reloading...
Sep 16 04:57:07.916783 systemd-tmpfiles[1436]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 16 04:57:07.916806 systemd-tmpfiles[1436]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 16 04:57:07.917012 systemd-tmpfiles[1436]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 16 04:57:07.917238 systemd-tmpfiles[1436]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 16 04:57:07.917832 systemd-tmpfiles[1436]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 16 04:57:07.919199 systemd-tmpfiles[1436]: ACLs are not supported, ignoring.
Sep 16 04:57:07.919248 systemd-tmpfiles[1436]: ACLs are not supported, ignoring.
Sep 16 04:57:07.947114 zram_generator::config[1467]: No configuration found.
Sep 16 04:57:07.965168 systemd-tmpfiles[1436]: Detected autofs mount point /boot during canonicalization of boot.
Sep 16 04:57:07.965180 systemd-tmpfiles[1436]: Skipping /boot
Sep 16 04:57:07.970482 systemd-tmpfiles[1436]: Detected autofs mount point /boot during canonicalization of boot.
Sep 16 04:57:07.970496 systemd-tmpfiles[1436]: Skipping /boot
Sep 16 04:57:08.091341 systemd[1]: Reloading finished in 200 ms.
Sep 16 04:57:08.106412 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 16 04:57:08.112248 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 16 04:57:08.152448 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 16 04:57:08.158211 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 16 04:57:08.167493 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 16 04:57:08.171219 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 16 04:57:08.177123 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 04:57:08.177288 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 16 04:57:08.178115 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 16 04:57:08.182290 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 16 04:57:08.185303 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 16 04:57:08.188471 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 16 04:57:08.188581 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 16 04:57:08.188668 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 04:57:08.189574 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 16 04:57:08.190054 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
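The "Duplicate line" warnings above mean two tmpfiles.d fragments claim the same path; systemd-tmpfiles keeps the first entry it parses and ignores the later one. Entries follow the type / path / mode / user / group / age / argument column format. An illustrative fragment with made-up values, not the actual Flatcar fragments named in the log:

    # /usr/lib/tmpfiles.d/example.conf
    # Type  Path                 Mode  User  Group            Age
    d       /var/lib/nfs/sm      0700  root  root             -
    d       /var/log/journal     2755  root  systemd-journal  -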
Sep 16 04:57:08.193795 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 16 04:57:08.194118 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 16 04:57:08.202718 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 16 04:57:08.202863 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 16 04:57:08.206564 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 16 04:57:08.213919 systemd[1]: Expecting device dev-ptp_hyperv.device - /dev/ptp_hyperv...
Sep 16 04:57:08.215798 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 04:57:08.216129 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 16 04:57:08.217071 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 16 04:57:08.220299 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 16 04:57:08.227149 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 16 04:57:08.229784 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 16 04:57:08.231686 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 16 04:57:08.231797 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 16 04:57:08.231957 systemd[1]: Reached target time-set.target - System Time Set.
Sep 16 04:57:08.235056 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 04:57:08.236338 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 16 04:57:08.236529 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 16 04:57:08.238936 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 16 04:57:08.239062 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 16 04:57:08.240580 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 16 04:57:08.240695 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 16 04:57:08.244487 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 16 04:57:08.244622 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 16 04:57:08.248166 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 16 04:57:08.248256 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 16 04:57:08.249784 systemd[1]: Finished ensure-sysext.service.
Sep 16 04:57:08.255188 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 16 04:57:08.297475 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 16 04:57:08.355161 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 16 04:57:08.364111 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 16 04:57:08.406341 systemd-resolved[1528]: Positive Trust Anchors:
Sep 16 04:57:08.406355 systemd-resolved[1528]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 16 04:57:08.406390 systemd-resolved[1528]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 16 04:57:08.429296 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 16 04:57:08.472114 systemd-resolved[1528]: Using system hostname 'ci-4459.0.0-n-140c1315ab'.
Sep 16 04:57:08.472855 augenrules[1605]: No rules
Sep 16 04:57:08.473311 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 16 04:57:08.473491 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 16 04:57:08.487634 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 16 04:57:08.491465 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 16 04:57:08.500420 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 16 04:57:08.542105 kernel: mousedev: PS/2 mouse device common for all mice
Sep 16 04:57:08.546568 kernel: hv_vmbus: registering driver hyperv_fb
Sep 16 04:57:08.551377 kernel: hv_vmbus: registering driver hv_balloon
Sep 16 04:57:08.556147 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Sep 16 04:57:08.559107 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Sep 16 04:57:08.561034 kernel: Console: switching to colour dummy device 80x25
Sep 16 04:57:08.565118 kernel: Console: switching to colour frame buffer device 128x48
Sep 16 04:57:08.572109 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Sep 16 04:57:08.577156 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#23 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Sep 16 04:57:08.580798 systemd-networkd[1586]: lo: Link UP
Sep 16 04:57:08.580807 systemd-networkd[1586]: lo: Gained carrier
Sep 16 04:57:08.582054 systemd-networkd[1586]: Enumeration completed
Sep 16 04:57:08.582198 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 16 04:57:08.584510 systemd[1]: Reached target network.target - Network.
Sep 16 04:57:08.586713 systemd-networkd[1586]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 16 04:57:08.586720 systemd-networkd[1586]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 16 04:57:08.587614 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 16 04:57:08.590214 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16
Sep 16 04:57:08.591741 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 16 04:57:08.598630 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Sep 16 04:57:08.603708 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d73f401 eth0: Data path switched to VF: enP30832s1
Sep 16 04:57:08.602993 systemd-networkd[1586]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 16 04:57:08.603025 systemd-networkd[1586]: enP30832s1: Link UP
Sep 16 04:57:08.603111 systemd-networkd[1586]: eth0: Link UP
Sep 16 04:57:08.603113 systemd-networkd[1586]: eth0: Gained carrier
Sep 16 04:57:08.603123 systemd-networkd[1586]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 16 04:57:08.610298 systemd-networkd[1586]: enP30832s1: Gained carrier
Sep 16 04:57:08.615259 systemd-networkd[1586]: eth0: DHCPv4 address 10.200.8.38/24, gateway 10.200.8.1 acquired from 168.63.129.16
Sep 16 04:57:08.635950 systemd[1]: Condition check resulted in dev-ptp_hyperv.device - /dev/ptp_hyperv being skipped.
Sep 16 04:57:08.658011 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 16 04:57:08.685791 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 16 04:57:08.734130 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 16 04:57:08.734801 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 04:57:08.739519 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 16 04:57:08.743315 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 16 04:57:08.832287 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
Sep 16 04:57:08.835594 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 16 04:57:08.910102 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Sep 16 04:57:08.918584 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 16 04:57:10.166101 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 04:57:10.500586 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 16 04:57:10.504318 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 16 04:57:10.577222 systemd-networkd[1586]: eth0: Gained IPv6LL
Sep 16 04:57:10.578971 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 16 04:57:10.580939 systemd[1]: Reached target network-online.target - Network is Online.
Sep 16 04:57:14.626529 ldconfig[1317]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 16 04:57:14.637111 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 16 04:57:14.639665 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 16 04:57:14.666201 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 16 04:57:14.667898 systemd[1]: Reached target sysinit.target - System Initialization.
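eth0 is matched here (as in the initrd) by the catch-all zz-default.network, which is why networkd keeps warning that the match is "based on potentially unpredictable interface name": kernel-assigned names like eth0 are not guaranteed stable. A hypothetical per-interface unit, say /etc/systemd/network/10-eth0.network, that matches on a stable property instead would avoid the warning (the MAC below is a placeholder, not taken from this log):

    [Match]
    # Match on the NIC's MAC address rather than the kernel name;
    # substitute the interface's real address.
    MACAddress=00:11:22:33:44:55

    [Network]
    DHCP=yes

Drop-in files under /etc/systemd/network sort before the shipped zz-default.network, so the more specific match wins.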
Sep 16 04:57:14.669506 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 16 04:57:14.671092 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 16 04:57:14.672658 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 16 04:57:14.675268 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 16 04:57:14.678186 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 16 04:57:14.681143 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 16 04:57:14.682652 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 16 04:57:14.682675 systemd[1]: Reached target paths.target - Path Units.
Sep 16 04:57:14.685136 systemd[1]: Reached target timers.target - Timer Units.
Sep 16 04:57:14.714385 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 16 04:57:14.718020 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 16 04:57:14.720638 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 16 04:57:14.724248 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 16 04:57:14.725985 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 16 04:57:14.735491 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 16 04:57:14.738360 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 16 04:57:14.741611 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 16 04:57:14.744733 systemd[1]: Reached target sockets.target - Socket Units.
Sep 16 04:57:14.746187 systemd[1]: Reached target basic.target - Basic System.
Sep 16 04:57:14.749165 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 16 04:57:14.749189 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 16 04:57:14.763823 systemd[1]: Starting chronyd.service - NTP client/server...
Sep 16 04:57:14.767176 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 16 04:57:14.777941 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 16 04:57:14.781296 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 16 04:57:14.789373 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 16 04:57:14.792662 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 16 04:57:14.799179 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 16 04:57:14.802331 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 16 04:57:14.805249 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 16 04:57:14.809366 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio).
Sep 16 04:57:14.810238 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Sep 16 04:57:14.812888 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Sep 16 04:57:14.814486 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:57:14.820743 jq[1686]: false Sep 16 04:57:14.821386 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 16 04:57:14.828319 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 16 04:57:14.833000 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 16 04:57:14.839219 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 16 04:57:14.843298 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 16 04:57:14.848865 KVP[1689]: KVP starting; pid is:1689 Sep 16 04:57:14.853952 chronyd[1678]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Sep 16 04:57:14.854306 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 16 04:57:14.856926 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 16 04:57:14.857335 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 16 04:57:14.858377 systemd[1]: Starting update-engine.service - Update Engine... Sep 16 04:57:14.862236 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 16 04:57:14.875330 KVP[1689]: KVP LIC Version: 3.1 Sep 16 04:57:14.876179 kernel: hv_utils: KVP IC version 4.0 Sep 16 04:57:14.877187 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 16 04:57:14.877424 oslogin_cache_refresh[1688]: Refreshing passwd entry cache Sep 16 04:57:14.878072 google_oslogin_nss_cache[1688]: oslogin_cache_refresh[1688]: Refreshing passwd entry cache Sep 16 04:57:14.880395 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 16 04:57:14.880584 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 16 04:57:14.887540 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 16 04:57:14.887707 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 16 04:57:14.889724 google_oslogin_nss_cache[1688]: oslogin_cache_refresh[1688]: Failure getting users, quitting Sep 16 04:57:14.889724 google_oslogin_nss_cache[1688]: oslogin_cache_refresh[1688]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 16 04:57:14.889724 google_oslogin_nss_cache[1688]: oslogin_cache_refresh[1688]: Refreshing group entry cache Sep 16 04:57:14.889421 oslogin_cache_refresh[1688]: Failure getting users, quitting Sep 16 04:57:14.889435 oslogin_cache_refresh[1688]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 16 04:57:14.889470 oslogin_cache_refresh[1688]: Refreshing group entry cache Sep 16 04:57:14.901154 chronyd[1678]: Timezone right/UTC failed leap second check, ignoring Sep 16 04:57:14.901293 chronyd[1678]: Loaded seccomp filter (level 2) Sep 16 04:57:14.901653 systemd[1]: Started chronyd.service - NTP client/server. 
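The Hyper-V KVP daemon started above shuttles key/value pairs between guest and host; on the guest side they land in fixed-record pool files. A hedged sketch of a reader, assuming the conventional /var/lib/hyperv location and the kernel ABI's 512-byte-key / 2048-byte-value record layout (pool 3 conventionally carries host-provided information):

    import pathlib

    KEY_SIZE, VALUE_SIZE = 512, 2048  # HV_KVP_EXCHANGE_MAX_* sizes from the kernel ABI
    RECORD = KEY_SIZE + VALUE_SIZE

    def read_kvp_pool(path="/var/lib/hyperv/.kvp_pool_3"):
        # Records are NUL-padded fixed-size fields; the file may not exist
        # until the daemon has exchanged data with the host.
        data = pathlib.Path(path).read_bytes()
        for off in range(0, len(data) - RECORD + 1, RECORD):
            key = data[off:off + KEY_SIZE].split(b"\0", 1)[0].decode(errors="replace")
            value = data[off + KEY_SIZE:off + RECORD].split(b"\0", 1)[0].decode(errors="replace")
            yield key, value

    for k, v in read_kvp_pool():
        print(f"{k} = {v}")
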
Sep 16 04:57:14.904651 google_oslogin_nss_cache[1688]: oslogin_cache_refresh[1688]: Failure getting groups, quitting Sep 16 04:57:14.904651 google_oslogin_nss_cache[1688]: oslogin_cache_refresh[1688]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 16 04:57:14.904496 oslogin_cache_refresh[1688]: Failure getting groups, quitting Sep 16 04:57:14.904504 oslogin_cache_refresh[1688]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 16 04:57:14.905861 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 16 04:57:14.910357 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 16 04:57:14.916225 jq[1698]: true Sep 16 04:57:14.933020 update_engine[1697]: I20250916 04:57:14.932029 1697 main.cc:92] Flatcar Update Engine starting Sep 16 04:57:14.933234 extend-filesystems[1687]: Found /dev/nvme0n1p6 Sep 16 04:57:14.935347 (ntainerd)[1722]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 16 04:57:14.937199 systemd[1]: motdgen.service: Deactivated successfully. Sep 16 04:57:14.937394 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 16 04:57:14.941898 jq[1723]: true Sep 16 04:57:14.968169 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 16 04:57:14.971042 extend-filesystems[1687]: Found /dev/nvme0n1p9 Sep 16 04:57:14.974205 extend-filesystems[1687]: Checking size of /dev/nvme0n1p9 Sep 16 04:57:14.984031 tar[1705]: linux-amd64/LICENSE Sep 16 04:57:14.984031 tar[1705]: linux-amd64/helm Sep 16 04:57:14.985999 systemd-logind[1696]: New seat seat0. Sep 16 04:57:14.987466 systemd-logind[1696]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 16 04:57:14.987596 systemd[1]: Started systemd-logind.service - User Login Management. Sep 16 04:57:15.005107 extend-filesystems[1687]: Old size kept for /dev/nvme0n1p9 Sep 16 04:57:15.007894 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 16 04:57:15.008160 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 16 04:57:15.045253 bash[1749]: Updated "/home/core/.ssh/authorized_keys" Sep 16 04:57:15.045051 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 16 04:57:15.048521 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 16 04:57:15.285711 dbus-daemon[1681]: [system] SELinux support is enabled Sep 16 04:57:15.285871 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 16 04:57:15.290769 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 16 04:57:15.290794 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 16 04:57:15.293330 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 16 04:57:15.293350 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 16 04:57:15.296512 update_engine[1697]: I20250916 04:57:15.296469 1697 update_check_scheduler.cc:74] Next update check in 6m19s Sep 16 04:57:15.300108 systemd[1]: Started update-engine.service - Update Engine. 
Sep 16 04:57:15.303377 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 16 04:57:15.305369 dbus-daemon[1681]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 16 04:57:15.409922 coreos-metadata[1680]: Sep 16 04:57:15.409 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 16 04:57:15.416736 coreos-metadata[1680]: Sep 16 04:57:15.416 INFO Fetch successful Sep 16 04:57:15.417034 coreos-metadata[1680]: Sep 16 04:57:15.416 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Sep 16 04:57:15.420662 coreos-metadata[1680]: Sep 16 04:57:15.420 INFO Fetch successful Sep 16 04:57:15.420662 coreos-metadata[1680]: Sep 16 04:57:15.420 INFO Fetching http://168.63.129.16/machine/0df9e1b5-d5fa-4e80-bb39-b1073e4e56f8/f624dd73%2Dc40e%2D47c6%2Daaab%2Dc56326b38271.%5Fci%2D4459.0.0%2Dn%2D140c1315ab?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Sep 16 04:57:15.422868 coreos-metadata[1680]: Sep 16 04:57:15.422 INFO Fetch successful Sep 16 04:57:15.423070 coreos-metadata[1680]: Sep 16 04:57:15.423 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Sep 16 04:57:15.433001 coreos-metadata[1680]: Sep 16 04:57:15.432 INFO Fetch successful Sep 16 04:57:15.488911 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 16 04:57:15.492746 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 16 04:57:15.623487 tar[1705]: linux-amd64/README.md Sep 16 04:57:15.630823 locksmithd[1777]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 16 04:57:15.638069 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 16 04:57:15.902845 sshd_keygen[1725]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 16 04:57:15.924806 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 16 04:57:15.927959 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 16 04:57:15.933453 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Sep 16 04:57:15.950760 systemd[1]: issuegen.service: Deactivated successfully. Sep 16 04:57:15.950996 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 16 04:57:15.958295 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 16 04:57:15.967201 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Sep 16 04:57:15.996613 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 16 04:57:16.003343 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 16 04:57:16.006893 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 16 04:57:16.012343 systemd[1]: Reached target getty.target - Login Prompts. 
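The last coreos-metadata fetch above goes to the instance metadata service (169.254.169.254) rather than the wireserver. IMDS requires the "Metadata: true" request header and refuses proxied requests; a minimal sketch with the URL taken verbatim from the log:

    import urllib.request

    URL = ("http://169.254.169.254/metadata/instance/compute/vmSize"
           "?api-version=2017-08-01&format=text")

    # IMDS rejects requests that lack the Metadata header, so set it explicitly.
    request = urllib.request.Request(URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(request, timeout=5) as response:
        print(response.read().decode())  # the VM size, e.g. "Standard_D2s_v3" (illustrative)
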
Sep 16 04:57:16.083885 containerd[1722]: time="2025-09-16T04:57:16Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 16 04:57:16.084748 containerd[1722]: time="2025-09-16T04:57:16.084720930Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 16 04:57:16.095837 containerd[1722]: time="2025-09-16T04:57:16.095099599Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.449µs" Sep 16 04:57:16.095837 containerd[1722]: time="2025-09-16T04:57:16.095129942Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 16 04:57:16.095837 containerd[1722]: time="2025-09-16T04:57:16.095148381Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 16 04:57:16.095837 containerd[1722]: time="2025-09-16T04:57:16.095268061Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 16 04:57:16.095837 containerd[1722]: time="2025-09-16T04:57:16.095284427Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 16 04:57:16.095837 containerd[1722]: time="2025-09-16T04:57:16.095306203Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 16 04:57:16.095837 containerd[1722]: time="2025-09-16T04:57:16.095348098Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 16 04:57:16.095837 containerd[1722]: time="2025-09-16T04:57:16.095357310Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 16 04:57:16.095837 containerd[1722]: time="2025-09-16T04:57:16.095545747Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 16 04:57:16.095837 containerd[1722]: time="2025-09-16T04:57:16.095555725Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 16 04:57:16.095837 containerd[1722]: time="2025-09-16T04:57:16.095569625Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 16 04:57:16.095837 containerd[1722]: time="2025-09-16T04:57:16.095578077Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 16 04:57:16.096124 containerd[1722]: time="2025-09-16T04:57:16.095633925Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 16 04:57:16.096124 containerd[1722]: time="2025-09-16T04:57:16.095779011Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 16 04:57:16.096124 containerd[1722]: time="2025-09-16T04:57:16.095798748Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" 
id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 16 04:57:16.096124 containerd[1722]: time="2025-09-16T04:57:16.095808155Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 16 04:57:16.096124 containerd[1722]: time="2025-09-16T04:57:16.095854651Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 16 04:57:16.096908 containerd[1722]: time="2025-09-16T04:57:16.096716817Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 16 04:57:16.096908 containerd[1722]: time="2025-09-16T04:57:16.096800530Z" level=info msg="metadata content store policy set" policy=shared Sep 16 04:57:16.117189 containerd[1722]: time="2025-09-16T04:57:16.117161848Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 16 04:57:16.117310 containerd[1722]: time="2025-09-16T04:57:16.117296231Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 16 04:57:16.117393 containerd[1722]: time="2025-09-16T04:57:16.117383610Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 16 04:57:16.117450 containerd[1722]: time="2025-09-16T04:57:16.117441675Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 16 04:57:16.117502 containerd[1722]: time="2025-09-16T04:57:16.117494049Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 16 04:57:16.117537 containerd[1722]: time="2025-09-16T04:57:16.117530551Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 16 04:57:16.117572 containerd[1722]: time="2025-09-16T04:57:16.117565933Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 16 04:57:16.117606 containerd[1722]: time="2025-09-16T04:57:16.117599631Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 16 04:57:16.117640 containerd[1722]: time="2025-09-16T04:57:16.117634013Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 16 04:57:16.117675 containerd[1722]: time="2025-09-16T04:57:16.117669296Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 16 04:57:16.117711 containerd[1722]: time="2025-09-16T04:57:16.117704694Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 16 04:57:16.117746 containerd[1722]: time="2025-09-16T04:57:16.117739933Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 16 04:57:16.117888 containerd[1722]: time="2025-09-16T04:57:16.117879702Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 16 04:57:16.117928 containerd[1722]: time="2025-09-16T04:57:16.117920512Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 16 04:57:16.117969 containerd[1722]: time="2025-09-16T04:57:16.117962441Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 16 04:57:16.118008 containerd[1722]: time="2025-09-16T04:57:16.118001261Z" level=info msg="loading 
plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 16 04:57:16.118047 containerd[1722]: time="2025-09-16T04:57:16.118040045Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 16 04:57:16.118081 containerd[1722]: time="2025-09-16T04:57:16.118074239Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 16 04:57:16.118142 containerd[1722]: time="2025-09-16T04:57:16.118134266Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 16 04:57:16.118185 containerd[1722]: time="2025-09-16T04:57:16.118177282Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 16 04:57:16.118226 containerd[1722]: time="2025-09-16T04:57:16.118217988Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 16 04:57:16.118274 containerd[1722]: time="2025-09-16T04:57:16.118266305Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 16 04:57:16.118320 containerd[1722]: time="2025-09-16T04:57:16.118311957Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 16 04:57:16.118430 containerd[1722]: time="2025-09-16T04:57:16.118420055Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 16 04:57:16.118483 containerd[1722]: time="2025-09-16T04:57:16.118475442Z" level=info msg="Start snapshots syncer" Sep 16 04:57:16.118541 containerd[1722]: time="2025-09-16T04:57:16.118532416Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 16 04:57:16.118897 containerd[1722]: time="2025-09-16T04:57:16.118862070Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 16 04:57:16.119111 containerd[1722]: time="2025-09-16T04:57:16.119081785Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 16 04:57:16.119241 containerd[1722]: time="2025-09-16T04:57:16.119228984Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 16 04:57:16.119427 containerd[1722]: time="2025-09-16T04:57:16.119415018Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 16 04:57:16.119510 containerd[1722]: time="2025-09-16T04:57:16.119500208Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 16 04:57:16.119547 containerd[1722]: time="2025-09-16T04:57:16.119540586Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 16 04:57:16.119581 containerd[1722]: time="2025-09-16T04:57:16.119573733Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 16 04:57:16.119626 containerd[1722]: time="2025-09-16T04:57:16.119618050Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 16 04:57:16.119661 containerd[1722]: time="2025-09-16T04:57:16.119654781Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 16 04:57:16.119695 containerd[1722]: time="2025-09-16T04:57:16.119687177Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 16 04:57:16.119739 containerd[1722]: time="2025-09-16T04:57:16.119733708Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 16 04:57:16.119782 containerd[1722]: 
time="2025-09-16T04:57:16.119769454Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 16 04:57:16.119823 containerd[1722]: time="2025-09-16T04:57:16.119815308Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 16 04:57:16.119885 containerd[1722]: time="2025-09-16T04:57:16.119876779Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 16 04:57:16.120104 containerd[1722]: time="2025-09-16T04:57:16.119946565Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 16 04:57:16.120104 containerd[1722]: time="2025-09-16T04:57:16.119956055Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 16 04:57:16.120104 containerd[1722]: time="2025-09-16T04:57:16.119965152Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 16 04:57:16.120104 containerd[1722]: time="2025-09-16T04:57:16.119972697Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 16 04:57:16.120104 containerd[1722]: time="2025-09-16T04:57:16.119981578Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 16 04:57:16.120104 containerd[1722]: time="2025-09-16T04:57:16.119991171Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 16 04:57:16.120104 containerd[1722]: time="2025-09-16T04:57:16.120011545Z" level=info msg="runtime interface created" Sep 16 04:57:16.120104 containerd[1722]: time="2025-09-16T04:57:16.120017395Z" level=info msg="created NRI interface" Sep 16 04:57:16.120104 containerd[1722]: time="2025-09-16T04:57:16.120025981Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 16 04:57:16.120104 containerd[1722]: time="2025-09-16T04:57:16.120042885Z" level=info msg="Connect containerd service" Sep 16 04:57:16.120104 containerd[1722]: time="2025-09-16T04:57:16.120066389Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 16 04:57:16.120963 containerd[1722]: time="2025-09-16T04:57:16.120921289Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 16 04:57:16.179845 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 16 04:57:16.183338 (kubelet)[1830]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:57:16.609161 containerd[1722]: time="2025-09-16T04:57:16.608818233Z" level=info msg="Start subscribing containerd event" Sep 16 04:57:16.609161 containerd[1722]: time="2025-09-16T04:57:16.608866762Z" level=info msg="Start recovering state" Sep 16 04:57:16.609161 containerd[1722]: time="2025-09-16T04:57:16.608956739Z" level=info msg="Start event monitor" Sep 16 04:57:16.609161 containerd[1722]: time="2025-09-16T04:57:16.608967600Z" level=info msg="Start cni network conf syncer for default" Sep 16 04:57:16.609161 containerd[1722]: time="2025-09-16T04:57:16.608993420Z" level=info msg="Start streaming server" Sep 16 04:57:16.609161 containerd[1722]: time="2025-09-16T04:57:16.609012385Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 16 04:57:16.609161 containerd[1722]: time="2025-09-16T04:57:16.609024486Z" level=info msg="runtime interface starting up..." Sep 16 04:57:16.609161 containerd[1722]: time="2025-09-16T04:57:16.609030469Z" level=info msg="starting plugins..." Sep 16 04:57:16.609161 containerd[1722]: time="2025-09-16T04:57:16.609041424Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 16 04:57:16.609526 containerd[1722]: time="2025-09-16T04:57:16.609514199Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 16 04:57:16.609600 containerd[1722]: time="2025-09-16T04:57:16.609590614Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 16 04:57:16.610124 containerd[1722]: time="2025-09-16T04:57:16.609989444Z" level=info msg="containerd successfully booted in 0.526434s" Sep 16 04:57:16.611113 systemd[1]: Started containerd.service - containerd container runtime. Sep 16 04:57:16.613718 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 16 04:57:16.617122 systemd[1]: Startup finished in 2.888s (kernel) + 14.009s (initrd) + 16.818s (userspace) = 33.717s. Sep 16 04:57:16.645646 kubelet[1830]: E0916 04:57:16.645611 1830 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:57:16.647349 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:57:16.647474 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 04:57:16.647709 systemd[1]: kubelet.service: Consumed 924ms CPU time, 267.5M memory peak. Sep 16 04:57:17.307969 login[1820]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Sep 16 04:57:17.308152 login[1819]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 16 04:57:17.317426 systemd-logind[1696]: New session 2 of user core. Sep 16 04:57:17.318333 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 16 04:57:17.319685 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 16 04:57:17.353357 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 16 04:57:17.355325 systemd[1]: Starting user@500.service - User Manager for UID 500... 
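The "Startup finished" line above can be checked mechanically: the per-phase durations re-add to the printed total up to rounding. A short sketch, with the line copied from the log:

    import re

    line = ("Startup finished in 2.888s (kernel) + 14.009s (initrd) "
            "+ 16.818s (userspace) = 33.717s")

    phases = {name: float(sec)
              for sec, name in re.findall(r"([\d.]+)s \((\w+)\)", line)}
    total = float(re.search(r"= ([\d.]+)s", line).group(1))

    print(phases)                       # {'kernel': 2.888, 'initrd': 14.009, 'userspace': 16.818}
    print(sum(phases.values()), total)  # 33.715 vs 33.717: each phase is rounded, so the sum is 2 ms off
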
Sep 16 04:57:17.384887 (systemd)[1852]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 16 04:57:17.386522 systemd-logind[1696]: New session c1 of user core. Sep 16 04:57:17.683062 systemd[1852]: Queued start job for default target default.target. Sep 16 04:57:17.687771 systemd[1852]: Created slice app.slice - User Application Slice. Sep 16 04:57:17.687787 systemd[1852]: Reached target paths.target - Paths. Sep 16 04:57:17.687815 systemd[1852]: Reached target timers.target - Timers. Sep 16 04:57:17.689148 systemd[1852]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 16 04:57:17.698143 systemd[1852]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 16 04:57:17.698261 systemd[1852]: Reached target sockets.target - Sockets. Sep 16 04:57:17.698289 systemd[1852]: Reached target basic.target - Basic System. Sep 16 04:57:17.698311 systemd[1852]: Reached target default.target - Main User Target. Sep 16 04:57:17.698331 systemd[1852]: Startup finished in 307ms. Sep 16 04:57:17.698473 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 16 04:57:17.708221 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 16 04:57:18.007629 waagent[1817]: 2025-09-16T04:57:18.007531Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Sep 16 04:57:18.007974 waagent[1817]: 2025-09-16T04:57:18.007932Z INFO Daemon Daemon OS: flatcar 4459.0.0 Sep 16 04:57:18.008065 waagent[1817]: 2025-09-16T04:57:18.008040Z INFO Daemon Daemon Python: 3.11.13 Sep 16 04:57:18.011645 waagent[1817]: 2025-09-16T04:57:18.011604Z INFO Daemon Daemon Run daemon Sep 16 04:57:18.012775 waagent[1817]: 2025-09-16T04:57:18.012735Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.0.0' Sep 16 04:57:18.017103 waagent[1817]: 2025-09-16T04:57:18.014945Z INFO Daemon Daemon Using waagent for provisioning Sep 16 04:57:18.017103 waagent[1817]: 2025-09-16T04:57:18.015474Z INFO Daemon Daemon Activate resource disk Sep 16 04:57:18.017103 waagent[1817]: 2025-09-16T04:57:18.015680Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Sep 16 04:57:18.027831 waagent[1817]: 2025-09-16T04:57:18.017298Z INFO Daemon Daemon Found device: None Sep 16 04:57:18.027831 waagent[1817]: 2025-09-16T04:57:18.017427Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Sep 16 04:57:18.027831 waagent[1817]: 2025-09-16T04:57:18.017486Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Sep 16 04:57:18.027831 waagent[1817]: 2025-09-16T04:57:18.018513Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 16 04:57:18.027831 waagent[1817]: 2025-09-16T04:57:18.018791Z INFO Daemon Daemon Running default provisioning handler Sep 16 04:57:18.027831 waagent[1817]: 2025-09-16T04:57:18.024856Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Sep 16 04:57:18.027831 waagent[1817]: 2025-09-16T04:57:18.025337Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 16 04:57:18.027831 waagent[1817]: 2025-09-16T04:57:18.025431Z INFO Daemon Daemon cloud-init is enabled: False Sep 16 04:57:18.027831 waagent[1817]: 2025-09-16T04:57:18.026016Z INFO Daemon Daemon Copying ovf-env.xml Sep 16 04:57:18.142584 waagent[1817]: 2025-09-16T04:57:18.142221Z INFO Daemon Daemon Successfully mounted dvd Sep 16 04:57:18.180855 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Sep 16 04:57:18.182699 waagent[1817]: 2025-09-16T04:57:18.182651Z INFO Daemon Daemon Detect protocol endpoint Sep 16 04:57:18.186457 waagent[1817]: 2025-09-16T04:57:18.183290Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 16 04:57:18.186457 waagent[1817]: 2025-09-16T04:57:18.183930Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Sep 16 04:57:18.186457 waagent[1817]: 2025-09-16T04:57:18.184212Z INFO Daemon Daemon Test for route to 168.63.129.16 Sep 16 04:57:18.186457 waagent[1817]: 2025-09-16T04:57:18.184356Z INFO Daemon Daemon Route to 168.63.129.16 exists Sep 16 04:57:18.186457 waagent[1817]: 2025-09-16T04:57:18.184632Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Sep 16 04:57:18.198180 waagent[1817]: 2025-09-16T04:57:18.198152Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Sep 16 04:57:18.199340 waagent[1817]: 2025-09-16T04:57:18.198796Z INFO Daemon Daemon Wire protocol version:2012-11-30 Sep 16 04:57:18.199340 waagent[1817]: 2025-09-16T04:57:18.198971Z INFO Daemon Daemon Server preferred version:2015-04-05 Sep 16 04:57:18.289414 waagent[1817]: 2025-09-16T04:57:18.289320Z INFO Daemon Daemon Initializing goal state during protocol detection Sep 16 04:57:18.291482 waagent[1817]: 2025-09-16T04:57:18.289597Z INFO Daemon Daemon Forcing an update of the goal state. Sep 16 04:57:18.298575 waagent[1817]: 2025-09-16T04:57:18.298543Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 16 04:57:18.309568 login[1820]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 16 04:57:18.313387 systemd-logind[1696]: New session 1 of user core. Sep 16 04:57:18.317920 waagent[1817]: 2025-09-16T04:57:18.317892Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Sep 16 04:57:18.319530 waagent[1817]: 2025-09-16T04:57:18.319502Z INFO Daemon Sep 16 04:57:18.320217 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 16 04:57:18.320415 waagent[1817]: 2025-09-16T04:57:18.319761Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: fa865ba1-6210-4e69-a551-7f73cc2f7169 eTag: 11969137161864023523 source: Fabric] Sep 16 04:57:18.327103 waagent[1817]: 2025-09-16T04:57:18.327051Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Sep 16 04:57:18.329709 waagent[1817]: 2025-09-16T04:57:18.329672Z INFO Daemon Sep 16 04:57:18.330513 waagent[1817]: 2025-09-16T04:57:18.330447Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Sep 16 04:57:18.335771 waagent[1817]: 2025-09-16T04:57:18.335745Z INFO Daemon Daemon Downloading artifacts profile blob Sep 16 04:57:18.483918 waagent[1817]: 2025-09-16T04:57:18.483875Z INFO Daemon Downloaded certificate {'thumbprint': '1D41FBACDAA333BCB5EA6908567E06B0BDAF35D1', 'hasPrivateKey': True} Sep 16 04:57:18.486119 waagent[1817]: 2025-09-16T04:57:18.484633Z INFO Daemon Fetch goal state completed Sep 16 04:57:18.535779 waagent[1817]: 2025-09-16T04:57:18.535722Z INFO Daemon Daemon Starting provisioning Sep 16 04:57:18.536841 waagent[1817]: 2025-09-16T04:57:18.536220Z INFO Daemon Daemon Handle ovf-env.xml. Sep 16 04:57:18.536841 waagent[1817]: 2025-09-16T04:57:18.536407Z INFO Daemon Daemon Set hostname [ci-4459.0.0-n-140c1315ab] Sep 16 04:57:18.563967 waagent[1817]: 2025-09-16T04:57:18.563930Z INFO Daemon Daemon Publish hostname [ci-4459.0.0-n-140c1315ab] Sep 16 04:57:18.565491 waagent[1817]: 2025-09-16T04:57:18.565457Z INFO Daemon Daemon Examine /proc/net/route for primary interface Sep 16 04:57:18.566728 waagent[1817]: 2025-09-16T04:57:18.566701Z INFO Daemon Daemon Primary interface is [eth0] Sep 16 04:57:18.587884 systemd-networkd[1586]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:57:18.587890 systemd-networkd[1586]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 16 04:57:18.587912 systemd-networkd[1586]: eth0: DHCP lease lost Sep 16 04:57:18.588663 waagent[1817]: 2025-09-16T04:57:18.588616Z INFO Daemon Daemon Create user account if not exists Sep 16 04:57:18.590112 waagent[1817]: 2025-09-16T04:57:18.589159Z INFO Daemon Daemon User core already exists, skip useradd Sep 16 04:57:18.590112 waagent[1817]: 2025-09-16T04:57:18.589380Z INFO Daemon Daemon Configure sudoer Sep 16 04:57:18.593475 waagent[1817]: 2025-09-16T04:57:18.593353Z INFO Daemon Daemon Configure sshd Sep 16 04:57:18.597462 waagent[1817]: 2025-09-16T04:57:18.597424Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Sep 16 04:57:18.601840 waagent[1817]: 2025-09-16T04:57:18.597933Z INFO Daemon Daemon Deploy ssh public key. Sep 16 04:57:18.608120 systemd-networkd[1586]: eth0: DHCPv4 address 10.200.8.38/24, gateway 10.200.8.1 acquired from 168.63.129.16 Sep 16 04:57:19.707786 waagent[1817]: 2025-09-16T04:57:19.707739Z INFO Daemon Daemon Provisioning complete Sep 16 04:57:19.716810 waagent[1817]: 2025-09-16T04:57:19.716780Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Sep 16 04:57:19.718238 waagent[1817]: 2025-09-16T04:57:19.718208Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
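The goal-state fetches above are plain HTTP against the wireserver, with the negotiated protocol from the "Wire protocol version:2012-11-30" line carried in the x-ms-version header. A hedged sketch of that exchange, using the endpoint and URL exactly as logged earlier:

    import urllib.request

    WIRESERVER = "168.63.129.16"

    request = urllib.request.Request(
        f"http://{WIRESERVER}/machine/?comp=goalstate",
        headers={"x-ms-version": "2012-11-30"},  # protocol version from the log
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        xml = response.read().decode()
    print(xml[:200])  # an XML <GoalState> document carrying the incarnation number
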
Sep 16 04:57:19.720449 waagent[1817]: 2025-09-16T04:57:19.720424Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Sep 16 04:57:19.813323 waagent[1902]: 2025-09-16T04:57:19.813266Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Sep 16 04:57:19.813584 waagent[1902]: 2025-09-16T04:57:19.813351Z INFO ExtHandler ExtHandler OS: flatcar 4459.0.0 Sep 16 04:57:19.813584 waagent[1902]: 2025-09-16T04:57:19.813388Z INFO ExtHandler ExtHandler Python: 3.11.13 Sep 16 04:57:19.813584 waagent[1902]: 2025-09-16T04:57:19.813441Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Sep 16 04:57:19.860369 waagent[1902]: 2025-09-16T04:57:19.860327Z INFO ExtHandler ExtHandler Distro: flatcar-4459.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Sep 16 04:57:19.860479 waagent[1902]: 2025-09-16T04:57:19.860457Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 16 04:57:19.860536 waagent[1902]: 2025-09-16T04:57:19.860503Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 16 04:57:19.871185 waagent[1902]: 2025-09-16T04:57:19.871132Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 16 04:57:19.877688 waagent[1902]: 2025-09-16T04:57:19.877657Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Sep 16 04:57:19.877989 waagent[1902]: 2025-09-16T04:57:19.877964Z INFO ExtHandler Sep 16 04:57:19.878031 waagent[1902]: 2025-09-16T04:57:19.878013Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: ead8e2b2-0427-46a1-9a84-6f437938873d eTag: 11969137161864023523 source: Fabric] Sep 16 04:57:19.878253 waagent[1902]: 2025-09-16T04:57:19.878232Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Sep 16 04:57:19.878556 waagent[1902]: 2025-09-16T04:57:19.878535Z INFO ExtHandler Sep 16 04:57:19.878594 waagent[1902]: 2025-09-16T04:57:19.878570Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Sep 16 04:57:19.882505 waagent[1902]: 2025-09-16T04:57:19.882482Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Sep 16 04:57:19.942990 waagent[1902]: 2025-09-16T04:57:19.942946Z INFO ExtHandler Downloaded certificate {'thumbprint': '1D41FBACDAA333BCB5EA6908567E06B0BDAF35D1', 'hasPrivateKey': True} Sep 16 04:57:19.943307 waagent[1902]: 2025-09-16T04:57:19.943281Z INFO ExtHandler Fetch goal state completed Sep 16 04:57:19.953942 waagent[1902]: 2025-09-16T04:57:19.953900Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025) Sep 16 04:57:19.957639 waagent[1902]: 2025-09-16T04:57:19.957596Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1902 Sep 16 04:57:19.957733 waagent[1902]: 2025-09-16T04:57:19.957699Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Sep 16 04:57:19.957951 waagent[1902]: 2025-09-16T04:57:19.957913Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Sep 16 04:57:19.958829 waagent[1902]: 2025-09-16T04:57:19.958795Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.0.0', '', 'Flatcar Container Linux by Kinvolk'] Sep 16 04:57:19.959095 waagent[1902]: 2025-09-16T04:57:19.959058Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Sep 16 04:57:19.959185 waagent[1902]: 2025-09-16T04:57:19.959164Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Sep 16 04:57:19.959512 waagent[1902]: 2025-09-16T04:57:19.959490Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Sep 16 04:57:20.042472 waagent[1902]: 2025-09-16T04:57:20.042448Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 16 04:57:20.042602 waagent[1902]: 2025-09-16T04:57:20.042583Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 16 04:57:20.047675 waagent[1902]: 2025-09-16T04:57:20.047325Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Sep 16 04:57:20.051925 systemd[1]: Reload requested from client PID 1917 ('systemctl') (unit waagent.service)... Sep 16 04:57:20.051973 systemd[1]: Reloading... Sep 16 04:57:20.117105 zram_generator::config[1952]: No configuration found. Sep 16 04:57:20.246101 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#313 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Sep 16 04:57:20.290693 systemd[1]: Reloading finished in 238 ms. Sep 16 04:57:20.303102 waagent[1902]: 2025-09-16T04:57:20.300897Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Sep 16 04:57:20.303102 waagent[1902]: 2025-09-16T04:57:20.300985Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Sep 16 04:57:20.695631 waagent[1902]: 2025-09-16T04:57:20.695544Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
Sep 16 04:57:20.695829 waagent[1902]: 2025-09-16T04:57:20.695803Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Sep 16 04:57:20.696592 waagent[1902]: 2025-09-16T04:57:20.696452Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 16 04:57:20.696592 waagent[1902]: 2025-09-16T04:57:20.696502Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 16 04:57:20.696736 waagent[1902]: 2025-09-16T04:57:20.696710Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 16 04:57:20.696927 waagent[1902]: 2025-09-16T04:57:20.696907Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Sep 16 04:57:20.697206 waagent[1902]: 2025-09-16T04:57:20.697168Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Sep 16 04:57:20.697257 waagent[1902]: 2025-09-16T04:57:20.697235Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 16 04:57:20.697257 waagent[1902]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 16 04:57:20.697257 waagent[1902]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Sep 16 04:57:20.697257 waagent[1902]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 16 04:57:20.697257 waagent[1902]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 16 04:57:20.697257 waagent[1902]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 16 04:57:20.697257 waagent[1902]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 16 04:57:20.697533 waagent[1902]: 2025-09-16T04:57:20.697415Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 16 04:57:20.697558 waagent[1902]: 2025-09-16T04:57:20.697529Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 16 04:57:20.697584 waagent[1902]: 2025-09-16T04:57:20.697566Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 16 04:57:20.697694 waagent[1902]: 2025-09-16T04:57:20.697671Z INFO EnvHandler ExtHandler Configure routes Sep 16 04:57:20.697734 waagent[1902]: 2025-09-16T04:57:20.697716Z INFO EnvHandler ExtHandler Gateway:None Sep 16 04:57:20.697824 waagent[1902]: 2025-09-16T04:57:20.697754Z INFO EnvHandler ExtHandler Routes:None Sep 16 04:57:20.697824 waagent[1902]: 2025-09-16T04:57:20.697776Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Sep 16 04:57:20.698251 waagent[1902]: 2025-09-16T04:57:20.698224Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 16 04:57:20.698310 waagent[1902]: 2025-09-16T04:57:20.698292Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 16 04:57:20.698699 waagent[1902]: 2025-09-16T04:57:20.698659Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
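The routing table above is a verbatim /proc/net/route dump, where Destination, Gateway and Mask are hexadecimal IPv4 values in host byte order (little-endian on this x86-64 VM). Decoding them recovers the same addresses seen in the DHCP lease and metadata fetches:

    import socket
    import struct

    def hex_ip(value):
        # /proc/net/route stores addresses as host-order (little-endian) hex.
        return socket.inet_ntoa(struct.pack("<I", int(value, 16)))

    print(hex_ip("00000000"))  # 0.0.0.0         -> default-route destination
    print(hex_ip("0008C80A"))  # 10.200.8.0      -> the on-link /24
    print(hex_ip("0108C80A"))  # 10.200.8.1      -> the DHCP gateway
    print(hex_ip("10813FA8"))  # 168.63.129.16   -> host route to the wireserver
    print(hex_ip("FEA9FEA9"))  # 169.254.169.254 -> host route to IMDS
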
Sep 16 04:57:20.707946 waagent[1902]: 2025-09-16T04:57:20.707914Z INFO ExtHandler ExtHandler Sep 16 04:57:20.708002 waagent[1902]: 2025-09-16T04:57:20.707969Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 1a84fc57-166c-4cc5-afb0-6a995f7573bd correlation 54c244b7-b872-4e56-a310-e21c88267f31 created: 2025-09-16T04:56:04.295128Z] Sep 16 04:57:20.708254 waagent[1902]: 2025-09-16T04:57:20.708229Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Sep 16 04:57:20.708579 waagent[1902]: 2025-09-16T04:57:20.708560Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Sep 16 04:57:20.738726 waagent[1902]: 2025-09-16T04:57:20.738678Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Sep 16 04:57:20.738726 waagent[1902]: Try `iptables -h' or 'iptables --help' for more information.) Sep 16 04:57:20.739005 waagent[1902]: 2025-09-16T04:57:20.738977Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 4ADAA03B-4599-49B0-BE9D-ABECAFC24CD3;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Sep 16 04:57:20.811125 waagent[1902]: 2025-09-16T04:57:20.811038Z INFO MonitorHandler ExtHandler Network interfaces: Sep 16 04:57:20.811125 waagent[1902]: Executing ['ip', '-a', '-o', 'link']: Sep 16 04:57:20.811125 waagent[1902]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 16 04:57:20.811125 waagent[1902]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:73:f4:01 brd ff:ff:ff:ff:ff:ff\ alias Network Device Sep 16 04:57:20.811125 waagent[1902]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:73:f4:01 brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Sep 16 04:57:20.811125 waagent[1902]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 16 04:57:20.811125 waagent[1902]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 16 04:57:20.811125 waagent[1902]: 2: eth0 inet 10.200.8.38/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 16 04:57:20.811125 waagent[1902]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 16 04:57:20.811125 waagent[1902]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Sep 16 04:57:20.811125 waagent[1902]: 2: eth0 inet6 fe80::7eed:8dff:fe73:f401/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Sep 16 04:57:20.811406 waagent[1902]: 2025-09-16T04:57:20.811227Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Sep 16 04:57:20.811406 waagent[1902]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 16 04:57:20.811406 waagent[1902]: pkts bytes target prot opt in out source destination Sep 16 04:57:20.811406 waagent[1902]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 16 04:57:20.811406 waagent[1902]: pkts bytes target prot opt in out source destination Sep 16 04:57:20.811406 waagent[1902]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 16 04:57:20.811406 waagent[1902]: pkts bytes target prot opt in out source destination Sep 16 
04:57:20.811406 waagent[1902]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 16 04:57:20.811406 waagent[1902]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 16 04:57:20.811406 waagent[1902]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 16 04:57:20.856738 waagent[1902]: 2025-09-16T04:57:20.856702Z INFO EnvHandler ExtHandler Current Firewall rules: Sep 16 04:57:20.856738 waagent[1902]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 16 04:57:20.856738 waagent[1902]: pkts bytes target prot opt in out source destination Sep 16 04:57:20.856738 waagent[1902]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 16 04:57:20.856738 waagent[1902]: pkts bytes target prot opt in out source destination Sep 16 04:57:20.856738 waagent[1902]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 16 04:57:20.856738 waagent[1902]: pkts bytes target prot opt in out source destination Sep 16 04:57:20.856738 waagent[1902]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 16 04:57:20.856738 waagent[1902]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 16 04:57:20.856738 waagent[1902]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 16 04:57:26.757869 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 16 04:57:26.759240 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:57:27.197011 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:57:27.205283 (kubelet)[2055]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:57:27.242027 kubelet[2055]: E0916 04:57:27.241977 2055 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:57:27.244647 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:57:27.244764 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 04:57:27.245029 systemd[1]: kubelet.service: Consumed 123ms CPU time, 110.8M memory peak. Sep 16 04:57:37.258119 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 16 04:57:37.259523 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:57:37.731398 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:57:37.734074 (kubelet)[2070]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:57:37.766162 kubelet[2070]: E0916 04:57:37.766112 2070 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:57:37.767760 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:57:37.767886 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 04:57:37.768192 systemd[1]: kubelet.service: Consumed 114ms CPU time, 108.7M memory peak. 
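The three OUTPUT rules shown above implement waagent's wireserver policy: DNS (tcp/53) to 168.63.129.16 is always allowed, root-owned traffic may reach it, and any other new or invalid connection is dropped. A sketch that would install equivalent rules in the same security table (equivalent to the displayed rules, not waagent's own code path; rule order matters, and it must run as root):

    import subprocess

    WIRESERVER = "168.63.129.16"
    RULES = [
        # Always allow DNS to the wireserver.
        ["-p", "tcp", "--dport", "53", "-j", "ACCEPT"],
        # Allow root-owned traffic (waagent itself runs as root).
        ["-p", "tcp", "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
        # Drop new or invalid connections from everyone else.
        ["-p", "tcp", "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
    ]

    for rule in RULES:
        subprocess.run(
            ["iptables", "-w", "-t", "security", "-A", "OUTPUT", "-d", WIRESERVER, *rule],
            check=True,
        )
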
Sep 16 04:57:38.682744 chronyd[1678]: Selected source PHC0 Sep 16 04:57:48.008067 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 16 04:57:48.009414 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:57:48.471928 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:57:48.476380 (kubelet)[2085]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:57:48.511409 kubelet[2085]: E0916 04:57:48.511377 2085 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:57:48.512832 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:57:48.512954 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 04:57:48.513250 systemd[1]: kubelet.service: Consumed 117ms CPU time, 110.3M memory peak. Sep 16 04:57:51.201416 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 16 04:57:51.202431 systemd[1]: Started sshd@0-10.200.8.38:22-10.200.16.10:51422.service - OpenSSH per-connection server daemon (10.200.16.10:51422). Sep 16 04:57:51.950936 sshd[2093]: Accepted publickey for core from 10.200.16.10 port 51422 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ Sep 16 04:57:51.951886 sshd-session[2093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:57:51.955726 systemd-logind[1696]: New session 3 of user core. Sep 16 04:57:51.961208 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 16 04:57:52.497352 systemd[1]: Started sshd@1-10.200.8.38:22-10.200.16.10:51436.service - OpenSSH per-connection server daemon (10.200.16.10:51436). Sep 16 04:57:53.126435 sshd[2099]: Accepted publickey for core from 10.200.16.10 port 51436 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ Sep 16 04:57:53.127292 sshd-session[2099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:57:53.130933 systemd-logind[1696]: New session 4 of user core. Sep 16 04:57:53.139193 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 16 04:57:53.576106 sshd[2102]: Connection closed by 10.200.16.10 port 51436 Sep 16 04:57:53.576490 sshd-session[2099]: pam_unix(sshd:session): session closed for user core Sep 16 04:57:53.578980 systemd[1]: sshd@1-10.200.8.38:22-10.200.16.10:51436.service: Deactivated successfully. Sep 16 04:57:53.580168 systemd[1]: session-4.scope: Deactivated successfully. Sep 16 04:57:53.580720 systemd-logind[1696]: Session 4 logged out. Waiting for processes to exit. Sep 16 04:57:53.581680 systemd-logind[1696]: Removed session 4. Sep 16 04:57:53.691051 systemd[1]: Started sshd@2-10.200.8.38:22-10.200.16.10:51438.service - OpenSSH per-connection server daemon (10.200.16.10:51438). Sep 16 04:57:54.314191 sshd[2108]: Accepted publickey for core from 10.200.16.10 port 51438 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ Sep 16 04:57:54.315032 sshd-session[2108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:57:54.318862 systemd-logind[1696]: New session 5 of user core. 
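The kubelet crash-loop above is expected at this stage: /var/lib/kubelet/config.yaml is only written once the node is bootstrapped (by kubeadm, for example), so each start fails and systemd schedules the next attempt. The "Scheduled restart job" timestamps sit roughly ten seconds apart, consistent with a fixed restart delay plus startup overhead:

    from datetime import datetime

    # "Scheduled restart job" timestamps copied from the log above.
    restarts = ["04:57:26.757869", "04:57:37.258119",
                "04:57:48.008067", "04:57:58.552941"]

    times = [datetime.strptime(t, "%H:%M:%S.%f") for t in restarts]
    gaps = [round((b - a).total_seconds(), 1) for a, b in zip(times, times[1:])]
    print(gaps)  # [10.5, 10.7, 10.5]
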
Sep 16 04:57:54.323224 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 16 04:57:54.752645 sshd[2111]: Connection closed by 10.200.16.10 port 51438 Sep 16 04:57:54.753233 sshd-session[2108]: pam_unix(sshd:session): session closed for user core Sep 16 04:57:54.755847 systemd[1]: sshd@2-10.200.8.38:22-10.200.16.10:51438.service: Deactivated successfully. Sep 16 04:57:54.757162 systemd[1]: session-5.scope: Deactivated successfully. Sep 16 04:57:54.757778 systemd-logind[1696]: Session 5 logged out. Waiting for processes to exit. Sep 16 04:57:54.758687 systemd-logind[1696]: Removed session 5. Sep 16 04:57:54.862242 systemd[1]: Started sshd@3-10.200.8.38:22-10.200.16.10:51446.service - OpenSSH per-connection server daemon (10.200.16.10:51446). Sep 16 04:57:55.493571 sshd[2117]: Accepted publickey for core from 10.200.16.10 port 51446 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ Sep 16 04:57:55.494386 sshd-session[2117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:57:55.498039 systemd-logind[1696]: New session 6 of user core. Sep 16 04:57:55.507217 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 16 04:57:55.934499 sshd[2120]: Connection closed by 10.200.16.10 port 51446 Sep 16 04:57:55.936472 sshd-session[2117]: pam_unix(sshd:session): session closed for user core Sep 16 04:57:55.938783 systemd[1]: sshd@3-10.200.8.38:22-10.200.16.10:51446.service: Deactivated successfully. Sep 16 04:57:55.940137 systemd[1]: session-6.scope: Deactivated successfully. Sep 16 04:57:55.941194 systemd-logind[1696]: Session 6 logged out. Waiting for processes to exit. Sep 16 04:57:55.941785 systemd-logind[1696]: Removed session 6. Sep 16 04:57:56.048014 systemd[1]: Started sshd@4-10.200.8.38:22-10.200.16.10:51456.service - OpenSSH per-connection server daemon (10.200.16.10:51456). Sep 16 04:57:56.672885 sshd[2126]: Accepted publickey for core from 10.200.16.10 port 51456 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ Sep 16 04:57:56.673822 sshd-session[2126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:57:56.677796 systemd-logind[1696]: New session 7 of user core. Sep 16 04:57:56.683251 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 16 04:57:56.698205 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Sep 16 04:57:57.196016 sudo[2130]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 16 04:57:57.196244 sudo[2130]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 04:57:57.222793 sudo[2130]: pam_unix(sudo:session): session closed for user root Sep 16 04:57:57.322342 sshd[2129]: Connection closed by 10.200.16.10 port 51456 Sep 16 04:57:57.322825 sshd-session[2126]: pam_unix(sshd:session): session closed for user core Sep 16 04:57:57.325316 systemd[1]: sshd@4-10.200.8.38:22-10.200.16.10:51456.service: Deactivated successfully. Sep 16 04:57:57.326591 systemd[1]: session-7.scope: Deactivated successfully. Sep 16 04:57:57.327662 systemd-logind[1696]: Session 7 logged out. Waiting for processes to exit. Sep 16 04:57:57.328783 systemd-logind[1696]: Removed session 7. Sep 16 04:57:57.432315 systemd[1]: Started sshd@5-10.200.8.38:22-10.200.16.10:51462.service - OpenSSH per-connection server daemon (10.200.16.10:51462). 
Sep 16 04:57:58.058548 sshd[2136]: Accepted publickey for core from 10.200.16.10 port 51462 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ Sep 16 04:57:58.059426 sshd-session[2136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:57:58.063266 systemd-logind[1696]: New session 8 of user core. Sep 16 04:57:58.067230 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 16 04:57:58.400169 sudo[2141]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 16 04:57:58.400395 sudo[2141]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 04:57:58.406602 sudo[2141]: pam_unix(sudo:session): session closed for user root Sep 16 04:57:58.410315 sudo[2140]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 16 04:57:58.410519 sudo[2140]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 04:57:58.417392 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 16 04:57:58.444157 augenrules[2163]: No rules Sep 16 04:57:58.444933 systemd[1]: audit-rules.service: Deactivated successfully. Sep 16 04:57:58.445075 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 16 04:57:58.445644 sudo[2140]: pam_unix(sudo:session): session closed for user root Sep 16 04:57:58.547748 sshd[2139]: Connection closed by 10.200.16.10 port 51462 Sep 16 04:57:58.548099 sshd-session[2136]: pam_unix(sshd:session): session closed for user core Sep 16 04:57:58.550777 systemd[1]: sshd@5-10.200.8.38:22-10.200.16.10:51462.service: Deactivated successfully. Sep 16 04:57:58.552037 systemd[1]: session-8.scope: Deactivated successfully. Sep 16 04:57:58.552941 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 16 04:57:58.554080 systemd-logind[1696]: Session 8 logged out. Waiting for processes to exit. Sep 16 04:57:58.555194 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:57:58.556075 systemd-logind[1696]: Removed session 8. Sep 16 04:57:58.657467 systemd[1]: Started sshd@6-10.200.8.38:22-10.200.16.10:51476.service - OpenSSH per-connection server daemon (10.200.16.10:51476). Sep 16 04:57:59.033860 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:57:59.039322 (kubelet)[2183]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:57:59.072654 kubelet[2183]: E0916 04:57:59.072623 2183 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:57:59.074131 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:57:59.074259 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 04:57:59.074637 systemd[1]: kubelet.service: Consumed 116ms CPU time, 107.6M memory peak. 
Sep 16 04:57:59.283221 sshd[2175]: Accepted publickey for core from 10.200.16.10 port 51476 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ Sep 16 04:57:59.284156 sshd-session[2175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:57:59.288139 systemd-logind[1696]: New session 9 of user core. Sep 16 04:57:59.293207 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 16 04:57:59.624581 sudo[2191]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 16 04:57:59.624787 sudo[2191]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 04:58:00.901455 update_engine[1697]: I20250916 04:58:00.901398 1697 update_attempter.cc:509] Updating boot flags... Sep 16 04:58:01.327048 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 16 04:58:01.335345 (dockerd)[2232]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 16 04:58:02.749139 dockerd[2232]: time="2025-09-16T04:58:02.748884497Z" level=info msg="Starting up" Sep 16 04:58:02.752053 dockerd[2232]: time="2025-09-16T04:58:02.752023670Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 16 04:58:02.760535 dockerd[2232]: time="2025-09-16T04:58:02.760499698Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 16 04:58:02.870032 dockerd[2232]: time="2025-09-16T04:58:02.870005864Z" level=info msg="Loading containers: start." Sep 16 04:58:02.953114 kernel: Initializing XFRM netlink socket Sep 16 04:58:03.440829 systemd-networkd[1586]: docker0: Link UP Sep 16 04:58:03.453568 dockerd[2232]: time="2025-09-16T04:58:03.453541894Z" level=info msg="Loading containers: done." Sep 16 04:58:03.494001 dockerd[2232]: time="2025-09-16T04:58:03.493974161Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 16 04:58:03.494122 dockerd[2232]: time="2025-09-16T04:58:03.494033679Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 16 04:58:03.494122 dockerd[2232]: time="2025-09-16T04:58:03.494111477Z" level=info msg="Initializing buildkit" Sep 16 04:58:03.530916 dockerd[2232]: time="2025-09-16T04:58:03.530880946Z" level=info msg="Completed buildkit initialization" Sep 16 04:58:03.537205 dockerd[2232]: time="2025-09-16T04:58:03.537170090Z" level=info msg="Daemon has completed initialization" Sep 16 04:58:03.537644 dockerd[2232]: time="2025-09-16T04:58:03.537577959Z" level=info msg="API listen on /run/docker.sock" Sep 16 04:58:03.537350 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 16 04:58:04.622814 containerd[1722]: time="2025-09-16T04:58:04.622780193Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Sep 16 04:58:05.385135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3497842905.mount: Deactivated successfully. 
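The dockerd startup above ends with the daemon serving on /run/docker.sock using the overlay2 storage driver, with a native-diff warning tied to CONFIG_OVERLAY_FS_REDIRECT_DIR. A sketch that reads the same facts back through the Docker Engine Go SDK (github.com/docker/docker/client); it assumes a reachable daemon on the default socket:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	info, err := cli.Info(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	// Mirrors the "Docker daemon" log line: server version and storage driver.
	fmt.Printf("version=%s storage-driver=%s\n", info.ServerVersion, info.Driver)
}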
Sep 16 04:58:06.647018 containerd[1722]: time="2025-09-16T04:58:06.646978463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:06.649299 containerd[1722]: time="2025-09-16T04:58:06.649269277Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114901" Sep 16 04:58:06.651999 containerd[1722]: time="2025-09-16T04:58:06.651961252Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:06.655985 containerd[1722]: time="2025-09-16T04:58:06.655808257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:06.656443 containerd[1722]: time="2025-09-16T04:58:06.656419938Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.033606363s" Sep 16 04:58:06.656481 containerd[1722]: time="2025-09-16T04:58:06.656454552Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Sep 16 04:58:06.657209 containerd[1722]: time="2025-09-16T04:58:06.657175603Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Sep 16 04:58:08.187949 containerd[1722]: time="2025-09-16T04:58:08.187908773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:08.190341 containerd[1722]: time="2025-09-16T04:58:08.190309732Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020852" Sep 16 04:58:08.192889 containerd[1722]: time="2025-09-16T04:58:08.192853407Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:08.196958 containerd[1722]: time="2025-09-16T04:58:08.196908040Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:08.197988 containerd[1722]: time="2025-09-16T04:58:08.197561181Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.540359296s" Sep 16 04:58:08.197988 containerd[1722]: time="2025-09-16T04:58:08.197592330Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Sep 16 04:58:08.198205 
containerd[1722]: time="2025-09-16T04:58:08.198078040Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Sep 16 04:58:09.257886 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Sep 16 04:58:09.260344 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:58:09.725465 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:58:09.728588 (kubelet)[2511]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:58:09.761592 kubelet[2511]: E0916 04:58:09.761559 2511 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:58:09.763014 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:58:09.763166 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 04:58:09.763460 systemd[1]: kubelet.service: Consumed 125ms CPU time, 108.6M memory peak. Sep 16 04:58:09.811881 containerd[1722]: time="2025-09-16T04:58:09.811623766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:09.814111 containerd[1722]: time="2025-09-16T04:58:09.814029023Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155576" Sep 16 04:58:09.816705 containerd[1722]: time="2025-09-16T04:58:09.816666558Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:09.820434 containerd[1722]: time="2025-09-16T04:58:09.820393602Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:09.821100 containerd[1722]: time="2025-09-16T04:58:09.821023146Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.622904764s" Sep 16 04:58:09.821100 containerd[1722]: time="2025-09-16T04:58:09.821051218Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Sep 16 04:58:09.821756 containerd[1722]: time="2025-09-16T04:58:09.821737955Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Sep 16 04:58:11.092601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4271375828.mount: Deactivated successfully. 
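Each PullImage / ImageCreate pair above is containerd resolving a tag to a digest, fetching the layers, and unpacking them into the k8s.io namespace that the CRI plugin uses. A hedged sketch of an equivalent pull with the containerd Go client (github.com/containerd/containerd, v1 import path assumed):

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The CRI plugin stores images under the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	start := time.Now()
	img, err := client.Pull(ctx, "registry.k8s.io/kube-apiserver:v1.33.5", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	// Same shape as the log line: repo tag, digest, and elapsed pull time.
	fmt.Printf("pulled %s (%s) in %s\n", img.Name(), img.Target().Digest, time.Since(start))
}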
Sep 16 04:58:11.447253 containerd[1722]: time="2025-09-16T04:58:11.447163036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:11.449306 containerd[1722]: time="2025-09-16T04:58:11.449273580Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929477" Sep 16 04:58:11.460959 containerd[1722]: time="2025-09-16T04:58:11.460929442Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:11.464375 containerd[1722]: time="2025-09-16T04:58:11.464341835Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:11.465045 containerd[1722]: time="2025-09-16T04:58:11.464741457Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.642976186s" Sep 16 04:58:11.465045 containerd[1722]: time="2025-09-16T04:58:11.464769953Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Sep 16 04:58:11.465326 containerd[1722]: time="2025-09-16T04:58:11.465308581Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 16 04:58:12.062666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1313603962.mount: Deactivated successfully. 
Sep 16 04:58:13.125844 containerd[1722]: time="2025-09-16T04:58:13.125805012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:13.128959 containerd[1722]: time="2025-09-16T04:58:13.128933099Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246" Sep 16 04:58:13.131461 containerd[1722]: time="2025-09-16T04:58:13.131430829Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:13.134762 containerd[1722]: time="2025-09-16T04:58:13.134723787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:13.135568 containerd[1722]: time="2025-09-16T04:58:13.135364359Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.67003104s" Sep 16 04:58:13.135568 containerd[1722]: time="2025-09-16T04:58:13.135391458Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 16 04:58:13.135872 containerd[1722]: time="2025-09-16T04:58:13.135852340Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 16 04:58:13.656313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2040723565.mount: Deactivated successfully. 
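The "bytes read" and elapsed-time fields in each pull are enough to estimate effective registry throughput; for the coredns pull above, 20942246 bytes in 1.67003104s works out to roughly 12 MiB/s. The arithmetic, with the values lifted from the log:

package main

import (
	"fmt"
	"time"
)

func main() {
	const bytesRead = 20942246                      // "stop pulling image ... bytes read=20942246"
	elapsed, _ := time.ParseDuration("1.67003104s") // "... in 1.67003104s"; literal is valid, error ignored

	mib := float64(bytesRead) / elapsed.Seconds() / (1 << 20)
	fmt.Printf("~%.1f MiB/s effective pull throughput\n", mib)
}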
Sep 16 04:58:13.673776 containerd[1722]: time="2025-09-16T04:58:13.673740567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 04:58:13.676585 containerd[1722]: time="2025-09-16T04:58:13.676466527Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Sep 16 04:58:13.679143 containerd[1722]: time="2025-09-16T04:58:13.679123027Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 04:58:13.682589 containerd[1722]: time="2025-09-16T04:58:13.682566921Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 04:58:13.683004 containerd[1722]: time="2025-09-16T04:58:13.682985503Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 547.060631ms" Sep 16 04:58:13.683064 containerd[1722]: time="2025-09-16T04:58:13.683054188Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 16 04:58:13.683530 containerd[1722]: time="2025-09-16T04:58:13.683511040Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 16 04:58:14.211922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3067150742.mount: Deactivated successfully. 
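Unlike the other images, pause:3.10 is created with an io.cri-containerd.pinned=pinned label, which exempts it from image garbage collection — the sandbox image has to outlive ordinary GC. A sketch that reads those labels back, under the same client and namespace assumptions as the pull sketch above:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	img, err := client.GetImage(ctx, "registry.k8s.io/pause:3.10")
	if err != nil {
		log.Fatal(err)
	}
	labels, err := img.Labels(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(labels) // expect io.cri-containerd.pinned=pinned among them
}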
Sep 16 04:58:15.941211 containerd[1722]: time="2025-09-16T04:58:15.941166741Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:15.943979 containerd[1722]: time="2025-09-16T04:58:15.943949892Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378441" Sep 16 04:58:15.948077 containerd[1722]: time="2025-09-16T04:58:15.948039062Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:15.952146 containerd[1722]: time="2025-09-16T04:58:15.952105797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:15.952913 containerd[1722]: time="2025-09-16T04:58:15.952723789Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.26919151s" Sep 16 04:58:15.952913 containerd[1722]: time="2025-09-16T04:58:15.952751618Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 16 04:58:18.169264 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:58:18.169408 systemd[1]: kubelet.service: Consumed 125ms CPU time, 108.6M memory peak. Sep 16 04:58:18.171537 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:58:18.195946 systemd[1]: Reload requested from client PID 2667 ('systemctl') (unit session-9.scope)... Sep 16 04:58:18.195958 systemd[1]: Reloading... Sep 16 04:58:18.279138 zram_generator::config[2710]: No configuration found. Sep 16 04:58:18.453543 systemd[1]: Reloading finished in 257 ms. Sep 16 04:58:18.479925 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 16 04:58:18.480135 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 16 04:58:18.480380 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:58:18.481934 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:58:18.939970 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:58:18.949313 (kubelet)[2781]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 16 04:58:18.983414 kubelet[2781]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:58:18.983414 kubelet[2781]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 16 04:58:18.983414 kubelet[2781]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
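The deprecation warnings above say that --container-runtime-endpoint and --volume-plugin-dir belong in the file passed via --config — the same /var/lib/kubelet/config.yaml whose absence drove the earlier restart loop; kubeadm normally writes it during join. A sketch of a minimal KubeletConfiguration of that shape; the field values are assumptions except the cgroup driver and flexvolume directory, which this log reports a few lines down:

package main

import (
	"log"
	"os"
)

// Illustrative minimal config; a real kubeadm-generated file carries many
// more fields (auth, rotation, eviction settings, ...).
const kubeletConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
`

func main() {
	// Writing this file (as root) is what ends the restart loop seen earlier.
	if err := os.WriteFile("/var/lib/kubelet/config.yaml", []byte(kubeletConfig), 0o644); err != nil {
		log.Fatal(err)
	}
}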
Sep 16 04:58:18.983666 kubelet[2781]: I0916 04:58:18.983467 2781 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 16 04:58:19.656107 kubelet[2781]: I0916 04:58:19.655530 2781 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 16 04:58:19.656107 kubelet[2781]: I0916 04:58:19.655560 2781 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 16 04:58:19.656107 kubelet[2781]: I0916 04:58:19.655881 2781 server.go:956] "Client rotation is on, will bootstrap in background" Sep 16 04:58:19.684014 kubelet[2781]: I0916 04:58:19.683994 2781 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 16 04:58:19.684987 kubelet[2781]: E0916 04:58:19.684964 2781 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.38:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.38:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 16 04:58:19.689612 kubelet[2781]: I0916 04:58:19.689598 2781 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 16 04:58:19.692492 kubelet[2781]: I0916 04:58:19.692475 2781 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 16 04:58:19.692683 kubelet[2781]: I0916 04:58:19.692662 2781 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 16 04:58:19.692812 kubelet[2781]: I0916 04:58:19.692684 2781 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.0.0-n-140c1315ab","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 16 04:58:19.692912 kubelet[2781]: I0916 04:58:19.692817 2781 topology_manager.go:138] "Creating topology manager with none policy" Sep 16 04:58:19.692912 kubelet[2781]: I0916 
04:58:19.692827 2781 container_manager_linux.go:303] "Creating device plugin manager" Sep 16 04:58:19.692950 kubelet[2781]: I0916 04:58:19.692916 2781 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:58:19.695058 kubelet[2781]: I0916 04:58:19.695044 2781 kubelet.go:480] "Attempting to sync node with API server" Sep 16 04:58:19.695117 kubelet[2781]: I0916 04:58:19.695065 2781 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 16 04:58:19.695117 kubelet[2781]: I0916 04:58:19.695098 2781 kubelet.go:386] "Adding apiserver pod source" Sep 16 04:58:19.695117 kubelet[2781]: I0916 04:58:19.695111 2781 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 16 04:58:19.705068 kubelet[2781]: E0916 04:58:19.705047 2781 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 16 04:58:19.705765 kubelet[2781]: E0916 04:58:19.705749 2781 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.0.0-n-140c1315ab&limit=500&resourceVersion=0\": dial tcp 10.200.8.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 16 04:58:19.705838 kubelet[2781]: I0916 04:58:19.705827 2781 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 16 04:58:19.706615 kubelet[2781]: I0916 04:58:19.706169 2781 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 16 04:58:19.706906 kubelet[2781]: W0916 04:58:19.706893 2781 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
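The container_manager_linux dump above carries the default hard-eviction thresholds as embedded JSON (memory.available < 100Mi, nodefs.available < 10%, imagefs.available < 15%, ...). A sketch that decodes just that fragment, to make the limits the kubelet enforces legible:

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type threshold struct {
	Signal   string `json:"Signal"`
	Operator string `json:"Operator"`
	Value    struct {
		Quantity   *string `json:"Quantity"` // null when the threshold is a percentage
		Percentage float64 `json:"Percentage"`
	} `json:"Value"`
}

// Fragment copied from the HardEvictionThresholds array in the log above.
const fragment = `[
  {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
  {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}},
  {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15}}
]`

func main() {
	var ts []threshold
	if err := json.Unmarshal([]byte(fragment), &ts); err != nil {
		log.Fatal(err)
	}
	for _, t := range ts {
		if t.Value.Quantity != nil {
			fmt.Printf("%s %s %s\n", t.Signal, t.Operator, *t.Value.Quantity)
		} else {
			fmt.Printf("%s %s %.0f%%\n", t.Signal, t.Operator, t.Value.Percentage*100)
		}
	}
}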
Sep 16 04:58:19.711010 kubelet[2781]: I0916 04:58:19.709733 2781 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 16 04:58:19.711010 kubelet[2781]: I0916 04:58:19.709776 2781 server.go:1289] "Started kubelet" Sep 16 04:58:19.714306 kubelet[2781]: I0916 04:58:19.713720 2781 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 16 04:58:19.714441 kubelet[2781]: I0916 04:58:19.714428 2781 server.go:317] "Adding debug handlers to kubelet server" Sep 16 04:58:19.715748 kubelet[2781]: I0916 04:58:19.715725 2781 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 16 04:58:19.717209 kubelet[2781]: I0916 04:58:19.717172 2781 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 16 04:58:19.717349 kubelet[2781]: I0916 04:58:19.717338 2781 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 16 04:58:19.718718 kubelet[2781]: E0916 04:58:19.717463 2781 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.38:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.38:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.0.0-n-140c1315ab.1865aa80cbcc57ab default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.0.0-n-140c1315ab,UID:ci-4459.0.0-n-140c1315ab,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.0.0-n-140c1315ab,},FirstTimestamp:2025-09-16 04:58:19.709749163 +0000 UTC m=+0.757080464,LastTimestamp:2025-09-16 04:58:19.709749163 +0000 UTC m=+0.757080464,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.0.0-n-140c1315ab,}" Sep 16 04:58:19.718968 kubelet[2781]: I0916 04:58:19.718954 2781 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 16 04:58:19.722130 kubelet[2781]: E0916 04:58:19.722114 2781 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 16 04:58:19.722198 kubelet[2781]: E0916 04:58:19.722161 2781 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.0.0-n-140c1315ab\" not found" Sep 16 04:58:19.722198 kubelet[2781]: I0916 04:58:19.722192 2781 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 16 04:58:19.722387 kubelet[2781]: I0916 04:58:19.722378 2781 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 16 04:58:19.722430 kubelet[2781]: I0916 04:58:19.722423 2781 reconciler.go:26] "Reconciler: start to sync state" Sep 16 04:58:19.722721 kubelet[2781]: E0916 04:58:19.722703 2781 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 16 04:58:19.723922 kubelet[2781]: E0916 04:58:19.723897 2781 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.0.0-n-140c1315ab?timeout=10s\": dial tcp 10.200.8.38:6443: connect: connection refused" interval="200ms" Sep 16 04:58:19.725135 kubelet[2781]: I0916 04:58:19.725106 2781 factory.go:223] Registration of the containerd container factory successfully Sep 16 04:58:19.725135 kubelet[2781]: I0916 04:58:19.725119 2781 factory.go:223] Registration of the systemd container factory successfully Sep 16 04:58:19.725220 kubelet[2781]: I0916 04:58:19.725168 2781 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 16 04:58:19.749628 kubelet[2781]: I0916 04:58:19.749609 2781 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 16 04:58:19.749628 kubelet[2781]: I0916 04:58:19.749619 2781 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 16 04:58:19.749718 kubelet[2781]: I0916 04:58:19.749647 2781 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:58:19.755235 kubelet[2781]: I0916 04:58:19.755222 2781 policy_none.go:49] "None policy: Start" Sep 16 04:58:19.755235 kubelet[2781]: I0916 04:58:19.755237 2781 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 16 04:58:19.755298 kubelet[2781]: I0916 04:58:19.755245 2781 state_mem.go:35] "Initializing new in-memory state store" Sep 16 04:58:19.763381 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 16 04:58:19.778748 kubelet[2781]: I0916 04:58:19.777703 2781 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 16 04:58:19.778633 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 16 04:58:19.780045 kubelet[2781]: I0916 04:58:19.780019 2781 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 16 04:58:19.780045 kubelet[2781]: I0916 04:58:19.780044 2781 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 16 04:58:19.780137 kubelet[2781]: I0916 04:58:19.780059 2781 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 16 04:58:19.780137 kubelet[2781]: I0916 04:58:19.780064 2781 kubelet.go:2436] "Starting kubelet main sync loop" Sep 16 04:58:19.780496 kubelet[2781]: E0916 04:58:19.780413 2781 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 16 04:58:19.783819 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 16 04:58:19.785248 kubelet[2781]: E0916 04:58:19.785102 2781 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 16 04:58:19.788816 kubelet[2781]: E0916 04:58:19.788544 2781 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 16 04:58:19.788816 kubelet[2781]: I0916 04:58:19.788672 2781 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 16 04:58:19.788816 kubelet[2781]: I0916 04:58:19.788680 2781 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 16 04:58:19.788816 kubelet[2781]: I0916 04:58:19.788811 2781 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 16 04:58:19.790151 kubelet[2781]: E0916 04:58:19.790133 2781 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 16 04:58:19.790213 kubelet[2781]: E0916 04:58:19.790166 2781 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.0.0-n-140c1315ab\" not found" Sep 16 04:58:19.891067 kubelet[2781]: I0916 04:58:19.890968 2781 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.0.0-n-140c1315ab" Sep 16 04:58:19.891380 kubelet[2781]: E0916 04:58:19.891363 2781 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.38:6443/api/v1/nodes\": dial tcp 10.200.8.38:6443: connect: connection refused" node="ci-4459.0.0-n-140c1315ab" Sep 16 04:58:19.892660 systemd[1]: Created slice kubepods-burstable-podfaeb47b77fb446003ad6a9415ce6f58a.slice - libcontainer container kubepods-burstable-podfaeb47b77fb446003ad6a9415ce6f58a.slice. Sep 16 04:58:19.898590 kubelet[2781]: E0916 04:58:19.898572 2781 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.0.0-n-140c1315ab\" not found" node="ci-4459.0.0-n-140c1315ab" Sep 16 04:58:19.903121 systemd[1]: Created slice kubepods-burstable-pod0a434236d0b56f7015537aeeaa30a10e.slice - libcontainer container kubepods-burstable-pod0a434236d0b56f7015537aeeaa30a10e.slice. Sep 16 04:58:19.905484 kubelet[2781]: E0916 04:58:19.905456 2781 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.0.0-n-140c1315ab\" not found" node="ci-4459.0.0-n-140c1315ab" Sep 16 04:58:19.907738 systemd[1]: Created slice kubepods-burstable-pod0d5f70f981552ba438ac8ef8260d31b2.slice - libcontainer container kubepods-burstable-pod0d5f70f981552ba438ac8ef8260d31b2.slice. 
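The three kubepods-burstable-pod<hash>.slice units above are cgroup slices for the static control-plane pods the kubelet found under the static pod path registered earlier (/etc/kubernetes/manifests). A sketch that lists that directory; the typical file names in the comment are an assumption, not read from this host:

package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/manifests"
	entries, err := os.ReadDir(dir)
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		// Typically kube-apiserver.yaml, kube-controller-manager.yaml, and
		// kube-scheduler.yaml (plus etcd.yaml on stacked control planes).
		fmt.Println(filepath.Join(dir, e.Name()))
	}
}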
Sep 16 04:58:19.910643 kubelet[2781]: E0916 04:58:19.910620 2781 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.0.0-n-140c1315ab\" not found" node="ci-4459.0.0-n-140c1315ab" Sep 16 04:58:19.925269 kubelet[2781]: E0916 04:58:19.925230 2781 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.0.0-n-140c1315ab?timeout=10s\": dial tcp 10.200.8.38:6443: connect: connection refused" interval="400ms" Sep 16 04:58:20.023346 kubelet[2781]: I0916 04:58:20.023274 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0d5f70f981552ba438ac8ef8260d31b2-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.0.0-n-140c1315ab\" (UID: \"0d5f70f981552ba438ac8ef8260d31b2\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-n-140c1315ab" Sep 16 04:58:20.023346 kubelet[2781]: I0916 04:58:20.023344 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0d5f70f981552ba438ac8ef8260d31b2-kubeconfig\") pod \"kube-controller-manager-ci-4459.0.0-n-140c1315ab\" (UID: \"0d5f70f981552ba438ac8ef8260d31b2\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-n-140c1315ab" Sep 16 04:58:20.023583 kubelet[2781]: I0916 04:58:20.023357 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0a434236d0b56f7015537aeeaa30a10e-ca-certs\") pod \"kube-apiserver-ci-4459.0.0-n-140c1315ab\" (UID: \"0a434236d0b56f7015537aeeaa30a10e\") " pod="kube-system/kube-apiserver-ci-4459.0.0-n-140c1315ab" Sep 16 04:58:20.023583 kubelet[2781]: I0916 04:58:20.023370 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0a434236d0b56f7015537aeeaa30a10e-k8s-certs\") pod \"kube-apiserver-ci-4459.0.0-n-140c1315ab\" (UID: \"0a434236d0b56f7015537aeeaa30a10e\") " pod="kube-system/kube-apiserver-ci-4459.0.0-n-140c1315ab" Sep 16 04:58:20.023583 kubelet[2781]: I0916 04:58:20.023383 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0d5f70f981552ba438ac8ef8260d31b2-k8s-certs\") pod \"kube-controller-manager-ci-4459.0.0-n-140c1315ab\" (UID: \"0d5f70f981552ba438ac8ef8260d31b2\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-n-140c1315ab" Sep 16 04:58:20.023583 kubelet[2781]: I0916 04:58:20.023398 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0d5f70f981552ba438ac8ef8260d31b2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.0.0-n-140c1315ab\" (UID: \"0d5f70f981552ba438ac8ef8260d31b2\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-n-140c1315ab" Sep 16 04:58:20.023583 kubelet[2781]: I0916 04:58:20.023413 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/faeb47b77fb446003ad6a9415ce6f58a-kubeconfig\") pod \"kube-scheduler-ci-4459.0.0-n-140c1315ab\" (UID: \"faeb47b77fb446003ad6a9415ce6f58a\") " 
pod="kube-system/kube-scheduler-ci-4459.0.0-n-140c1315ab" Sep 16 04:58:20.023658 kubelet[2781]: I0916 04:58:20.023428 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0a434236d0b56f7015537aeeaa30a10e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.0.0-n-140c1315ab\" (UID: \"0a434236d0b56f7015537aeeaa30a10e\") " pod="kube-system/kube-apiserver-ci-4459.0.0-n-140c1315ab" Sep 16 04:58:20.023658 kubelet[2781]: I0916 04:58:20.023442 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0d5f70f981552ba438ac8ef8260d31b2-ca-certs\") pod \"kube-controller-manager-ci-4459.0.0-n-140c1315ab\" (UID: \"0d5f70f981552ba438ac8ef8260d31b2\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-n-140c1315ab" Sep 16 04:58:20.093504 kubelet[2781]: I0916 04:58:20.093254 2781 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.0.0-n-140c1315ab" Sep 16 04:58:20.093504 kubelet[2781]: E0916 04:58:20.093476 2781 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.38:6443/api/v1/nodes\": dial tcp 10.200.8.38:6443: connect: connection refused" node="ci-4459.0.0-n-140c1315ab" Sep 16 04:58:20.199627 containerd[1722]: time="2025-09-16T04:58:20.199551360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.0.0-n-140c1315ab,Uid:faeb47b77fb446003ad6a9415ce6f58a,Namespace:kube-system,Attempt:0,}" Sep 16 04:58:20.205971 containerd[1722]: time="2025-09-16T04:58:20.205943711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.0.0-n-140c1315ab,Uid:0a434236d0b56f7015537aeeaa30a10e,Namespace:kube-system,Attempt:0,}" Sep 16 04:58:20.211717 containerd[1722]: time="2025-09-16T04:58:20.211695821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.0.0-n-140c1315ab,Uid:0d5f70f981552ba438ac8ef8260d31b2,Namespace:kube-system,Attempt:0,}" Sep 16 04:58:20.309666 containerd[1722]: time="2025-09-16T04:58:20.309627275Z" level=info msg="connecting to shim 7e2feee4d278786a4729fad86ae3ecee983b4866f6c85ff82f7dd961fa8b63df" address="unix:///run/containerd/s/485d74d460b7e85963f8d6e355a35b25bbc05491573faeae1531e46de66a26d1" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:58:20.325899 kubelet[2781]: E0916 04:58:20.325858 2781 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.0.0-n-140c1315ab?timeout=10s\": dial tcp 10.200.8.38:6443: connect: connection refused" interval="800ms" Sep 16 04:58:20.335334 systemd[1]: Started cri-containerd-7e2feee4d278786a4729fad86ae3ecee983b4866f6c85ff82f7dd961fa8b63df.scope - libcontainer container 7e2feee4d278786a4729fad86ae3ecee983b4866f6c85ff82f7dd961fa8b63df. 
Sep 16 04:58:20.340244 containerd[1722]: time="2025-09-16T04:58:20.340217783Z" level=info msg="connecting to shim d16e64cdc5df0426fe2cf9f7bcb91957dbfb011a4c22424af10d57a2a7c5a77e" address="unix:///run/containerd/s/efd06d1e8471269ac03a3896b333d575cc45635dbcb8561e024a9f6cacacce0a" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:58:20.342241 containerd[1722]: time="2025-09-16T04:58:20.342216169Z" level=info msg="connecting to shim 1b564649c309accc9305bf6622e4fd0dffbd22364a0e4ab510ccf5398761ad27" address="unix:///run/containerd/s/c288ae6e1e54a66e3e4c556ef4ee25a2af73ce5bd36db05b971d30c57aebaf74" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:58:20.371280 systemd[1]: Started cri-containerd-d16e64cdc5df0426fe2cf9f7bcb91957dbfb011a4c22424af10d57a2a7c5a77e.scope - libcontainer container d16e64cdc5df0426fe2cf9f7bcb91957dbfb011a4c22424af10d57a2a7c5a77e. Sep 16 04:58:20.374933 systemd[1]: Started cri-containerd-1b564649c309accc9305bf6622e4fd0dffbd22364a0e4ab510ccf5398761ad27.scope - libcontainer container 1b564649c309accc9305bf6622e4fd0dffbd22364a0e4ab510ccf5398761ad27. Sep 16 04:58:20.424902 containerd[1722]: time="2025-09-16T04:58:20.424770709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.0.0-n-140c1315ab,Uid:faeb47b77fb446003ad6a9415ce6f58a,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e2feee4d278786a4729fad86ae3ecee983b4866f6c85ff82f7dd961fa8b63df\"" Sep 16 04:58:20.430710 containerd[1722]: time="2025-09-16T04:58:20.430671839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.0.0-n-140c1315ab,Uid:0a434236d0b56f7015537aeeaa30a10e,Namespace:kube-system,Attempt:0,} returns sandbox id \"d16e64cdc5df0426fe2cf9f7bcb91957dbfb011a4c22424af10d57a2a7c5a77e\"" Sep 16 04:58:20.435901 containerd[1722]: time="2025-09-16T04:58:20.434159404Z" level=info msg="CreateContainer within sandbox \"7e2feee4d278786a4729fad86ae3ecee983b4866f6c85ff82f7dd961fa8b63df\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 16 04:58:20.437647 containerd[1722]: time="2025-09-16T04:58:20.437623249Z" level=info msg="CreateContainer within sandbox \"d16e64cdc5df0426fe2cf9f7bcb91957dbfb011a4c22424af10d57a2a7c5a77e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 16 04:58:20.453404 containerd[1722]: time="2025-09-16T04:58:20.453321692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.0.0-n-140c1315ab,Uid:0d5f70f981552ba438ac8ef8260d31b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b564649c309accc9305bf6622e4fd0dffbd22364a0e4ab510ccf5398761ad27\"" Sep 16 04:58:20.461363 containerd[1722]: time="2025-09-16T04:58:20.461335903Z" level=info msg="Container 6a1cebf67465ea81c385ec124518d62465b582ae17ad49a7b8f7eb2006d6d331: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:58:20.462513 containerd[1722]: time="2025-09-16T04:58:20.462490292Z" level=info msg="CreateContainer within sandbox \"1b564649c309accc9305bf6622e4fd0dffbd22364a0e4ab510ccf5398761ad27\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 16 04:58:20.468517 containerd[1722]: time="2025-09-16T04:58:20.468081177Z" level=info msg="Container a78fd1a8896a63444ba4656c6c2ea4b15e6140610bcf7634789a1e57582742d7: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:58:20.492229 containerd[1722]: time="2025-09-16T04:58:20.492207586Z" level=info msg="CreateContainer within sandbox \"7e2feee4d278786a4729fad86ae3ecee983b4866f6c85ff82f7dd961fa8b63df\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6a1cebf67465ea81c385ec124518d62465b582ae17ad49a7b8f7eb2006d6d331\"" Sep 16 04:58:20.492576 containerd[1722]: time="2025-09-16T04:58:20.492558739Z" level=info msg="StartContainer for \"6a1cebf67465ea81c385ec124518d62465b582ae17ad49a7b8f7eb2006d6d331\"" Sep 16 04:58:20.493292 containerd[1722]: time="2025-09-16T04:58:20.493202424Z" level=info msg="connecting to shim 6a1cebf67465ea81c385ec124518d62465b582ae17ad49a7b8f7eb2006d6d331" address="unix:///run/containerd/s/485d74d460b7e85963f8d6e355a35b25bbc05491573faeae1531e46de66a26d1" protocol=ttrpc version=3 Sep 16 04:58:20.498829 containerd[1722]: time="2025-09-16T04:58:20.498805695Z" level=info msg="CreateContainer within sandbox \"d16e64cdc5df0426fe2cf9f7bcb91957dbfb011a4c22424af10d57a2a7c5a77e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a78fd1a8896a63444ba4656c6c2ea4b15e6140610bcf7634789a1e57582742d7\"" Sep 16 04:58:20.499149 kubelet[2781]: I0916 04:58:20.499106 2781 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.0.0-n-140c1315ab" Sep 16 04:58:20.499710 kubelet[2781]: E0916 04:58:20.499678 2781 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.38:6443/api/v1/nodes\": dial tcp 10.200.8.38:6443: connect: connection refused" node="ci-4459.0.0-n-140c1315ab" Sep 16 04:58:20.500203 containerd[1722]: time="2025-09-16T04:58:20.500145235Z" level=info msg="StartContainer for \"a78fd1a8896a63444ba4656c6c2ea4b15e6140610bcf7634789a1e57582742d7\"" Sep 16 04:58:20.501062 containerd[1722]: time="2025-09-16T04:58:20.501037612Z" level=info msg="connecting to shim a78fd1a8896a63444ba4656c6c2ea4b15e6140610bcf7634789a1e57582742d7" address="unix:///run/containerd/s/efd06d1e8471269ac03a3896b333d575cc45635dbcb8561e024a9f6cacacce0a" protocol=ttrpc version=3 Sep 16 04:58:20.502854 containerd[1722]: time="2025-09-16T04:58:20.502698508Z" level=info msg="Container 84a4fbbfafc674da1f3a4c8431f42421e9f330692de6cbc69311e0d36b235737: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:58:20.513227 systemd[1]: Started cri-containerd-6a1cebf67465ea81c385ec124518d62465b582ae17ad49a7b8f7eb2006d6d331.scope - libcontainer container 6a1cebf67465ea81c385ec124518d62465b582ae17ad49a7b8f7eb2006d6d331. Sep 16 04:58:20.521249 systemd[1]: Started cri-containerd-a78fd1a8896a63444ba4656c6c2ea4b15e6140610bcf7634789a1e57582742d7.scope - libcontainer container a78fd1a8896a63444ba4656c6c2ea4b15e6140610bcf7634789a1e57582742d7. 
Sep 16 04:58:20.529605 containerd[1722]: time="2025-09-16T04:58:20.529583855Z" level=info msg="CreateContainer within sandbox \"1b564649c309accc9305bf6622e4fd0dffbd22364a0e4ab510ccf5398761ad27\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"84a4fbbfafc674da1f3a4c8431f42421e9f330692de6cbc69311e0d36b235737\"" Sep 16 04:58:20.530183 containerd[1722]: time="2025-09-16T04:58:20.530156554Z" level=info msg="StartContainer for \"84a4fbbfafc674da1f3a4c8431f42421e9f330692de6cbc69311e0d36b235737\"" Sep 16 04:58:20.530800 containerd[1722]: time="2025-09-16T04:58:20.530781130Z" level=info msg="connecting to shim 84a4fbbfafc674da1f3a4c8431f42421e9f330692de6cbc69311e0d36b235737" address="unix:///run/containerd/s/c288ae6e1e54a66e3e4c556ef4ee25a2af73ce5bd36db05b971d30c57aebaf74" protocol=ttrpc version=3 Sep 16 04:58:20.552188 systemd[1]: Started cri-containerd-84a4fbbfafc674da1f3a4c8431f42421e9f330692de6cbc69311e0d36b235737.scope - libcontainer container 84a4fbbfafc674da1f3a4c8431f42421e9f330692de6cbc69311e0d36b235737. Sep 16 04:58:20.605245 containerd[1722]: time="2025-09-16T04:58:20.605204392Z" level=info msg="StartContainer for \"a78fd1a8896a63444ba4656c6c2ea4b15e6140610bcf7634789a1e57582742d7\" returns successfully" Sep 16 04:58:20.607317 containerd[1722]: time="2025-09-16T04:58:20.607293897Z" level=info msg="StartContainer for \"6a1cebf67465ea81c385ec124518d62465b582ae17ad49a7b8f7eb2006d6d331\" returns successfully" Sep 16 04:58:20.650315 containerd[1722]: time="2025-09-16T04:58:20.650284183Z" level=info msg="StartContainer for \"84a4fbbfafc674da1f3a4c8431f42421e9f330692de6cbc69311e0d36b235737\" returns successfully" Sep 16 04:58:20.794059 kubelet[2781]: E0916 04:58:20.794041 2781 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.0.0-n-140c1315ab\" not found" node="ci-4459.0.0-n-140c1315ab" Sep 16 04:58:20.794523 kubelet[2781]: E0916 04:58:20.794510 2781 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.0.0-n-140c1315ab\" not found" node="ci-4459.0.0-n-140c1315ab" Sep 16 04:58:20.798911 kubelet[2781]: E0916 04:58:20.798897 2781 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.0.0-n-140c1315ab\" not found" node="ci-4459.0.0-n-140c1315ab" Sep 16 04:58:21.304632 kubelet[2781]: I0916 04:58:21.304617 2781 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.0.0-n-140c1315ab" Sep 16 04:58:21.800969 kubelet[2781]: E0916 04:58:21.800931 2781 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.0.0-n-140c1315ab\" not found" node="ci-4459.0.0-n-140c1315ab" Sep 16 04:58:21.801723 kubelet[2781]: E0916 04:58:21.801703 2781 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.0.0-n-140c1315ab\" not found" node="ci-4459.0.0-n-140c1315ab" Sep 16 04:58:22.562319 kubelet[2781]: E0916 04:58:22.562282 2781 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.0.0-n-140c1315ab\" not found" node="ci-4459.0.0-n-140c1315ab" Sep 16 04:58:22.615103 kubelet[2781]: I0916 04:58:22.615031 2781 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.0.0-n-140c1315ab" Sep 16 04:58:22.622942 kubelet[2781]: I0916 04:58:22.622812 2781 kubelet.go:3309] "Creating a mirror pod for 
static pod" pod="kube-system/kube-controller-manager-ci-4459.0.0-n-140c1315ab" Sep 16 04:58:22.634256 kubelet[2781]: E0916 04:58:22.634238 2781 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.0.0-n-140c1315ab\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.0.0-n-140c1315ab" Sep 16 04:58:22.634432 kubelet[2781]: I0916 04:58:22.634343 2781 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.0.0-n-140c1315ab" Sep 16 04:58:22.636682 kubelet[2781]: E0916 04:58:22.636655 2781 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.0.0-n-140c1315ab\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.0.0-n-140c1315ab" Sep 16 04:58:22.636850 kubelet[2781]: I0916 04:58:22.636773 2781 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.0.0-n-140c1315ab" Sep 16 04:58:22.639362 kubelet[2781]: E0916 04:58:22.639335 2781 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.0.0-n-140c1315ab\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.0.0-n-140c1315ab" Sep 16 04:58:22.701150 kubelet[2781]: I0916 04:58:22.701129 2781 apiserver.go:52] "Watching apiserver" Sep 16 04:58:22.723065 kubelet[2781]: I0916 04:58:22.723043 2781 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 16 04:58:22.799390 kubelet[2781]: I0916 04:58:22.799374 2781 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.0.0-n-140c1315ab" Sep 16 04:58:22.801862 kubelet[2781]: E0916 04:58:22.801845 2781 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.0.0-n-140c1315ab\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.0.0-n-140c1315ab" Sep 16 04:58:24.419753 systemd[1]: Reload requested from client PID 3056 ('systemctl') (unit session-9.scope)... Sep 16 04:58:24.419768 systemd[1]: Reloading... Sep 16 04:58:24.516164 zram_generator::config[3103]: No configuration found. Sep 16 04:58:24.681869 systemd[1]: Reloading finished in 261 ms. Sep 16 04:58:24.708291 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:58:24.727713 systemd[1]: kubelet.service: Deactivated successfully. Sep 16 04:58:24.727914 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:58:24.727970 systemd[1]: kubelet.service: Consumed 1.016s CPU time, 128.9M memory peak. Sep 16 04:58:24.729259 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:58:25.197062 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:58:25.204336 (kubelet)[3170]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 16 04:58:25.236103 kubelet[3170]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:58:25.236103 kubelet[3170]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Sep 16 04:58:25.236103 kubelet[3170]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:58:25.236339 kubelet[3170]: I0916 04:58:25.236071 3170 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 16 04:58:25.241901 kubelet[3170]: I0916 04:58:25.241870 3170 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 16 04:58:25.241901 kubelet[3170]: I0916 04:58:25.241887 3170 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 16 04:58:25.242116 kubelet[3170]: I0916 04:58:25.242082 3170 server.go:956] "Client rotation is on, will bootstrap in background" Sep 16 04:58:25.242838 kubelet[3170]: I0916 04:58:25.242822 3170 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 16 04:58:25.244768 kubelet[3170]: I0916 04:58:25.244544 3170 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 16 04:58:25.247824 kubelet[3170]: I0916 04:58:25.247792 3170 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 16 04:58:25.252107 kubelet[3170]: I0916 04:58:25.250574 3170 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 16 04:58:25.252107 kubelet[3170]: I0916 04:58:25.250748 3170 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 16 04:58:25.252107 kubelet[3170]: I0916 04:58:25.250774 3170 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.0.0-n-140c1315ab","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 16 04:58:25.252107 kubelet[3170]: I0916 04:58:25.251032 3170 
topology_manager.go:138] "Creating topology manager with none policy" Sep 16 04:58:25.252308 kubelet[3170]: I0916 04:58:25.251041 3170 container_manager_linux.go:303] "Creating device plugin manager" Sep 16 04:58:25.252308 kubelet[3170]: I0916 04:58:25.251106 3170 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:58:25.252308 kubelet[3170]: I0916 04:58:25.251258 3170 kubelet.go:480] "Attempting to sync node with API server" Sep 16 04:58:25.252308 kubelet[3170]: I0916 04:58:25.251267 3170 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 16 04:58:25.252308 kubelet[3170]: I0916 04:58:25.251298 3170 kubelet.go:386] "Adding apiserver pod source" Sep 16 04:58:25.252308 kubelet[3170]: I0916 04:58:25.251316 3170 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 16 04:58:25.254508 kubelet[3170]: I0916 04:58:25.254479 3170 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 16 04:58:25.254966 kubelet[3170]: I0916 04:58:25.254952 3170 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 16 04:58:25.257148 kubelet[3170]: I0916 04:58:25.257132 3170 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 16 04:58:25.257214 kubelet[3170]: I0916 04:58:25.257177 3170 server.go:1289] "Started kubelet" Sep 16 04:58:25.260154 kubelet[3170]: I0916 04:58:25.260082 3170 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 16 04:58:25.260460 kubelet[3170]: I0916 04:58:25.260452 3170 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 16 04:58:25.260566 kubelet[3170]: I0916 04:58:25.260555 3170 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 16 04:58:25.261369 kubelet[3170]: I0916 04:58:25.261357 3170 server.go:317] "Adding debug handlers to kubelet server" Sep 16 04:58:25.261480 kubelet[3170]: I0916 04:58:25.261469 3170 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 16 04:58:25.274664 kubelet[3170]: I0916 04:58:25.274640 3170 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 16 04:58:25.275906 kubelet[3170]: I0916 04:58:25.275815 3170 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 16 04:58:25.276005 kubelet[3170]: E0916 04:58:25.275984 3170 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.0.0-n-140c1315ab\" not found" Sep 16 04:58:25.281081 kubelet[3170]: I0916 04:58:25.281068 3170 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 16 04:58:25.281276 kubelet[3170]: I0916 04:58:25.281269 3170 reconciler.go:26] "Reconciler: start to sync state" Sep 16 04:58:25.282041 kubelet[3170]: I0916 04:58:25.281802 3170 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 16 04:58:25.284209 kubelet[3170]: I0916 04:58:25.284193 3170 factory.go:223] Registration of the containerd container factory successfully Sep 16 04:58:25.284549 kubelet[3170]: I0916 04:58:25.284276 3170 factory.go:223] Registration of the systemd container factory successfully Sep 16 04:58:25.284549 kubelet[3170]: 
I0916 04:58:25.284451 3170 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 16 04:58:25.285759 kubelet[3170]: I0916 04:58:25.285460 3170 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 16 04:58:25.285759 kubelet[3170]: I0916 04:58:25.285480 3170 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 16 04:58:25.285759 kubelet[3170]: I0916 04:58:25.285493 3170 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 16 04:58:25.285759 kubelet[3170]: I0916 04:58:25.285498 3170 kubelet.go:2436] "Starting kubelet main sync loop" Sep 16 04:58:25.285759 kubelet[3170]: E0916 04:58:25.285532 3170 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 16 04:58:25.289398 kubelet[3170]: E0916 04:58:25.288940 3170 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 16 04:58:25.336951 kubelet[3170]: I0916 04:58:25.336940 3170 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 16 04:58:25.337032 kubelet[3170]: I0916 04:58:25.337025 3170 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 16 04:58:25.337067 kubelet[3170]: I0916 04:58:25.337064 3170 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:58:25.337192 kubelet[3170]: I0916 04:58:25.337183 3170 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 16 04:58:25.337234 kubelet[3170]: I0916 04:58:25.337222 3170 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 16 04:58:25.337258 kubelet[3170]: I0916 04:58:25.337255 3170 policy_none.go:49] "None policy: Start" Sep 16 04:58:25.337289 kubelet[3170]: I0916 04:58:25.337286 3170 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 16 04:58:25.337412 kubelet[3170]: I0916 04:58:25.337314 3170 state_mem.go:35] "Initializing new in-memory state store" Sep 16 04:58:25.337412 kubelet[3170]: I0916 04:58:25.337375 3170 state_mem.go:75] "Updated machine memory state" Sep 16 04:58:25.340128 kubelet[3170]: E0916 04:58:25.340063 3170 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 16 04:58:25.340272 kubelet[3170]: I0916 04:58:25.340257 3170 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 16 04:58:25.340302 kubelet[3170]: I0916 04:58:25.340275 3170 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 16 04:58:25.340506 kubelet[3170]: I0916 04:58:25.340492 3170 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 16 04:58:25.344081 kubelet[3170]: E0916 04:58:25.341917 3170 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 16 04:58:25.386780 kubelet[3170]: I0916 04:58:25.386763 3170 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.0.0-n-140c1315ab" Sep 16 04:58:25.386924 kubelet[3170]: I0916 04:58:25.386915 3170 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.0.0-n-140c1315ab" Sep 16 04:58:25.387039 kubelet[3170]: I0916 04:58:25.386762 3170 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.0.0-n-140c1315ab" Sep 16 04:58:25.401693 kubelet[3170]: I0916 04:58:25.401680 3170 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 16 04:58:25.401959 kubelet[3170]: I0916 04:58:25.401948 3170 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 16 04:58:25.402687 kubelet[3170]: I0916 04:58:25.402672 3170 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 16 04:58:25.442190 kubelet[3170]: I0916 04:58:25.442171 3170 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.0.0-n-140c1315ab" Sep 16 04:58:25.455663 kubelet[3170]: I0916 04:58:25.455601 3170 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.0.0-n-140c1315ab" Sep 16 04:58:25.455663 kubelet[3170]: I0916 04:58:25.455649 3170 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.0.0-n-140c1315ab" Sep 16 04:58:25.483175 kubelet[3170]: I0916 04:58:25.483146 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0a434236d0b56f7015537aeeaa30a10e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.0.0-n-140c1315ab\" (UID: \"0a434236d0b56f7015537aeeaa30a10e\") " pod="kube-system/kube-apiserver-ci-4459.0.0-n-140c1315ab" Sep 16 04:58:25.483260 kubelet[3170]: I0916 04:58:25.483182 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0d5f70f981552ba438ac8ef8260d31b2-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.0.0-n-140c1315ab\" (UID: \"0d5f70f981552ba438ac8ef8260d31b2\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-n-140c1315ab" Sep 16 04:58:25.483260 kubelet[3170]: I0916 04:58:25.483203 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0d5f70f981552ba438ac8ef8260d31b2-kubeconfig\") pod \"kube-controller-manager-ci-4459.0.0-n-140c1315ab\" (UID: \"0d5f70f981552ba438ac8ef8260d31b2\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-n-140c1315ab" Sep 16 04:58:25.483260 kubelet[3170]: I0916 04:58:25.483222 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0d5f70f981552ba438ac8ef8260d31b2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.0.0-n-140c1315ab\" (UID: \"0d5f70f981552ba438ac8ef8260d31b2\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-n-140c1315ab" Sep 16 
04:58:25.483260 kubelet[3170]: I0916 04:58:25.483237 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0a434236d0b56f7015537aeeaa30a10e-ca-certs\") pod \"kube-apiserver-ci-4459.0.0-n-140c1315ab\" (UID: \"0a434236d0b56f7015537aeeaa30a10e\") " pod="kube-system/kube-apiserver-ci-4459.0.0-n-140c1315ab" Sep 16 04:58:25.483260 kubelet[3170]: I0916 04:58:25.483254 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0a434236d0b56f7015537aeeaa30a10e-k8s-certs\") pod \"kube-apiserver-ci-4459.0.0-n-140c1315ab\" (UID: \"0a434236d0b56f7015537aeeaa30a10e\") " pod="kube-system/kube-apiserver-ci-4459.0.0-n-140c1315ab" Sep 16 04:58:25.483475 kubelet[3170]: I0916 04:58:25.483277 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0d5f70f981552ba438ac8ef8260d31b2-ca-certs\") pod \"kube-controller-manager-ci-4459.0.0-n-140c1315ab\" (UID: \"0d5f70f981552ba438ac8ef8260d31b2\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-n-140c1315ab" Sep 16 04:58:25.483475 kubelet[3170]: I0916 04:58:25.483306 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0d5f70f981552ba438ac8ef8260d31b2-k8s-certs\") pod \"kube-controller-manager-ci-4459.0.0-n-140c1315ab\" (UID: \"0d5f70f981552ba438ac8ef8260d31b2\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-n-140c1315ab" Sep 16 04:58:25.483475 kubelet[3170]: I0916 04:58:25.483330 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/faeb47b77fb446003ad6a9415ce6f58a-kubeconfig\") pod \"kube-scheduler-ci-4459.0.0-n-140c1315ab\" (UID: \"faeb47b77fb446003ad6a9415ce6f58a\") " pod="kube-system/kube-scheduler-ci-4459.0.0-n-140c1315ab" Sep 16 04:58:26.255688 kubelet[3170]: I0916 04:58:26.255515 3170 apiserver.go:52] "Watching apiserver" Sep 16 04:58:26.281423 kubelet[3170]: I0916 04:58:26.281388 3170 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 16 04:58:26.326455 kubelet[3170]: I0916 04:58:26.326164 3170 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.0.0-n-140c1315ab" Sep 16 04:58:26.326455 kubelet[3170]: I0916 04:58:26.326161 3170 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.0.0-n-140c1315ab" Sep 16 04:58:26.340608 kubelet[3170]: I0916 04:58:26.340580 3170 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 16 04:58:26.340698 kubelet[3170]: E0916 04:58:26.340629 3170 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.0.0-n-140c1315ab\" already exists" pod="kube-system/kube-apiserver-ci-4459.0.0-n-140c1315ab" Sep 16 04:58:26.345315 kubelet[3170]: I0916 04:58:26.345290 3170 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 16 04:58:26.345396 kubelet[3170]: E0916 04:58:26.345347 3170 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-scheduler-ci-4459.0.0-n-140c1315ab\" already exists" pod="kube-system/kube-scheduler-ci-4459.0.0-n-140c1315ab" Sep 16 04:58:26.356585 kubelet[3170]: I0916 04:58:26.356525 3170 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.0.0-n-140c1315ab" podStartSLOduration=1.3565123350000001 podStartE2EDuration="1.356512335s" podCreationTimestamp="2025-09-16 04:58:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:58:26.356416466 +0000 UTC m=+1.148442495" watchObservedRunningTime="2025-09-16 04:58:26.356512335 +0000 UTC m=+1.148538360" Sep 16 04:58:26.356711 kubelet[3170]: I0916 04:58:26.356615 3170 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.0.0-n-140c1315ab" podStartSLOduration=1.356611627 podStartE2EDuration="1.356611627s" podCreationTimestamp="2025-09-16 04:58:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:58:26.345996433 +0000 UTC m=+1.138022458" watchObservedRunningTime="2025-09-16 04:58:26.356611627 +0000 UTC m=+1.148637654" Sep 16 04:58:31.684260 kubelet[3170]: I0916 04:58:31.684218 3170 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 16 04:58:31.684858 kubelet[3170]: I0916 04:58:31.684628 3170 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 16 04:58:31.684895 containerd[1722]: time="2025-09-16T04:58:31.684482681Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 16 04:58:32.346050 kubelet[3170]: I0916 04:58:32.345995 3170 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.0.0-n-140c1315ab" podStartSLOduration=7.345964526 podStartE2EDuration="7.345964526s" podCreationTimestamp="2025-09-16 04:58:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:58:26.366563309 +0000 UTC m=+1.158589409" watchObservedRunningTime="2025-09-16 04:58:32.345964526 +0000 UTC m=+7.137990600" Sep 16 04:58:32.358209 systemd[1]: Created slice kubepods-besteffort-pod1b62a109_6db6_4d17_ae29_322384d8fb63.slice - libcontainer container kubepods-besteffort-pod1b62a109_6db6_4d17_ae29_322384d8fb63.slice. 
Sep 16 04:58:32.427422 kubelet[3170]: I0916 04:58:32.427284 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1b62a109-6db6-4d17-ae29-322384d8fb63-kube-proxy\") pod \"kube-proxy-jbtdd\" (UID: \"1b62a109-6db6-4d17-ae29-322384d8fb63\") " pod="kube-system/kube-proxy-jbtdd" Sep 16 04:58:32.427422 kubelet[3170]: I0916 04:58:32.427318 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b62a109-6db6-4d17-ae29-322384d8fb63-xtables-lock\") pod \"kube-proxy-jbtdd\" (UID: \"1b62a109-6db6-4d17-ae29-322384d8fb63\") " pod="kube-system/kube-proxy-jbtdd" Sep 16 04:58:32.427422 kubelet[3170]: I0916 04:58:32.427337 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b62a109-6db6-4d17-ae29-322384d8fb63-lib-modules\") pod \"kube-proxy-jbtdd\" (UID: \"1b62a109-6db6-4d17-ae29-322384d8fb63\") " pod="kube-system/kube-proxy-jbtdd" Sep 16 04:58:32.427422 kubelet[3170]: I0916 04:58:32.427356 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b6c7\" (UniqueName: \"kubernetes.io/projected/1b62a109-6db6-4d17-ae29-322384d8fb63-kube-api-access-4b6c7\") pod \"kube-proxy-jbtdd\" (UID: \"1b62a109-6db6-4d17-ae29-322384d8fb63\") " pod="kube-system/kube-proxy-jbtdd" Sep 16 04:58:32.667317 containerd[1722]: time="2025-09-16T04:58:32.667163966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jbtdd,Uid:1b62a109-6db6-4d17-ae29-322384d8fb63,Namespace:kube-system,Attempt:0,}" Sep 16 04:58:32.703934 containerd[1722]: time="2025-09-16T04:58:32.703880948Z" level=info msg="connecting to shim 8f43e8a85cebb2c26c63a10c6f7dfd79ef459e769a3d5b65469c0dbcd28eb305" address="unix:///run/containerd/s/0563f6d5c7585a7fb43aea6a9fd8b60ba9116c44be000e7c593936f2a4d8cf66" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:58:32.727269 systemd[1]: Started cri-containerd-8f43e8a85cebb2c26c63a10c6f7dfd79ef459e769a3d5b65469c0dbcd28eb305.scope - libcontainer container 8f43e8a85cebb2c26c63a10c6f7dfd79ef459e769a3d5b65469c0dbcd28eb305. 
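[editor's note] Each "connecting to shim ... protocol=ttrpc version=3" entry records containerd dialing a per-sandbox shim over a unix socket using ttrpc, a lightweight gRPC-like protocol. A rough sketch of such a dial with github.com/containerd/ttrpc follows; the socket path is copied from the log entry above, and real callers go through generated service stubs rather than the bare client, so treat this as a sketch of the transport only.

package main

import (
	"net"

	"github.com/containerd/ttrpc"
)

func main() {
	// Shim socket address as printed in the log entry above.
	conn, err := net.Dial("unix", "/run/containerd/s/0563f6d5c7585a7fb43aea6a9fd8b60ba9116c44be000e7c593936f2a4d8cf66")
	if err != nil {
		panic(err)
	}
	// The client speaks the framing reported as protocol=ttrpc version=3.
	client := ttrpc.NewClient(conn)
	defer client.Close()
}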
Sep 16 04:58:32.753688 containerd[1722]: time="2025-09-16T04:58:32.753666462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jbtdd,Uid:1b62a109-6db6-4d17-ae29-322384d8fb63,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f43e8a85cebb2c26c63a10c6f7dfd79ef459e769a3d5b65469c0dbcd28eb305\"" Sep 16 04:58:32.763158 containerd[1722]: time="2025-09-16T04:58:32.763125340Z" level=info msg="CreateContainer within sandbox \"8f43e8a85cebb2c26c63a10c6f7dfd79ef459e769a3d5b65469c0dbcd28eb305\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 16 04:58:32.782601 containerd[1722]: time="2025-09-16T04:58:32.782437850Z" level=info msg="Container 09cfab70f1368525f1aa9524f3501620dd72c6906fb067c228f9ff33eb3bc18c: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:58:32.800125 containerd[1722]: time="2025-09-16T04:58:32.800107216Z" level=info msg="CreateContainer within sandbox \"8f43e8a85cebb2c26c63a10c6f7dfd79ef459e769a3d5b65469c0dbcd28eb305\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"09cfab70f1368525f1aa9524f3501620dd72c6906fb067c228f9ff33eb3bc18c\"" Sep 16 04:58:32.800656 containerd[1722]: time="2025-09-16T04:58:32.800616407Z" level=info msg="StartContainer for \"09cfab70f1368525f1aa9524f3501620dd72c6906fb067c228f9ff33eb3bc18c\"" Sep 16 04:58:32.801955 containerd[1722]: time="2025-09-16T04:58:32.801929005Z" level=info msg="connecting to shim 09cfab70f1368525f1aa9524f3501620dd72c6906fb067c228f9ff33eb3bc18c" address="unix:///run/containerd/s/0563f6d5c7585a7fb43aea6a9fd8b60ba9116c44be000e7c593936f2a4d8cf66" protocol=ttrpc version=3 Sep 16 04:58:32.819212 systemd[1]: Started cri-containerd-09cfab70f1368525f1aa9524f3501620dd72c6906fb067c228f9ff33eb3bc18c.scope - libcontainer container 09cfab70f1368525f1aa9524f3501620dd72c6906fb067c228f9ff33eb3bc18c. Sep 16 04:58:32.854110 containerd[1722]: time="2025-09-16T04:58:32.854031529Z" level=info msg="StartContainer for \"09cfab70f1368525f1aa9524f3501620dd72c6906fb067c228f9ff33eb3bc18c\" returns successfully" Sep 16 04:58:32.897995 systemd[1]: Created slice kubepods-besteffort-pod61690877_8462_4102_99c4_b9b55ca6c739.slice - libcontainer container kubepods-besteffort-pod61690877_8462_4102_99c4_b9b55ca6c739.slice. 
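[editor's note] The kube-proxy entries above trace the CRI call order the kubelet drives: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox, and StartContainer runs it. A condensed sketch of the same sequence against containerd's CRI socket is below; the pod metadata and image name are placeholders, not values taken from the log.

package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx := context.Background()
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// 1. RunPodSandbox -> sandbox id (placeholder metadata).
	sboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name: "kube-proxy-example", Namespace: "kube-system", Uid: "example-uid",
		},
	}
	sbox, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sboxCfg})
	if err != nil {
		panic(err)
	}

	// 2. CreateContainer within the sandbox (placeholder image).
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sbox.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.33.0"},
		},
		SandboxConfig: sboxCfg,
	})
	if err != nil {
		panic(err)
	}

	// 3. StartContainer, matching the "StartContainer ... returns successfully" entries.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		panic(err)
	}
}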
Sep 16 04:58:32.931046 kubelet[3170]: I0916 04:58:32.930733 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/61690877-8462-4102-99c4-b9b55ca6c739-var-lib-calico\") pod \"tigera-operator-755d956888-4l5w7\" (UID: \"61690877-8462-4102-99c4-b9b55ca6c739\") " pod="tigera-operator/tigera-operator-755d956888-4l5w7" Sep 16 04:58:32.931374 kubelet[3170]: I0916 04:58:32.931324 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwtfs\" (UniqueName: \"kubernetes.io/projected/61690877-8462-4102-99c4-b9b55ca6c739-kube-api-access-wwtfs\") pod \"tigera-operator-755d956888-4l5w7\" (UID: \"61690877-8462-4102-99c4-b9b55ca6c739\") " pod="tigera-operator/tigera-operator-755d956888-4l5w7" Sep 16 04:58:33.201994 containerd[1722]: time="2025-09-16T04:58:33.201925512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-4l5w7,Uid:61690877-8462-4102-99c4-b9b55ca6c739,Namespace:tigera-operator,Attempt:0,}" Sep 16 04:58:33.240359 containerd[1722]: time="2025-09-16T04:58:33.240299913Z" level=info msg="connecting to shim 1c088de9a0698eff789b8ce7b357e3a854e24329b7c3d224fde0357433f47415" address="unix:///run/containerd/s/aab1de1f4bac153aa77ffc3bc634f140eb727b4c33690082b504b6808d8cfb7c" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:58:33.260229 systemd[1]: Started cri-containerd-1c088de9a0698eff789b8ce7b357e3a854e24329b7c3d224fde0357433f47415.scope - libcontainer container 1c088de9a0698eff789b8ce7b357e3a854e24329b7c3d224fde0357433f47415. Sep 16 04:58:33.297885 containerd[1722]: time="2025-09-16T04:58:33.297863821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-4l5w7,Uid:61690877-8462-4102-99c4-b9b55ca6c739,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1c088de9a0698eff789b8ce7b357e3a854e24329b7c3d224fde0357433f47415\"" Sep 16 04:58:33.298957 containerd[1722]: time="2025-09-16T04:58:33.298919192Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 16 04:58:34.625522 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1076690634.mount: Deactivated successfully. 
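[editor's note] The tmpmount unit name just above, containerd\x2dmount1076690634.mount, shows systemd's unit-name escaping: '/' in the underlying path becomes '-', and a literal '-' is encoded as the byte escape \x2d (the mapping "systemd-escape --unescape" reverses). A small sketch of the reverse mapping, with no validation beyond what the example needs:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeUnit reverses the \xXX escapes in a systemd unit name and maps the
// remaining '-' separators back to '/'. Minimal sketch for the log example.
func unescapeUnit(name string) string {
	var b strings.Builder
	for i := 0; i < len(name); i++ {
		if name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x' {
			if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v))
				i += 3
				continue
			}
		}
		if name[i] == '-' {
			b.WriteByte('/')
			continue
		}
		b.WriteByte(name[i])
	}
	return b.String()
}

func main() {
	// Mount unit name from the log, .mount suffix stripped.
	fmt.Println("/" + unescapeUnit(`var-lib-containerd-tmpmounts-containerd\x2dmount1076690634`))
	// Output: /var/lib/containerd/tmpmounts/containerd-mount1076690634
}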
Sep 16 04:58:34.726424 kubelet[3170]: I0916 04:58:34.726230 3170 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jbtdd" podStartSLOduration=2.726213574 podStartE2EDuration="2.726213574s" podCreationTimestamp="2025-09-16 04:58:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:58:33.360211714 +0000 UTC m=+8.152237736" watchObservedRunningTime="2025-09-16 04:58:34.726213574 +0000 UTC m=+9.518239603" Sep 16 04:58:35.122668 containerd[1722]: time="2025-09-16T04:58:35.122634565Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:35.125583 containerd[1722]: time="2025-09-16T04:58:35.125556419Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 16 04:58:35.128315 containerd[1722]: time="2025-09-16T04:58:35.128279776Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:35.131544 containerd[1722]: time="2025-09-16T04:58:35.131503040Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:35.132019 containerd[1722]: time="2025-09-16T04:58:35.131998520Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 1.833039632s" Sep 16 04:58:35.132058 containerd[1722]: time="2025-09-16T04:58:35.132023695Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 16 04:58:35.139206 containerd[1722]: time="2025-09-16T04:58:35.139167746Z" level=info msg="CreateContainer within sandbox \"1c088de9a0698eff789b8ce7b357e3a854e24329b7c3d224fde0357433f47415\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 16 04:58:35.156958 containerd[1722]: time="2025-09-16T04:58:35.156933894Z" level=info msg="Container 399c0d5ccbddd1f534e413c6de6ddc6fe36f7ab79586dd6dbc202435b8493e40: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:58:35.172958 containerd[1722]: time="2025-09-16T04:58:35.172933783Z" level=info msg="CreateContainer within sandbox \"1c088de9a0698eff789b8ce7b357e3a854e24329b7c3d224fde0357433f47415\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"399c0d5ccbddd1f534e413c6de6ddc6fe36f7ab79586dd6dbc202435b8493e40\"" Sep 16 04:58:35.173466 containerd[1722]: time="2025-09-16T04:58:35.173432157Z" level=info msg="StartContainer for \"399c0d5ccbddd1f534e413c6de6ddc6fe36f7ab79586dd6dbc202435b8493e40\"" Sep 16 04:58:35.174419 containerd[1722]: time="2025-09-16T04:58:35.174395081Z" level=info msg="connecting to shim 399c0d5ccbddd1f534e413c6de6ddc6fe36f7ab79586dd6dbc202435b8493e40" address="unix:///run/containerd/s/aab1de1f4bac153aa77ffc3bc634f140eb727b4c33690082b504b6808d8cfb7c" protocol=ttrpc version=3 Sep 16 04:58:35.195218 systemd[1]: Started 
cri-containerd-399c0d5ccbddd1f534e413c6de6ddc6fe36f7ab79586dd6dbc202435b8493e40.scope - libcontainer container 399c0d5ccbddd1f534e413c6de6ddc6fe36f7ab79586dd6dbc202435b8493e40. Sep 16 04:58:35.222289 containerd[1722]: time="2025-09-16T04:58:35.222248864Z" level=info msg="StartContainer for \"399c0d5ccbddd1f534e413c6de6ddc6fe36f7ab79586dd6dbc202435b8493e40\" returns successfully" Sep 16 04:58:36.186762 kubelet[3170]: I0916 04:58:36.186621 3170 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-4l5w7" podStartSLOduration=2.352597583 podStartE2EDuration="4.186605745s" podCreationTimestamp="2025-09-16 04:58:32 +0000 UTC" firstStartedPulling="2025-09-16 04:58:33.298634045 +0000 UTC m=+8.090660079" lastFinishedPulling="2025-09-16 04:58:35.132642209 +0000 UTC m=+9.924668241" observedRunningTime="2025-09-16 04:58:35.366211955 +0000 UTC m=+10.158237981" watchObservedRunningTime="2025-09-16 04:58:36.186605745 +0000 UTC m=+10.978631769" Sep 16 04:58:40.536510 sudo[2191]: pam_unix(sudo:session): session closed for user root Sep 16 04:58:40.637617 sshd[2190]: Connection closed by 10.200.16.10 port 51476 Sep 16 04:58:40.638016 sshd-session[2175]: pam_unix(sshd:session): session closed for user core Sep 16 04:58:40.642303 systemd-logind[1696]: Session 9 logged out. Waiting for processes to exit. Sep 16 04:58:40.642799 systemd[1]: sshd@6-10.200.8.38:22-10.200.16.10:51476.service: Deactivated successfully. Sep 16 04:58:40.645611 systemd[1]: session-9.scope: Deactivated successfully. Sep 16 04:58:40.645925 systemd[1]: session-9.scope: Consumed 3.151s CPU time, 231.8M memory peak. Sep 16 04:58:40.649491 systemd-logind[1696]: Removed session 9. Sep 16 04:58:43.443863 systemd[1]: Created slice kubepods-besteffort-pod5a4f345a_4c0f_4915_9c12_7e1d5d6ac96d.slice - libcontainer container kubepods-besteffort-pod5a4f345a_4c0f_4915_9c12_7e1d5d6ac96d.slice. 
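[editor's note] The pod_startup_latency_tracker entries are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling). For the tigera-operator pod logged just above, 4.186605745s end-to-end minus the 1.834008164s pull window reproduces the reported 2.352597583s SLO duration to within a couple of nanoseconds of rounding. A quick check of that arithmetic with the timestamps copied from the log:

package main

import (
	"fmt"
	"time"
)

// ts parses the timestamp layout the kubelet prints in these entries.
func ts(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := ts("2025-09-16 04:58:32 +0000 UTC")             // podCreationTimestamp
	running := ts("2025-09-16 04:58:36.186605745 +0000 UTC")   // watchObservedRunningTime
	pullStart := ts("2025-09-16 04:58:33.298634045 +0000 UTC") // firstStartedPulling
	pullEnd := ts("2025-09-16 04:58:35.132642209 +0000 UTC")   // lastFinishedPulling

	e2e := running.Sub(created)         // ~4.186605745s (podStartE2EDuration)
	slo := e2e - pullEnd.Sub(pullStart) // ~2.352597581s (podStartSLOduration)
	fmt.Println(e2e, slo)
}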
Sep 16 04:58:43.501186 kubelet[3170]: I0916 04:58:43.501148 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bs4v\" (UniqueName: \"kubernetes.io/projected/5a4f345a-4c0f-4915-9c12-7e1d5d6ac96d-kube-api-access-7bs4v\") pod \"calico-typha-cc66c7f88-zbll8\" (UID: \"5a4f345a-4c0f-4915-9c12-7e1d5d6ac96d\") " pod="calico-system/calico-typha-cc66c7f88-zbll8" Sep 16 04:58:43.501186 kubelet[3170]: I0916 04:58:43.501185 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5a4f345a-4c0f-4915-9c12-7e1d5d6ac96d-typha-certs\") pod \"calico-typha-cc66c7f88-zbll8\" (UID: \"5a4f345a-4c0f-4915-9c12-7e1d5d6ac96d\") " pod="calico-system/calico-typha-cc66c7f88-zbll8" Sep 16 04:58:43.501615 kubelet[3170]: I0916 04:58:43.501206 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a4f345a-4c0f-4915-9c12-7e1d5d6ac96d-tigera-ca-bundle\") pod \"calico-typha-cc66c7f88-zbll8\" (UID: \"5a4f345a-4c0f-4915-9c12-7e1d5d6ac96d\") " pod="calico-system/calico-typha-cc66c7f88-zbll8" Sep 16 04:58:43.750819 containerd[1722]: time="2025-09-16T04:58:43.750720540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cc66c7f88-zbll8,Uid:5a4f345a-4c0f-4915-9c12-7e1d5d6ac96d,Namespace:calico-system,Attempt:0,}" Sep 16 04:58:43.791575 containerd[1722]: time="2025-09-16T04:58:43.791546233Z" level=info msg="connecting to shim 62e8ba3cb69d78828da28da4768dee5949f8ab7948f15c43da82896616a8e044" address="unix:///run/containerd/s/df8b5b81982d43bf580beb627f33a5b460221601e022e0978c92ac42f55a721c" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:58:43.819246 systemd[1]: Started cri-containerd-62e8ba3cb69d78828da28da4768dee5949f8ab7948f15c43da82896616a8e044.scope - libcontainer container 62e8ba3cb69d78828da28da4768dee5949f8ab7948f15c43da82896616a8e044. Sep 16 04:58:43.867446 systemd[1]: Created slice kubepods-besteffort-pod02beee91_514c_4ca6_a285_ca4eb2594fe2.slice - libcontainer container kubepods-besteffort-pod02beee91_514c_4ca6_a285_ca4eb2594fe2.slice. 
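[editor's note] The "Created slice kubepods-besteffort-pod..." entries show how the kubelet's systemd cgroup driver (cgroupDriver="systemd" earlier in this log) names pod cgroups: kubepods, the QoS class, and the pod UID with its dashes rewritten to underscores, since '-' is the hierarchy separator in slice names. A sketch of that mapping, checked against the calico-node pod whose UID appears in the volume entries that follow:

package main

import (
	"fmt"
	"strings"
)

// podSliceName mirrors the naming convention observed in the log: the UID's
// dashes become underscores so they are not read as slice-hierarchy separators.
func podSliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(podSliceName("besteffort", "02beee91-514c-4ca6-a285-ca4eb2594fe2"))
	// Output: kubepods-besteffort-pod02beee91_514c_4ca6_a285_ca4eb2594fe2.slice
}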
Sep 16 04:58:43.880876 containerd[1722]: time="2025-09-16T04:58:43.880810530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cc66c7f88-zbll8,Uid:5a4f345a-4c0f-4915-9c12-7e1d5d6ac96d,Namespace:calico-system,Attempt:0,} returns sandbox id \"62e8ba3cb69d78828da28da4768dee5949f8ab7948f15c43da82896616a8e044\"" Sep 16 04:58:43.885054 containerd[1722]: time="2025-09-16T04:58:43.884991738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 16 04:58:43.903705 kubelet[3170]: I0916 04:58:43.903679 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/02beee91-514c-4ca6-a285-ca4eb2594fe2-var-run-calico\") pod \"calico-node-4l4kk\" (UID: \"02beee91-514c-4ca6-a285-ca4eb2594fe2\") " pod="calico-system/calico-node-4l4kk" Sep 16 04:58:43.903852 kubelet[3170]: I0916 04:58:43.903805 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/02beee91-514c-4ca6-a285-ca4eb2594fe2-cni-log-dir\") pod \"calico-node-4l4kk\" (UID: \"02beee91-514c-4ca6-a285-ca4eb2594fe2\") " pod="calico-system/calico-node-4l4kk" Sep 16 04:58:43.903852 kubelet[3170]: I0916 04:58:43.903818 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/02beee91-514c-4ca6-a285-ca4eb2594fe2-var-lib-calico\") pod \"calico-node-4l4kk\" (UID: \"02beee91-514c-4ca6-a285-ca4eb2594fe2\") " pod="calico-system/calico-node-4l4kk" Sep 16 04:58:43.903852 kubelet[3170]: I0916 04:58:43.903828 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02beee91-514c-4ca6-a285-ca4eb2594fe2-xtables-lock\") pod \"calico-node-4l4kk\" (UID: \"02beee91-514c-4ca6-a285-ca4eb2594fe2\") " pod="calico-system/calico-node-4l4kk" Sep 16 04:58:43.903989 kubelet[3170]: I0916 04:58:43.903839 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8v24\" (UniqueName: \"kubernetes.io/projected/02beee91-514c-4ca6-a285-ca4eb2594fe2-kube-api-access-d8v24\") pod \"calico-node-4l4kk\" (UID: \"02beee91-514c-4ca6-a285-ca4eb2594fe2\") " pod="calico-system/calico-node-4l4kk" Sep 16 04:58:43.903989 kubelet[3170]: I0916 04:58:43.903954 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/02beee91-514c-4ca6-a285-ca4eb2594fe2-cni-bin-dir\") pod \"calico-node-4l4kk\" (UID: \"02beee91-514c-4ca6-a285-ca4eb2594fe2\") " pod="calico-system/calico-node-4l4kk" Sep 16 04:58:43.903989 kubelet[3170]: I0916 04:58:43.903966 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/02beee91-514c-4ca6-a285-ca4eb2594fe2-flexvol-driver-host\") pod \"calico-node-4l4kk\" (UID: \"02beee91-514c-4ca6-a285-ca4eb2594fe2\") " pod="calico-system/calico-node-4l4kk" Sep 16 04:58:43.904083 kubelet[3170]: I0916 04:58:43.903978 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/02beee91-514c-4ca6-a285-ca4eb2594fe2-node-certs\") pod \"calico-node-4l4kk\" (UID: \"02beee91-514c-4ca6-a285-ca4eb2594fe2\") " 
pod="calico-system/calico-node-4l4kk" Sep 16 04:58:43.904083 kubelet[3170]: I0916 04:58:43.904077 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/02beee91-514c-4ca6-a285-ca4eb2594fe2-policysync\") pod \"calico-node-4l4kk\" (UID: \"02beee91-514c-4ca6-a285-ca4eb2594fe2\") " pod="calico-system/calico-node-4l4kk" Sep 16 04:58:43.904185 kubelet[3170]: I0916 04:58:43.904104 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/02beee91-514c-4ca6-a285-ca4eb2594fe2-tigera-ca-bundle\") pod \"calico-node-4l4kk\" (UID: \"02beee91-514c-4ca6-a285-ca4eb2594fe2\") " pod="calico-system/calico-node-4l4kk" Sep 16 04:58:43.904185 kubelet[3170]: I0916 04:58:43.904119 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02beee91-514c-4ca6-a285-ca4eb2594fe2-lib-modules\") pod \"calico-node-4l4kk\" (UID: \"02beee91-514c-4ca6-a285-ca4eb2594fe2\") " pod="calico-system/calico-node-4l4kk" Sep 16 04:58:43.904185 kubelet[3170]: I0916 04:58:43.904135 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/02beee91-514c-4ca6-a285-ca4eb2594fe2-cni-net-dir\") pod \"calico-node-4l4kk\" (UID: \"02beee91-514c-4ca6-a285-ca4eb2594fe2\") " pod="calico-system/calico-node-4l4kk" Sep 16 04:58:44.022403 kubelet[3170]: E0916 04:58:44.022329 3170 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:44.022403 kubelet[3170]: W0916 04:58:44.022360 3170 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:44.022403 kubelet[3170]: E0916 04:58:44.022379 3170 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:44.107855 kubelet[3170]: E0916 04:58:44.107820 3170 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qlxsc" podUID="9b4e9d9e-651f-42b1-b357-b24bcf14db30" Sep 16 04:58:44.171305 containerd[1722]: time="2025-09-16T04:58:44.171273234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4l4kk,Uid:02beee91-514c-4ca6-a285-ca4eb2594fe2,Namespace:calico-system,Attempt:0,}" Sep 16 04:58:44.194595 kubelet[3170]: E0916 04:58:44.194570 3170 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:44.194595 kubelet[3170]: W0916 04:58:44.194586 3170 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:44.194698 kubelet[3170]: E0916 04:58:44.194603 3170 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:58:44.195035 kubelet[3170]: E0916 04:58:44.195020 3170 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:44.195035 kubelet[3170]: W0916 04:58:44.195032 3170 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:44.195185 kubelet[3170]: E0916 04:58:44.195044 3170 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:44.195360 kubelet[3170]: E0916 04:58:44.195347 3170 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:44.195395 kubelet[3170]: W0916 04:58:44.195367 3170 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:44.195395 kubelet[3170]: E0916 04:58:44.195380 3170 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:44.195579 kubelet[3170]: E0916 04:58:44.195562 3170 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:44.195579 kubelet[3170]: W0916 04:58:44.195573 3170 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:44.195632 kubelet[3170]: E0916 04:58:44.195584 3170 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:44.195702 kubelet[3170]: E0916 04:58:44.195694 3170 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:44.195737 kubelet[3170]: W0916 04:58:44.195702 3170 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:44.195737 kubelet[3170]: E0916 04:58:44.195709 3170 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:44.195797 kubelet[3170]: E0916 04:58:44.195791 3170 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:44.195831 kubelet[3170]: W0916 04:58:44.195798 3170 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:44.195831 kubelet[3170]: E0916 04:58:44.195804 3170 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:58:44.195888 kubelet[3170]: E0916 04:58:44.195882 3170 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:44.195911 kubelet[3170]: W0916 04:58:44.195888 3170 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:44.195911 kubelet[3170]: E0916 04:58:44.195893 3170 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:44.196364 kubelet[3170]: E0916 04:58:44.195971 3170 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:44.196364 kubelet[3170]: W0916 04:58:44.195976 3170 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:44.196364 kubelet[3170]: E0916 04:58:44.195981 3170 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:44.196364 kubelet[3170]: E0916 04:58:44.196076 3170 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:44.196364 kubelet[3170]: W0916 04:58:44.196081 3170 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:44.196364 kubelet[3170]: E0916 04:58:44.196101 3170 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:44.196364 kubelet[3170]: E0916 04:58:44.196184 3170 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:44.196364 kubelet[3170]: W0916 04:58:44.196189 3170 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:44.196364 kubelet[3170]: E0916 04:58:44.196194 3170 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:44.196637 kubelet[3170]: E0916 04:58:44.196477 3170 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:44.196637 kubelet[3170]: W0916 04:58:44.196486 3170 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:44.196637 kubelet[3170]: E0916 04:58:44.196495 3170 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:58:44.197044 kubelet[3170]: E0916 04:58:44.197030 3170 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:44.197044 kubelet[3170]: W0916 04:58:44.197044 3170 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:44.197185 kubelet[3170]: E0916 04:58:44.197055 3170 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:44.197216 kubelet[3170]: E0916 04:58:44.197191 3170 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:44.197216 kubelet[3170]: W0916 04:58:44.197198 3170 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:44.197216 kubelet[3170]: E0916 04:58:44.197205 3170 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:44.197312 kubelet[3170]: E0916 04:58:44.197304 3170 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:44.197336 kubelet[3170]: W0916 04:58:44.197312 3170 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:44.197336 kubelet[3170]: E0916 04:58:44.197318 3170 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:44.197407 kubelet[3170]: E0916 04:58:44.197401 3170 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:44.197427 kubelet[3170]: W0916 04:58:44.197407 3170 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:44.197427 kubelet[3170]: E0916 04:58:44.197414 3170 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:44.197497 kubelet[3170]: E0916 04:58:44.197491 3170 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:44.197529 kubelet[3170]: W0916 04:58:44.197497 3170 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:44.197529 kubelet[3170]: E0916 04:58:44.197503 3170 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:58:44.197667 kubelet[3170]: E0916 04:58:44.197589 3170 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:44.197667 kubelet[3170]: W0916 04:58:44.197595 3170 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:44.197667 kubelet[3170]: E0916 04:58:44.197601 3170 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:44.197727 kubelet[3170]: E0916 04:58:44.197674 3170 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:44.197727 kubelet[3170]: W0916 04:58:44.197677 3170 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:44.197727 kubelet[3170]: E0916 04:58:44.197682 3170 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:44.198156 kubelet[3170]: E0916 04:58:44.197782 3170 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:44.198156 kubelet[3170]: W0916 04:58:44.197786 3170 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:44.198156 kubelet[3170]: E0916 04:58:44.197792 3170 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:44.198156 kubelet[3170]: E0916 04:58:44.197894 3170 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:44.198156 kubelet[3170]: W0916 04:58:44.197898 3170 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:44.198156 kubelet[3170]: E0916 04:58:44.197904 3170 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:44.205925 kubelet[3170]: E0916 04:58:44.205896 3170 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:44.206019 kubelet[3170]: W0916 04:58:44.205931 3170 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:44.206019 kubelet[3170]: E0916 04:58:44.205945 3170 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:58:44.206019 kubelet[3170]: I0916 04:58:44.205971 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9b4e9d9e-651f-42b1-b357-b24bcf14db30-varrun\") pod \"csi-node-driver-qlxsc\" (UID: \"9b4e9d9e-651f-42b1-b357-b24bcf14db30\") " pod="calico-system/csi-node-driver-qlxsc" Sep 16 04:58:44.206148 kubelet[3170]: E0916 04:58:44.206139 3170 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:44.206148 kubelet[3170]: W0916 04:58:44.206146 3170 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:44.206211 kubelet[3170]: E0916 04:58:44.206155 3170 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:44.206211 kubelet[3170]: I0916 04:58:44.206178 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxsgb\" (UniqueName: \"kubernetes.io/projected/9b4e9d9e-651f-42b1-b357-b24bcf14db30-kube-api-access-qxsgb\") pod \"csi-node-driver-qlxsc\" (UID: \"9b4e9d9e-651f-42b1-b357-b24bcf14db30\") " pod="calico-system/csi-node-driver-qlxsc" Sep 16 04:58:44.206360 kubelet[3170]: E0916 04:58:44.206329 3170 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:44.206360 kubelet[3170]: W0916 04:58:44.206338 3170 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:44.206360 kubelet[3170]: E0916 04:58:44.206349 3170 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:44.206431 kubelet[3170]: I0916 04:58:44.206369 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9b4e9d9e-651f-42b1-b357-b24bcf14db30-registration-dir\") pod \"csi-node-driver-qlxsc\" (UID: \"9b4e9d9e-651f-42b1-b357-b24bcf14db30\") " pod="calico-system/csi-node-driver-qlxsc" Sep 16 04:58:44.206573 kubelet[3170]: E0916 04:58:44.206535 3170 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:44.206573 kubelet[3170]: W0916 04:58:44.206548 3170 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:44.206573 kubelet[3170]: E0916 04:58:44.206557 3170 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Sep 16 04:58:44.206856 kubelet[3170]: I0916 04:58:44.206572 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9b4e9d9e-651f-42b1-b357-b24bcf14db30-socket-dir\") pod \"csi-node-driver-qlxsc\" (UID: \"9b4e9d9e-651f-42b1-b357-b24bcf14db30\") " pod="calico-system/csi-node-driver-qlxsc"
Sep 16 04:58:44.206856 kubelet[3170]: I0916 04:58:44.206694 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9b4e9d9e-651f-42b1-b357-b24bcf14db30-kubelet-dir\") pod \"csi-node-driver-qlxsc\" (UID: \"9b4e9d9e-651f-42b1-b357-b24bcf14db30\") " pod="calico-system/csi-node-driver-qlxsc"
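The reconciler_common entries show operationExecutor.VerifyControllerAttachedVolume walking the csi-node-driver pod's five volumes: four hostPath mounts plus one projected service-account token. A hedged client-go sketch of what those definitions plausibly look like; the log records only the volume names, so the host paths below are assumptions based on typical Calico CSI manifests:

    // volumes.go - sketch of the volumes the reconciler verifies above.
    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
    )

    func csiNodeDriverVolumes() []v1.Volume {
        hostPath := func(name, path string) v1.Volume {
            return v1.Volume{
                Name:         name,
                VolumeSource: v1.VolumeSource{HostPath: &v1.HostPathVolumeSource{Path: path}},
            }
        }
        return []v1.Volume{
            hostPath("varrun", "/var/run"),                                    // assumed path
            hostPath("socket-dir", "/var/lib/kubelet/plugins/csi.tigera.io"),  // assumed path
            hostPath("registration-dir", "/var/lib/kubelet/plugins_registry"), // assumed path
            hostPath("kubelet-dir", "/var/lib/kubelet"),                       // assumed path
            // "kube-api-access-qxsgb" is the projected service-account volume,
            // injected by the API server rather than declared in the manifest.
        }
    }

    func main() {
        for _, v := range csiNodeDriverVolumes() {
            fmt.Println(v.Name)
        }
    }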
Sep 16 04:58:44.216050 containerd[1722]: time="2025-09-16T04:58:44.216021897Z" level=info msg="connecting to shim 6b8e81d14616412148d1861dca555ca361b945cffb962d712b1a3ee8f04d144c" address="unix:///run/containerd/s/a6d9ad3051bf555edd1c57e64c1ed4df2883b7d409eca1362405eb71fae5ce3e" namespace=k8s.io protocol=ttrpc version=3
Sep 16 04:58:44.235207 systemd[1]: Started cri-containerd-6b8e81d14616412148d1861dca555ca361b945cffb962d712b1a3ee8f04d144c.scope - libcontainer container 6b8e81d14616412148d1861dca555ca361b945cffb962d712b1a3ee8f04d144c.
Sep 16 04:58:44.257216 containerd[1722]: time="2025-09-16T04:58:44.257172950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4l4kk,Uid:02beee91-514c-4ca6-a285-ca4eb2594fe2,Namespace:calico-system,Attempt:0,} returns sandbox id \"6b8e81d14616412148d1861dca555ca361b945cffb962d712b1a3ee8f04d144c\""
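These three entries are one sandbox start seen from both sides: containerd spawns a runtime-v2 shim and connects to it over the ttrpc socket under /run/containerd/s/, while systemd tracks the resulting libcontainer process as a transient cri-containerd-*.scope unit. A hedged sketch of the equivalent create-and-start flow through containerd's Go client (the CRI plugin does this internally; the container ID and image ref here are illustrative):

    // runtask.go - sketch of the create/start flow behind "connecting to shim".
    package main

    import (
        "context"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/cio"
        "github.com/containerd/containerd/namespaces"
        "github.com/containerd/containerd/oci"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // The CRI plugin runs everything in the k8s.io namespace, as logged above.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        image, err := client.GetImage(ctx, "ghcr.io/flatcar/calico/node:v3.30.3") // assumed ref
        if err != nil {
            log.Fatal(err)
        }
        container, err := client.NewContainer(ctx, "example-task",
            containerd.WithNewSnapshot("example-task-snap", image),
            containerd.WithNewSpec(oci.WithImageConfig(image)))
        if err != nil {
            log.Fatal(err)
        }
        defer container.Delete(ctx, containerd.WithSnapshotCleanup)

        // NewTask is the step that launches the shim and opens the ttrpc connection.
        task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
        if err != nil {
            log.Fatal(err)
        }
        defer task.Delete(ctx)

        if err := task.Start(ctx); err != nil {
            log.Fatal(err)
        }
        log.Println("task started") // corresponds to "StartContainer ... returns successfully"
    }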
Sep 16 04:58:45.469586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3364288907.mount: Deactivated successfully.
Sep 16 04:58:46.286328 kubelet[3170]: E0916 04:58:46.286280 3170 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qlxsc" podUID="9b4e9d9e-651f-42b1-b357-b24bcf14db30"
Sep 16 04:58:46.369635 containerd[1722]: time="2025-09-16T04:58:46.369597467Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:58:46.371657 containerd[1722]: time="2025-09-16T04:58:46.371634004Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389"
Sep 16 04:58:46.373992 containerd[1722]: time="2025-09-16T04:58:46.373959909Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:58:46.377695 containerd[1722]: time="2025-09-16T04:58:46.377303769Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:58:46.377695 containerd[1722]: time="2025-09-16T04:58:46.377587004Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.492567632s"
Sep 16 04:58:46.377695 containerd[1722]: time="2025-09-16T04:58:46.377607842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\""
Sep 16 04:58:46.378303 containerd[1722]: time="2025-09-16T04:58:46.378276409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
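The typha pull above reports both the unpacked size and its wall-clock cost ("in 2.492567632s"), measured from the PullImage request to the final unpack. A hedged sketch of the same operation through the containerd client, timed the same way:

    // pull.go - sketch of a pull-and-unpack matching the timing entry above.
    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        start := time.Now()
        // WithPullUnpack resolves, fetches, and unpacks in one call, so the
        // elapsed time covers the whole window the log attributes to the pull.
        image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/typha:v3.30.3",
            containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("Pulled %s in %s\n", image.Name(), time.Since(start))
    }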
address="unix:///run/containerd/s/df8b5b81982d43bf580beb627f33a5b460221601e022e0978c92ac42f55a721c" protocol=ttrpc version=3 Sep 16 04:58:46.457209 systemd[1]: Started cri-containerd-733e84c6fc6aec4a633f6152c8abb096de1dd686f5a87dc5a0bd02d3e98d28d3.scope - libcontainer container 733e84c6fc6aec4a633f6152c8abb096de1dd686f5a87dc5a0bd02d3e98d28d3. Sep 16 04:58:46.498841 containerd[1722]: time="2025-09-16T04:58:46.498666949Z" level=info msg="StartContainer for \"733e84c6fc6aec4a633f6152c8abb096de1dd686f5a87dc5a0bd02d3e98d28d3\" returns successfully" Sep 16 04:58:47.413647 kubelet[3170]: E0916 04:58:47.413623 3170 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:47.413647 kubelet[3170]: W0916 04:58:47.413641 3170 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:47.413647 kubelet[3170]: E0916 04:58:47.413658 3170 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:47.414246 kubelet[3170]: E0916 04:58:47.413769 3170 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:47.414246 kubelet[3170]: W0916 04:58:47.413775 3170 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:47.414246 kubelet[3170]: E0916 04:58:47.413783 3170 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:47.414246 kubelet[3170]: E0916 04:58:47.413868 3170 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:47.414246 kubelet[3170]: W0916 04:58:47.413872 3170 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:47.414246 kubelet[3170]: E0916 04:58:47.413878 3170 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:58:47.414246 kubelet[3170]: E0916 04:58:47.413995 3170 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:58:47.414246 kubelet[3170]: W0916 04:58:47.414000 3170 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:58:47.414246 kubelet[3170]: E0916 04:58:47.414008 3170 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Sep 16 04:58:47.726675 containerd[1722]: time="2025-09-16T04:58:47.726478902Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:58:47.728875 containerd[1722]: time="2025-09-16T04:58:47.728851055Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660"
Sep 16 04:58:47.731528 containerd[1722]: time="2025-09-16T04:58:47.731500998Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:58:47.734982 containerd[1722]: time="2025-09-16T04:58:47.734940698Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:58:47.735586 containerd[1722]: time="2025-09-16T04:58:47.735326505Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.356351482s"
Sep 16 04:58:47.735586 containerd[1722]: time="2025-09-16T04:58:47.735352814Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\""
Sep 16 04:58:47.741559 containerd[1722]: time="2025-09-16T04:58:47.741321717Z" level=info msg="CreateContainer within sandbox \"6b8e81d14616412148d1861dca555ca361b945cffb962d712b1a3ee8f04d144c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep 16 04:58:47.764179 containerd[1722]: time="2025-09-16T04:58:47.764154870Z" level=info msg="Container 28db28cd9eed8bcd6aa51351423e40c6686d99057622223f01336f33d3bd3033: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:58:47.780655 containerd[1722]: time="2025-09-16T04:58:47.780632851Z" level=info msg="CreateContainer within sandbox \"6b8e81d14616412148d1861dca555ca361b945cffb962d712b1a3ee8f04d144c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"28db28cd9eed8bcd6aa51351423e40c6686d99057622223f01336f33d3bd3033\""
Sep 16 04:58:47.781147 containerd[1722]: time="2025-09-16T04:58:47.781129729Z" level=info msg="StartContainer for \"28db28cd9eed8bcd6aa51351423e40c6686d99057622223f01336f33d3bd3033\""
Sep 16 04:58:47.782451 containerd[1722]: time="2025-09-16T04:58:47.782428143Z" level=info msg="connecting to shim 28db28cd9eed8bcd6aa51351423e40c6686d99057622223f01336f33d3bd3033" address="unix:///run/containerd/s/a6d9ad3051bf555edd1c57e64c1ed4df2883b7d409eca1362405eb71fae5ce3e" protocol=ttrpc version=3
Sep 16 04:58:47.809220 systemd[1]: Started cri-containerd-28db28cd9eed8bcd6aa51351423e40c6686d99057622223f01336f33d3bd3033.scope - libcontainer container 28db28cd9eed8bcd6aa51351423e40c6686d99057622223f01336f33d3bd3033.
Sep 16 04:58:47.837266 containerd[1722]: time="2025-09-16T04:58:47.837246505Z" level=info msg="StartContainer for \"28db28cd9eed8bcd6aa51351423e40c6686d99057622223f01336f33d3bd3033\" returns successfully"
Sep 16 04:58:47.845064 systemd[1]: cri-containerd-28db28cd9eed8bcd6aa51351423e40c6686d99057622223f01336f33d3bd3033.scope: Deactivated successfully.
Sep 16 04:58:47.846990 containerd[1722]: time="2025-09-16T04:58:47.846965723Z" level=info msg="received exit event container_id:\"28db28cd9eed8bcd6aa51351423e40c6686d99057622223f01336f33d3bd3033\" id:\"28db28cd9eed8bcd6aa51351423e40c6686d99057622223f01336f33d3bd3033\" pid:3839 exited_at:{seconds:1757998727 nanos:846722458}"
Sep 16 04:58:47.847455 containerd[1722]: time="2025-09-16T04:58:47.847439905Z" level=info msg="TaskExit event in podsandbox handler container_id:\"28db28cd9eed8bcd6aa51351423e40c6686d99057622223f01336f33d3bd3033\" id:\"28db28cd9eed8bcd6aa51351423e40c6686d99057622223f01336f33d3bd3033\" pid:3839 exited_at:{seconds:1757998727 nanos:846722458}"
Sep 16 04:58:47.863400 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28db28cd9eed8bcd6aa51351423e40c6686d99057622223f01336f33d3bd3033-rootfs.mount: Deactivated successfully.
Sep 16 04:58:48.286795 kubelet[3170]: E0916 04:58:48.286691 3170 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qlxsc" podUID="9b4e9d9e-651f-42b1-b357-b24bcf14db30"
Sep 16 04:58:48.363829 kubelet[3170]: I0916 04:58:48.363814 3170 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 16 04:58:48.377653 kubelet[3170]: I0916 04:58:48.377609 3170 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-cc66c7f88-zbll8" podStartSLOduration=2.882918618 podStartE2EDuration="5.377594989s" podCreationTimestamp="2025-09-16 04:58:43 +0000 UTC" firstStartedPulling="2025-09-16 04:58:43.88352546 +0000 UTC m=+18.675551482" lastFinishedPulling="2025-09-16 04:58:46.378201797 +0000 UTC m=+21.170227853" observedRunningTime="2025-09-16 04:58:47.372299549 +0000 UTC m=+22.164325576" watchObservedRunningTime="2025-09-16 04:58:48.377594989 +0000 UTC m=+23.169621068"
Sep 16 04:58:50.286640 kubelet[3170]: E0916 04:58:50.286360 3170 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qlxsc" podUID="9b4e9d9e-651f-42b1-b357-b24bcf14db30"
Sep 16 04:58:50.362616 kubelet[3170]: I0916 04:58:50.362305 3170 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 16 04:58:50.369629 containerd[1722]: time="2025-09-16T04:58:50.369397124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 16 04:58:52.286652 kubelet[3170]: E0916 04:58:52.286617 3170 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qlxsc" podUID="9b4e9d9e-651f-42b1-b357-b24bcf14db30"
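The pod_startup_latency_tracker entry for calico-typha encodes a small calculation worth spelling out: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling, best taken from the monotonic m=+ offsets). A sketch reproducing the logged numbers:

    // slo.go - reproduces the podStartSLOduration arithmetic for calico-typha.
    package main

    import "fmt"

    func main() {
        const (
            e2e                 = 5.377594989  // podStartE2EDuration, seconds (from the log)
            firstStartedPulling = 18.675551482 // monotonic m=+ offset, seconds
            lastFinishedPulling = 21.170227853 // monotonic m=+ offset, seconds
        )
        pullWindow := lastFinishedPulling - firstStartedPulling // 2.494676371s spent pulling
        fmt.Printf("podStartSLOduration=%.9f\n", e2e-pullWindow)
        // Prints 2.882918618, matching the tracker entry above: the SLO metric
        // deliberately excludes image-pull time from pod startup latency.
    }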
name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:53.914196 containerd[1722]: time="2025-09-16T04:58:53.914107124Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 16 04:58:53.917032 containerd[1722]: time="2025-09-16T04:58:53.917009391Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:53.921042 containerd[1722]: time="2025-09-16T04:58:53.920872085Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:58:53.921389 containerd[1722]: time="2025-09-16T04:58:53.921366839Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.551933941s" Sep 16 04:58:53.921431 containerd[1722]: time="2025-09-16T04:58:53.921396189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 16 04:58:53.933167 containerd[1722]: time="2025-09-16T04:58:53.933142554Z" level=info msg="CreateContainer within sandbox \"6b8e81d14616412148d1861dca555ca361b945cffb962d712b1a3ee8f04d144c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 16 04:58:53.952428 containerd[1722]: time="2025-09-16T04:58:53.951277724Z" level=info msg="Container d941251a7bcc5dedf2a70ce3cfc5bee8a2950117adaa53e643326983466ddceb: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:58:53.955632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount592075966.mount: Deactivated successfully. Sep 16 04:58:53.968476 containerd[1722]: time="2025-09-16T04:58:53.968454277Z" level=info msg="CreateContainer within sandbox \"6b8e81d14616412148d1861dca555ca361b945cffb962d712b1a3ee8f04d144c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d941251a7bcc5dedf2a70ce3cfc5bee8a2950117adaa53e643326983466ddceb\"" Sep 16 04:58:53.968925 containerd[1722]: time="2025-09-16T04:58:53.968906425Z" level=info msg="StartContainer for \"d941251a7bcc5dedf2a70ce3cfc5bee8a2950117adaa53e643326983466ddceb\"" Sep 16 04:58:53.970249 containerd[1722]: time="2025-09-16T04:58:53.970219405Z" level=info msg="connecting to shim d941251a7bcc5dedf2a70ce3cfc5bee8a2950117adaa53e643326983466ddceb" address="unix:///run/containerd/s/a6d9ad3051bf555edd1c57e64c1ed4df2883b7d409eca1362405eb71fae5ce3e" protocol=ttrpc version=3 Sep 16 04:58:53.991249 systemd[1]: Started cri-containerd-d941251a7bcc5dedf2a70ce3cfc5bee8a2950117adaa53e643326983466ddceb.scope - libcontainer container d941251a7bcc5dedf2a70ce3cfc5bee8a2950117adaa53e643326983466ddceb. 
Sep 16 04:58:54.023551 containerd[1722]: time="2025-09-16T04:58:54.023527352Z" level=info msg="StartContainer for \"d941251a7bcc5dedf2a70ce3cfc5bee8a2950117adaa53e643326983466ddceb\" returns successfully" Sep 16 04:58:54.286386 kubelet[3170]: E0916 04:58:54.286343 3170 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qlxsc" podUID="9b4e9d9e-651f-42b1-b357-b24bcf14db30" Sep 16 04:58:55.238807 containerd[1722]: time="2025-09-16T04:58:55.238764777Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 16 04:58:55.240805 systemd[1]: cri-containerd-d941251a7bcc5dedf2a70ce3cfc5bee8a2950117adaa53e643326983466ddceb.scope: Deactivated successfully. Sep 16 04:58:55.241190 systemd[1]: cri-containerd-d941251a7bcc5dedf2a70ce3cfc5bee8a2950117adaa53e643326983466ddceb.scope: Consumed 361ms CPU time, 193.4M memory peak, 171.3M written to disk. Sep 16 04:58:55.243169 containerd[1722]: time="2025-09-16T04:58:55.243071136Z" level=info msg="received exit event container_id:\"d941251a7bcc5dedf2a70ce3cfc5bee8a2950117adaa53e643326983466ddceb\" id:\"d941251a7bcc5dedf2a70ce3cfc5bee8a2950117adaa53e643326983466ddceb\" pid:3901 exited_at:{seconds:1757998735 nanos:242688978}" Sep 16 04:58:55.244026 containerd[1722]: time="2025-09-16T04:58:55.244008121Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d941251a7bcc5dedf2a70ce3cfc5bee8a2950117adaa53e643326983466ddceb\" id:\"d941251a7bcc5dedf2a70ce3cfc5bee8a2950117adaa53e643326983466ddceb\" pid:3901 exited_at:{seconds:1757998735 nanos:242688978}" Sep 16 04:58:55.260435 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d941251a7bcc5dedf2a70ce3cfc5bee8a2950117adaa53e643326983466ddceb-rootfs.mount: Deactivated successfully. 
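Annotation — the reload failure just above fires because the fs event was a write to calico-kubeconfig, which is not a network config, and containerd re-scans /etc/cni/net.d on every change. A rough standard-library reproduction of that directory check (the accepted extensions are an assumption based on common CNI conventions):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cni config load failed:", err)
		return
	}
	found := false
	for _, e := range entries {
		// Assumed set of recognized config extensions.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			found = true
			fmt.Println("network config:", filepath.Join(dir, e.Name()))
		}
	}
	if !found {
		fmt.Println("no network config found in", dir, ": cni plugin not initialized")
	}
}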
Sep 16 04:58:55.302869 kubelet[3170]: I0916 04:58:55.302249 3170 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 16 04:58:55.577770 kubelet[3170]: I0916 04:58:55.577748 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbf2e235-6c68-48b4-b222-e2a06c30d917-whisker-ca-bundle\") pod \"whisker-6d457947cc-7jpdh\" (UID: \"fbf2e235-6c68-48b4-b222-e2a06c30d917\") " pod="calico-system/whisker-6d457947cc-7jpdh" Sep 16 04:58:55.577861 kubelet[3170]: I0916 04:58:55.577775 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l7d5\" (UniqueName: \"kubernetes.io/projected/fbf2e235-6c68-48b4-b222-e2a06c30d917-kube-api-access-7l7d5\") pod \"whisker-6d457947cc-7jpdh\" (UID: \"fbf2e235-6c68-48b4-b222-e2a06c30d917\") " pod="calico-system/whisker-6d457947cc-7jpdh" Sep 16 04:58:55.577861 kubelet[3170]: I0916 04:58:55.577792 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fbf2e235-6c68-48b4-b222-e2a06c30d917-whisker-backend-key-pair\") pod \"whisker-6d457947cc-7jpdh\" (UID: \"fbf2e235-6c68-48b4-b222-e2a06c30d917\") " pod="calico-system/whisker-6d457947cc-7jpdh" Sep 16 04:58:55.678179 kubelet[3170]: E0916 04:58:55.678100 3170 configmap.go:193] Couldn't get configMap calico-system/whisker-ca-bundle: object "calico-system"/"whisker-ca-bundle" not registered Sep 16 04:58:55.678179 kubelet[3170]: E0916 04:58:55.678154 3170 secret.go:189] Couldn't get secret calico-system/whisker-backend-key-pair: object "calico-system"/"whisker-backend-key-pair" not registered Sep 16 04:58:55.678179 kubelet[3170]: E0916 04:58:55.678165 3170 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fbf2e235-6c68-48b4-b222-e2a06c30d917-whisker-ca-bundle podName:fbf2e235-6c68-48b4-b222-e2a06c30d917 nodeName:}" failed. No retries permitted until 2025-09-16 04:58:56.178147155 +0000 UTC m=+30.970173174 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whisker-ca-bundle" (UniqueName: "kubernetes.io/configmap/fbf2e235-6c68-48b4-b222-e2a06c30d917-whisker-ca-bundle") pod "whisker-6d457947cc-7jpdh" (UID: "fbf2e235-6c68-48b4-b222-e2a06c30d917") : object "calico-system"/"whisker-ca-bundle" not registered Sep 16 04:58:55.694024 kubelet[3170]: E0916 04:58:55.678236 3170 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fbf2e235-6c68-48b4-b222-e2a06c30d917-whisker-backend-key-pair podName:fbf2e235-6c68-48b4-b222-e2a06c30d917 nodeName:}" failed. No retries permitted until 2025-09-16 04:58:56.178228583 +0000 UTC m=+30.970254596 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whisker-backend-key-pair" (UniqueName: "kubernetes.io/secret/fbf2e235-6c68-48b4-b222-e2a06c30d917-whisker-backend-key-pair") pod "whisker-6d457947cc-7jpdh" (UID: "fbf2e235-6c68-48b4-b222-e2a06c30d917") : object "calico-system"/"whisker-backend-key-pair" not registered Sep 16 04:58:55.708905 systemd[1]: Created slice kubepods-burstable-pod4baa5529_b6ae_4993_b06b_11459fee0da1.slice - libcontainer container kubepods-burstable-pod4baa5529_b6ae_4993_b06b_11459fee0da1.slice. 
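Annotation — the "No retries permitted until … (durationBeforeRetry 500ms)" lines above come from kubelet's per-operation exponential backoff on volume mounts. A sketch of the schedule this implies, assuming the conventional doubling and a cap; the cap value is an assumption, not visible in this log.

package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond              // initial delay, as logged above
	const maxDelay = 2*time.Minute + 2*time.Second // assumed cap, not from this log
	for attempt := 1; attempt <= 10; attempt++ {
		fmt.Printf("attempt %d: wait %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}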
Sep 16 04:58:55.779126 kubelet[3170]: I0916 04:58:55.779100 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g98sb\" (UniqueName: \"kubernetes.io/projected/4baa5529-b6ae-4993-b06b-11459fee0da1-kube-api-access-g98sb\") pod \"coredns-674b8bbfcf-7mk7b\" (UID: \"4baa5529-b6ae-4993-b06b-11459fee0da1\") " pod="kube-system/coredns-674b8bbfcf-7mk7b" Sep 16 04:58:55.779126 kubelet[3170]: I0916 04:58:55.779127 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4baa5529-b6ae-4993-b06b-11459fee0da1-config-volume\") pod \"coredns-674b8bbfcf-7mk7b\" (UID: \"4baa5529-b6ae-4993-b06b-11459fee0da1\") " pod="kube-system/coredns-674b8bbfcf-7mk7b" Sep 16 04:58:55.966390 systemd[1]: Created slice kubepods-besteffort-podfbf2e235_6c68_48b4_b222_e2a06c30d917.slice - libcontainer container kubepods-besteffort-podfbf2e235_6c68_48b4_b222_e2a06c30d917.slice. Sep 16 04:58:55.980110 kubelet[3170]: I0916 04:58:55.980081 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8d4217ea-b6cb-4fd0-a095-a5df158bad6e-calico-apiserver-certs\") pod \"calico-apiserver-748bb69b95-qkbmd\" (UID: \"8d4217ea-b6cb-4fd0-a095-a5df158bad6e\") " pod="calico-apiserver/calico-apiserver-748bb69b95-qkbmd" Sep 16 04:58:55.980259 kubelet[3170]: I0916 04:58:55.980230 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6vnv\" (UniqueName: \"kubernetes.io/projected/8d4217ea-b6cb-4fd0-a095-a5df158bad6e-kube-api-access-c6vnv\") pod \"calico-apiserver-748bb69b95-qkbmd\" (UID: \"8d4217ea-b6cb-4fd0-a095-a5df158bad6e\") " pod="calico-apiserver/calico-apiserver-748bb69b95-qkbmd" Sep 16 04:58:56.011614 containerd[1722]: time="2025-09-16T04:58:56.011559221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7mk7b,Uid:4baa5529-b6ae-4993-b06b-11459fee0da1,Namespace:kube-system,Attempt:0,}" Sep 16 04:58:56.081083 kubelet[3170]: E0916 04:58:56.081061 3170 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: object "calico-apiserver"/"calico-apiserver-certs" not registered Sep 16 04:58:56.159024 kubelet[3170]: E0916 04:58:56.081120 3170 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d4217ea-b6cb-4fd0-a095-a5df158bad6e-calico-apiserver-certs podName:8d4217ea-b6cb-4fd0-a095-a5df158bad6e nodeName:}" failed. No retries permitted until 2025-09-16 04:58:56.581107314 +0000 UTC m=+31.373133335 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/8d4217ea-b6cb-4fd0-a095-a5df158bad6e-calico-apiserver-certs") pod "calico-apiserver-748bb69b95-qkbmd" (UID: "8d4217ea-b6cb-4fd0-a095-a5df158bad6e") : object "calico-apiserver"/"calico-apiserver-certs" not registered Sep 16 04:58:56.159024 kubelet[3170]: E0916 04:58:56.083940 3170 projected.go:289] Couldn't get configMap calico-apiserver/kube-root-ca.crt: object "calico-apiserver"/"kube-root-ca.crt" not registered Sep 16 04:58:56.159024 kubelet[3170]: E0916 04:58:56.083956 3170 projected.go:194] Error preparing data for projected volume kube-api-access-c6vnv for pod calico-apiserver/calico-apiserver-748bb69b95-qkbmd: object "calico-apiserver"/"kube-root-ca.crt" not registered Sep 16 04:58:56.159024 kubelet[3170]: E0916 04:58:56.084022 3170 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8d4217ea-b6cb-4fd0-a095-a5df158bad6e-kube-api-access-c6vnv podName:8d4217ea-b6cb-4fd0-a095-a5df158bad6e nodeName:}" failed. No retries permitted until 2025-09-16 04:58:56.583999605 +0000 UTC m=+31.376025631 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-c6vnv" (UniqueName: "kubernetes.io/projected/8d4217ea-b6cb-4fd0-a095-a5df158bad6e-kube-api-access-c6vnv") pod "calico-apiserver-748bb69b95-qkbmd" (UID: "8d4217ea-b6cb-4fd0-a095-a5df158bad6e") : object "calico-apiserver"/"kube-root-ca.crt" not registered Sep 16 04:58:56.174826 systemd[1]: Created slice kubepods-besteffort-pod8d4217ea_b6cb_4fd0_a095_a5df158bad6e.slice - libcontainer container kubepods-besteffort-pod8d4217ea_b6cb_4fd0_a095_a5df158bad6e.slice. Sep 16 04:58:56.181365 kubelet[3170]: I0916 04:58:56.181343 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg4gs\" (UniqueName: \"kubernetes.io/projected/8250dfb6-0d48-46f0-a766-dbe41444f723-kube-api-access-sg4gs\") pod \"coredns-674b8bbfcf-z9npp\" (UID: \"8250dfb6-0d48-46f0-a766-dbe41444f723\") " pod="kube-system/coredns-674b8bbfcf-z9npp" Sep 16 04:58:56.181449 kubelet[3170]: I0916 04:58:56.181378 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8250dfb6-0d48-46f0-a766-dbe41444f723-config-volume\") pod \"coredns-674b8bbfcf-z9npp\" (UID: \"8250dfb6-0d48-46f0-a766-dbe41444f723\") " pod="kube-system/coredns-674b8bbfcf-z9npp" Sep 16 04:58:56.191367 systemd[1]: Created slice kubepods-burstable-pod8250dfb6_0d48_46f0_a766_dbe41444f723.slice - libcontainer container kubepods-burstable-pod8250dfb6_0d48_46f0_a766_dbe41444f723.slice. Sep 16 04:58:56.211251 systemd[1]: Created slice kubepods-besteffort-pod4b74af3f_b2c3_4c95_9de8_d14cd49c421e.slice - libcontainer container kubepods-besteffort-pod4b74af3f_b2c3_4c95_9de8_d14cd49c421e.slice. Sep 16 04:58:56.217737 systemd[1]: Created slice kubepods-besteffort-pod98dcb491_139f_48ae_8a04_0d45651d392d.slice - libcontainer container kubepods-besteffort-pod98dcb491_139f_48ae_8a04_0d45651d392d.slice. Sep 16 04:58:56.227328 systemd[1]: Created slice kubepods-besteffort-pod05ff58dd_4daf_4601_9f2e_e553fb710f78.slice - libcontainer container kubepods-besteffort-pod05ff58dd_4daf_4601_9f2e_e553fb710f78.slice. Sep 16 04:58:56.233404 systemd[1]: Created slice kubepods-besteffort-podceea6ad9_3336_4d54_ac50_786bcebe065c.slice - libcontainer container kubepods-besteffort-podceea6ad9_3336_4d54_ac50_786bcebe065c.slice. 
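Annotation — "not registered" in the mount errors above means the secret and configmap have not yet reached kubelet's local object cache; it says nothing about whether they exist on the API server. A hedged client-go sketch for checking the server directly; the kubeconfig path is an assumption.

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed admin kubeconfig location.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Same namespace/name pair as the failing mount above.
	s, err := cs.CoreV1().Secrets("calico-apiserver").Get(context.Background(), "calico-apiserver-certs", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("secret present server-side, keys:", len(s.Data))
}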
Sep 16 04:58:56.258270 containerd[1722]: time="2025-09-16T04:58:56.258233573Z" level=error msg="Failed to destroy network for sandbox \"68f64371790ea77857dd813771f33159391442af448f410101aeee1349bc35e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:58:56.260655 systemd[1]: run-netns-cni\x2db71f7c9a\x2d1ada\x2d667e\x2dbd3f\x2dd98081fbb0ca.mount: Deactivated successfully. Sep 16 04:58:56.262973 containerd[1722]: time="2025-09-16T04:58:56.262896229Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7mk7b,Uid:4baa5529-b6ae-4993-b06b-11459fee0da1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"68f64371790ea77857dd813771f33159391442af448f410101aeee1349bc35e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:58:56.263687 kubelet[3170]: E0916 04:58:56.263654 3170 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68f64371790ea77857dd813771f33159391442af448f410101aeee1349bc35e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:58:56.263749 kubelet[3170]: E0916 04:58:56.263713 3170 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68f64371790ea77857dd813771f33159391442af448f410101aeee1349bc35e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-7mk7b" Sep 16 04:58:56.263749 kubelet[3170]: E0916 04:58:56.263740 3170 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68f64371790ea77857dd813771f33159391442af448f410101aeee1349bc35e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-7mk7b" Sep 16 04:58:56.263815 kubelet[3170]: E0916 04:58:56.263779 3170 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-7mk7b_kube-system(4baa5529-b6ae-4993-b06b-11459fee0da1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-7mk7b_kube-system(4baa5529-b6ae-4993-b06b-11459fee0da1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"68f64371790ea77857dd813771f33159391442af448f410101aeee1349bc35e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-7mk7b" podUID="4baa5529-b6ae-4993-b06b-11459fee0da1" Sep 16 04:58:56.269231 containerd[1722]: time="2025-09-16T04:58:56.269206930Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-6d457947cc-7jpdh,Uid:fbf2e235-6c68-48b4-b222-e2a06c30d917,Namespace:calico-system,Attempt:0,}" Sep 16 04:58:56.281836 kubelet[3170]: I0916 04:58:56.281814 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05ff58dd-4daf-4601-9f2e-e553fb710f78-config\") pod \"goldmane-54d579b49d-bn6xl\" (UID: \"05ff58dd-4daf-4601-9f2e-e553fb710f78\") " pod="calico-system/goldmane-54d579b49d-bn6xl" Sep 16 04:58:56.281908 kubelet[3170]: I0916 04:58:56.281849 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9s48\" (UniqueName: \"kubernetes.io/projected/ceea6ad9-3336-4d54-ac50-786bcebe065c-kube-api-access-w9s48\") pod \"calico-kube-controllers-797d689774-tnml7\" (UID: \"ceea6ad9-3336-4d54-ac50-786bcebe065c\") " pod="calico-system/calico-kube-controllers-797d689774-tnml7" Sep 16 04:58:56.281908 kubelet[3170]: I0916 04:58:56.281867 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wfkh\" (UniqueName: \"kubernetes.io/projected/05ff58dd-4daf-4601-9f2e-e553fb710f78-kube-api-access-6wfkh\") pod \"goldmane-54d579b49d-bn6xl\" (UID: \"05ff58dd-4daf-4601-9f2e-e553fb710f78\") " pod="calico-system/goldmane-54d579b49d-bn6xl" Sep 16 04:58:56.281908 kubelet[3170]: I0916 04:58:56.281884 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4b74af3f-b2c3-4c95-9de8-d14cd49c421e-calico-apiserver-certs\") pod \"calico-apiserver-696d587784-b5t7c\" (UID: \"4b74af3f-b2c3-4c95-9de8-d14cd49c421e\") " pod="calico-apiserver/calico-apiserver-696d587784-b5t7c" Sep 16 04:58:56.281908 kubelet[3170]: I0916 04:58:56.281900 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/98dcb491-139f-48ae-8a04-0d45651d392d-calico-apiserver-certs\") pod \"calico-apiserver-696d587784-76trh\" (UID: \"98dcb491-139f-48ae-8a04-0d45651d392d\") " pod="calico-apiserver/calico-apiserver-696d587784-76trh" Sep 16 04:58:56.281997 kubelet[3170]: I0916 04:58:56.281916 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ceea6ad9-3336-4d54-ac50-786bcebe065c-tigera-ca-bundle\") pod \"calico-kube-controllers-797d689774-tnml7\" (UID: \"ceea6ad9-3336-4d54-ac50-786bcebe065c\") " pod="calico-system/calico-kube-controllers-797d689774-tnml7" Sep 16 04:58:56.281997 kubelet[3170]: I0916 04:58:56.281935 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/05ff58dd-4daf-4601-9f2e-e553fb710f78-goldmane-key-pair\") pod \"goldmane-54d579b49d-bn6xl\" (UID: \"05ff58dd-4daf-4601-9f2e-e553fb710f78\") " pod="calico-system/goldmane-54d579b49d-bn6xl" Sep 16 04:58:56.282040 kubelet[3170]: I0916 04:58:56.282008 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05ff58dd-4daf-4601-9f2e-e553fb710f78-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-bn6xl\" (UID: \"05ff58dd-4daf-4601-9f2e-e553fb710f78\") " pod="calico-system/goldmane-54d579b49d-bn6xl" Sep 16 04:58:56.282040 kubelet[3170]: I0916 
04:58:56.282025 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwxqc\" (UniqueName: \"kubernetes.io/projected/4b74af3f-b2c3-4c95-9de8-d14cd49c421e-kube-api-access-dwxqc\") pod \"calico-apiserver-696d587784-b5t7c\" (UID: \"4b74af3f-b2c3-4c95-9de8-d14cd49c421e\") " pod="calico-apiserver/calico-apiserver-696d587784-b5t7c" Sep 16 04:58:56.282096 kubelet[3170]: I0916 04:58:56.282041 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwjhc\" (UniqueName: \"kubernetes.io/projected/98dcb491-139f-48ae-8a04-0d45651d392d-kube-api-access-zwjhc\") pod \"calico-apiserver-696d587784-76trh\" (UID: \"98dcb491-139f-48ae-8a04-0d45651d392d\") " pod="calico-apiserver/calico-apiserver-696d587784-76trh" Sep 16 04:58:56.290974 systemd[1]: Created slice kubepods-besteffort-pod9b4e9d9e_651f_42b1_b357_b24bcf14db30.slice - libcontainer container kubepods-besteffort-pod9b4e9d9e_651f_42b1_b357_b24bcf14db30.slice. Sep 16 04:58:56.298386 containerd[1722]: time="2025-09-16T04:58:56.298339339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qlxsc,Uid:9b4e9d9e-651f-42b1-b357-b24bcf14db30,Namespace:calico-system,Attempt:0,}" Sep 16 04:58:56.339232 containerd[1722]: time="2025-09-16T04:58:56.339192633Z" level=error msg="Failed to destroy network for sandbox \"4688d58dbf99f8c97fa5cffeec7209e59a855e12f4f5d9a4f1ac41ffcfa6c441\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:58:56.341021 systemd[1]: run-netns-cni\x2d33f49da2\x2de88e\x2dd62c\x2d933f\x2d97b4cec38716.mount: Deactivated successfully. 
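Annotation — every sandbox failure in this run reduces to the same stat call. A minimal reproduction of the precondition the Calico CNI plugin checks before networking a pod, using only the standard library:

package main

import (
	"fmt"
	"os"
)

func main() {
	nodenameFile := "/var/lib/calico/nodename"
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		// The exact failure mode in the log: calico/node has not yet started
		// and written this file into the host-mounted /var/lib/calico/.
		fmt.Printf("stat %s: %v: check that the calico/node container is running and has mounted /var/lib/calico/\n", nodenameFile, err)
		return
	}
	fmt.Println("node name:", string(data))
}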
Sep 16 04:58:56.344666 containerd[1722]: time="2025-09-16T04:58:56.344627325Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d457947cc-7jpdh,Uid:fbf2e235-6c68-48b4-b222-e2a06c30d917,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4688d58dbf99f8c97fa5cffeec7209e59a855e12f4f5d9a4f1ac41ffcfa6c441\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:58:56.344785 kubelet[3170]: E0916 04:58:56.344764 3170 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4688d58dbf99f8c97fa5cffeec7209e59a855e12f4f5d9a4f1ac41ffcfa6c441\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:58:56.345025 kubelet[3170]: E0916 04:58:56.344804 3170 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4688d58dbf99f8c97fa5cffeec7209e59a855e12f4f5d9a4f1ac41ffcfa6c441\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d457947cc-7jpdh" Sep 16 04:58:56.345025 kubelet[3170]: E0916 04:58:56.344822 3170 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4688d58dbf99f8c97fa5cffeec7209e59a855e12f4f5d9a4f1ac41ffcfa6c441\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d457947cc-7jpdh" Sep 16 04:58:56.345025 kubelet[3170]: E0916 04:58:56.344862 3170 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6d457947cc-7jpdh_calico-system(fbf2e235-6c68-48b4-b222-e2a06c30d917)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6d457947cc-7jpdh_calico-system(fbf2e235-6c68-48b4-b222-e2a06c30d917)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4688d58dbf99f8c97fa5cffeec7209e59a855e12f4f5d9a4f1ac41ffcfa6c441\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6d457947cc-7jpdh" podUID="fbf2e235-6c68-48b4-b222-e2a06c30d917" Sep 16 04:58:56.351044 containerd[1722]: time="2025-09-16T04:58:56.351017285Z" level=error msg="Failed to destroy network for sandbox \"a49be44fad3f7c1ab112228f9e10adbc299996383d93964e72a20e5acd246ae4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:58:56.353638 containerd[1722]: time="2025-09-16T04:58:56.353606068Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qlxsc,Uid:9b4e9d9e-651f-42b1-b357-b24bcf14db30,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a49be44fad3f7c1ab112228f9e10adbc299996383d93964e72a20e5acd246ae4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:58:56.353790 kubelet[3170]: E0916 04:58:56.353751 3170 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a49be44fad3f7c1ab112228f9e10adbc299996383d93964e72a20e5acd246ae4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:58:56.353861 kubelet[3170]: E0916 04:58:56.353805 3170 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a49be44fad3f7c1ab112228f9e10adbc299996383d93964e72a20e5acd246ae4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qlxsc" Sep 16 04:58:56.353861 kubelet[3170]: E0916 04:58:56.353823 3170 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a49be44fad3f7c1ab112228f9e10adbc299996383d93964e72a20e5acd246ae4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qlxsc" Sep 16 04:58:56.353949 kubelet[3170]: E0916 04:58:56.353863 3170 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qlxsc_calico-system(9b4e9d9e-651f-42b1-b357-b24bcf14db30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qlxsc_calico-system(9b4e9d9e-651f-42b1-b357-b24bcf14db30)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a49be44fad3f7c1ab112228f9e10adbc299996383d93964e72a20e5acd246ae4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qlxsc" podUID="9b4e9d9e-651f-42b1-b357-b24bcf14db30" Sep 16 04:58:56.383727 containerd[1722]: time="2025-09-16T04:58:56.383464349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 16 04:58:56.506640 containerd[1722]: time="2025-09-16T04:58:56.506563297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z9npp,Uid:8250dfb6-0d48-46f0-a766-dbe41444f723,Namespace:kube-system,Attempt:0,}" Sep 16 04:58:56.516116 containerd[1722]: time="2025-09-16T04:58:56.515682008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-696d587784-b5t7c,Uid:4b74af3f-b2c3-4c95-9de8-d14cd49c421e,Namespace:calico-apiserver,Attempt:0,}" Sep 16 04:58:56.524526 containerd[1722]: time="2025-09-16T04:58:56.524501855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-696d587784-76trh,Uid:98dcb491-139f-48ae-8a04-0d45651d392d,Namespace:calico-apiserver,Attempt:0,}" Sep 16 04:58:56.540651 containerd[1722]: time="2025-09-16T04:58:56.540619971Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-797d689774-tnml7,Uid:ceea6ad9-3336-4d54-ac50-786bcebe065c,Namespace:calico-system,Attempt:0,}" Sep 16 04:58:56.541195 containerd[1722]: time="2025-09-16T04:58:56.541165464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-bn6xl,Uid:05ff58dd-4daf-4601-9f2e-e553fb710f78,Namespace:calico-system,Attempt:0,}" Sep 16 04:58:56.566614 containerd[1722]: time="2025-09-16T04:58:56.566584316Z" level=error msg="Failed to destroy network for sandbox \"7c41a6c42dd2631580708507db4816c642dd359d010f1cb2039f3cb4a2685fc8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:58:56.577068 containerd[1722]: time="2025-09-16T04:58:56.577034679Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z9npp,Uid:8250dfb6-0d48-46f0-a766-dbe41444f723,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c41a6c42dd2631580708507db4816c642dd359d010f1cb2039f3cb4a2685fc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:58:56.577479 kubelet[3170]: E0916 04:58:56.577197 3170 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c41a6c42dd2631580708507db4816c642dd359d010f1cb2039f3cb4a2685fc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:58:56.577479 kubelet[3170]: E0916 04:58:56.577254 3170 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c41a6c42dd2631580708507db4816c642dd359d010f1cb2039f3cb4a2685fc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-z9npp" Sep 16 04:58:56.577479 kubelet[3170]: E0916 04:58:56.577274 3170 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c41a6c42dd2631580708507db4816c642dd359d010f1cb2039f3cb4a2685fc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-z9npp" Sep 16 04:58:56.577579 kubelet[3170]: E0916 04:58:56.577323 3170 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-z9npp_kube-system(8250dfb6-0d48-46f0-a766-dbe41444f723)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-z9npp_kube-system(8250dfb6-0d48-46f0-a766-dbe41444f723)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c41a6c42dd2631580708507db4816c642dd359d010f1cb2039f3cb4a2685fc8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-z9npp" 
podUID="8250dfb6-0d48-46f0-a766-dbe41444f723" Sep 16 04:58:56.585617 containerd[1722]: time="2025-09-16T04:58:56.585552699Z" level=error msg="Failed to destroy network for sandbox \"62e7d411e0c4edaf9758bd178aae92fa599148f18f41d1fb00d7d712c49d27c8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:58:56.592503 containerd[1722]: time="2025-09-16T04:58:56.592472011Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-696d587784-b5t7c,Uid:4b74af3f-b2c3-4c95-9de8-d14cd49c421e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"62e7d411e0c4edaf9758bd178aae92fa599148f18f41d1fb00d7d712c49d27c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:58:56.592749 kubelet[3170]: E0916 04:58:56.592725 3170 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62e7d411e0c4edaf9758bd178aae92fa599148f18f41d1fb00d7d712c49d27c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:58:56.592798 kubelet[3170]: E0916 04:58:56.592764 3170 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62e7d411e0c4edaf9758bd178aae92fa599148f18f41d1fb00d7d712c49d27c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-696d587784-b5t7c" Sep 16 04:58:56.592831 kubelet[3170]: E0916 04:58:56.592783 3170 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62e7d411e0c4edaf9758bd178aae92fa599148f18f41d1fb00d7d712c49d27c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-696d587784-b5t7c" Sep 16 04:58:56.592856 kubelet[3170]: E0916 04:58:56.592832 3170 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-696d587784-b5t7c_calico-apiserver(4b74af3f-b2c3-4c95-9de8-d14cd49c421e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-696d587784-b5t7c_calico-apiserver(4b74af3f-b2c3-4c95-9de8-d14cd49c421e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62e7d411e0c4edaf9758bd178aae92fa599148f18f41d1fb00d7d712c49d27c8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-696d587784-b5t7c" podUID="4b74af3f-b2c3-4c95-9de8-d14cd49c421e" Sep 16 04:58:56.635959 containerd[1722]: time="2025-09-16T04:58:56.635926711Z" level=error msg="Failed to destroy network for sandbox \"73471b9cce1621bd36199f5e0bf9b562a545bebe9df935ea75671cba040a34c0\"" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:58:56.639009 containerd[1722]: time="2025-09-16T04:58:56.638929002Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-797d689774-tnml7,Uid:ceea6ad9-3336-4d54-ac50-786bcebe065c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"73471b9cce1621bd36199f5e0bf9b562a545bebe9df935ea75671cba040a34c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:58:56.639173 kubelet[3170]: E0916 04:58:56.639078 3170 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73471b9cce1621bd36199f5e0bf9b562a545bebe9df935ea75671cba040a34c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:58:56.640618 kubelet[3170]: E0916 04:58:56.640484 3170 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73471b9cce1621bd36199f5e0bf9b562a545bebe9df935ea75671cba040a34c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-797d689774-tnml7" Sep 16 04:58:56.640618 kubelet[3170]: E0916 04:58:56.640513 3170 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73471b9cce1621bd36199f5e0bf9b562a545bebe9df935ea75671cba040a34c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-797d689774-tnml7" Sep 16 04:58:56.640618 kubelet[3170]: E0916 04:58:56.640566 3170 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-797d689774-tnml7_calico-system(ceea6ad9-3336-4d54-ac50-786bcebe065c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-797d689774-tnml7_calico-system(ceea6ad9-3336-4d54-ac50-786bcebe065c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"73471b9cce1621bd36199f5e0bf9b562a545bebe9df935ea75671cba040a34c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-797d689774-tnml7" podUID="ceea6ad9-3336-4d54-ac50-786bcebe065c" Sep 16 04:58:56.643239 containerd[1722]: time="2025-09-16T04:58:56.643210643Z" level=error msg="Failed to destroy network for sandbox \"90614769361d6703d056bae6965238e37326ad32693c2a30557f2f51cafdaf3b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:58:56.646133 containerd[1722]: time="2025-09-16T04:58:56.646077032Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-696d587784-76trh,Uid:98dcb491-139f-48ae-8a04-0d45651d392d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"90614769361d6703d056bae6965238e37326ad32693c2a30557f2f51cafdaf3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:58:56.646426 kubelet[3170]: E0916 04:58:56.646381 3170 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90614769361d6703d056bae6965238e37326ad32693c2a30557f2f51cafdaf3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:58:56.646515 kubelet[3170]: E0916 04:58:56.646499 3170 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90614769361d6703d056bae6965238e37326ad32693c2a30557f2f51cafdaf3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-696d587784-76trh" Sep 16 04:58:56.646568 kubelet[3170]: E0916 04:58:56.646558 3170 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90614769361d6703d056bae6965238e37326ad32693c2a30557f2f51cafdaf3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-696d587784-76trh" Sep 16 04:58:56.646653 kubelet[3170]: E0916 04:58:56.646637 3170 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-696d587784-76trh_calico-apiserver(98dcb491-139f-48ae-8a04-0d45651d392d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-696d587784-76trh_calico-apiserver(98dcb491-139f-48ae-8a04-0d45651d392d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90614769361d6703d056bae6965238e37326ad32693c2a30557f2f51cafdaf3b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-696d587784-76trh" podUID="98dcb491-139f-48ae-8a04-0d45651d392d" Sep 16 04:58:56.648942 containerd[1722]: time="2025-09-16T04:58:56.648912670Z" level=error msg="Failed to destroy network for sandbox \"cad6a9656fb18f9d54643c3d4976f9d43bdfe9493f775e243e138f4aa6ce13f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:58:56.652423 containerd[1722]: time="2025-09-16T04:58:56.652395209Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-bn6xl,Uid:05ff58dd-4daf-4601-9f2e-e553fb710f78,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"cad6a9656fb18f9d54643c3d4976f9d43bdfe9493f775e243e138f4aa6ce13f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:58:56.653130 kubelet[3170]: E0916 04:58:56.652511 3170 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cad6a9656fb18f9d54643c3d4976f9d43bdfe9493f775e243e138f4aa6ce13f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:58:56.653130 kubelet[3170]: E0916 04:58:56.652551 3170 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cad6a9656fb18f9d54643c3d4976f9d43bdfe9493f775e243e138f4aa6ce13f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-bn6xl" Sep 16 04:58:56.653130 kubelet[3170]: E0916 04:58:56.652565 3170 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cad6a9656fb18f9d54643c3d4976f9d43bdfe9493f775e243e138f4aa6ce13f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-bn6xl" Sep 16 04:58:56.653218 kubelet[3170]: E0916 04:58:56.652600 3170 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-bn6xl_calico-system(05ff58dd-4daf-4601-9f2e-e553fb710f78)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-bn6xl_calico-system(05ff58dd-4daf-4601-9f2e-e553fb710f78)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cad6a9656fb18f9d54643c3d4976f9d43bdfe9493f775e243e138f4aa6ce13f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-bn6xl" podUID="05ff58dd-4daf-4601-9f2e-e553fb710f78" Sep 16 04:58:56.779527 containerd[1722]: time="2025-09-16T04:58:56.779503791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-748bb69b95-qkbmd,Uid:8d4217ea-b6cb-4fd0-a095-a5df158bad6e,Namespace:calico-apiserver,Attempt:0,}" Sep 16 04:58:56.818118 containerd[1722]: time="2025-09-16T04:58:56.818060359Z" level=error msg="Failed to destroy network for sandbox \"4f239de110317e1e2348b7f21b02dd88fe18f1730eaed7f54dc417fa141a8d12\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:58:56.820787 containerd[1722]: time="2025-09-16T04:58:56.820762772Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-748bb69b95-qkbmd,Uid:8d4217ea-b6cb-4fd0-a095-a5df158bad6e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f239de110317e1e2348b7f21b02dd88fe18f1730eaed7f54dc417fa141a8d12\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:58:56.820923 kubelet[3170]: E0916 04:58:56.820883 3170 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f239de110317e1e2348b7f21b02dd88fe18f1730eaed7f54dc417fa141a8d12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:58:56.820923 kubelet[3170]: E0916 04:58:56.820913 3170 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f239de110317e1e2348b7f21b02dd88fe18f1730eaed7f54dc417fa141a8d12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-748bb69b95-qkbmd" Sep 16 04:58:56.821077 kubelet[3170]: E0916 04:58:56.820929 3170 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f239de110317e1e2348b7f21b02dd88fe18f1730eaed7f54dc417fa141a8d12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-748bb69b95-qkbmd" Sep 16 04:58:56.821077 kubelet[3170]: E0916 04:58:56.820965 3170 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-748bb69b95-qkbmd_calico-apiserver(8d4217ea-b6cb-4fd0-a095-a5df158bad6e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-748bb69b95-qkbmd_calico-apiserver(8d4217ea-b6cb-4fd0-a095-a5df158bad6e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f239de110317e1e2348b7f21b02dd88fe18f1730eaed7f54dc417fa141a8d12\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-748bb69b95-qkbmd" podUID="8d4217ea-b6cb-4fd0-a095-a5df158bad6e" Sep 16 04:58:57.269009 systemd[1]: run-netns-cni\x2dfa345e73\x2d0754\x2ddc0b\x2ddc99\x2d600d3be52015.mount: Deactivated successfully. Sep 16 04:59:03.083985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4161809743.mount: Deactivated successfully. 
Sep 16 04:59:03.127338 containerd[1722]: time="2025-09-16T04:59:03.127298025Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:59:03.130253 containerd[1722]: time="2025-09-16T04:59:03.130186053Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 16 04:59:03.133147 containerd[1722]: time="2025-09-16T04:59:03.133126977Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:59:03.137056 containerd[1722]: time="2025-09-16T04:59:03.136747003Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:59:03.137056 containerd[1722]: time="2025-09-16T04:59:03.136958947Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 6.753281505s" Sep 16 04:59:03.137056 containerd[1722]: time="2025-09-16T04:59:03.136983085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 16 04:59:03.150681 containerd[1722]: time="2025-09-16T04:59:03.150651850Z" level=info msg="CreateContainer within sandbox \"6b8e81d14616412148d1861dca555ca361b945cffb962d712b1a3ee8f04d144c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 16 04:59:03.175125 containerd[1722]: time="2025-09-16T04:59:03.175077575Z" level=info msg="Container c78dfc1c271743ceaae8bb2c862d69212d63330cc30efe1f610008a8f7fe8a1d: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:59:03.191957 containerd[1722]: time="2025-09-16T04:59:03.191933476Z" level=info msg="CreateContainer within sandbox \"6b8e81d14616412148d1861dca555ca361b945cffb962d712b1a3ee8f04d144c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c78dfc1c271743ceaae8bb2c862d69212d63330cc30efe1f610008a8f7fe8a1d\"" Sep 16 04:59:03.192597 containerd[1722]: time="2025-09-16T04:59:03.192555204Z" level=info msg="StartContainer for \"c78dfc1c271743ceaae8bb2c862d69212d63330cc30efe1f610008a8f7fe8a1d\"" Sep 16 04:59:03.194006 containerd[1722]: time="2025-09-16T04:59:03.193977697Z" level=info msg="connecting to shim c78dfc1c271743ceaae8bb2c862d69212d63330cc30efe1f610008a8f7fe8a1d" address="unix:///run/containerd/s/a6d9ad3051bf555edd1c57e64c1ed4df2883b7d409eca1362405eb71fae5ce3e" protocol=ttrpc version=3 Sep 16 04:59:03.215232 systemd[1]: Started cri-containerd-c78dfc1c271743ceaae8bb2c862d69212d63330cc30efe1f610008a8f7fe8a1d.scope - libcontainer container c78dfc1c271743ceaae8bb2c862d69212d63330cc30efe1f610008a8f7fe8a1d. 
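Annotation — the calico/node pull above reports 157078339 bytes read in 6.753281505s; the effective transfer rate follows directly from those two logged fields.

package main

import "fmt"

func main() {
	const bytesRead = 157078339.0 // "bytes read" from the log above
	const seconds = 6.753281505   // pull duration from the log above
	fmt.Printf("%.1f MiB/s\n", bytesRead/seconds/(1<<20)) // ~22.2 MiB/s
}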
Sep 16 04:59:03.258805 containerd[1722]: time="2025-09-16T04:59:03.258781343Z" level=info msg="StartContainer for \"c78dfc1c271743ceaae8bb2c862d69212d63330cc30efe1f610008a8f7fe8a1d\" returns successfully" Sep 16 04:59:03.421538 kubelet[3170]: I0916 04:59:03.421353 3170 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4l4kk" podStartSLOduration=1.542127861 podStartE2EDuration="20.421337718s" podCreationTimestamp="2025-09-16 04:58:43 +0000 UTC" firstStartedPulling="2025-09-16 04:58:44.258430218 +0000 UTC m=+19.050456242" lastFinishedPulling="2025-09-16 04:59:03.137640078 +0000 UTC m=+37.929666099" observedRunningTime="2025-09-16 04:59:03.42100655 +0000 UTC m=+38.213032576" watchObservedRunningTime="2025-09-16 04:59:03.421337718 +0000 UTC m=+38.213363747" Sep 16 04:59:03.600417 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 16 04:59:03.600481 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Sep 16 04:59:03.605179 containerd[1722]: time="2025-09-16T04:59:03.605149957Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c78dfc1c271743ceaae8bb2c862d69212d63330cc30efe1f610008a8f7fe8a1d\" id:\"973297510d929996391814274019732f6db31189749d27e1550bd0c90a4bcfcc\" pid:4237 exit_status:1 exited_at:{seconds:1757998743 nanos:604887907}" Sep 16 04:59:03.725712 kubelet[3170]: I0916 04:59:03.724759 3170 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbf2e235-6c68-48b4-b222-e2a06c30d917-whisker-ca-bundle\") pod \"fbf2e235-6c68-48b4-b222-e2a06c30d917\" (UID: \"fbf2e235-6c68-48b4-b222-e2a06c30d917\") " Sep 16 04:59:03.725712 kubelet[3170]: I0916 04:59:03.725273 3170 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbf2e235-6c68-48b4-b222-e2a06c30d917-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "fbf2e235-6c68-48b4-b222-e2a06c30d917" (UID: "fbf2e235-6c68-48b4-b222-e2a06c30d917"). InnerVolumeSpecName "whisker-ca-bundle".
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 16 04:59:03.725864 kubelet[3170]: I0916 04:59:03.725850 3170 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fbf2e235-6c68-48b4-b222-e2a06c30d917-whisker-backend-key-pair\") pod \"fbf2e235-6c68-48b4-b222-e2a06c30d917\" (UID: \"fbf2e235-6c68-48b4-b222-e2a06c30d917\") " Sep 16 04:59:03.725982 kubelet[3170]: I0916 04:59:03.725933 3170 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7l7d5\" (UniqueName: \"kubernetes.io/projected/fbf2e235-6c68-48b4-b222-e2a06c30d917-kube-api-access-7l7d5\") pod \"fbf2e235-6c68-48b4-b222-e2a06c30d917\" (UID: \"fbf2e235-6c68-48b4-b222-e2a06c30d917\") " Sep 16 04:59:03.726694 kubelet[3170]: I0916 04:59:03.726662 3170 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbf2e235-6c68-48b4-b222-e2a06c30d917-whisker-ca-bundle\") on node \"ci-4459.0.0-n-140c1315ab\" DevicePath \"\"" Sep 16 04:59:03.730375 kubelet[3170]: I0916 04:59:03.730322 3170 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbf2e235-6c68-48b4-b222-e2a06c30d917-kube-api-access-7l7d5" (OuterVolumeSpecName: "kube-api-access-7l7d5") pod "fbf2e235-6c68-48b4-b222-e2a06c30d917" (UID: "fbf2e235-6c68-48b4-b222-e2a06c30d917"). InnerVolumeSpecName "kube-api-access-7l7d5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 16 04:59:03.730883 kubelet[3170]: I0916 04:59:03.730864 3170 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbf2e235-6c68-48b4-b222-e2a06c30d917-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "fbf2e235-6c68-48b4-b222-e2a06c30d917" (UID: "fbf2e235-6c68-48b4-b222-e2a06c30d917"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 16 04:59:03.827393 kubelet[3170]: I0916 04:59:03.827372 3170 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fbf2e235-6c68-48b4-b222-e2a06c30d917-whisker-backend-key-pair\") on node \"ci-4459.0.0-n-140c1315ab\" DevicePath \"\"" Sep 16 04:59:03.827498 kubelet[3170]: I0916 04:59:03.827482 3170 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7l7d5\" (UniqueName: \"kubernetes.io/projected/fbf2e235-6c68-48b4-b222-e2a06c30d917-kube-api-access-7l7d5\") on node \"ci-4459.0.0-n-140c1315ab\" DevicePath \"\"" Sep 16 04:59:04.083620 systemd[1]: var-lib-kubelet-pods-fbf2e235\x2d6c68\x2d48b4\x2db222\x2de2a06c30d917-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 16 04:59:04.083719 systemd[1]: var-lib-kubelet-pods-fbf2e235\x2d6c68\x2d48b4\x2db222\x2de2a06c30d917-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7l7d5.mount: Deactivated successfully. Sep 16 04:59:04.401126 systemd[1]: Removed slice kubepods-besteffort-podfbf2e235_6c68_48b4_b222_e2a06c30d917.slice - libcontainer container kubepods-besteffort-podfbf2e235_6c68_48b4_b222_e2a06c30d917.slice. Sep 16 04:59:04.476762 systemd[1]: Created slice kubepods-besteffort-podf823ece3_ab24_4ade_aef1_b94d40a57856.slice - libcontainer container kubepods-besteffort-podf823ece3_ab24_4ade_aef1_b94d40a57856.slice. 
Sep 16 04:59:04.491781 containerd[1722]: time="2025-09-16T04:59:04.491723266Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c78dfc1c271743ceaae8bb2c862d69212d63330cc30efe1f610008a8f7fe8a1d\" id:\"261b45c4551fa8a2d950c54a9f684d907d453558c90747a0373eabda5bc3d64a\" pid:4286 exit_status:1 exited_at:{seconds:1757998744 nanos:491435390}"
Sep 16 04:59:04.532552 kubelet[3170]: I0916 04:59:04.532462 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5f2f\" (UniqueName: \"kubernetes.io/projected/f823ece3-ab24-4ade-aef1-b94d40a57856-kube-api-access-q5f2f\") pod \"whisker-cbd9f96dd-b7bqw\" (UID: \"f823ece3-ab24-4ade-aef1-b94d40a57856\") " pod="calico-system/whisker-cbd9f96dd-b7bqw"
Sep 16 04:59:04.532552 kubelet[3170]: I0916 04:59:04.532510 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f823ece3-ab24-4ade-aef1-b94d40a57856-whisker-backend-key-pair\") pod \"whisker-cbd9f96dd-b7bqw\" (UID: \"f823ece3-ab24-4ade-aef1-b94d40a57856\") " pod="calico-system/whisker-cbd9f96dd-b7bqw"
Sep 16 04:59:04.532552 kubelet[3170]: I0916 04:59:04.532530 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f823ece3-ab24-4ade-aef1-b94d40a57856-whisker-ca-bundle\") pod \"whisker-cbd9f96dd-b7bqw\" (UID: \"f823ece3-ab24-4ade-aef1-b94d40a57856\") " pod="calico-system/whisker-cbd9f96dd-b7bqw"
Sep 16 04:59:04.780153 containerd[1722]: time="2025-09-16T04:59:04.780126918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cbd9f96dd-b7bqw,Uid:f823ece3-ab24-4ade-aef1-b94d40a57856,Namespace:calico-system,Attempt:0,}"
Sep 16 04:59:04.907368 systemd-networkd[1586]: calia2b4802d019: Link UP
Sep 16 04:59:04.907534 systemd-networkd[1586]: calia2b4802d019: Gained carrier
Sep 16 04:59:04.926245 containerd[1722]: 2025-09-16 04:59:04.804 [INFO][4300] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Sep 16 04:59:04.926245 containerd[1722]: 2025-09-16 04:59:04.811 [INFO][4300] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.0.0--n--140c1315ab-k8s-whisker--cbd9f96dd--b7bqw-eth0 whisker-cbd9f96dd- calico-system f823ece3-ab24-4ade-aef1-b94d40a57856 907 0 2025-09-16 04:59:04 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:cbd9f96dd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459.0.0-n-140c1315ab whisker-cbd9f96dd-b7bqw eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia2b4802d019 [] [] }} ContainerID="d89f25190da0d8f98af8f70d77c21f44a23ad27e1a7eb5f914dfe86a57b028d8" Namespace="calico-system" Pod="whisker-cbd9f96dd-b7bqw" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-whisker--cbd9f96dd--b7bqw-"
Sep 16 04:59:04.926245 containerd[1722]: 2025-09-16 04:59:04.811 [INFO][4300] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d89f25190da0d8f98af8f70d77c21f44a23ad27e1a7eb5f914dfe86a57b028d8" Namespace="calico-system" Pod="whisker-cbd9f96dd-b7bqw" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-whisker--cbd9f96dd--b7bqw-eth0"
Sep 16 04:59:04.926245 containerd[1722]: 2025-09-16 04:59:04.832 [INFO][4312] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d89f25190da0d8f98af8f70d77c21f44a23ad27e1a7eb5f914dfe86a57b028d8" HandleID="k8s-pod-network.d89f25190da0d8f98af8f70d77c21f44a23ad27e1a7eb5f914dfe86a57b028d8" Workload="ci--4459.0.0--n--140c1315ab-k8s-whisker--cbd9f96dd--b7bqw-eth0"
Sep 16 04:59:04.926479 containerd[1722]: 2025-09-16 04:59:04.833 [INFO][4312] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d89f25190da0d8f98af8f70d77c21f44a23ad27e1a7eb5f914dfe86a57b028d8" HandleID="k8s-pod-network.d89f25190da0d8f98af8f70d77c21f44a23ad27e1a7eb5f914dfe86a57b028d8" Workload="ci--4459.0.0--n--140c1315ab-k8s-whisker--cbd9f96dd--b7bqw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5950), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.0.0-n-140c1315ab", "pod":"whisker-cbd9f96dd-b7bqw", "timestamp":"2025-09-16 04:59:04.832977976 +0000 UTC"}, Hostname:"ci-4459.0.0-n-140c1315ab", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 16 04:59:04.926479 containerd[1722]: 2025-09-16 04:59:04.833 [INFO][4312] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 16 04:59:04.926479 containerd[1722]: 2025-09-16 04:59:04.833 [INFO][4312] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 16 04:59:04.926479 containerd[1722]: 2025-09-16 04:59:04.833 [INFO][4312] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.0.0-n-140c1315ab'
Sep 16 04:59:04.926479 containerd[1722]: 2025-09-16 04:59:04.839 [INFO][4312] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d89f25190da0d8f98af8f70d77c21f44a23ad27e1a7eb5f914dfe86a57b028d8" host="ci-4459.0.0-n-140c1315ab"
Sep 16 04:59:04.926479 containerd[1722]: 2025-09-16 04:59:04.842 [INFO][4312] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.0.0-n-140c1315ab"
Sep 16 04:59:04.926479 containerd[1722]: 2025-09-16 04:59:04.846 [INFO][4312] ipam/ipam.go 511: Trying affinity for 192.168.110.128/26 host="ci-4459.0.0-n-140c1315ab"
Sep 16 04:59:04.926479 containerd[1722]: 2025-09-16 04:59:04.848 [INFO][4312] ipam/ipam.go 158: Attempting to load block cidr=192.168.110.128/26 host="ci-4459.0.0-n-140c1315ab"
Sep 16 04:59:04.926479 containerd[1722]: 2025-09-16 04:59:04.849 [INFO][4312] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.110.128/26 host="ci-4459.0.0-n-140c1315ab"
Sep 16 04:59:04.927015 containerd[1722]: 2025-09-16 04:59:04.849 [INFO][4312] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.110.128/26 handle="k8s-pod-network.d89f25190da0d8f98af8f70d77c21f44a23ad27e1a7eb5f914dfe86a57b028d8" host="ci-4459.0.0-n-140c1315ab"
Sep 16 04:59:04.927015 containerd[1722]: 2025-09-16 04:59:04.851 [INFO][4312] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d89f25190da0d8f98af8f70d77c21f44a23ad27e1a7eb5f914dfe86a57b028d8
Sep 16 04:59:04.927015 containerd[1722]: 2025-09-16 04:59:04.854 [INFO][4312] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.110.128/26 handle="k8s-pod-network.d89f25190da0d8f98af8f70d77c21f44a23ad27e1a7eb5f914dfe86a57b028d8" host="ci-4459.0.0-n-140c1315ab"
Sep 16 04:59:04.927015 containerd[1722]: 2025-09-16 04:59:04.862 [INFO][4312] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.110.129/26] block=192.168.110.128/26 handle="k8s-pod-network.d89f25190da0d8f98af8f70d77c21f44a23ad27e1a7eb5f914dfe86a57b028d8" host="ci-4459.0.0-n-140c1315ab"
Sep 16 04:59:04.927015 containerd[1722]: 2025-09-16 04:59:04.862 [INFO][4312] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.110.129/26] handle="k8s-pod-network.d89f25190da0d8f98af8f70d77c21f44a23ad27e1a7eb5f914dfe86a57b028d8" host="ci-4459.0.0-n-140c1315ab"
Sep 16 04:59:04.927015 containerd[1722]: 2025-09-16 04:59:04.862 [INFO][4312] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 16 04:59:04.927015 containerd[1722]: 2025-09-16 04:59:04.862 [INFO][4312] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.110.129/26] IPv6=[] ContainerID="d89f25190da0d8f98af8f70d77c21f44a23ad27e1a7eb5f914dfe86a57b028d8" HandleID="k8s-pod-network.d89f25190da0d8f98af8f70d77c21f44a23ad27e1a7eb5f914dfe86a57b028d8" Workload="ci--4459.0.0--n--140c1315ab-k8s-whisker--cbd9f96dd--b7bqw-eth0"
Sep 16 04:59:04.927266 containerd[1722]: 2025-09-16 04:59:04.866 [INFO][4300] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d89f25190da0d8f98af8f70d77c21f44a23ad27e1a7eb5f914dfe86a57b028d8" Namespace="calico-system" Pod="whisker-cbd9f96dd-b7bqw" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-whisker--cbd9f96dd--b7bqw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--140c1315ab-k8s-whisker--cbd9f96dd--b7bqw-eth0", GenerateName:"whisker-cbd9f96dd-", Namespace:"calico-system", SelfLink:"", UID:"f823ece3-ab24-4ade-aef1-b94d40a57856", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 59, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"cbd9f96dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-140c1315ab", ContainerID:"", Pod:"whisker-cbd9f96dd-b7bqw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.110.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia2b4802d019", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 16 04:59:04.927266 containerd[1722]: 2025-09-16 04:59:04.866 [INFO][4300] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.110.129/32] ContainerID="d89f25190da0d8f98af8f70d77c21f44a23ad27e1a7eb5f914dfe86a57b028d8" Namespace="calico-system" Pod="whisker-cbd9f96dd-b7bqw" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-whisker--cbd9f96dd--b7bqw-eth0"
Sep 16 04:59:04.927364 containerd[1722]: 2025-09-16 04:59:04.866 [INFO][4300] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia2b4802d019 ContainerID="d89f25190da0d8f98af8f70d77c21f44a23ad27e1a7eb5f914dfe86a57b028d8" Namespace="calico-system" Pod="whisker-cbd9f96dd-b7bqw" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-whisker--cbd9f96dd--b7bqw-eth0"
Sep 16 04:59:04.927364 containerd[1722]: 2025-09-16 04:59:04.907 [INFO][4300] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d89f25190da0d8f98af8f70d77c21f44a23ad27e1a7eb5f914dfe86a57b028d8" Namespace="calico-system" Pod="whisker-cbd9f96dd-b7bqw" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-whisker--cbd9f96dd--b7bqw-eth0"
Sep 16 04:59:04.927446 containerd[1722]: 2025-09-16 04:59:04.909 [INFO][4300] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d89f25190da0d8f98af8f70d77c21f44a23ad27e1a7eb5f914dfe86a57b028d8" Namespace="calico-system" Pod="whisker-cbd9f96dd-b7bqw" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-whisker--cbd9f96dd--b7bqw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--140c1315ab-k8s-whisker--cbd9f96dd--b7bqw-eth0", GenerateName:"whisker-cbd9f96dd-", Namespace:"calico-system", SelfLink:"", UID:"f823ece3-ab24-4ade-aef1-b94d40a57856", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 59, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"cbd9f96dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-140c1315ab", ContainerID:"d89f25190da0d8f98af8f70d77c21f44a23ad27e1a7eb5f914dfe86a57b028d8", Pod:"whisker-cbd9f96dd-b7bqw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.110.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia2b4802d019", MAC:"62:1d:1a:b6:47:30", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 16 04:59:04.927642 containerd[1722]: 2025-09-16 04:59:04.923 [INFO][4300] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d89f25190da0d8f98af8f70d77c21f44a23ad27e1a7eb5f914dfe86a57b028d8" Namespace="calico-system" Pod="whisker-cbd9f96dd-b7bqw" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-whisker--cbd9f96dd--b7bqw-eth0"
Sep 16 04:59:04.977077 containerd[1722]: time="2025-09-16T04:59:04.977017529Z" level=info msg="connecting to shim d89f25190da0d8f98af8f70d77c21f44a23ad27e1a7eb5f914dfe86a57b028d8" address="unix:///run/containerd/s/683643086eed59d680af4d04e8f38ab06e66b2f1b52dd7d32b16da03bfc119bf" namespace=k8s.io protocol=ttrpc version=3
Sep 16 04:59:05.003272 systemd[1]: Started cri-containerd-d89f25190da0d8f98af8f70d77c21f44a23ad27e1a7eb5f914dfe86a57b028d8.scope - libcontainer container d89f25190da0d8f98af8f70d77c21f44a23ad27e1a7eb5f914dfe86a57b028d8.
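The IPAM walk above is easier to follow with the block arithmetic spelled out: the node holds an affinity for the /26 block 192.168.110.128/26, i.e. 64 addresses from .128 through .191, and the first workload address handed out is .129. A small net/netip sketch of that arithmetic (illustration only, not Calico's allocator, which also tracks handles, reservations and per-block bitmaps):

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	block := netip.MustParsePrefix("192.168.110.128/26")
    	fmt.Println("addresses in block:", 1<<(32-block.Bits())) // 64: .128 through .191
    	addr := block.Addr().Next()                              // skip .128, the network address
    	for i := 0; i < 5; i++ {
    		fmt.Println("candidate:", addr) // .129, .130, .131, ... as claimed in this log
    		addr = addr.Next()
    	}
    }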
Sep 16 04:59:05.081805 containerd[1722]: time="2025-09-16T04:59:05.081768198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cbd9f96dd-b7bqw,Uid:f823ece3-ab24-4ade-aef1-b94d40a57856,Namespace:calico-system,Attempt:0,} returns sandbox id \"d89f25190da0d8f98af8f70d77c21f44a23ad27e1a7eb5f914dfe86a57b028d8\""
Sep 16 04:59:05.085404 containerd[1722]: time="2025-09-16T04:59:05.085365196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\""
Sep 16 04:59:05.294444 kubelet[3170]: I0916 04:59:05.294282 3170 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbf2e235-6c68-48b4-b222-e2a06c30d917" path="/var/lib/kubelet/pods/fbf2e235-6c68-48b4-b222-e2a06c30d917/volumes"
Sep 16 04:59:05.514646 systemd-networkd[1586]: vxlan.calico: Link UP
Sep 16 04:59:05.514652 systemd-networkd[1586]: vxlan.calico: Gained carrier
Sep 16 04:59:06.498128 containerd[1722]: time="2025-09-16T04:59:06.498082832Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:59:06.500470 containerd[1722]: time="2025-09-16T04:59:06.500397046Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291"
Sep 16 04:59:06.502921 containerd[1722]: time="2025-09-16T04:59:06.502900613Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:59:06.506582 containerd[1722]: time="2025-09-16T04:59:06.506539928Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:59:06.507063 containerd[1722]: time="2025-09-16T04:59:06.506898797Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.42150488s"
Sep 16 04:59:06.507063 containerd[1722]: time="2025-09-16T04:59:06.506925406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\""
Sep 16 04:59:06.514964 containerd[1722]: time="2025-09-16T04:59:06.513564913Z" level=info msg="CreateContainer within sandbox \"d89f25190da0d8f98af8f70d77c21f44a23ad27e1a7eb5f914dfe86a57b028d8\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Sep 16 04:59:06.532295 containerd[1722]: time="2025-09-16T04:59:06.532271765Z" level=info msg="Container 62c8025ff821915159270942c99cb24f497e79bb1569bb08f7976e6cdee1c0ce: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:59:06.546494 containerd[1722]: time="2025-09-16T04:59:06.546469340Z" level=info msg="CreateContainer within sandbox \"d89f25190da0d8f98af8f70d77c21f44a23ad27e1a7eb5f914dfe86a57b028d8\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"62c8025ff821915159270942c99cb24f497e79bb1569bb08f7976e6cdee1c0ce\""
Sep 16 04:59:06.546984 containerd[1722]: time="2025-09-16T04:59:06.546962130Z" level=info msg="StartContainer for \"62c8025ff821915159270942c99cb24f497e79bb1569bb08f7976e6cdee1c0ce\""
Sep 16 04:59:06.548024 containerd[1722]: time="2025-09-16T04:59:06.547986390Z" level=info msg="connecting to shim 62c8025ff821915159270942c99cb24f497e79bb1569bb08f7976e6cdee1c0ce" address="unix:///run/containerd/s/683643086eed59d680af4d04e8f38ab06e66b2f1b52dd7d32b16da03bfc119bf" protocol=ttrpc version=3
Sep 16 04:59:06.569238 systemd[1]: Started cri-containerd-62c8025ff821915159270942c99cb24f497e79bb1569bb08f7976e6cdee1c0ce.scope - libcontainer container 62c8025ff821915159270942c99cb24f497e79bb1569bb08f7976e6cdee1c0ce.
Sep 16 04:59:06.610962 containerd[1722]: time="2025-09-16T04:59:06.610925378Z" level=info msg="StartContainer for \"62c8025ff821915159270942c99cb24f497e79bb1569bb08f7976e6cdee1c0ce\" returns successfully"
Sep 16 04:59:06.612021 containerd[1722]: time="2025-09-16T04:59:06.611887811Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\""
Sep 16 04:59:06.865183 systemd-networkd[1586]: calia2b4802d019: Gained IPv6LL
Sep 16 04:59:07.286914 containerd[1722]: time="2025-09-16T04:59:07.286748454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-696d587784-b5t7c,Uid:4b74af3f-b2c3-4c95-9de8-d14cd49c421e,Namespace:calico-apiserver,Attempt:0,}"
Sep 16 04:59:07.287284 containerd[1722]: time="2025-09-16T04:59:07.287263331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-797d689774-tnml7,Uid:ceea6ad9-3336-4d54-ac50-786bcebe065c,Namespace:calico-system,Attempt:0,}"
Sep 16 04:59:07.397561 systemd-networkd[1586]: calie7f2ba957cc: Link UP
Sep 16 04:59:07.397904 systemd-networkd[1586]: calie7f2ba957cc: Gained carrier
Sep 16 04:59:07.414397 containerd[1722]: 2025-09-16 04:59:07.335 [INFO][4606] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--b5t7c-eth0 calico-apiserver-696d587784- calico-apiserver 4b74af3f-b2c3-4c95-9de8-d14cd49c421e 837 0 2025-09-16 04:58:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:696d587784 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.0.0-n-140c1315ab calico-apiserver-696d587784-b5t7c eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie7f2ba957cc [] [] }} ContainerID="92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" Namespace="calico-apiserver" Pod="calico-apiserver-696d587784-b5t7c" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--b5t7c-"
Sep 16 04:59:07.414397 containerd[1722]: 2025-09-16 04:59:07.335 [INFO][4606] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" Namespace="calico-apiserver" Pod="calico-apiserver-696d587784-b5t7c" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--b5t7c-eth0"
Sep 16 04:59:07.414397 containerd[1722]: 2025-09-16 04:59:07.363 [INFO][4634] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" HandleID="k8s-pod-network.92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" Workload="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--b5t7c-eth0"
Sep 16 04:59:07.414663 containerd[1722]: 2025-09-16 04:59:07.363 [INFO][4634] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" HandleID="k8s-pod-network.92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" Workload="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--b5t7c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c4ff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.0.0-n-140c1315ab", "pod":"calico-apiserver-696d587784-b5t7c", "timestamp":"2025-09-16 04:59:07.363097493 +0000 UTC"}, Hostname:"ci-4459.0.0-n-140c1315ab", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 16 04:59:07.414663 containerd[1722]: 2025-09-16 04:59:07.363 [INFO][4634] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 16 04:59:07.414663 containerd[1722]: 2025-09-16 04:59:07.363 [INFO][4634] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 16 04:59:07.414663 containerd[1722]: 2025-09-16 04:59:07.363 [INFO][4634] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.0.0-n-140c1315ab'
Sep 16 04:59:07.414663 containerd[1722]: 2025-09-16 04:59:07.368 [INFO][4634] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" host="ci-4459.0.0-n-140c1315ab"
Sep 16 04:59:07.414663 containerd[1722]: 2025-09-16 04:59:07.370 [INFO][4634] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.0.0-n-140c1315ab"
Sep 16 04:59:07.414663 containerd[1722]: 2025-09-16 04:59:07.374 [INFO][4634] ipam/ipam.go 511: Trying affinity for 192.168.110.128/26 host="ci-4459.0.0-n-140c1315ab"
Sep 16 04:59:07.414663 containerd[1722]: 2025-09-16 04:59:07.375 [INFO][4634] ipam/ipam.go 158: Attempting to load block cidr=192.168.110.128/26 host="ci-4459.0.0-n-140c1315ab"
Sep 16 04:59:07.414663 containerd[1722]: 2025-09-16 04:59:07.377 [INFO][4634] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.110.128/26 host="ci-4459.0.0-n-140c1315ab"
Sep 16 04:59:07.414921 containerd[1722]: 2025-09-16 04:59:07.377 [INFO][4634] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.110.128/26 handle="k8s-pod-network.92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" host="ci-4459.0.0-n-140c1315ab"
Sep 16 04:59:07.414921 containerd[1722]: 2025-09-16 04:59:07.378 [INFO][4634] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8
Sep 16 04:59:07.414921 containerd[1722]: 2025-09-16 04:59:07.382 [INFO][4634] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.110.128/26 handle="k8s-pod-network.92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" host="ci-4459.0.0-n-140c1315ab"
Sep 16 04:59:07.414921 containerd[1722]: 2025-09-16 04:59:07.389 [INFO][4634] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.110.130/26] block=192.168.110.128/26 handle="k8s-pod-network.92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" host="ci-4459.0.0-n-140c1315ab"
Sep 16 04:59:07.414921 containerd[1722]: 2025-09-16 04:59:07.389 [INFO][4634] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.110.130/26] handle="k8s-pod-network.92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" host="ci-4459.0.0-n-140c1315ab"
Sep 16 04:59:07.414921 containerd[1722]: 2025-09-16 04:59:07.389 [INFO][4634] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 16 04:59:07.414921 containerd[1722]: 2025-09-16 04:59:07.389 [INFO][4634] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.110.130/26] IPv6=[] ContainerID="92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" HandleID="k8s-pod-network.92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" Workload="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--b5t7c-eth0"
Sep 16 04:59:07.415140 containerd[1722]: 2025-09-16 04:59:07.393 [INFO][4606] cni-plugin/k8s.go 418: Populated endpoint ContainerID="92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" Namespace="calico-apiserver" Pod="calico-apiserver-696d587784-b5t7c" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--b5t7c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--b5t7c-eth0", GenerateName:"calico-apiserver-696d587784-", Namespace:"calico-apiserver", SelfLink:"", UID:"4b74af3f-b2c3-4c95-9de8-d14cd49c421e", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 58, 41, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"696d587784", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-140c1315ab", ContainerID:"", Pod:"calico-apiserver-696d587784-b5t7c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie7f2ba957cc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 16 04:59:07.415222 containerd[1722]: 2025-09-16 04:59:07.393 [INFO][4606] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.110.130/32] ContainerID="92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" Namespace="calico-apiserver" Pod="calico-apiserver-696d587784-b5t7c" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--b5t7c-eth0"
Sep 16 04:59:07.415222 containerd[1722]: 2025-09-16 04:59:07.393 [INFO][4606] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie7f2ba957cc ContainerID="92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" Namespace="calico-apiserver" Pod="calico-apiserver-696d587784-b5t7c" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--b5t7c-eth0"
Sep 16 04:59:07.415222 containerd[1722]: 2025-09-16 04:59:07.397 [INFO][4606] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" Namespace="calico-apiserver" Pod="calico-apiserver-696d587784-b5t7c" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--b5t7c-eth0"
Sep 16 04:59:07.415315 containerd[1722]: 2025-09-16 04:59:07.398 [INFO][4606] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" Namespace="calico-apiserver" Pod="calico-apiserver-696d587784-b5t7c" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--b5t7c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--b5t7c-eth0", GenerateName:"calico-apiserver-696d587784-", Namespace:"calico-apiserver", SelfLink:"", UID:"4b74af3f-b2c3-4c95-9de8-d14cd49c421e", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 58, 41, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"696d587784", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-140c1315ab", ContainerID:"92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8", Pod:"calico-apiserver-696d587784-b5t7c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie7f2ba957cc", MAC:"32:d0:1f:22:32:28", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 16 04:59:07.415436 containerd[1722]: 2025-09-16 04:59:07.411 [INFO][4606] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" Namespace="calico-apiserver" Pod="calico-apiserver-696d587784-b5t7c" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--b5t7c-eth0"
Sep 16 04:59:07.448894 containerd[1722]: time="2025-09-16T04:59:07.448698774Z" level=info msg="connecting to shim 92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" address="unix:///run/containerd/s/3810e5d50e2914fce832296fc38f5a8f31bdcc4dea815aa5f3c898a99c2c5e17" namespace=k8s.io protocol=ttrpc version=3
Sep 16 04:59:07.465202 systemd[1]: Started cri-containerd-92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8.scope - libcontainer container 92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8.
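The WorkloadEndpoint names in these lines look like escape(node) + "-k8s-" + escape(pod) + "-" + interface, where escape doubles any literal dash so that single dashes can serve as field separators. A reconstruction from the logged names only, not Calico's authoritative code:

    package main

    import (
    	"fmt"
    	"strings"
    )

    func wepName(node, pod, iface string) string {
    	esc := func(s string) string { return strings.ReplaceAll(s, "-", "--") }
    	return esc(node) + "-k8s-" + esc(pod) + "-" + iface
    }

    func main() {
    	fmt.Println(wepName("ci-4459.0.0-n-140c1315ab", "calico-apiserver-696d587784-b5t7c", "eth0"))
    	// ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--b5t7c-eth0,
    	// matching the Workload= field in the IPAM lines above
    }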
Sep 16 04:59:07.497634 systemd-networkd[1586]: califf315c8eb4c: Link UP
Sep 16 04:59:07.499353 systemd-networkd[1586]: califf315c8eb4c: Gained carrier
Sep 16 04:59:07.506198 systemd-networkd[1586]: vxlan.calico: Gained IPv6LL
Sep 16 04:59:07.518323 containerd[1722]: 2025-09-16 04:59:07.336 [INFO][4615] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.0.0--n--140c1315ab-k8s-calico--kube--controllers--797d689774--tnml7-eth0 calico-kube-controllers-797d689774- calico-system ceea6ad9-3336-4d54-ac50-786bcebe065c 840 0 2025-09-16 04:58:44 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:797d689774 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459.0.0-n-140c1315ab calico-kube-controllers-797d689774-tnml7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] califf315c8eb4c [] [] }} ContainerID="62c9f1a11c12e43c332da6c2579d640625c43ce2d87a54bb54d9ee7aaff21e4b" Namespace="calico-system" Pod="calico-kube-controllers-797d689774-tnml7" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-calico--kube--controllers--797d689774--tnml7-"
Sep 16 04:59:07.518323 containerd[1722]: 2025-09-16 04:59:07.336 [INFO][4615] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="62c9f1a11c12e43c332da6c2579d640625c43ce2d87a54bb54d9ee7aaff21e4b" Namespace="calico-system" Pod="calico-kube-controllers-797d689774-tnml7" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-calico--kube--controllers--797d689774--tnml7-eth0"
Sep 16 04:59:07.518323 containerd[1722]: 2025-09-16 04:59:07.364 [INFO][4629] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="62c9f1a11c12e43c332da6c2579d640625c43ce2d87a54bb54d9ee7aaff21e4b" HandleID="k8s-pod-network.62c9f1a11c12e43c332da6c2579d640625c43ce2d87a54bb54d9ee7aaff21e4b" Workload="ci--4459.0.0--n--140c1315ab-k8s-calico--kube--controllers--797d689774--tnml7-eth0"
Sep 16 04:59:07.518660 containerd[1722]: 2025-09-16 04:59:07.364 [INFO][4629] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="62c9f1a11c12e43c332da6c2579d640625c43ce2d87a54bb54d9ee7aaff21e4b" HandleID="k8s-pod-network.62c9f1a11c12e43c332da6c2579d640625c43ce2d87a54bb54d9ee7aaff21e4b" Workload="ci--4459.0.0--n--140c1315ab-k8s-calico--kube--controllers--797d689774--tnml7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f430), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.0.0-n-140c1315ab", "pod":"calico-kube-controllers-797d689774-tnml7", "timestamp":"2025-09-16 04:59:07.36433396 +0000 UTC"}, Hostname:"ci-4459.0.0-n-140c1315ab", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 16 04:59:07.518660 containerd[1722]: 2025-09-16 04:59:07.364 [INFO][4629] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 16 04:59:07.518660 containerd[1722]: 2025-09-16 04:59:07.389 [INFO][4629] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 16 04:59:07.518660 containerd[1722]: 2025-09-16 04:59:07.389 [INFO][4629] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.0.0-n-140c1315ab'
Sep 16 04:59:07.518660 containerd[1722]: 2025-09-16 04:59:07.468 [INFO][4629] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.62c9f1a11c12e43c332da6c2579d640625c43ce2d87a54bb54d9ee7aaff21e4b" host="ci-4459.0.0-n-140c1315ab"
Sep 16 04:59:07.518660 containerd[1722]: 2025-09-16 04:59:07.471 [INFO][4629] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.0.0-n-140c1315ab"
Sep 16 04:59:07.518660 containerd[1722]: 2025-09-16 04:59:07.474 [INFO][4629] ipam/ipam.go 511: Trying affinity for 192.168.110.128/26 host="ci-4459.0.0-n-140c1315ab"
Sep 16 04:59:07.518660 containerd[1722]: 2025-09-16 04:59:07.476 [INFO][4629] ipam/ipam.go 158: Attempting to load block cidr=192.168.110.128/26 host="ci-4459.0.0-n-140c1315ab"
Sep 16 04:59:07.518660 containerd[1722]: 2025-09-16 04:59:07.477 [INFO][4629] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.110.128/26 host="ci-4459.0.0-n-140c1315ab"
Sep 16 04:59:07.519531 containerd[1722]: 2025-09-16 04:59:07.477 [INFO][4629] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.110.128/26 handle="k8s-pod-network.62c9f1a11c12e43c332da6c2579d640625c43ce2d87a54bb54d9ee7aaff21e4b" host="ci-4459.0.0-n-140c1315ab"
Sep 16 04:59:07.519531 containerd[1722]: 2025-09-16 04:59:07.478 [INFO][4629] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.62c9f1a11c12e43c332da6c2579d640625c43ce2d87a54bb54d9ee7aaff21e4b
Sep 16 04:59:07.519531 containerd[1722]: 2025-09-16 04:59:07.482 [INFO][4629] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.110.128/26 handle="k8s-pod-network.62c9f1a11c12e43c332da6c2579d640625c43ce2d87a54bb54d9ee7aaff21e4b" host="ci-4459.0.0-n-140c1315ab"
Sep 16 04:59:07.519531 containerd[1722]: 2025-09-16 04:59:07.490 [INFO][4629] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.110.131/26] block=192.168.110.128/26 handle="k8s-pod-network.62c9f1a11c12e43c332da6c2579d640625c43ce2d87a54bb54d9ee7aaff21e4b" host="ci-4459.0.0-n-140c1315ab"
Sep 16 04:59:07.519531 containerd[1722]: 2025-09-16 04:59:07.490 [INFO][4629] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.110.131/26] handle="k8s-pod-network.62c9f1a11c12e43c332da6c2579d640625c43ce2d87a54bb54d9ee7aaff21e4b" host="ci-4459.0.0-n-140c1315ab"
Sep 16 04:59:07.519531 containerd[1722]: 2025-09-16 04:59:07.490 [INFO][4629] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 16 04:59:07.519531 containerd[1722]: 2025-09-16 04:59:07.490 [INFO][4629] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.110.131/26] IPv6=[] ContainerID="62c9f1a11c12e43c332da6c2579d640625c43ce2d87a54bb54d9ee7aaff21e4b" HandleID="k8s-pod-network.62c9f1a11c12e43c332da6c2579d640625c43ce2d87a54bb54d9ee7aaff21e4b" Workload="ci--4459.0.0--n--140c1315ab-k8s-calico--kube--controllers--797d689774--tnml7-eth0"
Sep 16 04:59:07.519812 containerd[1722]: 2025-09-16 04:59:07.492 [INFO][4615] cni-plugin/k8s.go 418: Populated endpoint ContainerID="62c9f1a11c12e43c332da6c2579d640625c43ce2d87a54bb54d9ee7aaff21e4b" Namespace="calico-system" Pod="calico-kube-controllers-797d689774-tnml7" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-calico--kube--controllers--797d689774--tnml7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--140c1315ab-k8s-calico--kube--controllers--797d689774--tnml7-eth0", GenerateName:"calico-kube-controllers-797d689774-", Namespace:"calico-system", SelfLink:"", UID:"ceea6ad9-3336-4d54-ac50-786bcebe065c", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 58, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"797d689774", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-140c1315ab", ContainerID:"", Pod:"calico-kube-controllers-797d689774-tnml7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.110.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califf315c8eb4c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 16 04:59:07.519881 containerd[1722]: 2025-09-16 04:59:07.493 [INFO][4615] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.110.131/32] ContainerID="62c9f1a11c12e43c332da6c2579d640625c43ce2d87a54bb54d9ee7aaff21e4b" Namespace="calico-system" Pod="calico-kube-controllers-797d689774-tnml7" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-calico--kube--controllers--797d689774--tnml7-eth0"
Sep 16 04:59:07.519881 containerd[1722]: 2025-09-16 04:59:07.493 [INFO][4615] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califf315c8eb4c ContainerID="62c9f1a11c12e43c332da6c2579d640625c43ce2d87a54bb54d9ee7aaff21e4b" Namespace="calico-system" Pod="calico-kube-controllers-797d689774-tnml7" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-calico--kube--controllers--797d689774--tnml7-eth0"
Sep 16 04:59:07.519881 containerd[1722]: 2025-09-16 04:59:07.498 [INFO][4615] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="62c9f1a11c12e43c332da6c2579d640625c43ce2d87a54bb54d9ee7aaff21e4b" Namespace="calico-system" Pod="calico-kube-controllers-797d689774-tnml7" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-calico--kube--controllers--797d689774--tnml7-eth0"
Sep 16 04:59:07.519951 containerd[1722]: 2025-09-16 04:59:07.500 [INFO][4615] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="62c9f1a11c12e43c332da6c2579d640625c43ce2d87a54bb54d9ee7aaff21e4b" Namespace="calico-system" Pod="calico-kube-controllers-797d689774-tnml7" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-calico--kube--controllers--797d689774--tnml7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--140c1315ab-k8s-calico--kube--controllers--797d689774--tnml7-eth0", GenerateName:"calico-kube-controllers-797d689774-", Namespace:"calico-system", SelfLink:"", UID:"ceea6ad9-3336-4d54-ac50-786bcebe065c", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 58, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"797d689774", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-140c1315ab", ContainerID:"62c9f1a11c12e43c332da6c2579d640625c43ce2d87a54bb54d9ee7aaff21e4b", Pod:"calico-kube-controllers-797d689774-tnml7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.110.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califf315c8eb4c", MAC:"c2:ce:95:f3:f1:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 16 04:59:07.520011 containerd[1722]: 2025-09-16 04:59:07.516 [INFO][4615] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="62c9f1a11c12e43c332da6c2579d640625c43ce2d87a54bb54d9ee7aaff21e4b" Namespace="calico-system" Pod="calico-kube-controllers-797d689774-tnml7" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-calico--kube--controllers--797d689774--tnml7-eth0"
Sep 16 04:59:07.555496 containerd[1722]: time="2025-09-16T04:59:07.555411544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-696d587784-b5t7c,Uid:4b74af3f-b2c3-4c95-9de8-d14cd49c421e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8\""
Sep 16 04:59:07.570404 containerd[1722]: time="2025-09-16T04:59:07.570373206Z" level=info msg="connecting to shim 62c9f1a11c12e43c332da6c2579d640625c43ce2d87a54bb54d9ee7aaff21e4b" address="unix:///run/containerd/s/1537d277e5c5afc799372fbd04c4fb6c185dc13670c4bc83be842ad4be05e917" namespace=k8s.io protocol=ttrpc version=3
Sep 16 04:59:07.595219 systemd[1]: Started cri-containerd-62c9f1a11c12e43c332da6c2579d640625c43ce2d87a54bb54d9ee7aaff21e4b.scope - libcontainer container 62c9f1a11c12e43c332da6c2579d640625c43ce2d87a54bb54d9ee7aaff21e4b.
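A detail worth noticing in the endpoint dumps: every MAC Calico attaches (62:1d:1a:b6:47:30, 32:d0:1f:22:32:28, c2:ce:95:f3:f1:c2) has the locally-administered bit set and the multicast bit clear in its first octet, i.e. they are software-generated unicast addresses, as expected for veths. A quick check:

    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	for _, s := range []string{"62:1d:1a:b6:47:30", "32:d0:1f:22:32:28", "c2:ce:95:f3:f1:c2"} {
    		hw, err := net.ParseMAC(s)
    		if err != nil {
    			panic(err)
    		}
    		// bit 0x02 of the first octet: locally administered; bit 0x01: multicast
    		fmt.Printf("%s locally-administered=%t unicast=%t\n", s, hw[0]&0x02 != 0, hw[0]&0x01 == 0)
    	}
    }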
Sep 16 04:59:07.639851 containerd[1722]: time="2025-09-16T04:59:07.639829112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-797d689774-tnml7,Uid:ceea6ad9-3336-4d54-ac50-786bcebe065c,Namespace:calico-system,Attempt:0,} returns sandbox id \"62c9f1a11c12e43c332da6c2579d640625c43ce2d87a54bb54d9ee7aaff21e4b\""
Sep 16 04:59:08.286836 containerd[1722]: time="2025-09-16T04:59:08.286701156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qlxsc,Uid:9b4e9d9e-651f-42b1-b357-b24bcf14db30,Namespace:calico-system,Attempt:0,}"
Sep 16 04:59:08.286836 containerd[1722]: time="2025-09-16T04:59:08.286737496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-748bb69b95-qkbmd,Uid:8d4217ea-b6cb-4fd0-a095-a5df158bad6e,Namespace:calico-apiserver,Attempt:0,}"
Sep 16 04:59:08.287040 containerd[1722]: time="2025-09-16T04:59:08.286701355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z9npp,Uid:8250dfb6-0d48-46f0-a766-dbe41444f723,Namespace:kube-system,Attempt:0,}"
Sep 16 04:59:08.457683 systemd-networkd[1586]: cali32b83118d58: Link UP
Sep 16 04:59:08.460489 systemd-networkd[1586]: cali32b83118d58: Gained carrier
Sep 16 04:59:08.474151 containerd[1722]: 2025-09-16 04:59:08.380 [INFO][4778] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.0.0--n--140c1315ab-k8s-coredns--674b8bbfcf--z9npp-eth0 coredns-674b8bbfcf- kube-system 8250dfb6-0d48-46f0-a766-dbe41444f723 836 0 2025-09-16 04:58:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.0.0-n-140c1315ab coredns-674b8bbfcf-z9npp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali32b83118d58 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5838127db4287f1adf20b52b15adbe4aa5b1755a220240bd6142b8a62c7012f7" Namespace="kube-system" Pod="coredns-674b8bbfcf-z9npp" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-coredns--674b8bbfcf--z9npp-"
Sep 16 04:59:08.474151 containerd[1722]: 2025-09-16 04:59:08.380 [INFO][4778] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5838127db4287f1adf20b52b15adbe4aa5b1755a220240bd6142b8a62c7012f7" Namespace="kube-system" Pod="coredns-674b8bbfcf-z9npp" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-coredns--674b8bbfcf--z9npp-eth0"
Sep 16 04:59:08.474151 containerd[1722]: 2025-09-16 04:59:08.408 [INFO][4806] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5838127db4287f1adf20b52b15adbe4aa5b1755a220240bd6142b8a62c7012f7" HandleID="k8s-pod-network.5838127db4287f1adf20b52b15adbe4aa5b1755a220240bd6142b8a62c7012f7" Workload="ci--4459.0.0--n--140c1315ab-k8s-coredns--674b8bbfcf--z9npp-eth0"
Sep 16 04:59:08.474317 containerd[1722]: 2025-09-16 04:59:08.409 [INFO][4806] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5838127db4287f1adf20b52b15adbe4aa5b1755a220240bd6142b8a62c7012f7" HandleID="k8s-pod-network.5838127db4287f1adf20b52b15adbe4aa5b1755a220240bd6142b8a62c7012f7" Workload="ci--4459.0.0--n--140c1315ab-k8s-coredns--674b8bbfcf--z9npp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5890), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.0.0-n-140c1315ab", "pod":"coredns-674b8bbfcf-z9npp", "timestamp":"2025-09-16 04:59:08.408951371 +0000 UTC"}, Hostname:"ci-4459.0.0-n-140c1315ab", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 16 04:59:08.474317 containerd[1722]: 2025-09-16 04:59:08.409 [INFO][4806] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 16 04:59:08.474317 containerd[1722]: 2025-09-16 04:59:08.409 [INFO][4806] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 16 04:59:08.474317 containerd[1722]: 2025-09-16 04:59:08.409 [INFO][4806] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.0.0-n-140c1315ab'
Sep 16 04:59:08.474317 containerd[1722]: 2025-09-16 04:59:08.420 [INFO][4806] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5838127db4287f1adf20b52b15adbe4aa5b1755a220240bd6142b8a62c7012f7" host="ci-4459.0.0-n-140c1315ab"
Sep 16 04:59:08.474317 containerd[1722]: 2025-09-16 04:59:08.423 [INFO][4806] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.0.0-n-140c1315ab"
Sep 16 04:59:08.474317 containerd[1722]: 2025-09-16 04:59:08.426 [INFO][4806] ipam/ipam.go 511: Trying affinity for 192.168.110.128/26 host="ci-4459.0.0-n-140c1315ab"
Sep 16 04:59:08.474317 containerd[1722]: 2025-09-16 04:59:08.427 [INFO][4806] ipam/ipam.go 158: Attempting to load block cidr=192.168.110.128/26 host="ci-4459.0.0-n-140c1315ab"
Sep 16 04:59:08.474317 containerd[1722]: 2025-09-16 04:59:08.429 [INFO][4806] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.110.128/26 host="ci-4459.0.0-n-140c1315ab"
Sep 16 04:59:08.474500 containerd[1722]: 2025-09-16 04:59:08.429 [INFO][4806] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.110.128/26 handle="k8s-pod-network.5838127db4287f1adf20b52b15adbe4aa5b1755a220240bd6142b8a62c7012f7" host="ci-4459.0.0-n-140c1315ab"
Sep 16 04:59:08.474500 containerd[1722]: 2025-09-16 04:59:08.431 [INFO][4806] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5838127db4287f1adf20b52b15adbe4aa5b1755a220240bd6142b8a62c7012f7
Sep 16 04:59:08.474500 containerd[1722]: 2025-09-16 04:59:08.435 [INFO][4806] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.110.128/26 handle="k8s-pod-network.5838127db4287f1adf20b52b15adbe4aa5b1755a220240bd6142b8a62c7012f7" host="ci-4459.0.0-n-140c1315ab"
Sep 16 04:59:08.474500 containerd[1722]: 2025-09-16 04:59:08.447 [INFO][4806] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.110.132/26] block=192.168.110.128/26 handle="k8s-pod-network.5838127db4287f1adf20b52b15adbe4aa5b1755a220240bd6142b8a62c7012f7" host="ci-4459.0.0-n-140c1315ab"
Sep 16 04:59:08.474500 containerd[1722]: 2025-09-16 04:59:08.447 [INFO][4806] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.110.132/26] handle="k8s-pod-network.5838127db4287f1adf20b52b15adbe4aa5b1755a220240bd6142b8a62c7012f7" host="ci-4459.0.0-n-140c1315ab"
Sep 16 04:59:08.474500 containerd[1722]: 2025-09-16 04:59:08.447 [INFO][4806] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 16 04:59:08.474500 containerd[1722]: 2025-09-16 04:59:08.447 [INFO][4806] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.110.132/26] IPv6=[] ContainerID="5838127db4287f1adf20b52b15adbe4aa5b1755a220240bd6142b8a62c7012f7" HandleID="k8s-pod-network.5838127db4287f1adf20b52b15adbe4aa5b1755a220240bd6142b8a62c7012f7" Workload="ci--4459.0.0--n--140c1315ab-k8s-coredns--674b8bbfcf--z9npp-eth0"
Sep 16 04:59:08.474634 containerd[1722]: 2025-09-16 04:59:08.450 [INFO][4778] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5838127db4287f1adf20b52b15adbe4aa5b1755a220240bd6142b8a62c7012f7" Namespace="kube-system" Pod="coredns-674b8bbfcf-z9npp" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-coredns--674b8bbfcf--z9npp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--140c1315ab-k8s-coredns--674b8bbfcf--z9npp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8250dfb6-0d48-46f0-a766-dbe41444f723", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 58, 32, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-140c1315ab", ContainerID:"", Pod:"coredns-674b8bbfcf-z9npp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali32b83118d58", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 16 04:59:08.474634 containerd[1722]: 2025-09-16 04:59:08.450 [INFO][4778] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.110.132/32] ContainerID="5838127db4287f1adf20b52b15adbe4aa5b1755a220240bd6142b8a62c7012f7" Namespace="kube-system" Pod="coredns-674b8bbfcf-z9npp" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-coredns--674b8bbfcf--z9npp-eth0"
Sep 16 04:59:08.474634 containerd[1722]: 2025-09-16 04:59:08.450 [INFO][4778] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali32b83118d58 ContainerID="5838127db4287f1adf20b52b15adbe4aa5b1755a220240bd6142b8a62c7012f7" Namespace="kube-system" Pod="coredns-674b8bbfcf-z9npp" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-coredns--674b8bbfcf--z9npp-eth0"
Sep 16 04:59:08.474634 containerd[1722]: 2025-09-16 04:59:08.461 [INFO][4778] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5838127db4287f1adf20b52b15adbe4aa5b1755a220240bd6142b8a62c7012f7" Namespace="kube-system" Pod="coredns-674b8bbfcf-z9npp" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-coredns--674b8bbfcf--z9npp-eth0"
Sep 16 04:59:08.474634 containerd[1722]: 2025-09-16 04:59:08.463 [INFO][4778] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5838127db4287f1adf20b52b15adbe4aa5b1755a220240bd6142b8a62c7012f7" Namespace="kube-system" Pod="coredns-674b8bbfcf-z9npp" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-coredns--674b8bbfcf--z9npp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--140c1315ab-k8s-coredns--674b8bbfcf--z9npp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8250dfb6-0d48-46f0-a766-dbe41444f723", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 58, 32, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-140c1315ab", ContainerID:"5838127db4287f1adf20b52b15adbe4aa5b1755a220240bd6142b8a62c7012f7", Pod:"coredns-674b8bbfcf-z9npp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali32b83118d58", MAC:"7e:0f:30:6f:97:a0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 16 04:59:08.474634 containerd[1722]: 2025-09-16 04:59:08.473 [INFO][4778] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5838127db4287f1adf20b52b15adbe4aa5b1755a220240bd6142b8a62c7012f7" Namespace="kube-system" Pod="coredns-674b8bbfcf-z9npp" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-coredns--674b8bbfcf--z9npp-eth0"
Sep 16 04:59:08.521492 containerd[1722]: time="2025-09-16T04:59:08.521337825Z" level=info msg="connecting to shim 5838127db4287f1adf20b52b15adbe4aa5b1755a220240bd6142b8a62c7012f7" address="unix:///run/containerd/s/ef2f3ca480037c372eff6df2b55d17811044742dd7da1d051f9f43c3a949626d" namespace=k8s.io protocol=ttrpc version=3
Sep 16 04:59:08.549481 systemd[1]: Started cri-containerd-5838127db4287f1adf20b52b15adbe4aa5b1755a220240bd6142b8a62c7012f7.scope - libcontainer container 5838127db4287f1adf20b52b15adbe4aa5b1755a220240bd6142b8a62c7012f7.
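The coredns endpoint dump above prints its ports in hex (Port:0x35, Port:0x23c1). Decoded, they are the expected well-known values:

    package main

    import "fmt"

    func main() {
    	fmt.Println("dns/dns-tcp:", 0x35) // 53
    	fmt.Println("metrics:", 0x23c1)   // 9153, coredns' Prometheus metrics port
    }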
Sep 16 04:59:08.577317 systemd-networkd[1586]: cali2b2ea1e218a: Link UP Sep 16 04:59:08.579285 systemd-networkd[1586]: cali2b2ea1e218a: Gained carrier Sep 16 04:59:08.605293 containerd[1722]: 2025-09-16 04:59:08.356 [INFO][4756] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.0.0--n--140c1315ab-k8s-csi--node--driver--qlxsc-eth0 csi-node-driver- calico-system 9b4e9d9e-651f-42b1-b357-b24bcf14db30 716 0 2025-09-16 04:58:44 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459.0.0-n-140c1315ab csi-node-driver-qlxsc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2b2ea1e218a [] [] }} ContainerID="41d174970394b87315e8c39d32cca548a86b68425381550e91270c5adc787a4a" Namespace="calico-system" Pod="csi-node-driver-qlxsc" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-csi--node--driver--qlxsc-" Sep 16 04:59:08.605293 containerd[1722]: 2025-09-16 04:59:08.358 [INFO][4756] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="41d174970394b87315e8c39d32cca548a86b68425381550e91270c5adc787a4a" Namespace="calico-system" Pod="csi-node-driver-qlxsc" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-csi--node--driver--qlxsc-eth0" Sep 16 04:59:08.605293 containerd[1722]: 2025-09-16 04:59:08.406 [INFO][4795] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="41d174970394b87315e8c39d32cca548a86b68425381550e91270c5adc787a4a" HandleID="k8s-pod-network.41d174970394b87315e8c39d32cca548a86b68425381550e91270c5adc787a4a" Workload="ci--4459.0.0--n--140c1315ab-k8s-csi--node--driver--qlxsc-eth0" Sep 16 04:59:08.605293 containerd[1722]: 2025-09-16 04:59:08.411 [INFO][4795] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="41d174970394b87315e8c39d32cca548a86b68425381550e91270c5adc787a4a" HandleID="k8s-pod-network.41d174970394b87315e8c39d32cca548a86b68425381550e91270c5adc787a4a" Workload="ci--4459.0.0--n--140c1315ab-k8s-csi--node--driver--qlxsc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d59f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.0.0-n-140c1315ab", "pod":"csi-node-driver-qlxsc", "timestamp":"2025-09-16 04:59:08.406575775 +0000 UTC"}, Hostname:"ci-4459.0.0-n-140c1315ab", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:59:08.605293 containerd[1722]: 2025-09-16 04:59:08.411 [INFO][4795] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:59:08.605293 containerd[1722]: 2025-09-16 04:59:08.447 [INFO][4795] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 16 04:59:08.605293 containerd[1722]: 2025-09-16 04:59:08.447 [INFO][4795] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.0.0-n-140c1315ab' Sep 16 04:59:08.605293 containerd[1722]: 2025-09-16 04:59:08.521 [INFO][4795] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.41d174970394b87315e8c39d32cca548a86b68425381550e91270c5adc787a4a" host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:08.605293 containerd[1722]: 2025-09-16 04:59:08.533 [INFO][4795] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:08.605293 containerd[1722]: 2025-09-16 04:59:08.539 [INFO][4795] ipam/ipam.go 511: Trying affinity for 192.168.110.128/26 host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:08.605293 containerd[1722]: 2025-09-16 04:59:08.541 [INFO][4795] ipam/ipam.go 158: Attempting to load block cidr=192.168.110.128/26 host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:08.605293 containerd[1722]: 2025-09-16 04:59:08.543 [INFO][4795] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.110.128/26 host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:08.605293 containerd[1722]: 2025-09-16 04:59:08.543 [INFO][4795] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.110.128/26 handle="k8s-pod-network.41d174970394b87315e8c39d32cca548a86b68425381550e91270c5adc787a4a" host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:08.605293 containerd[1722]: 2025-09-16 04:59:08.544 [INFO][4795] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.41d174970394b87315e8c39d32cca548a86b68425381550e91270c5adc787a4a Sep 16 04:59:08.605293 containerd[1722]: 2025-09-16 04:59:08.549 [INFO][4795] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.110.128/26 handle="k8s-pod-network.41d174970394b87315e8c39d32cca548a86b68425381550e91270c5adc787a4a" host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:08.605293 containerd[1722]: 2025-09-16 04:59:08.561 [INFO][4795] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.110.133/26] block=192.168.110.128/26 handle="k8s-pod-network.41d174970394b87315e8c39d32cca548a86b68425381550e91270c5adc787a4a" host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:08.605293 containerd[1722]: 2025-09-16 04:59:08.562 [INFO][4795] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.110.133/26] handle="k8s-pod-network.41d174970394b87315e8c39d32cca548a86b68425381550e91270c5adc787a4a" host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:08.605293 containerd[1722]: 2025-09-16 04:59:08.562 [INFO][4795] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
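The IPAM sequence above (look up affinities, try 192.168.110.128/26, load the block, assign one address) ends with 192.168.110.133 going to csi-node-driver-qlxsc. Once the host-wide lock is held, the last step is plain block arithmetic; a toy sketch, seeded on the assumption that .128 through .132 were already claimed by the endpoints configured earlier in the boot (the log shows .132 taken by coredns-674b8bbfcf-z9npp; the real allocator in ipam.go also persists handles and reservations, which this omits):

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.110.128/26") // .128-.191, 64 addresses

	// Assumed already claimed: .128 through .132.
	used := map[netip.Addr]bool{}
	seed := block.Addr()
	for i := 0; i < 5; i++ {
		used[seed] = true
		seed = seed.Next()
	}

	// "Attempting to assign 1 addresses from block": first free address wins.
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			fmt.Println("claimed:", a) // claimed: 192.168.110.133
			break
		}
	}
}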
Sep 16 04:59:08.605293 containerd[1722]: 2025-09-16 04:59:08.562 [INFO][4795] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.110.133/26] IPv6=[] ContainerID="41d174970394b87315e8c39d32cca548a86b68425381550e91270c5adc787a4a" HandleID="k8s-pod-network.41d174970394b87315e8c39d32cca548a86b68425381550e91270c5adc787a4a" Workload="ci--4459.0.0--n--140c1315ab-k8s-csi--node--driver--qlxsc-eth0" Sep 16 04:59:08.605997 containerd[1722]: 2025-09-16 04:59:08.566 [INFO][4756] cni-plugin/k8s.go 418: Populated endpoint ContainerID="41d174970394b87315e8c39d32cca548a86b68425381550e91270c5adc787a4a" Namespace="calico-system" Pod="csi-node-driver-qlxsc" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-csi--node--driver--qlxsc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--140c1315ab-k8s-csi--node--driver--qlxsc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9b4e9d9e-651f-42b1-b357-b24bcf14db30", ResourceVersion:"716", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 58, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-140c1315ab", ContainerID:"", Pod:"csi-node-driver-qlxsc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.110.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2b2ea1e218a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:59:08.605997 containerd[1722]: 2025-09-16 04:59:08.574 [INFO][4756] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.110.133/32] ContainerID="41d174970394b87315e8c39d32cca548a86b68425381550e91270c5adc787a4a" Namespace="calico-system" Pod="csi-node-driver-qlxsc" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-csi--node--driver--qlxsc-eth0" Sep 16 04:59:08.605997 containerd[1722]: 2025-09-16 04:59:08.574 [INFO][4756] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2b2ea1e218a ContainerID="41d174970394b87315e8c39d32cca548a86b68425381550e91270c5adc787a4a" Namespace="calico-system" Pod="csi-node-driver-qlxsc" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-csi--node--driver--qlxsc-eth0" Sep 16 04:59:08.605997 containerd[1722]: 2025-09-16 04:59:08.580 [INFO][4756] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="41d174970394b87315e8c39d32cca548a86b68425381550e91270c5adc787a4a" Namespace="calico-system" Pod="csi-node-driver-qlxsc" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-csi--node--driver--qlxsc-eth0" Sep 16 04:59:08.605997 containerd[1722]: 2025-09-16 04:59:08.581 [INFO][4756] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="41d174970394b87315e8c39d32cca548a86b68425381550e91270c5adc787a4a" Namespace="calico-system" Pod="csi-node-driver-qlxsc" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-csi--node--driver--qlxsc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--140c1315ab-k8s-csi--node--driver--qlxsc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9b4e9d9e-651f-42b1-b357-b24bcf14db30", ResourceVersion:"716", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 58, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-140c1315ab", ContainerID:"41d174970394b87315e8c39d32cca548a86b68425381550e91270c5adc787a4a", Pod:"csi-node-driver-qlxsc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.110.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2b2ea1e218a", MAC:"62:2c:1f:d1:22:6a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:59:08.605997 containerd[1722]: 2025-09-16 04:59:08.602 [INFO][4756] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="41d174970394b87315e8c39d32cca548a86b68425381550e91270c5adc787a4a" Namespace="calico-system" Pod="csi-node-driver-qlxsc" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-csi--node--driver--qlxsc-eth0" Sep 16 04:59:08.610201 containerd[1722]: time="2025-09-16T04:59:08.610141213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z9npp,Uid:8250dfb6-0d48-46f0-a766-dbe41444f723,Namespace:kube-system,Attempt:0,} returns sandbox id \"5838127db4287f1adf20b52b15adbe4aa5b1755a220240bd6142b8a62c7012f7\"" Sep 16 04:59:08.618854 containerd[1722]: time="2025-09-16T04:59:08.618829406Z" level=info msg="CreateContainer within sandbox \"5838127db4287f1adf20b52b15adbe4aa5b1755a220240bd6142b8a62c7012f7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 16 04:59:08.655489 containerd[1722]: time="2025-09-16T04:59:08.654638315Z" level=info msg="connecting to shim 41d174970394b87315e8c39d32cca548a86b68425381550e91270c5adc787a4a" address="unix:///run/containerd/s/3bce2d6cb65bde8f8304aa4c3e26baef6c43fc434fbcdd3ce7ea055b74786298" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:59:08.661407 containerd[1722]: time="2025-09-16T04:59:08.657844498Z" level=info msg="Container ab1048e72aefcceea11c421d04241fc8123606017b19a1f766f8d8020e910356: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:59:08.664651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3726337536.mount: Deactivated successfully. 
Sep 16 04:59:08.677313 containerd[1722]: time="2025-09-16T04:59:08.675422787Z" level=info msg="CreateContainer within sandbox \"5838127db4287f1adf20b52b15adbe4aa5b1755a220240bd6142b8a62c7012f7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ab1048e72aefcceea11c421d04241fc8123606017b19a1f766f8d8020e910356\"" Sep 16 04:59:08.678077 containerd[1722]: time="2025-09-16T04:59:08.678054764Z" level=info msg="StartContainer for \"ab1048e72aefcceea11c421d04241fc8123606017b19a1f766f8d8020e910356\"" Sep 16 04:59:08.686338 containerd[1722]: time="2025-09-16T04:59:08.686006738Z" level=info msg="connecting to shim ab1048e72aefcceea11c421d04241fc8123606017b19a1f766f8d8020e910356" address="unix:///run/containerd/s/ef2f3ca480037c372eff6df2b55d17811044742dd7da1d051f9f43c3a949626d" protocol=ttrpc version=3 Sep 16 04:59:08.688361 systemd[1]: Started cri-containerd-41d174970394b87315e8c39d32cca548a86b68425381550e91270c5adc787a4a.scope - libcontainer container 41d174970394b87315e8c39d32cca548a86b68425381550e91270c5adc787a4a. Sep 16 04:59:08.711040 systemd-networkd[1586]: cali2698452924c: Link UP Sep 16 04:59:08.711238 systemd-networkd[1586]: cali2698452924c: Gained carrier Sep 16 04:59:08.721349 systemd[1]: Started cri-containerd-ab1048e72aefcceea11c421d04241fc8123606017b19a1f766f8d8020e910356.scope - libcontainer container ab1048e72aefcceea11c421d04241fc8123606017b19a1f766f8d8020e910356. Sep 16 04:59:08.738659 containerd[1722]: 2025-09-16 04:59:08.372 [INFO][4769] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--748bb69b95--qkbmd-eth0 calico-apiserver-748bb69b95- calico-apiserver 8d4217ea-b6cb-4fd0-a095-a5df158bad6e 835 0 2025-09-16 04:58:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:748bb69b95 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.0.0-n-140c1315ab calico-apiserver-748bb69b95-qkbmd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2698452924c [] [] }} ContainerID="9170974f888d4295360c8efdc6d55c54469c56bf2cbb7f079538a5cd031e3878" Namespace="calico-apiserver" Pod="calico-apiserver-748bb69b95-qkbmd" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--748bb69b95--qkbmd-" Sep 16 04:59:08.738659 containerd[1722]: 2025-09-16 04:59:08.372 [INFO][4769] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9170974f888d4295360c8efdc6d55c54469c56bf2cbb7f079538a5cd031e3878" Namespace="calico-apiserver" Pod="calico-apiserver-748bb69b95-qkbmd" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--748bb69b95--qkbmd-eth0" Sep 16 04:59:08.738659 containerd[1722]: 2025-09-16 04:59:08.417 [INFO][4800] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9170974f888d4295360c8efdc6d55c54469c56bf2cbb7f079538a5cd031e3878" HandleID="k8s-pod-network.9170974f888d4295360c8efdc6d55c54469c56bf2cbb7f079538a5cd031e3878" Workload="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--748bb69b95--qkbmd-eth0" Sep 16 04:59:08.738659 containerd[1722]: 2025-09-16 04:59:08.417 [INFO][4800] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9170974f888d4295360c8efdc6d55c54469c56bf2cbb7f079538a5cd031e3878" HandleID="k8s-pod-network.9170974f888d4295360c8efdc6d55c54469c56bf2cbb7f079538a5cd031e3878" 
Workload="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--748bb69b95--qkbmd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df640), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.0.0-n-140c1315ab", "pod":"calico-apiserver-748bb69b95-qkbmd", "timestamp":"2025-09-16 04:59:08.417336733 +0000 UTC"}, Hostname:"ci-4459.0.0-n-140c1315ab", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:59:08.738659 containerd[1722]: 2025-09-16 04:59:08.417 [INFO][4800] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:59:08.738659 containerd[1722]: 2025-09-16 04:59:08.562 [INFO][4800] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 04:59:08.738659 containerd[1722]: 2025-09-16 04:59:08.562 [INFO][4800] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.0.0-n-140c1315ab' Sep 16 04:59:08.738659 containerd[1722]: 2025-09-16 04:59:08.620 [INFO][4800] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9170974f888d4295360c8efdc6d55c54469c56bf2cbb7f079538a5cd031e3878" host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:08.738659 containerd[1722]: 2025-09-16 04:59:08.636 [INFO][4800] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:08.738659 containerd[1722]: 2025-09-16 04:59:08.653 [INFO][4800] ipam/ipam.go 511: Trying affinity for 192.168.110.128/26 host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:08.738659 containerd[1722]: 2025-09-16 04:59:08.663 [INFO][4800] ipam/ipam.go 158: Attempting to load block cidr=192.168.110.128/26 host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:08.738659 containerd[1722]: 2025-09-16 04:59:08.671 [INFO][4800] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.110.128/26 host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:08.738659 containerd[1722]: 2025-09-16 04:59:08.673 [INFO][4800] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.110.128/26 handle="k8s-pod-network.9170974f888d4295360c8efdc6d55c54469c56bf2cbb7f079538a5cd031e3878" host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:08.738659 containerd[1722]: 2025-09-16 04:59:08.680 [INFO][4800] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9170974f888d4295360c8efdc6d55c54469c56bf2cbb7f079538a5cd031e3878 Sep 16 04:59:08.738659 containerd[1722]: 2025-09-16 04:59:08.686 [INFO][4800] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.110.128/26 handle="k8s-pod-network.9170974f888d4295360c8efdc6d55c54469c56bf2cbb7f079538a5cd031e3878" host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:08.738659 containerd[1722]: 2025-09-16 04:59:08.697 [INFO][4800] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.110.134/26] block=192.168.110.128/26 handle="k8s-pod-network.9170974f888d4295360c8efdc6d55c54469c56bf2cbb7f079538a5cd031e3878" host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:08.738659 containerd[1722]: 2025-09-16 04:59:08.697 [INFO][4800] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.110.134/26] handle="k8s-pod-network.9170974f888d4295360c8efdc6d55c54469c56bf2cbb7f079538a5cd031e3878" host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:08.738659 containerd[1722]: 2025-09-16 04:59:08.697 [INFO][4800] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 16 04:59:08.738659 containerd[1722]: 2025-09-16 04:59:08.697 [INFO][4800] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.110.134/26] IPv6=[] ContainerID="9170974f888d4295360c8efdc6d55c54469c56bf2cbb7f079538a5cd031e3878" HandleID="k8s-pod-network.9170974f888d4295360c8efdc6d55c54469c56bf2cbb7f079538a5cd031e3878" Workload="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--748bb69b95--qkbmd-eth0" Sep 16 04:59:08.739446 containerd[1722]: 2025-09-16 04:59:08.701 [INFO][4769] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9170974f888d4295360c8efdc6d55c54469c56bf2cbb7f079538a5cd031e3878" Namespace="calico-apiserver" Pod="calico-apiserver-748bb69b95-qkbmd" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--748bb69b95--qkbmd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--748bb69b95--qkbmd-eth0", GenerateName:"calico-apiserver-748bb69b95-", Namespace:"calico-apiserver", SelfLink:"", UID:"8d4217ea-b6cb-4fd0-a095-a5df158bad6e", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 58, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"748bb69b95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-140c1315ab", ContainerID:"", Pod:"calico-apiserver-748bb69b95-qkbmd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2698452924c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:59:08.739446 containerd[1722]: 2025-09-16 04:59:08.702 [INFO][4769] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.110.134/32] ContainerID="9170974f888d4295360c8efdc6d55c54469c56bf2cbb7f079538a5cd031e3878" Namespace="calico-apiserver" Pod="calico-apiserver-748bb69b95-qkbmd" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--748bb69b95--qkbmd-eth0" Sep 16 04:59:08.739446 containerd[1722]: 2025-09-16 04:59:08.702 [INFO][4769] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2698452924c ContainerID="9170974f888d4295360c8efdc6d55c54469c56bf2cbb7f079538a5cd031e3878" Namespace="calico-apiserver" Pod="calico-apiserver-748bb69b95-qkbmd" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--748bb69b95--qkbmd-eth0" Sep 16 04:59:08.739446 containerd[1722]: 2025-09-16 04:59:08.710 [INFO][4769] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9170974f888d4295360c8efdc6d55c54469c56bf2cbb7f079538a5cd031e3878" Namespace="calico-apiserver" Pod="calico-apiserver-748bb69b95-qkbmd" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--748bb69b95--qkbmd-eth0" Sep 16 04:59:08.739446 containerd[1722]: 2025-09-16 04:59:08.711 
[INFO][4769] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9170974f888d4295360c8efdc6d55c54469c56bf2cbb7f079538a5cd031e3878" Namespace="calico-apiserver" Pod="calico-apiserver-748bb69b95-qkbmd" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--748bb69b95--qkbmd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--748bb69b95--qkbmd-eth0", GenerateName:"calico-apiserver-748bb69b95-", Namespace:"calico-apiserver", SelfLink:"", UID:"8d4217ea-b6cb-4fd0-a095-a5df158bad6e", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 58, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"748bb69b95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-140c1315ab", ContainerID:"9170974f888d4295360c8efdc6d55c54469c56bf2cbb7f079538a5cd031e3878", Pod:"calico-apiserver-748bb69b95-qkbmd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2698452924c", MAC:"62:4a:72:7d:95:1a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:59:08.739446 containerd[1722]: 2025-09-16 04:59:08.736 [INFO][4769] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9170974f888d4295360c8efdc6d55c54469c56bf2cbb7f079538a5cd031e3878" Namespace="calico-apiserver" Pod="calico-apiserver-748bb69b95-qkbmd" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--748bb69b95--qkbmd-eth0" Sep 16 04:59:08.812769 containerd[1722]: time="2025-09-16T04:59:08.812712308Z" level=info msg="StartContainer for \"ab1048e72aefcceea11c421d04241fc8123606017b19a1f766f8d8020e910356\" returns successfully" Sep 16 04:59:08.816572 containerd[1722]: time="2025-09-16T04:59:08.816520655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qlxsc,Uid:9b4e9d9e-651f-42b1-b357-b24bcf14db30,Namespace:calico-system,Attempt:0,} returns sandbox id \"41d174970394b87315e8c39d32cca548a86b68425381550e91270c5adc787a4a\"" Sep 16 04:59:08.855360 containerd[1722]: time="2025-09-16T04:59:08.855324659Z" level=info msg="connecting to shim 9170974f888d4295360c8efdc6d55c54469c56bf2cbb7f079538a5cd031e3878" address="unix:///run/containerd/s/043d215e6b9ba4f8d95aedc8f92adf1ed42c41d3a674a9d171ee814b8f1faacc" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:59:08.882328 systemd[1]: Started cri-containerd-9170974f888d4295360c8efdc6d55c54469c56bf2cbb7f079538a5cd031e3878.scope - libcontainer container 9170974f888d4295360c8efdc6d55c54469c56bf2cbb7f079538a5cd031e3878. 
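The endpoint=&v3.WorkloadEndpoint{...} dumps above are the objects Calico writes at "Wrote updated endpoint to datastore". A trimmed sketch of constructing the same object with the public Calico API types (assuming the github.com/projectcalico/api Go module; only fields visible in the log dump are set, with values copied from the calico-apiserver-748bb69b95-qkbmd entry):

package main

import (
	"fmt"

	v3 "github.com/projectcalico/api/pkg/apis/projectcalico/v3"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	wep := &v3.WorkloadEndpoint{
		TypeMeta: metav1.TypeMeta{Kind: "WorkloadEndpoint", APIVersion: "projectcalico.org/v3"},
		ObjectMeta: metav1.ObjectMeta{
			Name:      "ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--748bb69b95--qkbmd-eth0",
			Namespace: "calico-apiserver",
		},
		Spec: v3.WorkloadEndpointSpec{
			Orchestrator:  "k8s",
			Node:          "ci-4459.0.0-n-140c1315ab",
			ContainerID:   "9170974f888d4295360c8efdc6d55c54469c56bf2cbb7f079538a5cd031e3878",
			Pod:           "calico-apiserver-748bb69b95-qkbmd",
			Endpoint:      "eth0",
			InterfaceName: "cali2698452924c",
			MAC:           "62:4a:72:7d:95:1a",
			IPNetworks:    []string{"192.168.110.134/32"},
			Profiles:      []string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"},
		},
	}
	fmt.Println(wep.Spec.Pod, wep.Spec.IPNetworks)
}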
Sep 16 04:59:08.953101 containerd[1722]: time="2025-09-16T04:59:08.952980923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-748bb69b95-qkbmd,Uid:8d4217ea-b6cb-4fd0-a095-a5df158bad6e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9170974f888d4295360c8efdc6d55c54469c56bf2cbb7f079538a5cd031e3878\"" Sep 16 04:59:09.041166 systemd-networkd[1586]: calie7f2ba957cc: Gained IPv6LL Sep 16 04:59:09.169173 systemd-networkd[1586]: califf315c8eb4c: Gained IPv6LL Sep 16 04:59:09.286509 containerd[1722]: time="2025-09-16T04:59:09.286478916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7mk7b,Uid:4baa5529-b6ae-4993-b06b-11459fee0da1,Namespace:kube-system,Attempt:0,}" Sep 16 04:59:09.306108 containerd[1722]: time="2025-09-16T04:59:09.305282519Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 16 04:59:09.306108 containerd[1722]: time="2025-09-16T04:59:09.305840224Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:59:09.311872 containerd[1722]: time="2025-09-16T04:59:09.311831029Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:59:09.312121 containerd[1722]: time="2025-09-16T04:59:09.312080934Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.700170096s" Sep 16 04:59:09.312121 containerd[1722]: time="2025-09-16T04:59:09.312118769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 16 04:59:09.312341 containerd[1722]: time="2025-09-16T04:59:09.312324977Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:59:09.316133 containerd[1722]: time="2025-09-16T04:59:09.315477908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 16 04:59:09.322808 containerd[1722]: time="2025-09-16T04:59:09.322786616Z" level=info msg="CreateContainer within sandbox \"d89f25190da0d8f98af8f70d77c21f44a23ad27e1a7eb5f914dfe86a57b028d8\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 16 04:59:09.341478 containerd[1722]: time="2025-09-16T04:59:09.341454921Z" level=info msg="Container 03739d01686d22bafea8d80da324a50fc2a29909f54223c99a31a1e5b4938d52: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:59:09.358457 containerd[1722]: time="2025-09-16T04:59:09.358422382Z" level=info msg="CreateContainer within sandbox \"d89f25190da0d8f98af8f70d77c21f44a23ad27e1a7eb5f914dfe86a57b028d8\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"03739d01686d22bafea8d80da324a50fc2a29909f54223c99a31a1e5b4938d52\"" Sep 16 04:59:09.359835 containerd[1722]: time="2025-09-16T04:59:09.359801835Z" level=info msg="StartContainer for 
\"03739d01686d22bafea8d80da324a50fc2a29909f54223c99a31a1e5b4938d52\"" Sep 16 04:59:09.361279 containerd[1722]: time="2025-09-16T04:59:09.361254171Z" level=info msg="connecting to shim 03739d01686d22bafea8d80da324a50fc2a29909f54223c99a31a1e5b4938d52" address="unix:///run/containerd/s/683643086eed59d680af4d04e8f38ab06e66b2f1b52dd7d32b16da03bfc119bf" protocol=ttrpc version=3 Sep 16 04:59:09.383247 systemd[1]: Started cri-containerd-03739d01686d22bafea8d80da324a50fc2a29909f54223c99a31a1e5b4938d52.scope - libcontainer container 03739d01686d22bafea8d80da324a50fc2a29909f54223c99a31a1e5b4938d52. Sep 16 04:59:09.392237 systemd-networkd[1586]: cali2e147fc3a55: Link UP Sep 16 04:59:09.392385 systemd-networkd[1586]: cali2e147fc3a55: Gained carrier Sep 16 04:59:09.409369 containerd[1722]: 2025-09-16 04:59:09.325 [INFO][5025] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.0.0--n--140c1315ab-k8s-coredns--674b8bbfcf--7mk7b-eth0 coredns-674b8bbfcf- kube-system 4baa5529-b6ae-4993-b06b-11459fee0da1 830 0 2025-09-16 04:58:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.0.0-n-140c1315ab coredns-674b8bbfcf-7mk7b eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2e147fc3a55 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="acaad7cd5c54e75aeeb547aceebb3b43efeea9edf6d8b592da59a98f9fdefd67" Namespace="kube-system" Pod="coredns-674b8bbfcf-7mk7b" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-coredns--674b8bbfcf--7mk7b-" Sep 16 04:59:09.409369 containerd[1722]: 2025-09-16 04:59:09.326 [INFO][5025] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="acaad7cd5c54e75aeeb547aceebb3b43efeea9edf6d8b592da59a98f9fdefd67" Namespace="kube-system" Pod="coredns-674b8bbfcf-7mk7b" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-coredns--674b8bbfcf--7mk7b-eth0" Sep 16 04:59:09.409369 containerd[1722]: 2025-09-16 04:59:09.346 [INFO][5038] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="acaad7cd5c54e75aeeb547aceebb3b43efeea9edf6d8b592da59a98f9fdefd67" HandleID="k8s-pod-network.acaad7cd5c54e75aeeb547aceebb3b43efeea9edf6d8b592da59a98f9fdefd67" Workload="ci--4459.0.0--n--140c1315ab-k8s-coredns--674b8bbfcf--7mk7b-eth0" Sep 16 04:59:09.409369 containerd[1722]: 2025-09-16 04:59:09.346 [INFO][5038] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="acaad7cd5c54e75aeeb547aceebb3b43efeea9edf6d8b592da59a98f9fdefd67" HandleID="k8s-pod-network.acaad7cd5c54e75aeeb547aceebb3b43efeea9edf6d8b592da59a98f9fdefd67" Workload="ci--4459.0.0--n--140c1315ab-k8s-coredns--674b8bbfcf--7mk7b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002592b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.0.0-n-140c1315ab", "pod":"coredns-674b8bbfcf-7mk7b", "timestamp":"2025-09-16 04:59:09.346014019 +0000 UTC"}, Hostname:"ci-4459.0.0-n-140c1315ab", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:59:09.409369 containerd[1722]: 2025-09-16 04:59:09.346 [INFO][5038] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 16 04:59:09.409369 containerd[1722]: 2025-09-16 04:59:09.346 [INFO][5038] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 04:59:09.409369 containerd[1722]: 2025-09-16 04:59:09.346 [INFO][5038] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.0.0-n-140c1315ab' Sep 16 04:59:09.409369 containerd[1722]: 2025-09-16 04:59:09.351 [INFO][5038] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.acaad7cd5c54e75aeeb547aceebb3b43efeea9edf6d8b592da59a98f9fdefd67" host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:09.409369 containerd[1722]: 2025-09-16 04:59:09.354 [INFO][5038] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:09.409369 containerd[1722]: 2025-09-16 04:59:09.358 [INFO][5038] ipam/ipam.go 511: Trying affinity for 192.168.110.128/26 host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:09.409369 containerd[1722]: 2025-09-16 04:59:09.363 [INFO][5038] ipam/ipam.go 158: Attempting to load block cidr=192.168.110.128/26 host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:09.409369 containerd[1722]: 2025-09-16 04:59:09.366 [INFO][5038] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.110.128/26 host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:09.409369 containerd[1722]: 2025-09-16 04:59:09.366 [INFO][5038] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.110.128/26 handle="k8s-pod-network.acaad7cd5c54e75aeeb547aceebb3b43efeea9edf6d8b592da59a98f9fdefd67" host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:09.409369 containerd[1722]: 2025-09-16 04:59:09.369 [INFO][5038] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.acaad7cd5c54e75aeeb547aceebb3b43efeea9edf6d8b592da59a98f9fdefd67 Sep 16 04:59:09.409369 containerd[1722]: 2025-09-16 04:59:09.373 [INFO][5038] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.110.128/26 handle="k8s-pod-network.acaad7cd5c54e75aeeb547aceebb3b43efeea9edf6d8b592da59a98f9fdefd67" host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:09.409369 containerd[1722]: 2025-09-16 04:59:09.383 [INFO][5038] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.110.135/26] block=192.168.110.128/26 handle="k8s-pod-network.acaad7cd5c54e75aeeb547aceebb3b43efeea9edf6d8b592da59a98f9fdefd67" host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:09.409369 containerd[1722]: 2025-09-16 04:59:09.383 [INFO][5038] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.110.135/26] handle="k8s-pod-network.acaad7cd5c54e75aeeb547aceebb3b43efeea9edf6d8b592da59a98f9fdefd67" host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:09.409369 containerd[1722]: 2025-09-16 04:59:09.383 [INFO][5038] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
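The "Pulled image ... in 2.700170096s" entry a few lines up, together with "active requests=0, bytes read=33085545", pins down the pull throughput for the whisker-backend image; both figures come straight from the log, and the division works out to roughly 11.7 MiB/s:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Duration and byte count as reported by containerd above.
	d, _ := time.ParseDuration("2.700170096s")
	const bytesRead = 33085545
	fmt.Printf("%.1f MiB/s\n", float64(bytesRead)/d.Seconds()/(1<<20)) // 11.7 MiB/s
}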
Sep 16 04:59:09.409369 containerd[1722]: 2025-09-16 04:59:09.383 [INFO][5038] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.110.135/26] IPv6=[] ContainerID="acaad7cd5c54e75aeeb547aceebb3b43efeea9edf6d8b592da59a98f9fdefd67" HandleID="k8s-pod-network.acaad7cd5c54e75aeeb547aceebb3b43efeea9edf6d8b592da59a98f9fdefd67" Workload="ci--4459.0.0--n--140c1315ab-k8s-coredns--674b8bbfcf--7mk7b-eth0" Sep 16 04:59:09.410317 containerd[1722]: 2025-09-16 04:59:09.385 [INFO][5025] cni-plugin/k8s.go 418: Populated endpoint ContainerID="acaad7cd5c54e75aeeb547aceebb3b43efeea9edf6d8b592da59a98f9fdefd67" Namespace="kube-system" Pod="coredns-674b8bbfcf-7mk7b" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-coredns--674b8bbfcf--7mk7b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--140c1315ab-k8s-coredns--674b8bbfcf--7mk7b-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4baa5529-b6ae-4993-b06b-11459fee0da1", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 58, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-140c1315ab", ContainerID:"", Pod:"coredns-674b8bbfcf-7mk7b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2e147fc3a55", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:59:09.410317 containerd[1722]: 2025-09-16 04:59:09.387 [INFO][5025] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.110.135/32] ContainerID="acaad7cd5c54e75aeeb547aceebb3b43efeea9edf6d8b592da59a98f9fdefd67" Namespace="kube-system" Pod="coredns-674b8bbfcf-7mk7b" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-coredns--674b8bbfcf--7mk7b-eth0" Sep 16 04:59:09.410317 containerd[1722]: 2025-09-16 04:59:09.387 [INFO][5025] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2e147fc3a55 ContainerID="acaad7cd5c54e75aeeb547aceebb3b43efeea9edf6d8b592da59a98f9fdefd67" Namespace="kube-system" Pod="coredns-674b8bbfcf-7mk7b" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-coredns--674b8bbfcf--7mk7b-eth0" Sep 16 04:59:09.410317 containerd[1722]: 2025-09-16 04:59:09.391 [INFO][5025] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="acaad7cd5c54e75aeeb547aceebb3b43efeea9edf6d8b592da59a98f9fdefd67" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-7mk7b" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-coredns--674b8bbfcf--7mk7b-eth0" Sep 16 04:59:09.410317 containerd[1722]: 2025-09-16 04:59:09.393 [INFO][5025] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="acaad7cd5c54e75aeeb547aceebb3b43efeea9edf6d8b592da59a98f9fdefd67" Namespace="kube-system" Pod="coredns-674b8bbfcf-7mk7b" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-coredns--674b8bbfcf--7mk7b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--140c1315ab-k8s-coredns--674b8bbfcf--7mk7b-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4baa5529-b6ae-4993-b06b-11459fee0da1", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 58, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-140c1315ab", ContainerID:"acaad7cd5c54e75aeeb547aceebb3b43efeea9edf6d8b592da59a98f9fdefd67", Pod:"coredns-674b8bbfcf-7mk7b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2e147fc3a55", MAC:"f6:af:55:7e:ed:83", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:59:09.410317 containerd[1722]: 2025-09-16 04:59:09.407 [INFO][5025] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="acaad7cd5c54e75aeeb547aceebb3b43efeea9edf6d8b592da59a98f9fdefd67" Namespace="kube-system" Pod="coredns-674b8bbfcf-7mk7b" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-coredns--674b8bbfcf--7mk7b-eth0" Sep 16 04:59:09.456758 kubelet[3170]: I0916 04:59:09.456402 3170 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-z9npp" podStartSLOduration=37.456387428 podStartE2EDuration="37.456387428s" podCreationTimestamp="2025-09-16 04:58:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:59:09.438531966 +0000 UTC m=+44.230557992" watchObservedRunningTime="2025-09-16 04:59:09.456387428 +0000 UTC m=+44.248413455" Sep 16 04:59:09.467669 containerd[1722]: time="2025-09-16T04:59:09.467629343Z" level=info msg="StartContainer for \"03739d01686d22bafea8d80da324a50fc2a29909f54223c99a31a1e5b4938d52\" returns successfully" Sep 16 04:59:09.481066 containerd[1722]: 
time="2025-09-16T04:59:09.480895462Z" level=info msg="connecting to shim acaad7cd5c54e75aeeb547aceebb3b43efeea9edf6d8b592da59a98f9fdefd67" address="unix:///run/containerd/s/2e52f54cee1225184a2722c0fa37ee039d4085d58ee54376c0c78848f791fe11" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:59:09.503247 systemd[1]: Started cri-containerd-acaad7cd5c54e75aeeb547aceebb3b43efeea9edf6d8b592da59a98f9fdefd67.scope - libcontainer container acaad7cd5c54e75aeeb547aceebb3b43efeea9edf6d8b592da59a98f9fdefd67. Sep 16 04:59:09.555047 containerd[1722]: time="2025-09-16T04:59:09.554989360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7mk7b,Uid:4baa5529-b6ae-4993-b06b-11459fee0da1,Namespace:kube-system,Attempt:0,} returns sandbox id \"acaad7cd5c54e75aeeb547aceebb3b43efeea9edf6d8b592da59a98f9fdefd67\"" Sep 16 04:59:09.563381 containerd[1722]: time="2025-09-16T04:59:09.563354439Z" level=info msg="CreateContainer within sandbox \"acaad7cd5c54e75aeeb547aceebb3b43efeea9edf6d8b592da59a98f9fdefd67\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 16 04:59:09.584559 containerd[1722]: time="2025-09-16T04:59:09.581758112Z" level=info msg="Container d15c0be0d42dc895fcf8be5c3d519eb2745d9e5aeda6da81af0ae8d8f6a4426a: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:59:09.594014 containerd[1722]: time="2025-09-16T04:59:09.593981590Z" level=info msg="CreateContainer within sandbox \"acaad7cd5c54e75aeeb547aceebb3b43efeea9edf6d8b592da59a98f9fdefd67\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d15c0be0d42dc895fcf8be5c3d519eb2745d9e5aeda6da81af0ae8d8f6a4426a\"" Sep 16 04:59:09.594491 containerd[1722]: time="2025-09-16T04:59:09.594424881Z" level=info msg="StartContainer for \"d15c0be0d42dc895fcf8be5c3d519eb2745d9e5aeda6da81af0ae8d8f6a4426a\"" Sep 16 04:59:09.595322 containerd[1722]: time="2025-09-16T04:59:09.595295104Z" level=info msg="connecting to shim d15c0be0d42dc895fcf8be5c3d519eb2745d9e5aeda6da81af0ae8d8f6a4426a" address="unix:///run/containerd/s/2e52f54cee1225184a2722c0fa37ee039d4085d58ee54376c0c78848f791fe11" protocol=ttrpc version=3 Sep 16 04:59:09.617209 systemd[1]: Started cri-containerd-d15c0be0d42dc895fcf8be5c3d519eb2745d9e5aeda6da81af0ae8d8f6a4426a.scope - libcontainer container d15c0be0d42dc895fcf8be5c3d519eb2745d9e5aeda6da81af0ae8d8f6a4426a. 
Sep 16 04:59:09.642324 containerd[1722]: time="2025-09-16T04:59:09.642302571Z" level=info msg="StartContainer for \"d15c0be0d42dc895fcf8be5c3d519eb2745d9e5aeda6da81af0ae8d8f6a4426a\" returns successfully" Sep 16 04:59:09.809177 systemd-networkd[1586]: cali32b83118d58: Gained IPv6LL Sep 16 04:59:09.937278 systemd-networkd[1586]: cali2b2ea1e218a: Gained IPv6LL Sep 16 04:59:10.287195 containerd[1722]: time="2025-09-16T04:59:10.287157963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-696d587784-76trh,Uid:98dcb491-139f-48ae-8a04-0d45651d392d,Namespace:calico-apiserver,Attempt:0,}" Sep 16 04:59:10.381227 systemd-networkd[1586]: califc262d2b782: Link UP Sep 16 04:59:10.381416 systemd-networkd[1586]: califc262d2b782: Gained carrier Sep 16 04:59:10.401896 containerd[1722]: 2025-09-16 04:59:10.318 [INFO][5167] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--76trh-eth0 calico-apiserver-696d587784- calico-apiserver 98dcb491-139f-48ae-8a04-0d45651d392d 838 0 2025-09-16 04:58:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:696d587784 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.0.0-n-140c1315ab calico-apiserver-696d587784-76trh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califc262d2b782 [] [] }} ContainerID="7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" Namespace="calico-apiserver" Pod="calico-apiserver-696d587784-76trh" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--76trh-" Sep 16 04:59:10.401896 containerd[1722]: 2025-09-16 04:59:10.318 [INFO][5167] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" Namespace="calico-apiserver" Pod="calico-apiserver-696d587784-76trh" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--76trh-eth0" Sep 16 04:59:10.401896 containerd[1722]: 2025-09-16 04:59:10.338 [INFO][5180] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" HandleID="k8s-pod-network.7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" Workload="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--76trh-eth0" Sep 16 04:59:10.401896 containerd[1722]: 2025-09-16 04:59:10.338 [INFO][5180] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" HandleID="k8s-pod-network.7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" Workload="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--76trh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5640), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.0.0-n-140c1315ab", "pod":"calico-apiserver-696d587784-76trh", "timestamp":"2025-09-16 04:59:10.338297002 +0000 UTC"}, Hostname:"ci-4459.0.0-n-140c1315ab", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:59:10.401896 containerd[1722]: 2025-09-16 04:59:10.338 
[INFO][5180] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:59:10.401896 containerd[1722]: 2025-09-16 04:59:10.338 [INFO][5180] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 04:59:10.401896 containerd[1722]: 2025-09-16 04:59:10.338 [INFO][5180] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.0.0-n-140c1315ab' Sep 16 04:59:10.401896 containerd[1722]: 2025-09-16 04:59:10.344 [INFO][5180] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:10.401896 containerd[1722]: 2025-09-16 04:59:10.347 [INFO][5180] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:10.401896 containerd[1722]: 2025-09-16 04:59:10.350 [INFO][5180] ipam/ipam.go 511: Trying affinity for 192.168.110.128/26 host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:10.401896 containerd[1722]: 2025-09-16 04:59:10.352 [INFO][5180] ipam/ipam.go 158: Attempting to load block cidr=192.168.110.128/26 host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:10.401896 containerd[1722]: 2025-09-16 04:59:10.353 [INFO][5180] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.110.128/26 host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:10.401896 containerd[1722]: 2025-09-16 04:59:10.354 [INFO][5180] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.110.128/26 handle="k8s-pod-network.7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:10.401896 containerd[1722]: 2025-09-16 04:59:10.355 [INFO][5180] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860 Sep 16 04:59:10.401896 containerd[1722]: 2025-09-16 04:59:10.358 [INFO][5180] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.110.128/26 handle="k8s-pod-network.7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:10.401896 containerd[1722]: 2025-09-16 04:59:10.375 [INFO][5180] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.110.136/26] block=192.168.110.128/26 handle="k8s-pod-network.7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:10.401896 containerd[1722]: 2025-09-16 04:59:10.375 [INFO][5180] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.110.136/26] handle="k8s-pod-network.7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:10.401896 containerd[1722]: 2025-09-16 04:59:10.375 [INFO][5180] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 16 04:59:10.401896 containerd[1722]: 2025-09-16 04:59:10.375 [INFO][5180] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.110.136/26] IPv6=[] ContainerID="7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" HandleID="k8s-pod-network.7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" Workload="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--76trh-eth0" Sep 16 04:59:10.402499 containerd[1722]: 2025-09-16 04:59:10.376 [INFO][5167] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" Namespace="calico-apiserver" Pod="calico-apiserver-696d587784-76trh" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--76trh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--76trh-eth0", GenerateName:"calico-apiserver-696d587784-", Namespace:"calico-apiserver", SelfLink:"", UID:"98dcb491-139f-48ae-8a04-0d45651d392d", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 58, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"696d587784", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-140c1315ab", ContainerID:"", Pod:"calico-apiserver-696d587784-76trh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califc262d2b782", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:59:10.402499 containerd[1722]: 2025-09-16 04:59:10.376 [INFO][5167] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.110.136/32] ContainerID="7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" Namespace="calico-apiserver" Pod="calico-apiserver-696d587784-76trh" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--76trh-eth0" Sep 16 04:59:10.402499 containerd[1722]: 2025-09-16 04:59:10.376 [INFO][5167] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califc262d2b782 ContainerID="7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" Namespace="calico-apiserver" Pod="calico-apiserver-696d587784-76trh" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--76trh-eth0" Sep 16 04:59:10.402499 containerd[1722]: 2025-09-16 04:59:10.383 [INFO][5167] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" Namespace="calico-apiserver" Pod="calico-apiserver-696d587784-76trh" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--76trh-eth0" Sep 16 04:59:10.402499 containerd[1722]: 2025-09-16 04:59:10.383 
[INFO][5167] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" Namespace="calico-apiserver" Pod="calico-apiserver-696d587784-76trh" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--76trh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--76trh-eth0", GenerateName:"calico-apiserver-696d587784-", Namespace:"calico-apiserver", SelfLink:"", UID:"98dcb491-139f-48ae-8a04-0d45651d392d", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 58, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"696d587784", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-140c1315ab", ContainerID:"7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860", Pod:"calico-apiserver-696d587784-76trh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califc262d2b782", MAC:"c2:2a:60:5e:2c:33", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:59:10.402499 containerd[1722]: 2025-09-16 04:59:10.400 [INFO][5167] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" Namespace="calico-apiserver" Pod="calico-apiserver-696d587784-76trh" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--76trh-eth0" Sep 16 04:59:10.447861 containerd[1722]: time="2025-09-16T04:59:10.447746002Z" level=info msg="connecting to shim 7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" address="unix:///run/containerd/s/9dcec28d511465915212999c2c92914c3217883f133a82910e0774b831893f82" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:59:10.470724 kubelet[3170]: I0916 04:59:10.469973 3170 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-cbd9f96dd-b7bqw" podStartSLOduration=2.239756303 podStartE2EDuration="6.469958944s" podCreationTimestamp="2025-09-16 04:59:04 +0000 UTC" firstStartedPulling="2025-09-16 04:59:05.08500863 +0000 UTC m=+39.877034648" lastFinishedPulling="2025-09-16 04:59:09.315211264 +0000 UTC m=+44.107237289" observedRunningTime="2025-09-16 04:59:10.446947612 +0000 UTC m=+45.238973642" watchObservedRunningTime="2025-09-16 04:59:10.469958944 +0000 UTC m=+45.261984964" Sep 16 04:59:10.471984 kubelet[3170]: I0916 04:59:10.471931 3170 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-7mk7b" podStartSLOduration=38.471917185 podStartE2EDuration="38.471917185s" podCreationTimestamp="2025-09-16 04:58:32 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:59:10.469928192 +0000 UTC m=+45.261954222" watchObservedRunningTime="2025-09-16 04:59:10.471917185 +0000 UTC m=+45.263943212" Sep 16 04:59:10.487943 systemd[1]: Started cri-containerd-7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860.scope - libcontainer container 7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860. Sep 16 04:59:10.552294 containerd[1722]: time="2025-09-16T04:59:10.551302986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-696d587784-76trh,Uid:98dcb491-139f-48ae-8a04-0d45651d392d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860\"" Sep 16 04:59:10.577178 systemd-networkd[1586]: cali2698452924c: Gained IPv6LL Sep 16 04:59:11.089185 systemd-networkd[1586]: cali2e147fc3a55: Gained IPv6LL Sep 16 04:59:11.287235 containerd[1722]: time="2025-09-16T04:59:11.286955295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-bn6xl,Uid:05ff58dd-4daf-4601-9f2e-e553fb710f78,Namespace:calico-system,Attempt:0,}" Sep 16 04:59:11.507624 systemd-networkd[1586]: cali1f638a513e4: Link UP Sep 16 04:59:11.507799 systemd-networkd[1586]: cali1f638a513e4: Gained carrier Sep 16 04:59:11.528100 containerd[1722]: 2025-09-16 04:59:11.436 [INFO][5249] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.0.0--n--140c1315ab-k8s-goldmane--54d579b49d--bn6xl-eth0 goldmane-54d579b49d- calico-system 05ff58dd-4daf-4601-9f2e-e553fb710f78 839 0 2025-09-16 04:58:43 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459.0.0-n-140c1315ab goldmane-54d579b49d-bn6xl eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1f638a513e4 [] [] }} ContainerID="6c6bc510f87957809b94fc466410944fd34f8d79bcc4c3aa8e1614b79cdabba0" Namespace="calico-system" Pod="goldmane-54d579b49d-bn6xl" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-goldmane--54d579b49d--bn6xl-" Sep 16 04:59:11.528100 containerd[1722]: 2025-09-16 04:59:11.436 [INFO][5249] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6c6bc510f87957809b94fc466410944fd34f8d79bcc4c3aa8e1614b79cdabba0" Namespace="calico-system" Pod="goldmane-54d579b49d-bn6xl" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-goldmane--54d579b49d--bn6xl-eth0" Sep 16 04:59:11.528100 containerd[1722]: 2025-09-16 04:59:11.465 [INFO][5264] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6c6bc510f87957809b94fc466410944fd34f8d79bcc4c3aa8e1614b79cdabba0" HandleID="k8s-pod-network.6c6bc510f87957809b94fc466410944fd34f8d79bcc4c3aa8e1614b79cdabba0" Workload="ci--4459.0.0--n--140c1315ab-k8s-goldmane--54d579b49d--bn6xl-eth0" Sep 16 04:59:11.528100 containerd[1722]: 2025-09-16 04:59:11.465 [INFO][5264] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6c6bc510f87957809b94fc466410944fd34f8d79bcc4c3aa8e1614b79cdabba0" HandleID="k8s-pod-network.6c6bc510f87957809b94fc466410944fd34f8d79bcc4c3aa8e1614b79cdabba0" Workload="ci--4459.0.0--n--140c1315ab-k8s-goldmane--54d579b49d--bn6xl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e8ff0), 
Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.0.0-n-140c1315ab", "pod":"goldmane-54d579b49d-bn6xl", "timestamp":"2025-09-16 04:59:11.46507364 +0000 UTC"}, Hostname:"ci-4459.0.0-n-140c1315ab", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:59:11.528100 containerd[1722]: 2025-09-16 04:59:11.465 [INFO][5264] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:59:11.528100 containerd[1722]: 2025-09-16 04:59:11.465 [INFO][5264] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 04:59:11.528100 containerd[1722]: 2025-09-16 04:59:11.465 [INFO][5264] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.0.0-n-140c1315ab' Sep 16 04:59:11.528100 containerd[1722]: 2025-09-16 04:59:11.472 [INFO][5264] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6c6bc510f87957809b94fc466410944fd34f8d79bcc4c3aa8e1614b79cdabba0" host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:11.528100 containerd[1722]: 2025-09-16 04:59:11.476 [INFO][5264] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:11.528100 containerd[1722]: 2025-09-16 04:59:11.482 [INFO][5264] ipam/ipam.go 511: Trying affinity for 192.168.110.128/26 host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:11.528100 containerd[1722]: 2025-09-16 04:59:11.484 [INFO][5264] ipam/ipam.go 158: Attempting to load block cidr=192.168.110.128/26 host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:11.528100 containerd[1722]: 2025-09-16 04:59:11.486 [INFO][5264] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.110.128/26 host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:11.528100 containerd[1722]: 2025-09-16 04:59:11.486 [INFO][5264] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.110.128/26 handle="k8s-pod-network.6c6bc510f87957809b94fc466410944fd34f8d79bcc4c3aa8e1614b79cdabba0" host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:11.528100 containerd[1722]: 2025-09-16 04:59:11.488 [INFO][5264] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6c6bc510f87957809b94fc466410944fd34f8d79bcc4c3aa8e1614b79cdabba0 Sep 16 04:59:11.528100 containerd[1722]: 2025-09-16 04:59:11.492 [INFO][5264] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.110.128/26 handle="k8s-pod-network.6c6bc510f87957809b94fc466410944fd34f8d79bcc4c3aa8e1614b79cdabba0" host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:11.528100 containerd[1722]: 2025-09-16 04:59:11.501 [INFO][5264] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.110.137/26] block=192.168.110.128/26 handle="k8s-pod-network.6c6bc510f87957809b94fc466410944fd34f8d79bcc4c3aa8e1614b79cdabba0" host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:11.528100 containerd[1722]: 2025-09-16 04:59:11.501 [INFO][5264] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.110.137/26] handle="k8s-pod-network.6c6bc510f87957809b94fc466410944fd34f8d79bcc4c3aa8e1614b79cdabba0" host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:11.528100 containerd[1722]: 2025-09-16 04:59:11.501 [INFO][5264] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
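
The IPAM trace above follows a fixed sequence: acquire the host-wide lock, look up the host's block affinity (192.168.110.128/26 here), load the block, claim the next free address, and release the lock. A minimal sketch of the claim step, using a toy bitmap block type — not Calico's actual allocationBlock, which also tracks handles and attributes:

```go
package main

import (
	"fmt"
	"net"
)

// block is a toy stand-in for a Calico IPAM block: a /26 CIDR plus a
// used-address bitmap. Illustrative only.
type block struct {
	cidr net.IPNet
	used [64]bool // a /26 holds 2^(32-26) = 64 addresses
}

// claimNext returns the first unused IP in the block, mirroring the
// "Attempting to assign 1 addresses from block" step in the log.
func (b *block) claimNext() (net.IP, bool) {
	base := b.cidr.IP.To4()
	for i, taken := range b.used {
		if taken {
			continue
		}
		b.used[i] = true
		return net.IPv4(base[0], base[1], base[2], base[3]+byte(i)), true
	}
	return nil, false // block exhausted; real IPAM would try another block
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.110.128/26")
	b := &block{cidr: *cidr}
	for i := 0; i < 9; i++ { // earlier pods already hold .128 through .136
		b.claimNext()
	}
	ip, _ := b.claimNext()
	fmt.Println(ip) // 192.168.110.137, as assigned to goldmane-54d579b49d-bn6xl above
}
```
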
Sep 16 04:59:11.528100 containerd[1722]: 2025-09-16 04:59:11.501 [INFO][5264] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.110.137/26] IPv6=[] ContainerID="6c6bc510f87957809b94fc466410944fd34f8d79bcc4c3aa8e1614b79cdabba0" HandleID="k8s-pod-network.6c6bc510f87957809b94fc466410944fd34f8d79bcc4c3aa8e1614b79cdabba0" Workload="ci--4459.0.0--n--140c1315ab-k8s-goldmane--54d579b49d--bn6xl-eth0" Sep 16 04:59:11.528599 containerd[1722]: 2025-09-16 04:59:11.504 [INFO][5249] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6c6bc510f87957809b94fc466410944fd34f8d79bcc4c3aa8e1614b79cdabba0" Namespace="calico-system" Pod="goldmane-54d579b49d-bn6xl" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-goldmane--54d579b49d--bn6xl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--140c1315ab-k8s-goldmane--54d579b49d--bn6xl-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"05ff58dd-4daf-4601-9f2e-e553fb710f78", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 58, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-140c1315ab", ContainerID:"", Pod:"goldmane-54d579b49d-bn6xl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.110.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1f638a513e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:59:11.528599 containerd[1722]: 2025-09-16 04:59:11.504 [INFO][5249] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.110.137/32] ContainerID="6c6bc510f87957809b94fc466410944fd34f8d79bcc4c3aa8e1614b79cdabba0" Namespace="calico-system" Pod="goldmane-54d579b49d-bn6xl" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-goldmane--54d579b49d--bn6xl-eth0" Sep 16 04:59:11.528599 containerd[1722]: 2025-09-16 04:59:11.504 [INFO][5249] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1f638a513e4 ContainerID="6c6bc510f87957809b94fc466410944fd34f8d79bcc4c3aa8e1614b79cdabba0" Namespace="calico-system" Pod="goldmane-54d579b49d-bn6xl" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-goldmane--54d579b49d--bn6xl-eth0" Sep 16 04:59:11.528599 containerd[1722]: 2025-09-16 04:59:11.508 [INFO][5249] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6c6bc510f87957809b94fc466410944fd34f8d79bcc4c3aa8e1614b79cdabba0" Namespace="calico-system" Pod="goldmane-54d579b49d-bn6xl" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-goldmane--54d579b49d--bn6xl-eth0" Sep 16 04:59:11.528599 containerd[1722]: 2025-09-16 04:59:11.509 [INFO][5249] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6c6bc510f87957809b94fc466410944fd34f8d79bcc4c3aa8e1614b79cdabba0" 
Namespace="calico-system" Pod="goldmane-54d579b49d-bn6xl" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-goldmane--54d579b49d--bn6xl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--140c1315ab-k8s-goldmane--54d579b49d--bn6xl-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"05ff58dd-4daf-4601-9f2e-e553fb710f78", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 58, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-140c1315ab", ContainerID:"6c6bc510f87957809b94fc466410944fd34f8d79bcc4c3aa8e1614b79cdabba0", Pod:"goldmane-54d579b49d-bn6xl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.110.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1f638a513e4", MAC:"22:33:37:fc:d0:4a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:59:11.528599 containerd[1722]: 2025-09-16 04:59:11.525 [INFO][5249] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6c6bc510f87957809b94fc466410944fd34f8d79bcc4c3aa8e1614b79cdabba0" Namespace="calico-system" Pod="goldmane-54d579b49d-bn6xl" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-goldmane--54d579b49d--bn6xl-eth0" Sep 16 04:59:11.578845 containerd[1722]: time="2025-09-16T04:59:11.578727149Z" level=info msg="connecting to shim 6c6bc510f87957809b94fc466410944fd34f8d79bcc4c3aa8e1614b79cdabba0" address="unix:///run/containerd/s/8a85987d3c2192b19e063ed65e26ac5dd59e238812c723682d2ba78fa0c74e89" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:59:11.611307 systemd[1]: Started cri-containerd-6c6bc510f87957809b94fc466410944fd34f8d79bcc4c3aa8e1614b79cdabba0.scope - libcontainer container 6c6bc510f87957809b94fc466410944fd34f8d79bcc4c3aa8e1614b79cdabba0. 
Sep 16 04:59:11.656993 containerd[1722]: time="2025-09-16T04:59:11.656761973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-bn6xl,Uid:05ff58dd-4daf-4601-9f2e-e553fb710f78,Namespace:calico-system,Attempt:0,} returns sandbox id \"6c6bc510f87957809b94fc466410944fd34f8d79bcc4c3aa8e1614b79cdabba0\"" Sep 16 04:59:11.857205 systemd-networkd[1586]: califc262d2b782: Gained IPv6LL Sep 16 04:59:12.108048 containerd[1722]: time="2025-09-16T04:59:12.107988734Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:59:12.110331 containerd[1722]: time="2025-09-16T04:59:12.110301698Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 16 04:59:12.113666 containerd[1722]: time="2025-09-16T04:59:12.113636307Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:59:12.117408 containerd[1722]: time="2025-09-16T04:59:12.116992885Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:59:12.117408 containerd[1722]: time="2025-09-16T04:59:12.117322646Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 2.801815847s" Sep 16 04:59:12.117408 containerd[1722]: time="2025-09-16T04:59:12.117345069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 16 04:59:12.118494 containerd[1722]: time="2025-09-16T04:59:12.118470121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 16 04:59:12.129510 containerd[1722]: time="2025-09-16T04:59:12.129164543Z" level=info msg="CreateContainer within sandbox \"92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 16 04:59:12.147273 containerd[1722]: time="2025-09-16T04:59:12.146047445Z" level=info msg="Container 6e8920778b396c49ee6eaac06f5bb5c7070c97e1eb31b1e809c498f29a30f43b: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:59:12.164211 containerd[1722]: time="2025-09-16T04:59:12.163898324Z" level=info msg="CreateContainer within sandbox \"92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6e8920778b396c49ee6eaac06f5bb5c7070c97e1eb31b1e809c498f29a30f43b\"" Sep 16 04:59:12.165469 containerd[1722]: time="2025-09-16T04:59:12.165198665Z" level=info msg="StartContainer for \"6e8920778b396c49ee6eaac06f5bb5c7070c97e1eb31b1e809c498f29a30f43b\"" Sep 16 04:59:12.166078 containerd[1722]: time="2025-09-16T04:59:12.166020949Z" level=info msg="connecting to shim 6e8920778b396c49ee6eaac06f5bb5c7070c97e1eb31b1e809c498f29a30f43b" address="unix:///run/containerd/s/3810e5d50e2914fce832296fc38f5a8f31bdcc4dea815aa5f3c898a99c2c5e17" protocol=ttrpc version=3 Sep 16 
04:59:12.192208 systemd[1]: Started cri-containerd-6e8920778b396c49ee6eaac06f5bb5c7070c97e1eb31b1e809c498f29a30f43b.scope - libcontainer container 6e8920778b396c49ee6eaac06f5bb5c7070c97e1eb31b1e809c498f29a30f43b. Sep 16 04:59:12.234272 containerd[1722]: time="2025-09-16T04:59:12.234241114Z" level=info msg="StartContainer for \"6e8920778b396c49ee6eaac06f5bb5c7070c97e1eb31b1e809c498f29a30f43b\" returns successfully" Sep 16 04:59:13.073238 systemd-networkd[1586]: cali1f638a513e4: Gained IPv6LL Sep 16 04:59:13.450953 kubelet[3170]: I0916 04:59:13.450703 3170 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 16 04:59:15.477983 containerd[1722]: time="2025-09-16T04:59:15.477943426Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:59:15.480364 containerd[1722]: time="2025-09-16T04:59:15.480252526Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 16 04:59:15.482924 containerd[1722]: time="2025-09-16T04:59:15.482902920Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:59:15.486352 containerd[1722]: time="2025-09-16T04:59:15.486309553Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:59:15.486774 containerd[1722]: time="2025-09-16T04:59:15.486634014Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.368134194s" Sep 16 04:59:15.486774 containerd[1722]: time="2025-09-16T04:59:15.486657239Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 16 04:59:15.488356 containerd[1722]: time="2025-09-16T04:59:15.488332805Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 16 04:59:15.506239 containerd[1722]: time="2025-09-16T04:59:15.506215067Z" level=info msg="CreateContainer within sandbox \"62c9f1a11c12e43c332da6c2579d640625c43ce2d87a54bb54d9ee7aaff21e4b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 16 04:59:15.525845 containerd[1722]: time="2025-09-16T04:59:15.525077499Z" level=info msg="Container 9043f02600f56cd7e22d8576992d0c07bd6d45d0b91389d4920be9f9f21af079: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:59:15.541950 containerd[1722]: time="2025-09-16T04:59:15.541927427Z" level=info msg="CreateContainer within sandbox \"62c9f1a11c12e43c332da6c2579d640625c43ce2d87a54bb54d9ee7aaff21e4b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"9043f02600f56cd7e22d8576992d0c07bd6d45d0b91389d4920be9f9f21af079\"" Sep 16 04:59:15.542517 containerd[1722]: time="2025-09-16T04:59:15.542495403Z" level=info msg="StartContainer for \"9043f02600f56cd7e22d8576992d0c07bd6d45d0b91389d4920be9f9f21af079\"" Sep 16 04:59:15.543691 
containerd[1722]: time="2025-09-16T04:59:15.543644448Z" level=info msg="connecting to shim 9043f02600f56cd7e22d8576992d0c07bd6d45d0b91389d4920be9f9f21af079" address="unix:///run/containerd/s/1537d277e5c5afc799372fbd04c4fb6c185dc13670c4bc83be842ad4be05e917" protocol=ttrpc version=3 Sep 16 04:59:15.566214 systemd[1]: Started cri-containerd-9043f02600f56cd7e22d8576992d0c07bd6d45d0b91389d4920be9f9f21af079.scope - libcontainer container 9043f02600f56cd7e22d8576992d0c07bd6d45d0b91389d4920be9f9f21af079. Sep 16 04:59:15.607834 containerd[1722]: time="2025-09-16T04:59:15.607743938Z" level=info msg="StartContainer for \"9043f02600f56cd7e22d8576992d0c07bd6d45d0b91389d4920be9f9f21af079\" returns successfully" Sep 16 04:59:16.471109 kubelet[3170]: I0916 04:59:16.470924 3170 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-696d587784-b5t7c" podStartSLOduration=30.909901382 podStartE2EDuration="35.470816178s" podCreationTimestamp="2025-09-16 04:58:41 +0000 UTC" firstStartedPulling="2025-09-16 04:59:07.557072542 +0000 UTC m=+42.349098556" lastFinishedPulling="2025-09-16 04:59:12.11798734 +0000 UTC m=+46.910013352" observedRunningTime="2025-09-16 04:59:12.464864942 +0000 UTC m=+47.256890966" watchObservedRunningTime="2025-09-16 04:59:16.470816178 +0000 UTC m=+51.262842209" Sep 16 04:59:16.471917 kubelet[3170]: I0916 04:59:16.471362 3170 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-797d689774-tnml7" podStartSLOduration=24.624528882 podStartE2EDuration="32.471348303s" podCreationTimestamp="2025-09-16 04:58:44 +0000 UTC" firstStartedPulling="2025-09-16 04:59:07.640578409 +0000 UTC m=+42.432604428" lastFinishedPulling="2025-09-16 04:59:15.487397835 +0000 UTC m=+50.279423849" observedRunningTime="2025-09-16 04:59:16.47072607 +0000 UTC m=+51.262752099" watchObservedRunningTime="2025-09-16 04:59:16.471348303 +0000 UTC m=+51.263374334" Sep 16 04:59:16.504577 containerd[1722]: time="2025-09-16T04:59:16.504405235Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9043f02600f56cd7e22d8576992d0c07bd6d45d0b91389d4920be9f9f21af079\" id:\"4d29b09fd846c17327ce2705a55601e3a229ca3d8682cb9bef82369b6e0e0a51\" pid:5442 exited_at:{seconds:1757998756 nanos:503377661}" Sep 16 04:59:16.748014 containerd[1722]: time="2025-09-16T04:59:16.747933498Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:59:16.751462 containerd[1722]: time="2025-09-16T04:59:16.751345499Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 16 04:59:16.754287 containerd[1722]: time="2025-09-16T04:59:16.754219045Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:59:16.760755 containerd[1722]: time="2025-09-16T04:59:16.760320062Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:59:16.760755 containerd[1722]: time="2025-09-16T04:59:16.760650785Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.272069113s" Sep 16 04:59:16.760755 containerd[1722]: time="2025-09-16T04:59:16.760680180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 16 04:59:16.761417 containerd[1722]: time="2025-09-16T04:59:16.761393196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 16 04:59:16.767718 containerd[1722]: time="2025-09-16T04:59:16.767695962Z" level=info msg="CreateContainer within sandbox \"41d174970394b87315e8c39d32cca548a86b68425381550e91270c5adc787a4a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 16 04:59:16.788004 containerd[1722]: time="2025-09-16T04:59:16.787979665Z" level=info msg="Container c993f2f23a908030918b59c0ab4737a74934d8ad8aa3cbdb44592dddf8e13de5: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:59:16.801801 containerd[1722]: time="2025-09-16T04:59:16.801777999Z" level=info msg="CreateContainer within sandbox \"41d174970394b87315e8c39d32cca548a86b68425381550e91270c5adc787a4a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c993f2f23a908030918b59c0ab4737a74934d8ad8aa3cbdb44592dddf8e13de5\"" Sep 16 04:59:16.803796 containerd[1722]: time="2025-09-16T04:59:16.802151238Z" level=info msg="StartContainer for \"c993f2f23a908030918b59c0ab4737a74934d8ad8aa3cbdb44592dddf8e13de5\"" Sep 16 04:59:16.803796 containerd[1722]: time="2025-09-16T04:59:16.803079759Z" level=info msg="connecting to shim c993f2f23a908030918b59c0ab4737a74934d8ad8aa3cbdb44592dddf8e13de5" address="unix:///run/containerd/s/3bce2d6cb65bde8f8304aa4c3e26baef6c43fc434fbcdd3ce7ea055b74786298" protocol=ttrpc version=3 Sep 16 04:59:16.821225 systemd[1]: Started cri-containerd-c993f2f23a908030918b59c0ab4737a74934d8ad8aa3cbdb44592dddf8e13de5.scope - libcontainer container c993f2f23a908030918b59c0ab4737a74934d8ad8aa3cbdb44592dddf8e13de5. 
Sep 16 04:59:16.849837 containerd[1722]: time="2025-09-16T04:59:16.849821421Z" level=info msg="StartContainer for \"c993f2f23a908030918b59c0ab4737a74934d8ad8aa3cbdb44592dddf8e13de5\" returns successfully" Sep 16 04:59:17.078846 containerd[1722]: time="2025-09-16T04:59:17.078425023Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:59:17.082146 containerd[1722]: time="2025-09-16T04:59:17.082128066Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 16 04:59:17.083329 containerd[1722]: time="2025-09-16T04:59:17.083306797Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 321.890386ms" Sep 16 04:59:17.083382 containerd[1722]: time="2025-09-16T04:59:17.083328589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 16 04:59:17.084371 containerd[1722]: time="2025-09-16T04:59:17.084328483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 16 04:59:17.089703 containerd[1722]: time="2025-09-16T04:59:17.089680537Z" level=info msg="CreateContainer within sandbox \"9170974f888d4295360c8efdc6d55c54469c56bf2cbb7f079538a5cd031e3878\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 16 04:59:17.110111 containerd[1722]: time="2025-09-16T04:59:17.108112509Z" level=info msg="Container 720f851bfa3282cc66ba6095833bb4b4eb11cddb6773fc1a219a8acea2a9a48a: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:59:17.127988 containerd[1722]: time="2025-09-16T04:59:17.127960924Z" level=info msg="CreateContainer within sandbox \"9170974f888d4295360c8efdc6d55c54469c56bf2cbb7f079538a5cd031e3878\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"720f851bfa3282cc66ba6095833bb4b4eb11cddb6773fc1a219a8acea2a9a48a\"" Sep 16 04:59:17.128652 containerd[1722]: time="2025-09-16T04:59:17.128469744Z" level=info msg="StartContainer for \"720f851bfa3282cc66ba6095833bb4b4eb11cddb6773fc1a219a8acea2a9a48a\"" Sep 16 04:59:17.132297 containerd[1722]: time="2025-09-16T04:59:17.132272708Z" level=info msg="connecting to shim 720f851bfa3282cc66ba6095833bb4b4eb11cddb6773fc1a219a8acea2a9a48a" address="unix:///run/containerd/s/043d215e6b9ba4f8d95aedc8f92adf1ed42c41d3a674a9d171ee814b8f1faacc" protocol=ttrpc version=3 Sep 16 04:59:17.158201 systemd[1]: Started cri-containerd-720f851bfa3282cc66ba6095833bb4b4eb11cddb6773fc1a219a8acea2a9a48a.scope - libcontainer container 720f851bfa3282cc66ba6095833bb4b4eb11cddb6773fc1a219a8acea2a9a48a. 
Sep 16 04:59:17.202520 containerd[1722]: time="2025-09-16T04:59:17.202450656Z" level=info msg="StartContainer for \"720f851bfa3282cc66ba6095833bb4b4eb11cddb6773fc1a219a8acea2a9a48a\" returns successfully" Sep 16 04:59:17.560905 containerd[1722]: time="2025-09-16T04:59:17.560876795Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:59:17.563584 containerd[1722]: time="2025-09-16T04:59:17.563442975Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 16 04:59:17.565063 containerd[1722]: time="2025-09-16T04:59:17.565039194Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 480.602231ms" Sep 16 04:59:17.565063 containerd[1722]: time="2025-09-16T04:59:17.565061885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 16 04:59:17.566205 containerd[1722]: time="2025-09-16T04:59:17.566181417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 16 04:59:17.572655 containerd[1722]: time="2025-09-16T04:59:17.572619344Z" level=info msg="CreateContainer within sandbox \"7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 16 04:59:17.589069 containerd[1722]: time="2025-09-16T04:59:17.589032625Z" level=info msg="Container f57ec9b75d67a42cc7d541dc8757ec1cf45ff1f7b2ee7fdec157a571ec3d03e4: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:59:17.604412 containerd[1722]: time="2025-09-16T04:59:17.604364565Z" level=info msg="CreateContainer within sandbox \"7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f57ec9b75d67a42cc7d541dc8757ec1cf45ff1f7b2ee7fdec157a571ec3d03e4\"" Sep 16 04:59:17.605705 containerd[1722]: time="2025-09-16T04:59:17.605599727Z" level=info msg="StartContainer for \"f57ec9b75d67a42cc7d541dc8757ec1cf45ff1f7b2ee7fdec157a571ec3d03e4\"" Sep 16 04:59:17.607537 containerd[1722]: time="2025-09-16T04:59:17.607513544Z" level=info msg="connecting to shim f57ec9b75d67a42cc7d541dc8757ec1cf45ff1f7b2ee7fdec157a571ec3d03e4" address="unix:///run/containerd/s/9dcec28d511465915212999c2c92914c3217883f133a82910e0774b831893f82" protocol=ttrpc version=3 Sep 16 04:59:17.619229 systemd[1]: Started cri-containerd-f57ec9b75d67a42cc7d541dc8757ec1cf45ff1f7b2ee7fdec157a571ec3d03e4.scope - libcontainer container f57ec9b75d67a42cc7d541dc8757ec1cf45ff1f7b2ee7fdec157a571ec3d03e4. 
Sep 16 04:59:17.664343 containerd[1722]: time="2025-09-16T04:59:17.664271570Z" level=info msg="StartContainer for \"f57ec9b75d67a42cc7d541dc8757ec1cf45ff1f7b2ee7fdec157a571ec3d03e4\" returns successfully" Sep 16 04:59:17.941305 kubelet[3170]: I0916 04:59:17.941208 3170 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-748bb69b95-qkbmd" podStartSLOduration=27.811721103 podStartE2EDuration="35.941190365s" podCreationTimestamp="2025-09-16 04:58:42 +0000 UTC" firstStartedPulling="2025-09-16 04:59:08.95441115 +0000 UTC m=+43.746437163" lastFinishedPulling="2025-09-16 04:59:17.083880413 +0000 UTC m=+51.875906425" observedRunningTime="2025-09-16 04:59:17.47716294 +0000 UTC m=+52.269188963" watchObservedRunningTime="2025-09-16 04:59:17.941190365 +0000 UTC m=+52.733216398" Sep 16 04:59:18.026070 systemd[1]: Created slice kubepods-besteffort-podf0fb2867_bdc4_4c21_96f1_59f522c61062.slice - libcontainer container kubepods-besteffort-podf0fb2867_bdc4_4c21_96f1_59f522c61062.slice. Sep 16 04:59:18.109353 kubelet[3170]: I0916 04:59:18.109327 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh8t4\" (UniqueName: \"kubernetes.io/projected/f0fb2867-bdc4-4c21-96f1-59f522c61062-kube-api-access-lh8t4\") pod \"calico-apiserver-748bb69b95-xdrbd\" (UID: \"f0fb2867-bdc4-4c21-96f1-59f522c61062\") " pod="calico-apiserver/calico-apiserver-748bb69b95-xdrbd" Sep 16 04:59:18.109444 kubelet[3170]: I0916 04:59:18.109364 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f0fb2867-bdc4-4c21-96f1-59f522c61062-calico-apiserver-certs\") pod \"calico-apiserver-748bb69b95-xdrbd\" (UID: \"f0fb2867-bdc4-4c21-96f1-59f522c61062\") " pod="calico-apiserver/calico-apiserver-748bb69b95-xdrbd" Sep 16 04:59:18.330684 containerd[1722]: time="2025-09-16T04:59:18.330650555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-748bb69b95-xdrbd,Uid:f0fb2867-bdc4-4c21-96f1-59f522c61062,Namespace:calico-apiserver,Attempt:0,}" Sep 16 04:59:18.447994 systemd-networkd[1586]: cali96f97e52306: Link UP Sep 16 04:59:18.450760 systemd-networkd[1586]: cali96f97e52306: Gained carrier Sep 16 04:59:18.470123 containerd[1722]: 2025-09-16 04:59:18.376 [INFO][5567] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--748bb69b95--xdrbd-eth0 calico-apiserver-748bb69b95- calico-apiserver f0fb2867-bdc4-4c21-96f1-59f522c61062 1056 0 2025-09-16 04:59:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:748bb69b95 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.0.0-n-140c1315ab calico-apiserver-748bb69b95-xdrbd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali96f97e52306 [] [] }} ContainerID="89390a56c9fd58ae741c3dfd0cfbe9190afa32e96a2cee78a1123e77f9f928fd" Namespace="calico-apiserver" Pod="calico-apiserver-748bb69b95-xdrbd" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--748bb69b95--xdrbd-" Sep 16 04:59:18.470123 containerd[1722]: 2025-09-16 04:59:18.376 [INFO][5567] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="89390a56c9fd58ae741c3dfd0cfbe9190afa32e96a2cee78a1123e77f9f928fd" Namespace="calico-apiserver" Pod="calico-apiserver-748bb69b95-xdrbd" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--748bb69b95--xdrbd-eth0" Sep 16 04:59:18.470123 containerd[1722]: 2025-09-16 04:59:18.401 [INFO][5575] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="89390a56c9fd58ae741c3dfd0cfbe9190afa32e96a2cee78a1123e77f9f928fd" HandleID="k8s-pod-network.89390a56c9fd58ae741c3dfd0cfbe9190afa32e96a2cee78a1123e77f9f928fd" Workload="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--748bb69b95--xdrbd-eth0" Sep 16 04:59:18.470123 containerd[1722]: 2025-09-16 04:59:18.401 [INFO][5575] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="89390a56c9fd58ae741c3dfd0cfbe9190afa32e96a2cee78a1123e77f9f928fd" HandleID="k8s-pod-network.89390a56c9fd58ae741c3dfd0cfbe9190afa32e96a2cee78a1123e77f9f928fd" Workload="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--748bb69b95--xdrbd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002adc50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.0.0-n-140c1315ab", "pod":"calico-apiserver-748bb69b95-xdrbd", "timestamp":"2025-09-16 04:59:18.401668171 +0000 UTC"}, Hostname:"ci-4459.0.0-n-140c1315ab", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:59:18.470123 containerd[1722]: 2025-09-16 04:59:18.401 [INFO][5575] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:59:18.470123 containerd[1722]: 2025-09-16 04:59:18.401 [INFO][5575] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 16 04:59:18.470123 containerd[1722]: 2025-09-16 04:59:18.401 [INFO][5575] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.0.0-n-140c1315ab' Sep 16 04:59:18.470123 containerd[1722]: 2025-09-16 04:59:18.406 [INFO][5575] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.89390a56c9fd58ae741c3dfd0cfbe9190afa32e96a2cee78a1123e77f9f928fd" host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:18.470123 containerd[1722]: 2025-09-16 04:59:18.409 [INFO][5575] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:18.470123 containerd[1722]: 2025-09-16 04:59:18.416 [INFO][5575] ipam/ipam.go 511: Trying affinity for 192.168.110.128/26 host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:18.470123 containerd[1722]: 2025-09-16 04:59:18.419 [INFO][5575] ipam/ipam.go 158: Attempting to load block cidr=192.168.110.128/26 host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:18.470123 containerd[1722]: 2025-09-16 04:59:18.424 [INFO][5575] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.110.128/26 host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:18.470123 containerd[1722]: 2025-09-16 04:59:18.424 [INFO][5575] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.110.128/26 handle="k8s-pod-network.89390a56c9fd58ae741c3dfd0cfbe9190afa32e96a2cee78a1123e77f9f928fd" host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:18.470123 containerd[1722]: 2025-09-16 04:59:18.425 [INFO][5575] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.89390a56c9fd58ae741c3dfd0cfbe9190afa32e96a2cee78a1123e77f9f928fd Sep 16 04:59:18.470123 containerd[1722]: 2025-09-16 04:59:18.429 [INFO][5575] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.110.128/26 handle="k8s-pod-network.89390a56c9fd58ae741c3dfd0cfbe9190afa32e96a2cee78a1123e77f9f928fd" host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:18.470123 containerd[1722]: 2025-09-16 04:59:18.441 [INFO][5575] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.110.138/26] block=192.168.110.128/26 handle="k8s-pod-network.89390a56c9fd58ae741c3dfd0cfbe9190afa32e96a2cee78a1123e77f9f928fd" host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:18.470123 containerd[1722]: 2025-09-16 04:59:18.441 [INFO][5575] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.110.138/26] handle="k8s-pod-network.89390a56c9fd58ae741c3dfd0cfbe9190afa32e96a2cee78a1123e77f9f928fd" host="ci-4459.0.0-n-140c1315ab" Sep 16 04:59:18.470123 containerd[1722]: 2025-09-16 04:59:18.441 [INFO][5575] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 16 04:59:18.470123 containerd[1722]: 2025-09-16 04:59:18.441 [INFO][5575] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.110.138/26] IPv6=[] ContainerID="89390a56c9fd58ae741c3dfd0cfbe9190afa32e96a2cee78a1123e77f9f928fd" HandleID="k8s-pod-network.89390a56c9fd58ae741c3dfd0cfbe9190afa32e96a2cee78a1123e77f9f928fd" Workload="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--748bb69b95--xdrbd-eth0" Sep 16 04:59:18.471751 containerd[1722]: 2025-09-16 04:59:18.443 [INFO][5567] cni-plugin/k8s.go 418: Populated endpoint ContainerID="89390a56c9fd58ae741c3dfd0cfbe9190afa32e96a2cee78a1123e77f9f928fd" Namespace="calico-apiserver" Pod="calico-apiserver-748bb69b95-xdrbd" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--748bb69b95--xdrbd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--748bb69b95--xdrbd-eth0", GenerateName:"calico-apiserver-748bb69b95-", Namespace:"calico-apiserver", SelfLink:"", UID:"f0fb2867-bdc4-4c21-96f1-59f522c61062", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 59, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"748bb69b95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-140c1315ab", ContainerID:"", Pod:"calico-apiserver-748bb69b95-xdrbd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali96f97e52306", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:59:18.471751 containerd[1722]: 2025-09-16 04:59:18.444 [INFO][5567] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.110.138/32] ContainerID="89390a56c9fd58ae741c3dfd0cfbe9190afa32e96a2cee78a1123e77f9f928fd" Namespace="calico-apiserver" Pod="calico-apiserver-748bb69b95-xdrbd" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--748bb69b95--xdrbd-eth0" Sep 16 04:59:18.471751 containerd[1722]: 2025-09-16 04:59:18.444 [INFO][5567] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali96f97e52306 ContainerID="89390a56c9fd58ae741c3dfd0cfbe9190afa32e96a2cee78a1123e77f9f928fd" Namespace="calico-apiserver" Pod="calico-apiserver-748bb69b95-xdrbd" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--748bb69b95--xdrbd-eth0" Sep 16 04:59:18.471751 containerd[1722]: 2025-09-16 04:59:18.451 [INFO][5567] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="89390a56c9fd58ae741c3dfd0cfbe9190afa32e96a2cee78a1123e77f9f928fd" Namespace="calico-apiserver" Pod="calico-apiserver-748bb69b95-xdrbd" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--748bb69b95--xdrbd-eth0" Sep 16 04:59:18.471751 containerd[1722]: 2025-09-16 04:59:18.453 
[INFO][5567] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="89390a56c9fd58ae741c3dfd0cfbe9190afa32e96a2cee78a1123e77f9f928fd" Namespace="calico-apiserver" Pod="calico-apiserver-748bb69b95-xdrbd" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--748bb69b95--xdrbd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--748bb69b95--xdrbd-eth0", GenerateName:"calico-apiserver-748bb69b95-", Namespace:"calico-apiserver", SelfLink:"", UID:"f0fb2867-bdc4-4c21-96f1-59f522c61062", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 59, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"748bb69b95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-140c1315ab", ContainerID:"89390a56c9fd58ae741c3dfd0cfbe9190afa32e96a2cee78a1123e77f9f928fd", Pod:"calico-apiserver-748bb69b95-xdrbd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali96f97e52306", MAC:"56:9e:a3:8e:6b:1c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:59:18.471751 containerd[1722]: 2025-09-16 04:59:18.464 [INFO][5567] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="89390a56c9fd58ae741c3dfd0cfbe9190afa32e96a2cee78a1123e77f9f928fd" Namespace="calico-apiserver" Pod="calico-apiserver-748bb69b95-xdrbd" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--748bb69b95--xdrbd-eth0" Sep 16 04:59:18.492815 kubelet[3170]: I0916 04:59:18.492763 3170 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-696d587784-76trh" podStartSLOduration=30.480892149 podStartE2EDuration="37.492747112s" podCreationTimestamp="2025-09-16 04:58:41 +0000 UTC" firstStartedPulling="2025-09-16 04:59:10.553859615 +0000 UTC m=+45.345885638" lastFinishedPulling="2025-09-16 04:59:17.565714579 +0000 UTC m=+52.357740601" observedRunningTime="2025-09-16 04:59:18.49235039 +0000 UTC m=+53.284376418" watchObservedRunningTime="2025-09-16 04:59:18.492747112 +0000 UTC m=+53.284773131" Sep 16 04:59:18.545283 containerd[1722]: time="2025-09-16T04:59:18.545201737Z" level=info msg="connecting to shim 89390a56c9fd58ae741c3dfd0cfbe9190afa32e96a2cee78a1123e77f9f928fd" address="unix:///run/containerd/s/eaee8dbd6d438dedb42d7c344af37117c4c3e6fb9716822e488820452a8aacc7" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:59:18.568564 systemd[1]: Started cri-containerd-89390a56c9fd58ae741c3dfd0cfbe9190afa32e96a2cee78a1123e77f9f928fd.scope - libcontainer container 89390a56c9fd58ae741c3dfd0cfbe9190afa32e96a2cee78a1123e77f9f928fd. 
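
The WorkloadEndpoint names in these records ("ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--748bb69b95--xdrbd-eth0") look mangled but are deterministic: every dash inside the node, pod, and interface components is doubled so that single dashes can serve unambiguously as field separators around the "k8s" orchestrator tag. A sketch of that convention, inferred from the names in this log rather than quoted from Calico's source:

```go
package main

import (
	"fmt"
	"strings"
)

// endpointName reproduces the WorkloadEndpoint naming seen in the log:
// dashes within each component are doubled, then the components are
// joined with single-dash separators. Hypothetical helper, for illustration.
func endpointName(node, pod, iface string) string {
	esc := func(s string) string { return strings.ReplaceAll(s, "-", "--") }
	return esc(node) + "-k8s-" + esc(pod) + "-" + esc(iface)
}

func main() {
	fmt.Println(endpointName("ci-4459.0.0-n-140c1315ab", "calico-apiserver-748bb69b95-xdrbd", "eth0"))
	// ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--748bb69b95--xdrbd-eth0
}
```
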
Sep 16 04:59:18.613059 containerd[1722]: time="2025-09-16T04:59:18.613014604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-748bb69b95-xdrbd,Uid:f0fb2867-bdc4-4c21-96f1-59f522c61062,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"89390a56c9fd58ae741c3dfd0cfbe9190afa32e96a2cee78a1123e77f9f928fd\"" Sep 16 04:59:18.621982 containerd[1722]: time="2025-09-16T04:59:18.621762659Z" level=info msg="CreateContainer within sandbox \"89390a56c9fd58ae741c3dfd0cfbe9190afa32e96a2cee78a1123e77f9f928fd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 16 04:59:18.646114 containerd[1722]: time="2025-09-16T04:59:18.645971346Z" level=info msg="Container 5aae7e11c366c51c5cbffb7219a585f442fc0ed77c313a25553a0cf0c761f4fa: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:59:18.665590 containerd[1722]: time="2025-09-16T04:59:18.665561906Z" level=info msg="CreateContainer within sandbox \"89390a56c9fd58ae741c3dfd0cfbe9190afa32e96a2cee78a1123e77f9f928fd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5aae7e11c366c51c5cbffb7219a585f442fc0ed77c313a25553a0cf0c761f4fa\"" Sep 16 04:59:18.666104 containerd[1722]: time="2025-09-16T04:59:18.666025021Z" level=info msg="StartContainer for \"5aae7e11c366c51c5cbffb7219a585f442fc0ed77c313a25553a0cf0c761f4fa\"" Sep 16 04:59:18.667330 containerd[1722]: time="2025-09-16T04:59:18.667297253Z" level=info msg="connecting to shim 5aae7e11c366c51c5cbffb7219a585f442fc0ed77c313a25553a0cf0c761f4fa" address="unix:///run/containerd/s/eaee8dbd6d438dedb42d7c344af37117c4c3e6fb9716822e488820452a8aacc7" protocol=ttrpc version=3 Sep 16 04:59:18.691262 systemd[1]: Started cri-containerd-5aae7e11c366c51c5cbffb7219a585f442fc0ed77c313a25553a0cf0c761f4fa.scope - libcontainer container 5aae7e11c366c51c5cbffb7219a585f442fc0ed77c313a25553a0cf0c761f4fa. Sep 16 04:59:18.849926 containerd[1722]: time="2025-09-16T04:59:18.849901256Z" level=info msg="StartContainer for \"5aae7e11c366c51c5cbffb7219a585f442fc0ed77c313a25553a0cf0c761f4fa\" returns successfully" Sep 16 04:59:19.476103 kubelet[3170]: I0916 04:59:19.476017 3170 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 16 04:59:19.477395 containerd[1722]: time="2025-09-16T04:59:19.477366372Z" level=info msg="StopContainer for \"f57ec9b75d67a42cc7d541dc8757ec1cf45ff1f7b2ee7fdec157a571ec3d03e4\" with timeout 30 (s)" Sep 16 04:59:19.478158 containerd[1722]: time="2025-09-16T04:59:19.478132330Z" level=info msg="Stop container \"f57ec9b75d67a42cc7d541dc8757ec1cf45ff1f7b2ee7fdec157a571ec3d03e4\" with signal terminated" Sep 16 04:59:19.492485 kubelet[3170]: I0916 04:59:19.492305 3170 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-748bb69b95-xdrbd" podStartSLOduration=2.492288801 podStartE2EDuration="2.492288801s" podCreationTimestamp="2025-09-16 04:59:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:59:19.49171747 +0000 UTC m=+54.283743494" watchObservedRunningTime="2025-09-16 04:59:19.492288801 +0000 UTC m=+54.284314832" Sep 16 04:59:19.497466 systemd[1]: cri-containerd-f57ec9b75d67a42cc7d541dc8757ec1cf45ff1f7b2ee7fdec157a571ec3d03e4.scope: Deactivated successfully. 
Sep 16 04:59:19.505705 containerd[1722]: time="2025-09-16T04:59:19.505676247Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f57ec9b75d67a42cc7d541dc8757ec1cf45ff1f7b2ee7fdec157a571ec3d03e4\" id:\"f57ec9b75d67a42cc7d541dc8757ec1cf45ff1f7b2ee7fdec157a571ec3d03e4\" pid:5538 exit_status:1 exited_at:{seconds:1757998759 nanos:504811457}" Sep 16 04:59:19.505786 containerd[1722]: time="2025-09-16T04:59:19.505730890Z" level=info msg="received exit event container_id:\"f57ec9b75d67a42cc7d541dc8757ec1cf45ff1f7b2ee7fdec157a571ec3d03e4\" id:\"f57ec9b75d67a42cc7d541dc8757ec1cf45ff1f7b2ee7fdec157a571ec3d03e4\" pid:5538 exit_status:1 exited_at:{seconds:1757998759 nanos:504811457}" Sep 16 04:59:19.530510 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f57ec9b75d67a42cc7d541dc8757ec1cf45ff1f7b2ee7fdec157a571ec3d03e4-rootfs.mount: Deactivated successfully. Sep 16 04:59:20.113244 systemd-networkd[1586]: cali96f97e52306: Gained IPv6LL Sep 16 04:59:20.262888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1829329560.mount: Deactivated successfully. Sep 16 04:59:20.477260 kubelet[3170]: I0916 04:59:20.477188 3170 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 16 04:59:22.814388 containerd[1722]: time="2025-09-16T04:59:22.814314565Z" level=info msg="StopContainer for \"f57ec9b75d67a42cc7d541dc8757ec1cf45ff1f7b2ee7fdec157a571ec3d03e4\" returns successfully" Sep 16 04:59:22.815557 containerd[1722]: time="2025-09-16T04:59:22.815314499Z" level=info msg="StopPodSandbox for \"7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860\"" Sep 16 04:59:22.815557 containerd[1722]: time="2025-09-16T04:59:22.815379691Z" level=info msg="Container to stop \"f57ec9b75d67a42cc7d541dc8757ec1cf45ff1f7b2ee7fdec157a571ec3d03e4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 04:59:22.827274 systemd[1]: cri-containerd-7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860.scope: Deactivated successfully. Sep 16 04:59:22.834160 containerd[1722]: time="2025-09-16T04:59:22.834013801Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860\" id:\"7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860\" pid:5228 exit_status:137 exited_at:{seconds:1757998762 nanos:833495504}" Sep 16 04:59:22.862272 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860-rootfs.mount: Deactivated successfully. 
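
The stop sequence above is the standard graceful pattern: "Stop container ... with signal terminated" delivers SIGTERM, and only if the 30-second timeout expires does the runtime escalate to SIGKILL. Exit statuses in TaskExit events follow the 128+signal convention, so the sandbox's exit_status:137 decodes as SIGKILL (128 + 9), while the apiserver container's exit_status:1 is an ordinary error exit. A sketch of both halves, assuming a bare PID rather than a CRI container handle:

```go
package main

import (
	"fmt"
	"syscall"
	"time"
)

// stopGracefully mirrors the runtime's stop flow: SIGTERM first, then
// SIGKILL if the process is still alive after the grace period.
func stopGracefully(pid int, grace time.Duration) error {
	if err := syscall.Kill(pid, syscall.SIGTERM); err != nil {
		return err
	}
	deadline := time.Now().Add(grace)
	for time.Now().Before(deadline) {
		// Signal 0 probes for existence without delivering anything.
		if err := syscall.Kill(pid, 0); err == syscall.ESRCH {
			return nil // exited within the grace period
		}
		time.Sleep(100 * time.Millisecond)
	}
	return syscall.Kill(pid, syscall.SIGKILL)
}

// decodeExitStatus applies the 128+N convention used in TaskExit events.
func decodeExitStatus(status int) string {
	if status > 128 {
		return fmt.Sprintf("killed by signal %d", status-128)
	}
	return fmt.Sprintf("exited with code %d", status)
}

func main() {
	fmt.Println(decodeExitStatus(137)) // killed by signal 9 (SIGKILL)
	fmt.Println(decodeExitStatus(1))   // exited with code 1
}
```
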
Sep 16 04:59:22.862867 containerd[1722]: time="2025-09-16T04:59:22.862731086Z" level=info msg="shim disconnected" id=7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860 namespace=k8s.io Sep 16 04:59:22.862867 containerd[1722]: time="2025-09-16T04:59:22.862752792Z" level=warning msg="cleaning up after shim disconnected" id=7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860 namespace=k8s.io Sep 16 04:59:22.862867 containerd[1722]: time="2025-09-16T04:59:22.862759708Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 16 04:59:22.863329 containerd[1722]: time="2025-09-16T04:59:22.863309725Z" level=info msg="received exit event sandbox_id:\"7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860\" exit_status:137 exited_at:{seconds:1757998762 nanos:833495504}" Sep 16 04:59:22.869773 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860-shm.mount: Deactivated successfully. Sep 16 04:59:22.933591 systemd-networkd[1586]: califc262d2b782: Link DOWN Sep 16 04:59:22.933597 systemd-networkd[1586]: califc262d2b782: Lost carrier Sep 16 04:59:23.029495 containerd[1722]: 2025-09-16 04:59:22.931 [INFO][5741] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" Sep 16 04:59:23.029495 containerd[1722]: 2025-09-16 04:59:22.931 [INFO][5741] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" iface="eth0" netns="/var/run/netns/cni-aecdeef0-a529-6b75-fa58-d924cee293bd" Sep 16 04:59:23.029495 containerd[1722]: 2025-09-16 04:59:22.932 [INFO][5741] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" iface="eth0" netns="/var/run/netns/cni-aecdeef0-a529-6b75-fa58-d924cee293bd" Sep 16 04:59:23.029495 containerd[1722]: 2025-09-16 04:59:22.943 [INFO][5741] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" after=11.692606ms iface="eth0" netns="/var/run/netns/cni-aecdeef0-a529-6b75-fa58-d924cee293bd" Sep 16 04:59:23.029495 containerd[1722]: 2025-09-16 04:59:22.943 [INFO][5741] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" Sep 16 04:59:23.029495 containerd[1722]: 2025-09-16 04:59:22.943 [INFO][5741] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" Sep 16 04:59:23.029495 containerd[1722]: 2025-09-16 04:59:22.982 [INFO][5758] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" HandleID="k8s-pod-network.7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" Workload="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--76trh-eth0" Sep 16 04:59:23.029495 containerd[1722]: 2025-09-16 04:59:22.982 [INFO][5758] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:59:23.029495 containerd[1722]: 2025-09-16 04:59:22.982 [INFO][5758] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
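
The teardown trace above ("Deleting workload's device in netns", "Entered netns, deleting veth", then "Deleted device in netns") is why systemd-networkd reports califc262d2b782 losing carrier: removing the pod-side end of a veth pair destroys the host-side end too. A hedged sketch of that step, assuming the github.com/vishvananda/netns and netlink packages (which the CNI plugin ecosystem commonly uses) rather than Calico's exact code:

```go
package main

import (
	"log"
	"runtime"

	"github.com/vishvananda/netlink"
	"github.com/vishvananda/netns"
)

// deletePodVeth enters a CNI-created network namespace and deletes the
// pod's interface, mirroring the "Entered netns, deleting veth" step.
func deletePodVeth(nsPath, ifName string) error {
	runtime.LockOSThread() // the active netns is a per-OS-thread property
	defer runtime.UnlockOSThread()

	origin, err := netns.Get()
	if err != nil {
		return err
	}
	defer origin.Close()
	defer netns.Set(origin) // always switch back to the host namespace

	target, err := netns.GetFromPath(nsPath)
	if err != nil {
		return err
	}
	defer target.Close()

	if err := netns.Set(target); err != nil {
		return err
	}
	link, err := netlink.LinkByName(ifName)
	if err != nil {
		return err
	}
	return netlink.LinkDel(link) // deleting one end tears down the whole veth pair
}

func main() {
	// Namespace path copied from the log above.
	if err := deletePodVeth("/var/run/netns/cni-aecdeef0-a529-6b75-fa58-d924cee293bd", "eth0"); err != nil {
		log.Fatal(err)
	}
}
```
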
Sep 16 04:59:23.029495 containerd[1722]: 2025-09-16 04:59:23.024 [INFO][5758] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" HandleID="k8s-pod-network.7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" Workload="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--76trh-eth0" Sep 16 04:59:23.029495 containerd[1722]: 2025-09-16 04:59:23.024 [INFO][5758] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" HandleID="k8s-pod-network.7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" Workload="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--76trh-eth0" Sep 16 04:59:23.029495 containerd[1722]: 2025-09-16 04:59:23.025 [INFO][5758] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 04:59:23.029495 containerd[1722]: 2025-09-16 04:59:23.027 [INFO][5741] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" Sep 16 04:59:23.030279 containerd[1722]: time="2025-09-16T04:59:23.030196719Z" level=info msg="TearDown network for sandbox \"7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860\" successfully" Sep 16 04:59:23.030279 containerd[1722]: time="2025-09-16T04:59:23.030218910Z" level=info msg="StopPodSandbox for \"7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860\" returns successfully" Sep 16 04:59:23.032574 systemd[1]: run-netns-cni\x2daecdeef0\x2da529\x2d6b75\x2dfa58\x2dd924cee293bd.mount: Deactivated successfully. Sep 16 04:59:23.140541 kubelet[3170]: I0916 04:59:23.140437 3170 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwjhc\" (UniqueName: \"kubernetes.io/projected/98dcb491-139f-48ae-8a04-0d45651d392d-kube-api-access-zwjhc\") pod \"98dcb491-139f-48ae-8a04-0d45651d392d\" (UID: \"98dcb491-139f-48ae-8a04-0d45651d392d\") " Sep 16 04:59:23.140541 kubelet[3170]: I0916 04:59:23.140484 3170 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/98dcb491-139f-48ae-8a04-0d45651d392d-calico-apiserver-certs\") pod \"98dcb491-139f-48ae-8a04-0d45651d392d\" (UID: \"98dcb491-139f-48ae-8a04-0d45651d392d\") " Sep 16 04:59:23.144271 kubelet[3170]: I0916 04:59:23.144191 3170 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98dcb491-139f-48ae-8a04-0d45651d392d-kube-api-access-zwjhc" (OuterVolumeSpecName: "kube-api-access-zwjhc") pod "98dcb491-139f-48ae-8a04-0d45651d392d" (UID: "98dcb491-139f-48ae-8a04-0d45651d392d"). InnerVolumeSpecName "kube-api-access-zwjhc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 16 04:59:23.144750 systemd[1]: var-lib-kubelet-pods-98dcb491\x2d139f\x2d48ae\x2d8a04\x2d0d45651d392d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzwjhc.mount: Deactivated successfully. Sep 16 04:59:23.147258 kubelet[3170]: I0916 04:59:23.147237 3170 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98dcb491-139f-48ae-8a04-0d45651d392d-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "98dcb491-139f-48ae-8a04-0d45651d392d" (UID: "98dcb491-139f-48ae-8a04-0d45651d392d"). InnerVolumeSpecName "calico-apiserver-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 16 04:59:23.148220 systemd[1]: var-lib-kubelet-pods-98dcb491\x2d139f\x2d48ae\x2d8a04\x2d0d45651d392d-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Sep 16 04:59:23.242745 kubelet[3170]: I0916 04:59:23.242722 3170 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zwjhc\" (UniqueName: \"kubernetes.io/projected/98dcb491-139f-48ae-8a04-0d45651d392d-kube-api-access-zwjhc\") on node \"ci-4459.0.0-n-140c1315ab\" DevicePath \"\"" Sep 16 04:59:23.242745 kubelet[3170]: I0916 04:59:23.242743 3170 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/98dcb491-139f-48ae-8a04-0d45651d392d-calico-apiserver-certs\") on node \"ci-4459.0.0-n-140c1315ab\" DevicePath \"\"" Sep 16 04:59:23.297516 systemd[1]: Removed slice kubepods-besteffort-pod98dcb491_139f_48ae_8a04_0d45651d392d.slice - libcontainer container kubepods-besteffort-pod98dcb491_139f_48ae_8a04_0d45651d392d.slice. Sep 16 04:59:23.501414 kubelet[3170]: I0916 04:59:23.482793 3170 scope.go:117] "RemoveContainer" containerID="f57ec9b75d67a42cc7d541dc8757ec1cf45ff1f7b2ee7fdec157a571ec3d03e4" Sep 16 04:59:23.501513 containerd[1722]: time="2025-09-16T04:59:23.486401108Z" level=info msg="RemoveContainer for \"f57ec9b75d67a42cc7d541dc8757ec1cf45ff1f7b2ee7fdec157a571ec3d03e4\"" Sep 16 04:59:23.760836 containerd[1722]: time="2025-09-16T04:59:23.760795807Z" level=info msg="RemoveContainer for \"f57ec9b75d67a42cc7d541dc8757ec1cf45ff1f7b2ee7fdec157a571ec3d03e4\" returns successfully" Sep 16 04:59:23.761159 kubelet[3170]: I0916 04:59:23.761129 3170 scope.go:117] "RemoveContainer" containerID="f57ec9b75d67a42cc7d541dc8757ec1cf45ff1f7b2ee7fdec157a571ec3d03e4" Sep 16 04:59:23.761390 containerd[1722]: time="2025-09-16T04:59:23.761365442Z" level=error msg="ContainerStatus for \"f57ec9b75d67a42cc7d541dc8757ec1cf45ff1f7b2ee7fdec157a571ec3d03e4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f57ec9b75d67a42cc7d541dc8757ec1cf45ff1f7b2ee7fdec157a571ec3d03e4\": not found" Sep 16 04:59:23.761488 kubelet[3170]: E0916 04:59:23.761472 3170 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f57ec9b75d67a42cc7d541dc8757ec1cf45ff1f7b2ee7fdec157a571ec3d03e4\": not found" containerID="f57ec9b75d67a42cc7d541dc8757ec1cf45ff1f7b2ee7fdec157a571ec3d03e4" Sep 16 04:59:23.761533 kubelet[3170]: I0916 04:59:23.761495 3170 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f57ec9b75d67a42cc7d541dc8757ec1cf45ff1f7b2ee7fdec157a571ec3d03e4"} err="failed to get container status \"f57ec9b75d67a42cc7d541dc8757ec1cf45ff1f7b2ee7fdec157a571ec3d03e4\": rpc error: code = NotFound desc = an error occurred when try to find container \"f57ec9b75d67a42cc7d541dc8757ec1cf45ff1f7b2ee7fdec157a571ec3d03e4\": not found" Sep 16 04:59:23.788267 containerd[1722]: time="2025-09-16T04:59:23.788240404Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:59:23.791290 containerd[1722]: time="2025-09-16T04:59:23.791190408Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 16 04:59:23.793917 containerd[1722]: time="2025-09-16T04:59:23.793895446Z" level=info msg="ImageCreate 
event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:59:23.797451 containerd[1722]: time="2025-09-16T04:59:23.797413339Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:59:23.797837 containerd[1722]: time="2025-09-16T04:59:23.797763759Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 6.231550938s" Sep 16 04:59:23.797837 containerd[1722]: time="2025-09-16T04:59:23.797788907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 16 04:59:23.798583 containerd[1722]: time="2025-09-16T04:59:23.798565983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 16 04:59:23.804460 containerd[1722]: time="2025-09-16T04:59:23.804435723Z" level=info msg="CreateContainer within sandbox \"6c6bc510f87957809b94fc466410944fd34f8d79bcc4c3aa8e1614b79cdabba0\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 16 04:59:23.817006 containerd[1722]: time="2025-09-16T04:59:23.816984838Z" level=info msg="Container 66dd65ab3b3dc79b0289e4825220ad7a4b3a80c344f5b7a1404bc0c69d92f53b: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:59:23.833389 containerd[1722]: time="2025-09-16T04:59:23.833350452Z" level=info msg="CreateContainer within sandbox \"6c6bc510f87957809b94fc466410944fd34f8d79bcc4c3aa8e1614b79cdabba0\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"66dd65ab3b3dc79b0289e4825220ad7a4b3a80c344f5b7a1404bc0c69d92f53b\"" Sep 16 04:59:23.834006 containerd[1722]: time="2025-09-16T04:59:23.833932279Z" level=info msg="StartContainer for \"66dd65ab3b3dc79b0289e4825220ad7a4b3a80c344f5b7a1404bc0c69d92f53b\"" Sep 16 04:59:23.835113 containerd[1722]: time="2025-09-16T04:59:23.835067106Z" level=info msg="connecting to shim 66dd65ab3b3dc79b0289e4825220ad7a4b3a80c344f5b7a1404bc0c69d92f53b" address="unix:///run/containerd/s/8a85987d3c2192b19e063ed65e26ac5dd59e238812c723682d2ba78fa0c74e89" protocol=ttrpc version=3 Sep 16 04:59:23.850223 systemd[1]: Started cri-containerd-66dd65ab3b3dc79b0289e4825220ad7a4b3a80c344f5b7a1404bc0c69d92f53b.scope - libcontainer container 66dd65ab3b3dc79b0289e4825220ad7a4b3a80c344f5b7a1404bc0c69d92f53b. 
Sep 16 04:59:23.893374 containerd[1722]: time="2025-09-16T04:59:23.893307231Z" level=info msg="StartContainer for \"66dd65ab3b3dc79b0289e4825220ad7a4b3a80c344f5b7a1404bc0c69d92f53b\" returns successfully" Sep 16 04:59:24.501916 kubelet[3170]: I0916 04:59:24.501617 3170 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-bn6xl" podStartSLOduration=29.361477075 podStartE2EDuration="41.501602022s" podCreationTimestamp="2025-09-16 04:58:43 +0000 UTC" firstStartedPulling="2025-09-16 04:59:11.658345294 +0000 UTC m=+46.450371319" lastFinishedPulling="2025-09-16 04:59:23.798470253 +0000 UTC m=+58.590496266" observedRunningTime="2025-09-16 04:59:24.500369773 +0000 UTC m=+59.292395803" watchObservedRunningTime="2025-09-16 04:59:24.501602022 +0000 UTC m=+59.293628046" Sep 16 04:59:24.553118 containerd[1722]: time="2025-09-16T04:59:24.553078215Z" level=info msg="TaskExit event in podsandbox handler container_id:\"66dd65ab3b3dc79b0289e4825220ad7a4b3a80c344f5b7a1404bc0c69d92f53b\" id:\"5a21706506b8bd11f96b5f933263e5efffaf9ebce6294e6c77971089784279d1\" pid:5820 exit_status:1 exited_at:{seconds:1757998764 nanos:552730061}" Sep 16 04:59:25.283056 containerd[1722]: time="2025-09-16T04:59:25.283018192Z" level=info msg="StopPodSandbox for \"7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860\"" Sep 16 04:59:25.289457 kubelet[3170]: I0916 04:59:25.289429 3170 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98dcb491-139f-48ae-8a04-0d45651d392d" path="/var/lib/kubelet/pods/98dcb491-139f-48ae-8a04-0d45651d392d/volumes" Sep 16 04:59:25.342813 containerd[1722]: 2025-09-16 04:59:25.317 [WARNING][5842] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--76trh-eth0" Sep 16 04:59:25.342813 containerd[1722]: 2025-09-16 04:59:25.317 [INFO][5842] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" Sep 16 04:59:25.342813 containerd[1722]: 2025-09-16 04:59:25.317 [INFO][5842] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" iface="eth0" netns="" Sep 16 04:59:25.342813 containerd[1722]: 2025-09-16 04:59:25.317 [INFO][5842] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" Sep 16 04:59:25.342813 containerd[1722]: 2025-09-16 04:59:25.317 [INFO][5842] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" Sep 16 04:59:25.342813 containerd[1722]: 2025-09-16 04:59:25.334 [INFO][5850] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" HandleID="k8s-pod-network.7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" Workload="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--76trh-eth0" Sep 16 04:59:25.342813 containerd[1722]: 2025-09-16 04:59:25.334 [INFO][5850] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:59:25.342813 containerd[1722]: 2025-09-16 04:59:25.334 [INFO][5850] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
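The "Observed pod startup duration" entry at 04:59:24.501 above reports two numbers for goldmane-54d579b49d-bn6xl: podStartE2EDuration (41.501602022s, pod creation to observed running) and podStartSLOduration (29.361477075s), which matches the same interval with the image-pull window (firstStartedPulling to lastFinishedPulling) subtracted. A rough check in Python using the timestamps from that entry; the parser trims the trailing " UTC" and the sub-microsecond digits that datetime cannot hold:

    from datetime import datetime

    def parse_k8s_time(s):
        s = s.rsplit(" ", 1)[0]               # drop trailing "UTC"
        date, clock, tz = s.split(" ")
        if "." in clock:
            hms, frac = clock.split(".")
            clock = hms + "." + frac[:6]      # datetime holds microseconds only
            fmt = "%Y-%m-%d %H:%M:%S.%f %z"
        else:
            fmt = "%Y-%m-%d %H:%M:%S %z"
        return datetime.strptime(f"{date} {clock} {tz}", fmt)

    created  = parse_k8s_time("2025-09-16 04:58:43 +0000 UTC")
    running  = parse_k8s_time("2025-09-16 04:59:24.501602022 +0000 UTC")
    pull_beg = parse_k8s_time("2025-09-16 04:59:11.658345294 +0000 UTC")
    pull_end = parse_k8s_time("2025-09-16 04:59:23.798470253 +0000 UTC")

    e2e = (running - created).total_seconds()          # 41.501602
    slo = e2e - (pull_end - pull_beg).total_seconds()  # ~29.361477
    print(e2e, slo)

The last few digits differ from kubelet's own figure because kubelet works from its monotonic clock (the m=+... offsets in the entry) rather than wall-clock subtraction.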
Sep 16 04:59:25.342813 containerd[1722]: 2025-09-16 04:59:25.338 [WARNING][5850] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" HandleID="k8s-pod-network.7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" Workload="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--76trh-eth0" Sep 16 04:59:25.342813 containerd[1722]: 2025-09-16 04:59:25.339 [INFO][5850] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" HandleID="k8s-pod-network.7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" Workload="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--76trh-eth0" Sep 16 04:59:25.342813 containerd[1722]: 2025-09-16 04:59:25.341 [INFO][5850] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 04:59:25.342813 containerd[1722]: 2025-09-16 04:59:25.342 [INFO][5842] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" Sep 16 04:59:25.343127 containerd[1722]: time="2025-09-16T04:59:25.343083164Z" level=info msg="TearDown network for sandbox \"7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860\" successfully" Sep 16 04:59:25.343127 containerd[1722]: time="2025-09-16T04:59:25.343124429Z" level=info msg="StopPodSandbox for \"7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860\" returns successfully" Sep 16 04:59:25.343593 containerd[1722]: time="2025-09-16T04:59:25.343575971Z" level=info msg="RemovePodSandbox for \"7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860\"" Sep 16 04:59:25.343647 containerd[1722]: time="2025-09-16T04:59:25.343597282Z" level=info msg="Forcibly stopping sandbox \"7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860\"" Sep 16 04:59:25.407545 containerd[1722]: 2025-09-16 04:59:25.377 [WARNING][5864] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--76trh-eth0" Sep 16 04:59:25.407545 containerd[1722]: 2025-09-16 04:59:25.377 [INFO][5864] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" Sep 16 04:59:25.407545 containerd[1722]: 2025-09-16 04:59:25.377 [INFO][5864] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" iface="eth0" netns="" Sep 16 04:59:25.407545 containerd[1722]: 2025-09-16 04:59:25.377 [INFO][5864] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" Sep 16 04:59:25.407545 containerd[1722]: 2025-09-16 04:59:25.377 [INFO][5864] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" Sep 16 04:59:25.407545 containerd[1722]: 2025-09-16 04:59:25.398 [INFO][5875] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" HandleID="k8s-pod-network.7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" Workload="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--76trh-eth0" Sep 16 04:59:25.407545 containerd[1722]: 2025-09-16 04:59:25.398 [INFO][5875] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:59:25.407545 containerd[1722]: 2025-09-16 04:59:25.398 [INFO][5875] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 04:59:25.407545 containerd[1722]: 2025-09-16 04:59:25.403 [WARNING][5875] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" HandleID="k8s-pod-network.7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" Workload="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--76trh-eth0" Sep 16 04:59:25.407545 containerd[1722]: 2025-09-16 04:59:25.403 [INFO][5875] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" HandleID="k8s-pod-network.7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" Workload="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--76trh-eth0" Sep 16 04:59:25.407545 containerd[1722]: 2025-09-16 04:59:25.405 [INFO][5875] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 04:59:25.407545 containerd[1722]: 2025-09-16 04:59:25.406 [INFO][5864] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860" Sep 16 04:59:25.407826 containerd[1722]: time="2025-09-16T04:59:25.407544775Z" level=info msg="TearDown network for sandbox \"7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860\" successfully" Sep 16 04:59:25.412499 containerd[1722]: time="2025-09-16T04:59:25.412175567Z" level=info msg="Ensure that sandbox 7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860 in task-service has been cleanup successfully" Sep 16 04:59:25.422845 containerd[1722]: time="2025-09-16T04:59:25.422823985Z" level=info msg="RemovePodSandbox \"7a61fdc32c58f20217730afeb565dd4f40ba40b091599735b415c728f71bd860\" returns successfully" Sep 16 04:59:25.563296 containerd[1722]: time="2025-09-16T04:59:25.563224456Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:59:25.565722 containerd[1722]: time="2025-09-16T04:59:25.565691409Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 16 04:59:25.568467 containerd[1722]: time="2025-09-16T04:59:25.568426712Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:59:25.569380 containerd[1722]: time="2025-09-16T04:59:25.569359794Z" level=info msg="TaskExit event in podsandbox handler container_id:\"66dd65ab3b3dc79b0289e4825220ad7a4b3a80c344f5b7a1404bc0c69d92f53b\" id:\"6386dd7e99509dfda269b4c72c20e62ecfdbc66287e4c52ebe31074f8d5003c6\" pid:5895 exit_status:1 exited_at:{seconds:1757998765 nanos:569094846}" Sep 16 04:59:25.572873 containerd[1722]: time="2025-09-16T04:59:25.572829111Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:59:25.573539 containerd[1722]: time="2025-09-16T04:59:25.573242500Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 1.774546086s" Sep 16 04:59:25.573539 containerd[1722]: time="2025-09-16T04:59:25.573268792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 16 04:59:25.580263 containerd[1722]: time="2025-09-16T04:59:25.580239488Z" level=info msg="CreateContainer within sandbox \"41d174970394b87315e8c39d32cca548a86b68425381550e91270c5adc787a4a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 16 04:59:25.592933 containerd[1722]: time="2025-09-16T04:59:25.592070601Z" level=info msg="Container cd4735d9c7347eb8182e1b26c9050b089d4964152bf23d9ac8c94de6f7fad1d5: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:59:25.609411 containerd[1722]: time="2025-09-16T04:59:25.609390247Z" level=info msg="CreateContainer within sandbox \"41d174970394b87315e8c39d32cca548a86b68425381550e91270c5adc787a4a\" for 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"cd4735d9c7347eb8182e1b26c9050b089d4964152bf23d9ac8c94de6f7fad1d5\"" Sep 16 04:59:25.610034 containerd[1722]: time="2025-09-16T04:59:25.609884399Z" level=info msg="StartContainer for \"cd4735d9c7347eb8182e1b26c9050b089d4964152bf23d9ac8c94de6f7fad1d5\"" Sep 16 04:59:25.611066 containerd[1722]: time="2025-09-16T04:59:25.611044719Z" level=info msg="connecting to shim cd4735d9c7347eb8182e1b26c9050b089d4964152bf23d9ac8c94de6f7fad1d5" address="unix:///run/containerd/s/3bce2d6cb65bde8f8304aa4c3e26baef6c43fc434fbcdd3ce7ea055b74786298" protocol=ttrpc version=3 Sep 16 04:59:25.630240 systemd[1]: Started cri-containerd-cd4735d9c7347eb8182e1b26c9050b089d4964152bf23d9ac8c94de6f7fad1d5.scope - libcontainer container cd4735d9c7347eb8182e1b26c9050b089d4964152bf23d9ac8c94de6f7fad1d5. Sep 16 04:59:25.664313 containerd[1722]: time="2025-09-16T04:59:25.664293807Z" level=info msg="StartContainer for \"cd4735d9c7347eb8182e1b26c9050b089d4964152bf23d9ac8c94de6f7fad1d5\" returns successfully" Sep 16 04:59:26.375212 kubelet[3170]: I0916 04:59:26.375194 3170 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 16 04:59:26.375486 kubelet[3170]: I0916 04:59:26.375223 3170 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 16 04:59:30.273130 kubelet[3170]: I0916 04:59:30.272345 3170 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 16 04:59:30.291042 kubelet[3170]: I0916 04:59:30.290746 3170 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-qlxsc" podStartSLOduration=29.534396466 podStartE2EDuration="46.290731532s" podCreationTimestamp="2025-09-16 04:58:44 +0000 UTC" firstStartedPulling="2025-09-16 04:59:08.817454587 +0000 UTC m=+43.609480607" lastFinishedPulling="2025-09-16 04:59:25.573789654 +0000 UTC m=+60.365815673" observedRunningTime="2025-09-16 04:59:26.516783425 +0000 UTC m=+61.308809451" watchObservedRunningTime="2025-09-16 04:59:30.290731532 +0000 UTC m=+65.082757559" Sep 16 04:59:34.455790 containerd[1722]: time="2025-09-16T04:59:34.455749916Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c78dfc1c271743ceaae8bb2c862d69212d63330cc30efe1f610008a8f7fe8a1d\" id:\"7f7236d8fe2dbec33051bd9b8ab969e4ec91e13f5d2c9094bcb12dddbf8fed2c\" pid:5967 exited_at:{seconds:1757998774 nanos:455539506}" Sep 16 04:59:34.908452 kubelet[3170]: I0916 04:59:34.908426 3170 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 16 04:59:34.958684 containerd[1722]: time="2025-09-16T04:59:34.958643441Z" level=info msg="StopContainer for \"6e8920778b396c49ee6eaac06f5bb5c7070c97e1eb31b1e809c498f29a30f43b\" with timeout 30 (s)" Sep 16 04:59:34.959610 containerd[1722]: time="2025-09-16T04:59:34.959442246Z" level=info msg="Stop container \"6e8920778b396c49ee6eaac06f5bb5c7070c97e1eb31b1e809c498f29a30f43b\" with signal terminated" Sep 16 04:59:34.983925 systemd[1]: cri-containerd-6e8920778b396c49ee6eaac06f5bb5c7070c97e1eb31b1e809c498f29a30f43b.scope: Deactivated successfully. 
Sep 16 04:59:34.988773 containerd[1722]: time="2025-09-16T04:59:34.988742487Z" level=info msg="received exit event container_id:\"6e8920778b396c49ee6eaac06f5bb5c7070c97e1eb31b1e809c498f29a30f43b\" id:\"6e8920778b396c49ee6eaac06f5bb5c7070c97e1eb31b1e809c498f29a30f43b\" pid:5350 exit_status:1 exited_at:{seconds:1757998774 nanos:988314011}" Sep 16 04:59:34.988925 containerd[1722]: time="2025-09-16T04:59:34.988758206Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6e8920778b396c49ee6eaac06f5bb5c7070c97e1eb31b1e809c498f29a30f43b\" id:\"6e8920778b396c49ee6eaac06f5bb5c7070c97e1eb31b1e809c498f29a30f43b\" pid:5350 exit_status:1 exited_at:{seconds:1757998774 nanos:988314011}" Sep 16 04:59:35.010330 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e8920778b396c49ee6eaac06f5bb5c7070c97e1eb31b1e809c498f29a30f43b-rootfs.mount: Deactivated successfully. Sep 16 04:59:35.224526 containerd[1722]: time="2025-09-16T04:59:35.224460607Z" level=info msg="StopContainer for \"6e8920778b396c49ee6eaac06f5bb5c7070c97e1eb31b1e809c498f29a30f43b\" returns successfully" Sep 16 04:59:35.224942 containerd[1722]: time="2025-09-16T04:59:35.224906122Z" level=info msg="StopPodSandbox for \"92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8\"" Sep 16 04:59:35.224993 containerd[1722]: time="2025-09-16T04:59:35.224980651Z" level=info msg="Container to stop \"6e8920778b396c49ee6eaac06f5bb5c7070c97e1eb31b1e809c498f29a30f43b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 04:59:35.230602 systemd[1]: cri-containerd-92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8.scope: Deactivated successfully. Sep 16 04:59:35.232061 containerd[1722]: time="2025-09-16T04:59:35.231978204Z" level=info msg="TaskExit event in podsandbox handler container_id:\"92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8\" id:\"92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8\" pid:4684 exit_status:137 exited_at:{seconds:1757998775 nanos:230913536}" Sep 16 04:59:35.254305 containerd[1722]: time="2025-09-16T04:59:35.254268947Z" level=info msg="received exit event sandbox_id:\"92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8\" exit_status:137 exited_at:{seconds:1757998775 nanos:230913536}" Sep 16 04:59:35.254981 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8-rootfs.mount: Deactivated successfully. Sep 16 04:59:35.255469 containerd[1722]: time="2025-09-16T04:59:35.255320943Z" level=info msg="shim disconnected" id=92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8 namespace=k8s.io Sep 16 04:59:35.257951 containerd[1722]: time="2025-09-16T04:59:35.256577584Z" level=warning msg="cleaning up after shim disconnected" id=92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8 namespace=k8s.io Sep 16 04:59:35.258061 containerd[1722]: time="2025-09-16T04:59:35.258027700Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 16 04:59:35.259879 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8-shm.mount: Deactivated successfully. 
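Both exit events above carry exit_status:137, consistent with termination by SIGKILL (128 + 9), the usual outcome when a container does not exit on its own within the grace period after the SIGTERM sent by "Stop container ... with signal terminated". The exited_at field is a Unix-epoch seconds/nanos pair; a small conversion sketch:

    from datetime import datetime, timezone

    def exited_at(seconds: int, nanos: int) -> datetime:
        # containerd records exited_at as epoch seconds plus a nanosecond
        # remainder; float conversion is approximate at nanosecond precision
        return datetime.fromtimestamp(seconds + nanos / 1e9, tz=timezone.utc)

    print(exited_at(1757998775, 230913536))  # 2025-09-16 04:59:35.230913+00:00
    status = 137
    if status > 128:
        print(f"terminated by signal {status - 128}")  # signal 9 = SIGKILL

The converted time agrees with the 04:59:35.23 journal timestamps of the surrounding entries.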
Sep 16 04:59:35.300253 systemd-networkd[1586]: calie7f2ba957cc: Link DOWN Sep 16 04:59:35.300257 systemd-networkd[1586]: calie7f2ba957cc: Lost carrier Sep 16 04:59:35.367824 containerd[1722]: 2025-09-16 04:59:35.297 [INFO][6049] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" Sep 16 04:59:35.367824 containerd[1722]: 2025-09-16 04:59:35.297 [INFO][6049] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" iface="eth0" netns="/var/run/netns/cni-9d7a6979-0599-07b5-0d16-bbfeb41b2bd3" Sep 16 04:59:35.367824 containerd[1722]: 2025-09-16 04:59:35.298 [INFO][6049] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" iface="eth0" netns="/var/run/netns/cni-9d7a6979-0599-07b5-0d16-bbfeb41b2bd3" Sep 16 04:59:35.367824 containerd[1722]: 2025-09-16 04:59:35.306 [INFO][6049] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" after=8.245177ms iface="eth0" netns="/var/run/netns/cni-9d7a6979-0599-07b5-0d16-bbfeb41b2bd3" Sep 16 04:59:35.367824 containerd[1722]: 2025-09-16 04:59:35.306 [INFO][6049] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" Sep 16 04:59:35.367824 containerd[1722]: 2025-09-16 04:59:35.306 [INFO][6049] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" Sep 16 04:59:35.367824 containerd[1722]: 2025-09-16 04:59:35.330 [INFO][6064] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" HandleID="k8s-pod-network.92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" Workload="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--b5t7c-eth0" Sep 16 04:59:35.367824 containerd[1722]: 2025-09-16 04:59:35.330 [INFO][6064] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:59:35.367824 containerd[1722]: 2025-09-16 04:59:35.330 [INFO][6064] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 04:59:35.367824 containerd[1722]: 2025-09-16 04:59:35.362 [INFO][6064] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" HandleID="k8s-pod-network.92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" Workload="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--b5t7c-eth0" Sep 16 04:59:35.367824 containerd[1722]: 2025-09-16 04:59:35.362 [INFO][6064] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" HandleID="k8s-pod-network.92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" Workload="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--b5t7c-eth0" Sep 16 04:59:35.367824 containerd[1722]: 2025-09-16 04:59:35.363 [INFO][6064] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 04:59:35.367824 containerd[1722]: 2025-09-16 04:59:35.364 [INFO][6049] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" Sep 16 04:59:35.368272 systemd[1]: run-netns-cni\x2d9d7a6979\x2d0599\x2d07b5\x2d0d16\x2dbbfeb41b2bd3.mount: Deactivated successfully. Sep 16 04:59:35.368462 containerd[1722]: time="2025-09-16T04:59:35.368405903Z" level=info msg="TearDown network for sandbox \"92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8\" successfully" Sep 16 04:59:35.368462 containerd[1722]: time="2025-09-16T04:59:35.368426304Z" level=info msg="StopPodSandbox for \"92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8\" returns successfully" Sep 16 04:59:35.404496 kubelet[3170]: I0916 04:59:35.404402 3170 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwxqc\" (UniqueName: \"kubernetes.io/projected/4b74af3f-b2c3-4c95-9de8-d14cd49c421e-kube-api-access-dwxqc\") pod \"4b74af3f-b2c3-4c95-9de8-d14cd49c421e\" (UID: \"4b74af3f-b2c3-4c95-9de8-d14cd49c421e\") " Sep 16 04:59:35.404496 kubelet[3170]: I0916 04:59:35.404451 3170 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4b74af3f-b2c3-4c95-9de8-d14cd49c421e-calico-apiserver-certs\") pod \"4b74af3f-b2c3-4c95-9de8-d14cd49c421e\" (UID: \"4b74af3f-b2c3-4c95-9de8-d14cd49c421e\") " Sep 16 04:59:35.407964 kubelet[3170]: I0916 04:59:35.407930 3170 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b74af3f-b2c3-4c95-9de8-d14cd49c421e-kube-api-access-dwxqc" (OuterVolumeSpecName: "kube-api-access-dwxqc") pod "4b74af3f-b2c3-4c95-9de8-d14cd49c421e" (UID: "4b74af3f-b2c3-4c95-9de8-d14cd49c421e"). InnerVolumeSpecName "kube-api-access-dwxqc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 16 04:59:35.409111 kubelet[3170]: I0916 04:59:35.408076 3170 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b74af3f-b2c3-4c95-9de8-d14cd49c421e-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "4b74af3f-b2c3-4c95-9de8-d14cd49c421e" (UID: "4b74af3f-b2c3-4c95-9de8-d14cd49c421e"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 16 04:59:35.409171 systemd[1]: var-lib-kubelet-pods-4b74af3f\x2db2c3\x2d4c95\x2d9de8\x2dd14cd49c421e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddwxqc.mount: Deactivated successfully. Sep 16 04:59:35.409250 systemd[1]: var-lib-kubelet-pods-4b74af3f\x2db2c3\x2d4c95\x2d9de8\x2dd14cd49c421e-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. 
Sep 16 04:59:35.505329 kubelet[3170]: I0916 04:59:35.505257 3170 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dwxqc\" (UniqueName: \"kubernetes.io/projected/4b74af3f-b2c3-4c95-9de8-d14cd49c421e-kube-api-access-dwxqc\") on node \"ci-4459.0.0-n-140c1315ab\" DevicePath \"\"" Sep 16 04:59:35.505329 kubelet[3170]: I0916 04:59:35.505286 3170 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4b74af3f-b2c3-4c95-9de8-d14cd49c421e-calico-apiserver-certs\") on node \"ci-4459.0.0-n-140c1315ab\" DevicePath \"\"" Sep 16 04:59:35.511110 kubelet[3170]: I0916 04:59:35.510931 3170 scope.go:117] "RemoveContainer" containerID="6e8920778b396c49ee6eaac06f5bb5c7070c97e1eb31b1e809c498f29a30f43b" Sep 16 04:59:35.513007 containerd[1722]: time="2025-09-16T04:59:35.512989047Z" level=info msg="RemoveContainer for \"6e8920778b396c49ee6eaac06f5bb5c7070c97e1eb31b1e809c498f29a30f43b\"" Sep 16 04:59:35.516692 systemd[1]: Removed slice kubepods-besteffort-pod4b74af3f_b2c3_4c95_9de8_d14cd49c421e.slice - libcontainer container kubepods-besteffort-pod4b74af3f_b2c3_4c95_9de8_d14cd49c421e.slice. Sep 16 04:59:35.521557 containerd[1722]: time="2025-09-16T04:59:35.521524286Z" level=info msg="RemoveContainer for \"6e8920778b396c49ee6eaac06f5bb5c7070c97e1eb31b1e809c498f29a30f43b\" returns successfully" Sep 16 04:59:35.521790 kubelet[3170]: I0916 04:59:35.521687 3170 scope.go:117] "RemoveContainer" containerID="6e8920778b396c49ee6eaac06f5bb5c7070c97e1eb31b1e809c498f29a30f43b" Sep 16 04:59:35.521863 containerd[1722]: time="2025-09-16T04:59:35.521835672Z" level=error msg="ContainerStatus for \"6e8920778b396c49ee6eaac06f5bb5c7070c97e1eb31b1e809c498f29a30f43b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6e8920778b396c49ee6eaac06f5bb5c7070c97e1eb31b1e809c498f29a30f43b\": not found" Sep 16 04:59:35.521967 kubelet[3170]: E0916 04:59:35.521944 3170 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6e8920778b396c49ee6eaac06f5bb5c7070c97e1eb31b1e809c498f29a30f43b\": not found" containerID="6e8920778b396c49ee6eaac06f5bb5c7070c97e1eb31b1e809c498f29a30f43b" Sep 16 04:59:35.522020 kubelet[3170]: I0916 04:59:35.521966 3170 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6e8920778b396c49ee6eaac06f5bb5c7070c97e1eb31b1e809c498f29a30f43b"} err="failed to get container status \"6e8920778b396c49ee6eaac06f5bb5c7070c97e1eb31b1e809c498f29a30f43b\": rpc error: code = NotFound desc = an error occurred when try to find container \"6e8920778b396c49ee6eaac06f5bb5c7070c97e1eb31b1e809c498f29a30f43b\": not found" Sep 16 04:59:37.288117 kubelet[3170]: I0916 04:59:37.288074 3170 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b74af3f-b2c3-4c95-9de8-d14cd49c421e" path="/var/lib/kubelet/pods/4b74af3f-b2c3-4c95-9de8-d14cd49c421e/volumes" Sep 16 04:59:46.494956 containerd[1722]: time="2025-09-16T04:59:46.494924305Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9043f02600f56cd7e22d8576992d0c07bd6d45d0b91389d4920be9f9f21af079\" id:\"87402c1139930ba16de06d1098adc62b3e8f1d819cbf0ef04812f6906857ff80\" pid:6106 exited_at:{seconds:1757998786 nanos:494650071}" Sep 16 04:59:47.479966 containerd[1722]: time="2025-09-16T04:59:47.479913022Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"66dd65ab3b3dc79b0289e4825220ad7a4b3a80c344f5b7a1404bc0c69d92f53b\" id:\"1b4e84bfe4f1b8d95bfba70548649d9718be8c32b1623a59c982be417a33c0a0\" pid:6127 exited_at:{seconds:1757998787 nanos:479519069}" Sep 16 04:59:55.601465 containerd[1722]: time="2025-09-16T04:59:55.601422396Z" level=info msg="TaskExit event in podsandbox handler container_id:\"66dd65ab3b3dc79b0289e4825220ad7a4b3a80c344f5b7a1404bc0c69d92f53b\" id:\"9289f7a8d0f5cb26e743da8201f834418d932443aabb6e7c1a841052f4051fd7\" pid:6151 exited_at:{seconds:1757998795 nanos:600850465}" Sep 16 05:00:04.551681 containerd[1722]: time="2025-09-16T05:00:04.551596879Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c78dfc1c271743ceaae8bb2c862d69212d63330cc30efe1f610008a8f7fe8a1d\" id:\"8dc3032be904930fc7ea5a7d43bc5966fa9913c65e638e29878c0f6e8561eebe\" pid:6177 exited_at:{seconds:1757998804 nanos:551219442}" Sep 16 05:00:10.019588 containerd[1722]: time="2025-09-16T05:00:10.019488637Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9043f02600f56cd7e22d8576992d0c07bd6d45d0b91389d4920be9f9f21af079\" id:\"472f225ce88c5293db35cc7c8d20006d4b2f2e047a5c0b1cb77e0b7a2b5eafc8\" pid:6201 exited_at:{seconds:1757998810 nanos:19298770}" Sep 16 05:00:16.492960 containerd[1722]: time="2025-09-16T05:00:16.492790927Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9043f02600f56cd7e22d8576992d0c07bd6d45d0b91389d4920be9f9f21af079\" id:\"5859471f47a0d837e11c23c56d99ddcee465ad362d9aea3e11ff6cbe2f63bed4\" pid:6225 exited_at:{seconds:1757998816 nanos:492600695}" Sep 16 05:00:25.425660 containerd[1722]: time="2025-09-16T05:00:25.425615657Z" level=info msg="StopPodSandbox for \"92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8\"" Sep 16 05:00:25.473729 containerd[1722]: 2025-09-16 05:00:25.451 [WARNING][6244] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--b5t7c-eth0" Sep 16 05:00:25.473729 containerd[1722]: 2025-09-16 05:00:25.451 [INFO][6244] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" Sep 16 05:00:25.473729 containerd[1722]: 2025-09-16 05:00:25.451 [INFO][6244] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" iface="eth0" netns="" Sep 16 05:00:25.473729 containerd[1722]: 2025-09-16 05:00:25.451 [INFO][6244] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" Sep 16 05:00:25.473729 containerd[1722]: 2025-09-16 05:00:25.451 [INFO][6244] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" Sep 16 05:00:25.473729 containerd[1722]: 2025-09-16 05:00:25.466 [INFO][6251] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" HandleID="k8s-pod-network.92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" Workload="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--b5t7c-eth0" Sep 16 05:00:25.473729 containerd[1722]: 2025-09-16 05:00:25.466 [INFO][6251] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 16 05:00:25.473729 containerd[1722]: 2025-09-16 05:00:25.466 [INFO][6251] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 05:00:25.473729 containerd[1722]: 2025-09-16 05:00:25.471 [WARNING][6251] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" HandleID="k8s-pod-network.92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" Workload="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--b5t7c-eth0" Sep 16 05:00:25.473729 containerd[1722]: 2025-09-16 05:00:25.471 [INFO][6251] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" HandleID="k8s-pod-network.92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" Workload="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--b5t7c-eth0" Sep 16 05:00:25.473729 containerd[1722]: 2025-09-16 05:00:25.472 [INFO][6251] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 05:00:25.473729 containerd[1722]: 2025-09-16 05:00:25.472 [INFO][6244] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" Sep 16 05:00:25.473729 containerd[1722]: time="2025-09-16T05:00:25.473614002Z" level=info msg="TearDown network for sandbox \"92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8\" successfully" Sep 16 05:00:25.473729 containerd[1722]: time="2025-09-16T05:00:25.473628788Z" level=info msg="StopPodSandbox for \"92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8\" returns successfully" Sep 16 05:00:25.474115 containerd[1722]: time="2025-09-16T05:00:25.473947648Z" level=info msg="RemovePodSandbox for \"92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8\"" Sep 16 05:00:25.474115 containerd[1722]: time="2025-09-16T05:00:25.473972699Z" level=info msg="Forcibly stopping sandbox \"92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8\"" Sep 16 05:00:25.531780 containerd[1722]: 2025-09-16 05:00:25.500 [WARNING][6265] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" WorkloadEndpoint="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--b5t7c-eth0" Sep 16 05:00:25.531780 containerd[1722]: 2025-09-16 05:00:25.500 [INFO][6265] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" Sep 16 05:00:25.531780 containerd[1722]: 2025-09-16 05:00:25.500 [INFO][6265] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" iface="eth0" netns="" Sep 16 05:00:25.531780 containerd[1722]: 2025-09-16 05:00:25.500 [INFO][6265] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" Sep 16 05:00:25.531780 containerd[1722]: 2025-09-16 05:00:25.500 [INFO][6265] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" Sep 16 05:00:25.531780 containerd[1722]: 2025-09-16 05:00:25.520 [INFO][6283] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" HandleID="k8s-pod-network.92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" Workload="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--b5t7c-eth0" Sep 16 05:00:25.531780 containerd[1722]: 2025-09-16 05:00:25.520 [INFO][6283] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 05:00:25.531780 containerd[1722]: 2025-09-16 05:00:25.520 [INFO][6283] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 05:00:25.531780 containerd[1722]: 2025-09-16 05:00:25.526 [WARNING][6283] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" HandleID="k8s-pod-network.92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" Workload="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--b5t7c-eth0" Sep 16 05:00:25.531780 containerd[1722]: 2025-09-16 05:00:25.526 [INFO][6283] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" HandleID="k8s-pod-network.92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" Workload="ci--4459.0.0--n--140c1315ab-k8s-calico--apiserver--696d587784--b5t7c-eth0" Sep 16 05:00:25.531780 containerd[1722]: 2025-09-16 05:00:25.527 [INFO][6283] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 05:00:25.531780 containerd[1722]: 2025-09-16 05:00:25.528 [INFO][6265] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8" Sep 16 05:00:25.532554 containerd[1722]: time="2025-09-16T05:00:25.532208058Z" level=info msg="TearDown network for sandbox \"92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8\" successfully" Sep 16 05:00:25.534157 containerd[1722]: time="2025-09-16T05:00:25.534132746Z" level=info msg="Ensure that sandbox 92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8 in task-service has been cleanup successfully" Sep 16 05:00:25.542369 containerd[1722]: time="2025-09-16T05:00:25.542341867Z" level=info msg="RemovePodSandbox \"92c5f7b17fa4e8d8e85d2c7abdebbded0a49aa406dfed75ef788e2ade54ce3c8\" returns successfully" Sep 16 05:00:25.555355 containerd[1722]: time="2025-09-16T05:00:25.555324042Z" level=info msg="TaskExit event in podsandbox handler container_id:\"66dd65ab3b3dc79b0289e4825220ad7a4b3a80c344f5b7a1404bc0c69d92f53b\" id:\"8405731001dcab642b1b99e00e1a89a07bf18853d39f51f8480b96ad7befab59\" pid:6290 exited_at:{seconds:1757998825 nanos:555018639}" Sep 16 05:00:34.453403 containerd[1722]: time="2025-09-16T05:00:34.453360485Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c78dfc1c271743ceaae8bb2c862d69212d63330cc30efe1f610008a8f7fe8a1d\" id:\"7a69d464941f4abcc7135da7b4c96b5a98c40d5e80fb27e2cef072b118759fd6\" pid:6325 exited_at:{seconds:1757998834 nanos:453134437}" Sep 16 05:00:46.489806 containerd[1722]: time="2025-09-16T05:00:46.489753690Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9043f02600f56cd7e22d8576992d0c07bd6d45d0b91389d4920be9f9f21af079\" id:\"ade7369159b477fc17057094873f3f7fe975007252d3d48d58595e6b042d2223\" pid:6370 exited_at:{seconds:1757998846 nanos:489344340}" Sep 16 05:00:47.121494 systemd[1]: Started sshd@7-10.200.8.38:22-10.200.16.10:47470.service - OpenSSH per-connection server daemon (10.200.16.10:47470). Sep 16 05:00:47.358026 containerd[1722]: time="2025-09-16T05:00:47.357984834Z" level=info msg="TaskExit event in podsandbox handler container_id:\"66dd65ab3b3dc79b0289e4825220ad7a4b3a80c344f5b7a1404bc0c69d92f53b\" id:\"cd69c0998373785974846f4cf6165e987c0f6b2034ad0c41baffecf6d839cc27\" pid:6398 exited_at:{seconds:1757998847 nanos:357515449}" Sep 16 05:00:47.752595 sshd[6383]: Accepted publickey for core from 10.200.16.10 port 47470 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ Sep 16 05:00:47.753603 sshd-session[6383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:00:47.757941 systemd-logind[1696]: New session 10 of user core. Sep 16 05:00:47.762235 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 16 05:00:48.264933 sshd[6409]: Connection closed by 10.200.16.10 port 47470 Sep 16 05:00:48.265352 sshd-session[6383]: pam_unix(sshd:session): session closed for user core Sep 16 05:00:48.268000 systemd[1]: sshd@7-10.200.8.38:22-10.200.16.10:47470.service: Deactivated successfully. Sep 16 05:00:48.269555 systemd[1]: session-10.scope: Deactivated successfully. Sep 16 05:00:48.270919 systemd-logind[1696]: Session 10 logged out. Waiting for processes to exit. Sep 16 05:00:48.272233 systemd-logind[1696]: Removed session 10. Sep 16 05:00:53.383618 systemd[1]: Started sshd@8-10.200.8.38:22-10.200.16.10:52946.service - OpenSSH per-connection server daemon (10.200.16.10:52946). 
Sep 16 05:00:54.013661 sshd[6423]: Accepted publickey for core from 10.200.16.10 port 52946 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ Sep 16 05:00:54.015081 sshd-session[6423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:00:54.019589 systemd-logind[1696]: New session 11 of user core. Sep 16 05:00:54.021847 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 16 05:00:54.503750 sshd[6426]: Connection closed by 10.200.16.10 port 52946 Sep 16 05:00:54.504307 sshd-session[6423]: pam_unix(sshd:session): session closed for user core Sep 16 05:00:54.507208 systemd[1]: sshd@8-10.200.8.38:22-10.200.16.10:52946.service: Deactivated successfully. Sep 16 05:00:54.508904 systemd[1]: session-11.scope: Deactivated successfully. Sep 16 05:00:54.509779 systemd-logind[1696]: Session 11 logged out. Waiting for processes to exit. Sep 16 05:00:54.510875 systemd-logind[1696]: Removed session 11. Sep 16 05:00:55.552645 containerd[1722]: time="2025-09-16T05:00:55.552123864Z" level=info msg="TaskExit event in podsandbox handler container_id:\"66dd65ab3b3dc79b0289e4825220ad7a4b3a80c344f5b7a1404bc0c69d92f53b\" id:\"f39f92afc4dc4b72923974db123574afecc94d887306b5c3c5c838f3e0f5db69\" pid:6452 exited_at:{seconds:1757998855 nanos:551297889}" Sep 16 05:00:59.615761 systemd[1]: Started sshd@9-10.200.8.38:22-10.200.16.10:52950.service - OpenSSH per-connection server daemon (10.200.16.10:52950). Sep 16 05:01:00.249136 sshd[6463]: Accepted publickey for core from 10.200.16.10 port 52950 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ Sep 16 05:01:00.250337 sshd-session[6463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:01:00.254386 systemd-logind[1696]: New session 12 of user core. Sep 16 05:01:00.259245 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 16 05:01:00.732250 sshd[6466]: Connection closed by 10.200.16.10 port 52950 Sep 16 05:01:00.732653 sshd-session[6463]: pam_unix(sshd:session): session closed for user core Sep 16 05:01:00.735605 systemd[1]: sshd@9-10.200.8.38:22-10.200.16.10:52950.service: Deactivated successfully. Sep 16 05:01:00.737405 systemd[1]: session-12.scope: Deactivated successfully. Sep 16 05:01:00.738358 systemd-logind[1696]: Session 12 logged out. Waiting for processes to exit. Sep 16 05:01:00.739610 systemd-logind[1696]: Removed session 12. Sep 16 05:01:04.460418 containerd[1722]: time="2025-09-16T05:01:04.460386115Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c78dfc1c271743ceaae8bb2c862d69212d63330cc30efe1f610008a8f7fe8a1d\" id:\"373a2cb08c38f997a5c15dfa48e49be2abfd6a30d7679d011ff5837bfe5ed56c\" pid:6493 exited_at:{seconds:1757998864 nanos:460076494}" Sep 16 05:01:05.843719 systemd[1]: Started sshd@10-10.200.8.38:22-10.200.16.10:56168.service - OpenSSH per-connection server daemon (10.200.16.10:56168). Sep 16 05:01:06.474787 sshd[6505]: Accepted publickey for core from 10.200.16.10 port 56168 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ Sep 16 05:01:06.475740 sshd-session[6505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:01:06.480047 systemd-logind[1696]: New session 13 of user core. Sep 16 05:01:06.484218 systemd[1]: Started session-13.scope - Session 13 of User core. 
Sep 16 05:01:06.963223 sshd[6508]: Connection closed by 10.200.16.10 port 56168 Sep 16 05:01:06.963636 sshd-session[6505]: pam_unix(sshd:session): session closed for user core Sep 16 05:01:06.966354 systemd[1]: sshd@10-10.200.8.38:22-10.200.16.10:56168.service: Deactivated successfully. Sep 16 05:01:06.967998 systemd[1]: session-13.scope: Deactivated successfully. Sep 16 05:01:06.968761 systemd-logind[1696]: Session 13 logged out. Waiting for processes to exit. Sep 16 05:01:06.969863 systemd-logind[1696]: Removed session 13. Sep 16 05:01:10.007357 containerd[1722]: time="2025-09-16T05:01:10.007302267Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9043f02600f56cd7e22d8576992d0c07bd6d45d0b91389d4920be9f9f21af079\" id:\"d6afc7f6c5ac9bd5a865af2751fe4a21902668d1070aba05c681cc1149ac5af9\" pid:6533 exited_at:{seconds:1757998870 nanos:6925800}" Sep 16 05:01:12.080207 systemd[1]: Started sshd@11-10.200.8.38:22-10.200.16.10:58526.service - OpenSSH per-connection server daemon (10.200.16.10:58526). Sep 16 05:01:12.703178 sshd[6543]: Accepted publickey for core from 10.200.16.10 port 58526 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ Sep 16 05:01:12.704051 sshd-session[6543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:01:12.708131 systemd-logind[1696]: New session 14 of user core. Sep 16 05:01:12.712259 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 16 05:01:13.191731 sshd[6546]: Connection closed by 10.200.16.10 port 58526 Sep 16 05:01:13.192247 sshd-session[6543]: pam_unix(sshd:session): session closed for user core Sep 16 05:01:13.195117 systemd[1]: sshd@11-10.200.8.38:22-10.200.16.10:58526.service: Deactivated successfully. Sep 16 05:01:13.196784 systemd[1]: session-14.scope: Deactivated successfully. Sep 16 05:01:13.198358 systemd-logind[1696]: Session 14 logged out. Waiting for processes to exit. Sep 16 05:01:13.200278 systemd-logind[1696]: Removed session 14. Sep 16 05:01:13.307740 systemd[1]: Started sshd@12-10.200.8.38:22-10.200.16.10:58542.service - OpenSSH per-connection server daemon (10.200.16.10:58542). Sep 16 05:01:13.936594 sshd[6559]: Accepted publickey for core from 10.200.16.10 port 58542 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ Sep 16 05:01:13.937527 sshd-session[6559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:01:13.941139 systemd-logind[1696]: New session 15 of user core. Sep 16 05:01:13.946243 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 16 05:01:14.449566 sshd[6562]: Connection closed by 10.200.16.10 port 58542 Sep 16 05:01:14.449984 sshd-session[6559]: pam_unix(sshd:session): session closed for user core Sep 16 05:01:14.452682 systemd[1]: sshd@12-10.200.8.38:22-10.200.16.10:58542.service: Deactivated successfully. Sep 16 05:01:14.454211 systemd[1]: session-15.scope: Deactivated successfully. Sep 16 05:01:14.454876 systemd-logind[1696]: Session 15 logged out. Waiting for processes to exit. Sep 16 05:01:14.456047 systemd-logind[1696]: Removed session 15. Sep 16 05:01:14.563479 systemd[1]: Started sshd@13-10.200.8.38:22-10.200.16.10:58544.service - OpenSSH per-connection server daemon (10.200.16.10:58544). 
Sep 16 05:01:15.188337 sshd[6571]: Accepted publickey for core from 10.200.16.10 port 58544 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ Sep 16 05:01:15.189382 sshd-session[6571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:01:15.193544 systemd-logind[1696]: New session 16 of user core. Sep 16 05:01:15.199240 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 16 05:01:15.677170 sshd[6574]: Connection closed by 10.200.16.10 port 58544 Sep 16 05:01:15.677562 sshd-session[6571]: pam_unix(sshd:session): session closed for user core Sep 16 05:01:15.680569 systemd[1]: sshd@13-10.200.8.38:22-10.200.16.10:58544.service: Deactivated successfully. Sep 16 05:01:15.682569 systemd[1]: session-16.scope: Deactivated successfully. Sep 16 05:01:15.683270 systemd-logind[1696]: Session 16 logged out. Waiting for processes to exit. Sep 16 05:01:15.684625 systemd-logind[1696]: Removed session 16. Sep 16 05:01:16.492428 containerd[1722]: time="2025-09-16T05:01:16.492342461Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9043f02600f56cd7e22d8576992d0c07bd6d45d0b91389d4920be9f9f21af079\" id:\"8c1e2f39255ab42065f3137bf2c92f709cfc0c19a5927b16f44b22531ccede73\" pid:6597 exited_at:{seconds:1757998876 nanos:491921243}" Sep 16 05:01:20.792400 systemd[1]: Started sshd@14-10.200.8.38:22-10.200.16.10:55824.service - OpenSSH per-connection server daemon (10.200.16.10:55824). Sep 16 05:01:21.424653 sshd[6611]: Accepted publickey for core from 10.200.16.10 port 55824 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ Sep 16 05:01:21.425885 sshd-session[6611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:01:21.430662 systemd-logind[1696]: New session 17 of user core. Sep 16 05:01:21.437380 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 16 05:01:21.954681 sshd[6614]: Connection closed by 10.200.16.10 port 55824 Sep 16 05:01:21.955265 sshd-session[6611]: pam_unix(sshd:session): session closed for user core Sep 16 05:01:21.958476 systemd-logind[1696]: Session 17 logged out. Waiting for processes to exit. Sep 16 05:01:21.958825 systemd[1]: sshd@14-10.200.8.38:22-10.200.16.10:55824.service: Deactivated successfully. Sep 16 05:01:21.960628 systemd[1]: session-17.scope: Deactivated successfully. Sep 16 05:01:21.962212 systemd-logind[1696]: Removed session 17. Sep 16 05:01:25.551502 containerd[1722]: time="2025-09-16T05:01:25.551440623Z" level=info msg="TaskExit event in podsandbox handler container_id:\"66dd65ab3b3dc79b0289e4825220ad7a4b3a80c344f5b7a1404bc0c69d92f53b\" id:\"e06a602023be664197f4c818c4b5059646edd629191b3700ba51b30465d88fc9\" pid:6639 exited_at:{seconds:1757998885 nanos:551266267}" Sep 16 05:01:27.071981 systemd[1]: Started sshd@15-10.200.8.38:22-10.200.16.10:55834.service - OpenSSH per-connection server daemon (10.200.16.10:55834). Sep 16 05:01:27.696437 sshd[6651]: Accepted publickey for core from 10.200.16.10 port 55834 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ Sep 16 05:01:27.697357 sshd-session[6651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:01:27.701207 systemd-logind[1696]: New session 18 of user core. Sep 16 05:01:27.710212 systemd[1]: Started session-18.scope - Session 18 of User core. 
Sep 16 05:01:28.183586 sshd[6654]: Connection closed by 10.200.16.10 port 55834 Sep 16 05:01:28.184861 sshd-session[6651]: pam_unix(sshd:session): session closed for user core Sep 16 05:01:28.187576 systemd-logind[1696]: Session 18 logged out. Waiting for processes to exit. Sep 16 05:01:28.187816 systemd[1]: sshd@15-10.200.8.38:22-10.200.16.10:55834.service: Deactivated successfully. Sep 16 05:01:28.189678 systemd[1]: session-18.scope: Deactivated successfully. Sep 16 05:01:28.191244 systemd-logind[1696]: Removed session 18. Sep 16 05:01:33.298318 systemd[1]: Started sshd@16-10.200.8.38:22-10.200.16.10:36886.service - OpenSSH per-connection server daemon (10.200.16.10:36886). Sep 16 05:01:33.923573 sshd[6668]: Accepted publickey for core from 10.200.16.10 port 36886 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ Sep 16 05:01:33.924558 sshd-session[6668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:01:33.928034 systemd-logind[1696]: New session 19 of user core. Sep 16 05:01:33.933212 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 16 05:01:34.410948 sshd[6671]: Connection closed by 10.200.16.10 port 36886 Sep 16 05:01:34.411338 sshd-session[6668]: pam_unix(sshd:session): session closed for user core Sep 16 05:01:34.415562 systemd-logind[1696]: Session 19 logged out. Waiting for processes to exit. Sep 16 05:01:34.415943 systemd[1]: sshd@16-10.200.8.38:22-10.200.16.10:36886.service: Deactivated successfully. Sep 16 05:01:34.420051 systemd[1]: session-19.scope: Deactivated successfully. Sep 16 05:01:34.423461 systemd-logind[1696]: Removed session 19. Sep 16 05:01:34.457393 containerd[1722]: time="2025-09-16T05:01:34.457362411Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c78dfc1c271743ceaae8bb2c862d69212d63330cc30efe1f610008a8f7fe8a1d\" id:\"f18600e711e1e1487a4b4ce56a0650ab84ea9227eca97ab89d6898701bd5d334\" pid:6691 exited_at:{seconds:1757998894 nanos:457013893}" Sep 16 05:01:34.519635 systemd[1]: Started sshd@17-10.200.8.38:22-10.200.16.10:36902.service - OpenSSH per-connection server daemon (10.200.16.10:36902). Sep 16 05:01:35.144053 sshd[6707]: Accepted publickey for core from 10.200.16.10 port 36902 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ Sep 16 05:01:35.144909 sshd-session[6707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:01:35.148852 systemd-logind[1696]: New session 20 of user core. Sep 16 05:01:35.151207 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 16 05:01:35.689376 sshd[6710]: Connection closed by 10.200.16.10 port 36902 Sep 16 05:01:35.690620 sshd-session[6707]: pam_unix(sshd:session): session closed for user core Sep 16 05:01:35.693190 systemd[1]: sshd@17-10.200.8.38:22-10.200.16.10:36902.service: Deactivated successfully. Sep 16 05:01:35.694787 systemd[1]: session-20.scope: Deactivated successfully. Sep 16 05:01:35.695417 systemd-logind[1696]: Session 20 logged out. Waiting for processes to exit. Sep 16 05:01:35.696513 systemd-logind[1696]: Removed session 20. Sep 16 05:01:35.808721 systemd[1]: Started sshd@18-10.200.8.38:22-10.200.16.10:36910.service - OpenSSH per-connection server daemon (10.200.16.10:36910). 
Sep 16 05:01:36.435296 sshd[6720]: Accepted publickey for core from 10.200.16.10 port 36910 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ
Sep 16 05:01:36.436274 sshd-session[6720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:01:36.440347 systemd-logind[1696]: New session 21 of user core.
Sep 16 05:01:36.444253 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 16 05:01:37.335821 sshd[6723]: Connection closed by 10.200.16.10 port 36910
Sep 16 05:01:37.336311 sshd-session[6720]: pam_unix(sshd:session): session closed for user core
Sep 16 05:01:37.339122 systemd[1]: sshd@18-10.200.8.38:22-10.200.16.10:36910.service: Deactivated successfully.
Sep 16 05:01:37.340806 systemd[1]: session-21.scope: Deactivated successfully.
Sep 16 05:01:37.341453 systemd-logind[1696]: Session 21 logged out. Waiting for processes to exit.
Sep 16 05:01:37.342678 systemd-logind[1696]: Removed session 21.
Sep 16 05:01:37.450993 systemd[1]: Started sshd@19-10.200.8.38:22-10.200.16.10:36912.service - OpenSSH per-connection server daemon (10.200.16.10:36912).
Sep 16 05:01:38.077460 sshd[6741]: Accepted publickey for core from 10.200.16.10 port 36912 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ
Sep 16 05:01:38.078440 sshd-session[6741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:01:38.082156 systemd-logind[1696]: New session 22 of user core.
Sep 16 05:01:38.086205 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 16 05:01:38.648960 sshd[6744]: Connection closed by 10.200.16.10 port 36912
Sep 16 05:01:38.650227 sshd-session[6741]: pam_unix(sshd:session): session closed for user core
Sep 16 05:01:38.654678 systemd[1]: sshd@19-10.200.8.38:22-10.200.16.10:36912.service: Deactivated successfully.
Sep 16 05:01:38.654910 systemd-logind[1696]: Session 22 logged out. Waiting for processes to exit.
Sep 16 05:01:38.657726 systemd[1]: session-22.scope: Deactivated successfully.
Sep 16 05:01:38.660007 systemd-logind[1696]: Removed session 22.
Sep 16 05:01:38.770577 systemd[1]: Started sshd@20-10.200.8.38:22-10.200.16.10:36916.service - OpenSSH per-connection server daemon (10.200.16.10:36916).
Sep 16 05:01:39.399158 sshd[6754]: Accepted publickey for core from 10.200.16.10 port 36916 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ
Sep 16 05:01:39.400140 sshd-session[6754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:01:39.404329 systemd-logind[1696]: New session 23 of user core.
Sep 16 05:01:39.409231 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 16 05:01:39.886452 sshd[6757]: Connection closed by 10.200.16.10 port 36916
Sep 16 05:01:39.886859 sshd-session[6754]: pam_unix(sshd:session): session closed for user core
Sep 16 05:01:39.889705 systemd[1]: sshd@20-10.200.8.38:22-10.200.16.10:36916.service: Deactivated successfully.
Sep 16 05:01:39.891364 systemd[1]: session-23.scope: Deactivated successfully.
Sep 16 05:01:39.891962 systemd-logind[1696]: Session 23 logged out. Waiting for processes to exit.
Sep 16 05:01:39.893197 systemd-logind[1696]: Removed session 23.
Sep 16 05:01:45.003279 systemd[1]: Started sshd@21-10.200.8.38:22-10.200.16.10:44842.service - OpenSSH per-connection server daemon (10.200.16.10:44842).
Sep 16 05:01:45.633720 sshd[6771]: Accepted publickey for core from 10.200.16.10 port 44842 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ
Sep 16 05:01:45.634148 sshd-session[6771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:01:45.638328 systemd-logind[1696]: New session 24 of user core.
Sep 16 05:01:45.644204 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 16 05:01:46.130617 sshd[6774]: Connection closed by 10.200.16.10 port 44842
Sep 16 05:01:46.131920 sshd-session[6771]: pam_unix(sshd:session): session closed for user core
Sep 16 05:01:46.134815 systemd-logind[1696]: Session 24 logged out. Waiting for processes to exit.
Sep 16 05:01:46.134971 systemd[1]: sshd@21-10.200.8.38:22-10.200.16.10:44842.service: Deactivated successfully.
Sep 16 05:01:46.136672 systemd[1]: session-24.scope: Deactivated successfully.
Sep 16 05:01:46.138214 systemd-logind[1696]: Removed session 24.
Sep 16 05:01:46.491431 containerd[1722]: time="2025-09-16T05:01:46.491221876Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9043f02600f56cd7e22d8576992d0c07bd6d45d0b91389d4920be9f9f21af079\" id:\"77d2df3b21be4cafc8129723dd3b8a884789f385ca5aafba657de8e8b076bde7\" pid:6803 exited_at:{seconds:1757998906 nanos:490936214}"
Sep 16 05:01:47.347719 containerd[1722]: time="2025-09-16T05:01:47.347686744Z" level=info msg="TaskExit event in podsandbox handler container_id:\"66dd65ab3b3dc79b0289e4825220ad7a4b3a80c344f5b7a1404bc0c69d92f53b\" id:\"f1dbab75794a999a97dbb70e4ff3551572449e1d19bc7c8f517c3224f9cc1d5b\" pid:6824 exited_at:{seconds:1757998907 nanos:347361423}"
Sep 16 05:01:51.270343 systemd[1]: Started sshd@22-10.200.8.38:22-10.200.16.10:59242.service - OpenSSH per-connection server daemon (10.200.16.10:59242).
Sep 16 05:01:51.910439 sshd[6836]: Accepted publickey for core from 10.200.16.10 port 59242 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ
Sep 16 05:01:51.910846 sshd-session[6836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:01:51.914902 systemd-logind[1696]: New session 25 of user core.
Sep 16 05:01:51.918227 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 16 05:01:52.418660 sshd[6839]: Connection closed by 10.200.16.10 port 59242
Sep 16 05:01:52.417481 sshd-session[6836]: pam_unix(sshd:session): session closed for user core
Sep 16 05:01:52.422111 systemd-logind[1696]: Session 25 logged out. Waiting for processes to exit.
Sep 16 05:01:52.422633 systemd[1]: sshd@22-10.200.8.38:22-10.200.16.10:59242.service: Deactivated successfully.
Sep 16 05:01:52.425749 systemd[1]: session-25.scope: Deactivated successfully.
Sep 16 05:01:52.430498 systemd-logind[1696]: Removed session 25.
Sep 16 05:01:55.661190 containerd[1722]: time="2025-09-16T05:01:55.661143601Z" level=info msg="TaskExit event in podsandbox handler container_id:\"66dd65ab3b3dc79b0289e4825220ad7a4b3a80c344f5b7a1404bc0c69d92f53b\" id:\"62d492e481573ac72627f4a646519560477d7c849349fb01ecc9a8c883d5e999\" pid:6863 exited_at:{seconds:1757998915 nanos:660909288}"
Sep 16 05:01:57.529076 systemd[1]: Started sshd@23-10.200.8.38:22-10.200.16.10:59256.service - OpenSSH per-connection server daemon (10.200.16.10:59256).
Sep 16 05:01:58.155304 sshd[6875]: Accepted publickey for core from 10.200.16.10 port 59256 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ
Sep 16 05:01:58.156318 sshd-session[6875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:01:58.160145 systemd-logind[1696]: New session 26 of user core.
Sep 16 05:01:58.163240 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 16 05:01:58.638620 sshd[6878]: Connection closed by 10.200.16.10 port 59256
Sep 16 05:01:58.638895 sshd-session[6875]: pam_unix(sshd:session): session closed for user core
Sep 16 05:01:58.641864 systemd[1]: sshd@23-10.200.8.38:22-10.200.16.10:59256.service: Deactivated successfully.
Sep 16 05:01:58.643347 systemd[1]: session-26.scope: Deactivated successfully.
Sep 16 05:01:58.644001 systemd-logind[1696]: Session 26 logged out. Waiting for processes to exit.
Sep 16 05:01:58.644960 systemd-logind[1696]: Removed session 26.
Sep 16 05:02:03.755274 systemd[1]: Started sshd@24-10.200.8.38:22-10.200.16.10:56890.service - OpenSSH per-connection server daemon (10.200.16.10:56890).
Sep 16 05:02:04.386965 sshd[6892]: Accepted publickey for core from 10.200.16.10 port 56890 ssh2: RSA SHA256:Gt12JRTkVM33b5eHTclAVwi2ZfrwSLC40ZPSMLLbmPQ
Sep 16 05:02:04.388011 sshd-session[6892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:02:04.392612 systemd-logind[1696]: New session 27 of user core.
Sep 16 05:02:04.400235 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 16 05:02:04.466102 containerd[1722]: time="2025-09-16T05:02:04.465253706Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c78dfc1c271743ceaae8bb2c862d69212d63330cc30efe1f610008a8f7fe8a1d\" id:\"1c29077e99a187ff721533d2e95497deb2129fa59732c07a0e7160c2df74898d\" pid:6911 exited_at:{seconds:1757998924 nanos:465001263}"
Sep 16 05:02:04.872661 sshd[6898]: Connection closed by 10.200.16.10 port 56890
Sep 16 05:02:04.873033 sshd-session[6892]: pam_unix(sshd:session): session closed for user core
Sep 16 05:02:04.875378 systemd[1]: sshd@24-10.200.8.38:22-10.200.16.10:56890.service: Deactivated successfully.
Sep 16 05:02:04.877015 systemd[1]: session-27.scope: Deactivated successfully.
Sep 16 05:02:04.877647 systemd-logind[1696]: Session 27 logged out. Waiting for processes to exit.
Sep 16 05:02:04.879180 systemd-logind[1696]: Removed session 27.