Apr 28 01:14:04.866516 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 27 22:40:10 -00 2026 Apr 28 01:14:04.866534 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=dba81bba70fdc18951de51911456386ac86d38187268d44374f74ed6158168ec Apr 28 01:14:04.866544 kernel: BIOS-provided physical RAM map: Apr 28 01:14:04.866549 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Apr 28 01:14:04.866554 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Apr 28 01:14:04.866559 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Apr 28 01:14:04.866565 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Apr 28 01:14:04.866569 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Apr 28 01:14:04.866573 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Apr 28 01:14:04.866578 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Apr 28 01:14:04.866583 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Apr 28 01:14:04.866588 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Apr 28 01:14:04.866592 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Apr 28 01:14:04.866596 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Apr 28 01:14:04.866602 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Apr 28 01:14:04.866606 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Apr 28 01:14:04.866613 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Apr 
28 01:14:04.866617 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Apr 28 01:14:04.866621 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Apr 28 01:14:04.866626 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Apr 28 01:14:04.866630 kernel: NX (Execute Disable) protection: active Apr 28 01:14:04.866635 kernel: APIC: Static calls initialized Apr 28 01:14:04.866639 kernel: efi: EFI v2.7 by EDK II Apr 28 01:14:04.866644 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198 Apr 28 01:14:04.866649 kernel: SMBIOS 2.8 present. Apr 28 01:14:04.866653 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Apr 28 01:14:04.866658 kernel: Hypervisor detected: KVM Apr 28 01:14:04.866663 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 28 01:14:04.866668 kernel: kvm-clock: using sched offset of 4672292869 cycles Apr 28 01:14:04.866672 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 28 01:14:04.866678 kernel: tsc: Detected 2793.438 MHz processor Apr 28 01:14:04.866682 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 28 01:14:04.866688 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 28 01:14:04.866692 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x10000000000 Apr 28 01:14:04.866697 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Apr 28 01:14:04.866702 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 28 01:14:04.866708 kernel: Using GB pages for direct mapping Apr 28 01:14:04.866712 kernel: Secure boot disabled Apr 28 01:14:04.866717 kernel: ACPI: Early table checksum verification disabled Apr 28 01:14:04.866722 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Apr 28 01:14:04.866729 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Apr 28 01:14:04.866735 
kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Apr 28 01:14:04.866740 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 28 01:14:04.866746 kernel: ACPI: FACS 0x000000009CBDD000 000040 Apr 28 01:14:04.866751 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 28 01:14:04.866756 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 28 01:14:04.866761 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 28 01:14:04.866765 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 28 01:14:04.866770 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Apr 28 01:14:04.866775 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Apr 28 01:14:04.866782 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Apr 28 01:14:04.866786 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Apr 28 01:14:04.866791 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Apr 28 01:14:04.866796 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Apr 28 01:14:04.866801 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Apr 28 01:14:04.866806 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Apr 28 01:14:04.866811 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Apr 28 01:14:04.866816 kernel: No NUMA configuration found Apr 28 01:14:04.866821 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Apr 28 01:14:04.866827 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Apr 28 01:14:04.866832 kernel: Zone ranges: Apr 28 01:14:04.866837 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 28 01:14:04.866842 kernel: DMA32 [mem 
0x0000000001000000-0x000000009cf3ffff] Apr 28 01:14:04.866847 kernel: Normal empty Apr 28 01:14:04.866851 kernel: Movable zone start for each node Apr 28 01:14:04.866856 kernel: Early memory node ranges Apr 28 01:14:04.866861 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Apr 28 01:14:04.866866 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Apr 28 01:14:04.866871 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Apr 28 01:14:04.866877 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Apr 28 01:14:04.866882 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Apr 28 01:14:04.866887 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Apr 28 01:14:04.866892 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Apr 28 01:14:04.866897 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 28 01:14:04.866902 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Apr 28 01:14:04.866907 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Apr 28 01:14:04.866912 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 28 01:14:04.866917 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Apr 28 01:14:04.866923 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Apr 28 01:14:04.866928 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Apr 28 01:14:04.866933 kernel: ACPI: PM-Timer IO Port: 0x608 Apr 28 01:14:04.866938 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 28 01:14:04.866943 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Apr 28 01:14:04.866948 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Apr 28 01:14:04.866953 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 28 01:14:04.866957 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 28 01:14:04.866962 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 28 
01:14:04.866967 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 28 01:14:04.866974 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 28 01:14:04.866979 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 28 01:14:04.866984 kernel: TSC deadline timer available Apr 28 01:14:04.866989 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Apr 28 01:14:04.866994 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Apr 28 01:14:04.866998 kernel: kvm-guest: KVM setup pv remote TLB flush Apr 28 01:14:04.867003 kernel: kvm-guest: setup PV sched yield Apr 28 01:14:04.867008 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Apr 28 01:14:04.867013 kernel: Booting paravirtualized kernel on KVM Apr 28 01:14:04.867020 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 28 01:14:04.867025 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Apr 28 01:14:04.867030 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Apr 28 01:14:04.867035 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Apr 28 01:14:04.867040 kernel: pcpu-alloc: [0] 0 1 2 3 Apr 28 01:14:04.867045 kernel: kvm-guest: PV spinlocks enabled Apr 28 01:14:04.867050 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 28 01:14:04.867055 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=dba81bba70fdc18951de51911456386ac86d38187268d44374f74ed6158168ec Apr 28 01:14:04.867062 kernel: random: crng init done Apr 28 01:14:04.867066 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 28 01:14:04.867072 kernel: Inode-cache hash table 
entries: 262144 (order: 9, 2097152 bytes, linear) Apr 28 01:14:04.867076 kernel: Fallback order for Node 0: 0 Apr 28 01:14:04.867081 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Apr 28 01:14:04.867086 kernel: Policy zone: DMA32 Apr 28 01:14:04.867091 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 28 01:14:04.867096 kernel: Memory: 2399656K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 167140K reserved, 0K cma-reserved) Apr 28 01:14:04.867101 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Apr 28 01:14:04.867107 kernel: ftrace: allocating 37996 entries in 149 pages Apr 28 01:14:04.867112 kernel: ftrace: allocated 149 pages with 4 groups Apr 28 01:14:04.867117 kernel: Dynamic Preempt: voluntary Apr 28 01:14:04.867122 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 28 01:14:04.867133 kernel: rcu: RCU event tracing is enabled. Apr 28 01:14:04.867139 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Apr 28 01:14:04.867145 kernel: Trampoline variant of Tasks RCU enabled. Apr 28 01:14:04.867151 kernel: Rude variant of Tasks RCU enabled. Apr 28 01:14:04.867156 kernel: Tracing variant of Tasks RCU enabled. Apr 28 01:14:04.867161 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 28 01:14:04.867167 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Apr 28 01:14:04.867172 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Apr 28 01:14:04.867179 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Apr 28 01:14:04.867185 kernel: Console: colour dummy device 80x25 Apr 28 01:14:04.867190 kernel: printk: console [ttyS0] enabled Apr 28 01:14:04.867195 kernel: ACPI: Core revision 20230628 Apr 28 01:14:04.867201 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Apr 28 01:14:04.867208 kernel: APIC: Switch to symmetric I/O mode setup Apr 28 01:14:04.867213 kernel: x2apic enabled Apr 28 01:14:04.867219 kernel: APIC: Switched APIC routing to: physical x2apic Apr 28 01:14:04.867224 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Apr 28 01:14:04.867230 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Apr 28 01:14:04.867235 kernel: kvm-guest: setup PV IPIs Apr 28 01:14:04.867241 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Apr 28 01:14:04.867246 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns Apr 28 01:14:04.867252 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438) Apr 28 01:14:04.867259 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Apr 28 01:14:04.867264 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Apr 28 01:14:04.867270 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Apr 28 01:14:04.867275 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 28 01:14:04.867280 kernel: Spectre V2 : Mitigation: Retpolines Apr 28 01:14:04.867286 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Apr 28 01:14:04.867291 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Apr 28 01:14:04.867297 kernel: RETBleed: Vulnerable Apr 28 01:14:04.867304 kernel: Speculative Store Bypass: Vulnerable Apr 28 01:14:04.867309 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 28 01:14:04.867336 kernel: GDS: Unknown: Dependent on hypervisor status Apr 28 01:14:04.867367 kernel: active return thunk: its_return_thunk Apr 28 01:14:04.867373 kernel: ITS: Mitigation: Aligned branch/return thunks Apr 28 01:14:04.867378 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 28 01:14:04.867384 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 28 01:14:04.867389 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 28 01:14:04.867394 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Apr 28 01:14:04.867402 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Apr 28 01:14:04.867407 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Apr 28 01:14:04.867413 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 28 01:14:04.867418 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Apr 28 01:14:04.867424 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Apr 28 01:14:04.867429 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Apr 28 01:14:04.867435 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Apr 28 01:14:04.867440 kernel: Freeing SMP alternatives memory: 32K Apr 28 01:14:04.867446 kernel: pid_max: default: 32768 minimum: 301 Apr 28 01:14:04.867452 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 28 01:14:04.867458 kernel: landlock: Up and running. Apr 28 01:14:04.867463 kernel: SELinux: Initializing. 
Apr 28 01:14:04.867469 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 28 01:14:04.867474 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 28 01:14:04.867480 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6) Apr 28 01:14:04.867485 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 28 01:14:04.867491 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 28 01:14:04.867497 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 28 01:14:04.867503 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only. Apr 28 01:14:04.867509 kernel: signal: max sigframe size: 3632 Apr 28 01:14:04.867514 kernel: rcu: Hierarchical SRCU implementation. Apr 28 01:14:04.867520 kernel: rcu: Max phase no-delay instances is 400. Apr 28 01:14:04.867525 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 28 01:14:04.867531 kernel: smp: Bringing up secondary CPUs ... Apr 28 01:14:04.867536 kernel: smpboot: x86: Booting SMP configuration: Apr 28 01:14:04.867542 kernel: .... 
node #0, CPUs: #1 #2 #3 Apr 28 01:14:04.867547 kernel: smp: Brought up 1 node, 4 CPUs Apr 28 01:14:04.867554 kernel: smpboot: Max logical packages: 1 Apr 28 01:14:04.867559 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS) Apr 28 01:14:04.867565 kernel: devtmpfs: initialized Apr 28 01:14:04.867570 kernel: x86/mm: Memory block size: 128MB Apr 28 01:14:04.867576 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Apr 28 01:14:04.867581 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Apr 28 01:14:04.867587 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Apr 28 01:14:04.867592 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Apr 28 01:14:04.867598 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Apr 28 01:14:04.867605 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 28 01:14:04.867610 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Apr 28 01:14:04.867615 kernel: pinctrl core: initialized pinctrl subsystem Apr 28 01:14:04.867621 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 28 01:14:04.867626 kernel: audit: initializing netlink subsys (disabled) Apr 28 01:14:04.867632 kernel: audit: type=2000 audit(1777338844.621:1): state=initialized audit_enabled=0 res=1 Apr 28 01:14:04.867637 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 28 01:14:04.867643 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 28 01:14:04.867648 kernel: cpuidle: using governor menu Apr 28 01:14:04.867654 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 28 01:14:04.867660 kernel: dca service started, version 1.12.1 Apr 28 01:14:04.867665 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Apr 28 
01:14:04.867671 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Apr 28 01:14:04.867676 kernel: PCI: Using configuration type 1 for base access Apr 28 01:14:04.867682 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Apr 28 01:14:04.867687 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 28 01:14:04.867693 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 28 01:14:04.867698 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 28 01:14:04.867705 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 28 01:14:04.867711 kernel: ACPI: Added _OSI(Module Device) Apr 28 01:14:04.867716 kernel: ACPI: Added _OSI(Processor Device) Apr 28 01:14:04.867721 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 28 01:14:04.867727 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 28 01:14:04.867732 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 28 01:14:04.867738 kernel: ACPI: Interpreter enabled Apr 28 01:14:04.867743 kernel: ACPI: PM: (supports S0 S3 S5) Apr 28 01:14:04.867749 kernel: ACPI: Using IOAPIC for interrupt routing Apr 28 01:14:04.867755 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 28 01:14:04.867761 kernel: PCI: Using E820 reservations for host bridge windows Apr 28 01:14:04.867766 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Apr 28 01:14:04.867772 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 28 01:14:04.867877 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 28 01:14:04.867941 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Apr 28 01:14:04.867998 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Apr 28 01:14:04.868007 kernel: PCI host bridge to bus 0000:00 Apr 28 01:14:04.868066 kernel: pci_bus 0000:00: 
root bus resource [io 0x0000-0x0cf7 window] Apr 28 01:14:04.868117 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 28 01:14:04.868167 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 28 01:14:04.868216 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Apr 28 01:14:04.868265 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Apr 28 01:14:04.868335 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Apr 28 01:14:04.868418 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 28 01:14:04.868486 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Apr 28 01:14:04.868550 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Apr 28 01:14:04.868607 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Apr 28 01:14:04.868663 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Apr 28 01:14:04.868718 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Apr 28 01:14:04.868774 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Apr 28 01:14:04.868832 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 28 01:14:04.868894 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Apr 28 01:14:04.868951 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Apr 28 01:14:04.869009 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Apr 28 01:14:04.869065 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Apr 28 01:14:04.869129 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Apr 28 01:14:04.869188 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Apr 28 01:14:04.869244 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Apr 28 01:14:04.869301 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Apr 28 01:14:04.869411 kernel: pci 0000:00:04.0: 
[1af4:1000] type 00 class 0x020000 Apr 28 01:14:04.869469 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Apr 28 01:14:04.869526 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Apr 28 01:14:04.869582 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Apr 28 01:14:04.869639 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Apr 28 01:14:04.869698 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Apr 28 01:14:04.869754 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Apr 28 01:14:04.869812 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Apr 28 01:14:04.869868 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Apr 28 01:14:04.869922 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Apr 28 01:14:04.869981 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Apr 28 01:14:04.870038 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Apr 28 01:14:04.870045 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Apr 28 01:14:04.870051 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 28 01:14:04.870056 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Apr 28 01:14:04.870062 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Apr 28 01:14:04.870067 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Apr 28 01:14:04.870072 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Apr 28 01:14:04.870078 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Apr 28 01:14:04.870084 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Apr 28 01:14:04.870090 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Apr 28 01:14:04.870095 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Apr 28 01:14:04.870101 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Apr 28 01:14:04.870106 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 
Apr 28 01:14:04.870111 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Apr 28 01:14:04.870117 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Apr 28 01:14:04.870122 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Apr 28 01:14:04.870128 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Apr 28 01:14:04.870134 kernel: iommu: Default domain type: Translated Apr 28 01:14:04.870140 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 28 01:14:04.870145 kernel: efivars: Registered efivars operations Apr 28 01:14:04.870150 kernel: PCI: Using ACPI for IRQ routing Apr 28 01:14:04.870156 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 28 01:14:04.870161 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Apr 28 01:14:04.870167 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Apr 28 01:14:04.870172 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Apr 28 01:14:04.870177 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Apr 28 01:14:04.870233 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Apr 28 01:14:04.870287 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Apr 28 01:14:04.870390 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 28 01:14:04.870398 kernel: vgaarb: loaded Apr 28 01:14:04.870404 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Apr 28 01:14:04.870409 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Apr 28 01:14:04.870415 kernel: clocksource: Switched to clocksource kvm-clock Apr 28 01:14:04.870420 kernel: VFS: Disk quotas dquot_6.6.0 Apr 28 01:14:04.870426 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 28 01:14:04.870433 kernel: pnp: PnP ACPI init Apr 28 01:14:04.870493 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Apr 28 01:14:04.870501 kernel: pnp: PnP ACPI: found 6 devices Apr 28 01:14:04.870507 kernel: clocksource: 
acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 28 01:14:04.870513 kernel: NET: Registered PF_INET protocol family Apr 28 01:14:04.870518 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 28 01:14:04.870524 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 28 01:14:04.870530 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 28 01:14:04.870537 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 28 01:14:04.870543 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 28 01:14:04.870548 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 28 01:14:04.870554 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 28 01:14:04.870560 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 28 01:14:04.870565 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 28 01:14:04.870571 kernel: NET: Registered PF_XDP protocol family Apr 28 01:14:04.870627 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Apr 28 01:14:04.870683 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Apr 28 01:14:04.870737 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 28 01:14:04.870788 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 28 01:14:04.870838 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 28 01:14:04.870887 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Apr 28 01:14:04.870937 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Apr 28 01:14:04.870987 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Apr 28 01:14:04.870994 kernel: PCI: CLS 0 bytes, default 64 Apr 28 01:14:04.871000 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 
fixed counters, 10737418240 ms ovfl timer Apr 28 01:14:04.871007 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns Apr 28 01:14:04.871012 kernel: Initialise system trusted keyrings Apr 28 01:14:04.871018 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 28 01:14:04.871024 kernel: Key type asymmetric registered Apr 28 01:14:04.871029 kernel: Asymmetric key parser 'x509' registered Apr 28 01:14:04.871034 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 28 01:14:04.871040 kernel: io scheduler mq-deadline registered Apr 28 01:14:04.871045 kernel: io scheduler kyber registered Apr 28 01:14:04.871052 kernel: io scheduler bfq registered Apr 28 01:14:04.871057 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 28 01:14:04.871063 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Apr 28 01:14:04.871069 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 28 01:14:04.871074 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Apr 28 01:14:04.871080 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 28 01:14:04.871085 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 28 01:14:04.871091 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 28 01:14:04.871096 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 28 01:14:04.871103 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 28 01:14:04.871159 kernel: rtc_cmos 00:04: RTC can wake from S4 Apr 28 01:14:04.871167 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 28 01:14:04.871218 kernel: rtc_cmos 00:04: registered as rtc0 Apr 28 01:14:04.871270 kernel: rtc_cmos 00:04: setting system clock to 2026-04-28T01:14:04 UTC (1777338844) Apr 28 01:14:04.871423 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Apr 28 01:14:04.871431 kernel: intel_pstate: CPU model not supported Apr 28 
01:14:04.871437 kernel: efifb: probing for efifb Apr 28 01:14:04.871445 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Apr 28 01:14:04.871450 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Apr 28 01:14:04.871456 kernel: efifb: scrolling: redraw Apr 28 01:14:04.871461 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Apr 28 01:14:04.871467 kernel: Console: switching to colour frame buffer device 100x37 Apr 28 01:14:04.871472 kernel: fb0: EFI VGA frame buffer device Apr 28 01:14:04.871490 kernel: pstore: Using crash dump compression: deflate Apr 28 01:14:04.871497 kernel: pstore: Registered efi_pstore as persistent store backend Apr 28 01:14:04.871502 kernel: NET: Registered PF_INET6 protocol family Apr 28 01:14:04.871509 kernel: Segment Routing with IPv6 Apr 28 01:14:04.871515 kernel: In-situ OAM (IOAM) with IPv6 Apr 28 01:14:04.871520 kernel: NET: Registered PF_PACKET protocol family Apr 28 01:14:04.871526 kernel: Key type dns_resolver registered Apr 28 01:14:04.871531 kernel: IPI shorthand broadcast: enabled Apr 28 01:14:04.871537 kernel: sched_clock: Marking stable (749008664, 180853398)->(971638407, -41776345) Apr 28 01:14:04.871542 kernel: registered taskstats version 1 Apr 28 01:14:04.871548 kernel: Loading compiled-in X.509 certificates Apr 28 01:14:04.871553 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 40b5c5a01382737457e1eae3e889ae587960eb18' Apr 28 01:14:04.871560 kernel: Key type .fscrypt registered Apr 28 01:14:04.871566 kernel: Key type fscrypt-provisioning registered Apr 28 01:14:04.871571 kernel: ima: No TPM chip found, activating TPM-bypass! 
Apr 28 01:14:04.871577 kernel: ima: Allocated hash algorithm: sha1
Apr 28 01:14:04.871582 kernel: ima: No architecture policies found
Apr 28 01:14:04.871588 kernel: clk: Disabling unused clocks
Apr 28 01:14:04.871594 kernel: Freeing unused kernel image (initmem) memory: 42884K
Apr 28 01:14:04.871599 kernel: Write protecting the kernel read-only data: 36864k
Apr 28 01:14:04.871605 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 28 01:14:04.871610 kernel: Run /init as init process
Apr 28 01:14:04.871617 kernel: with arguments:
Apr 28 01:14:04.871623 kernel: /init
Apr 28 01:14:04.871628 kernel: with environment:
Apr 28 01:14:04.871634 kernel: HOME=/
Apr 28 01:14:04.871639 kernel: TERM=linux
Apr 28 01:14:04.871646 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 28 01:14:04.871654 systemd[1]: Detected virtualization kvm.
Apr 28 01:14:04.871663 systemd[1]: Detected architecture x86-64.
Apr 28 01:14:04.871669 systemd[1]: Running in initrd.
Apr 28 01:14:04.871675 systemd[1]: No hostname configured, using default hostname.
Apr 28 01:14:04.871681 systemd[1]: Hostname set to .
Apr 28 01:14:04.871687 systemd[1]: Initializing machine ID from VM UUID.
Apr 28 01:14:04.871694 systemd[1]: Queued start job for default target initrd.target.
Apr 28 01:14:04.871700 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 28 01:14:04.871706 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 28 01:14:04.871713 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 28 01:14:04.871719 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 28 01:14:04.871725 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 28 01:14:04.871731 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 28 01:14:04.871740 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 28 01:14:04.871746 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 28 01:14:04.871752 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 28 01:14:04.871758 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 28 01:14:04.871763 systemd[1]: Reached target paths.target - Path Units.
Apr 28 01:14:04.871769 systemd[1]: Reached target slices.target - Slice Units.
Apr 28 01:14:04.871776 systemd[1]: Reached target swap.target - Swaps.
Apr 28 01:14:04.871781 systemd[1]: Reached target timers.target - Timer Units.
Apr 28 01:14:04.871789 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 28 01:14:04.871795 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 28 01:14:04.871801 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 28 01:14:04.871807 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 28 01:14:04.871813 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 28 01:14:04.871818 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 28 01:14:04.871824 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 28 01:14:04.871830 systemd[1]: Reached target sockets.target - Socket Units.
Apr 28 01:14:04.871836 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 28 01:14:04.871844 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 28 01:14:04.871850 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 28 01:14:04.871856 systemd[1]: Starting systemd-fsck-usr.service...
Apr 28 01:14:04.871862 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 28 01:14:04.871868 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 28 01:14:04.871874 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 28 01:14:04.871880 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 28 01:14:04.871897 systemd-journald[194]: Collecting audit messages is disabled.
Apr 28 01:14:04.871913 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 28 01:14:04.871919 systemd[1]: Finished systemd-fsck-usr.service.
Apr 28 01:14:04.871928 systemd-journald[194]: Journal started
Apr 28 01:14:04.871943 systemd-journald[194]: Runtime Journal (/run/log/journal/7882038278d94a688ac4e3cd9157e8ab) is 6.0M, max 48.3M, 42.2M free.
Apr 28 01:14:04.875579 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 28 01:14:04.876795 systemd-modules-load[195]: Inserted module 'overlay'
Apr 28 01:14:04.876829 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 28 01:14:04.887529 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 28 01:14:04.889072 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 28 01:14:04.895276 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 28 01:14:04.896150 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 28 01:14:04.899556 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 28 01:14:04.907795 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 28 01:14:04.910125 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 28 01:14:04.915968 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 28 01:14:04.922019 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 28 01:14:04.921426 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 28 01:14:04.925367 kernel: Bridge firewalling registered
Apr 28 01:14:04.926038 systemd-modules-load[195]: Inserted module 'br_netfilter'
Apr 28 01:14:04.927080 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 28 01:14:04.936092 dracut-cmdline[226]: dracut-dracut-053
Apr 28 01:14:04.941173 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=dba81bba70fdc18951de51911456386ac86d38187268d44374f74ed6158168ec
Apr 28 01:14:04.938494 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 28 01:14:04.953079 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 28 01:14:04.955689 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 28 01:14:04.976710 systemd-resolved[255]: Positive Trust Anchors:
Apr 28 01:14:04.976721 systemd-resolved[255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 28 01:14:04.976745 systemd-resolved[255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 28 01:14:04.978548 systemd-resolved[255]: Defaulting to hostname 'linux'.
Apr 28 01:14:04.979136 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 28 01:14:04.980905 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 28 01:14:05.027393 kernel: SCSI subsystem initialized
Apr 28 01:14:05.035385 kernel: Loading iSCSI transport class v2.0-870.
Apr 28 01:14:05.044382 kernel: iscsi: registered transport (tcp)
Apr 28 01:14:05.062601 kernel: iscsi: registered transport (qla4xxx)
Apr 28 01:14:05.062627 kernel: QLogic iSCSI HBA Driver
Apr 28 01:14:05.092666 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 28 01:14:05.102517 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 28 01:14:05.124731 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 28 01:14:05.124787 kernel: device-mapper: uevent: version 1.0.3
Apr 28 01:14:05.124812 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 28 01:14:05.160544 kernel: raid6: avx512x4 gen() 44412 MB/s
Apr 28 01:14:05.177524 kernel: raid6: avx512x2 gen() 43369 MB/s
Apr 28 01:14:05.194545 kernel: raid6: avx512x1 gen() 43434 MB/s
Apr 28 01:14:05.211566 kernel: raid6: avx2x4 gen() 36641 MB/s
Apr 28 01:14:05.228411 kernel: raid6: avx2x2 gen() 36502 MB/s
Apr 28 01:14:05.246375 kernel: raid6: avx2x1 gen() 26718 MB/s
Apr 28 01:14:05.246405 kernel: raid6: using algorithm avx512x4 gen() 44412 MB/s
Apr 28 01:14:05.264367 kernel: raid6: .... xor() 9967 MB/s, rmw enabled
Apr 28 01:14:05.264397 kernel: raid6: using avx512x2 recovery algorithm
Apr 28 01:14:05.282384 kernel: xor: automatically using best checksumming function avx
Apr 28 01:14:05.403418 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 28 01:14:05.412210 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 28 01:14:05.428555 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 28 01:14:05.438236 systemd-udevd[416]: Using default interface naming scheme 'v255'.
Apr 28 01:14:05.440852 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 28 01:14:05.451850 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 28 01:14:05.461006 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation
Apr 28 01:14:05.482234 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 28 01:14:05.494912 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 28 01:14:05.522535 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 28 01:14:05.529502 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 28 01:14:05.537488 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 28 01:14:05.539761 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 28 01:14:05.543665 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 28 01:14:05.544260 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 28 01:14:05.557054 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 28 01:14:05.551480 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 28 01:14:05.559148 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 28 01:14:05.566027 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 28 01:14:05.572659 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 28 01:14:05.572706 kernel: GPT:9289727 != 19775487
Apr 28 01:14:05.572724 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 28 01:14:05.572748 kernel: GPT:9289727 != 19775487
Apr 28 01:14:05.572777 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 28 01:14:05.572793 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 28 01:14:05.576365 kernel: cryptd: max_cpu_qlen set to 1000
Apr 28 01:14:05.576422 kernel: libata version 3.00 loaded.
Apr 28 01:14:05.586986 kernel: ahci 0000:00:1f.2: version 3.0
Apr 28 01:14:05.587103 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 28 01:14:05.589939 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 28 01:14:05.598791 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 28 01:14:05.598896 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 28 01:14:05.598966 kernel: scsi host0: ahci
Apr 28 01:14:05.599042 kernel: scsi host1: ahci
Apr 28 01:14:05.590038 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 28 01:14:05.606678 kernel: scsi host2: ahci
Apr 28 01:14:05.606791 kernel: scsi host3: ahci
Apr 28 01:14:05.606868 kernel: scsi host4: ahci
Apr 28 01:14:05.606932 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 28 01:14:05.596836 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 28 01:14:05.624756 kernel: AES CTR mode by8 optimization enabled
Apr 28 01:14:05.624772 kernel: scsi host5: ahci
Apr 28 01:14:05.624878 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Apr 28 01:14:05.624886 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Apr 28 01:14:05.624896 kernel: BTRFS: device fsid c393bc7b-9362-4bef-afe6-6491ed4d6c93 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (462)
Apr 28 01:14:05.624904 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Apr 28 01:14:05.624911 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (466)
Apr 28 01:14:05.624918 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Apr 28 01:14:05.624925 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Apr 28 01:14:05.624931 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Apr 28 01:14:05.600195 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 28 01:14:05.600403 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 28 01:14:05.612737 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 28 01:14:05.631967 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 28 01:14:05.640298 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 28 01:14:05.647724 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 28 01:14:05.655852 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 28 01:14:05.656881 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 28 01:14:05.664742 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 28 01:14:05.672513 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 28 01:14:05.673235 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 28 01:14:05.681595 disk-uuid[557]: Primary Header is updated.
Apr 28 01:14:05.681595 disk-uuid[557]: Secondary Entries is updated.
Apr 28 01:14:05.681595 disk-uuid[557]: Secondary Header is updated.
Apr 28 01:14:05.690038 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 28 01:14:05.673273 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 28 01:14:05.678535 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 28 01:14:05.679696 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 28 01:14:05.699985 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 28 01:14:05.751550 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 28 01:14:05.765043 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 28 01:14:05.939384 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 28 01:14:05.939443 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 28 01:14:05.941110 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 28 01:14:05.941373 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 28 01:14:05.944371 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 28 01:14:05.944397 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 28 01:14:05.945385 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 28 01:14:05.946735 kernel: ata3.00: applying bridge limits
Apr 28 01:14:05.947738 kernel: ata3.00: configured for UDMA/100
Apr 28 01:14:05.948379 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 28 01:14:06.007655 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 28 01:14:06.007826 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 28 01:14:06.020423 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 28 01:14:06.704105 disk-uuid[558]: The operation has completed successfully.
Apr 28 01:14:06.705731 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 28 01:14:06.724172 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 28 01:14:06.724275 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 28 01:14:06.743937 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 28 01:14:06.747766 sh[597]: Success
Apr 28 01:14:06.757417 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 28 01:14:06.784907 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 28 01:14:06.797511 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 28 01:14:06.800807 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 28 01:14:06.810443 kernel: BTRFS info (device dm-0): first mount of filesystem c393bc7b-9362-4bef-afe6-6491ed4d6c93
Apr 28 01:14:06.810468 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 28 01:14:06.810482 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 28 01:14:06.812036 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 28 01:14:06.813207 kernel: BTRFS info (device dm-0): using free space tree
Apr 28 01:14:06.818311 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 28 01:14:06.821194 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 28 01:14:06.824403 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 28 01:14:06.825647 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 28 01:14:06.841058 kernel: BTRFS info (device vda6): first mount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f
Apr 28 01:14:06.841084 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 28 01:14:06.841095 kernel: BTRFS info (device vda6): using free space tree
Apr 28 01:14:06.845405 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 28 01:14:06.851301 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 28 01:14:06.854454 kernel: BTRFS info (device vda6): last unmount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f
Apr 28 01:14:06.859055 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 28 01:14:06.864502 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 28 01:14:06.906681 ignition[700]: Ignition 2.19.0
Apr 28 01:14:06.906690 ignition[700]: Stage: fetch-offline
Apr 28 01:14:06.906716 ignition[700]: no configs at "/usr/lib/ignition/base.d"
Apr 28 01:14:06.906721 ignition[700]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 28 01:14:06.906800 ignition[700]: parsed url from cmdline: ""
Apr 28 01:14:06.906802 ignition[700]: no config URL provided
Apr 28 01:14:06.906806 ignition[700]: reading system config file "/usr/lib/ignition/user.ign"
Apr 28 01:14:06.906812 ignition[700]: no config at "/usr/lib/ignition/user.ign"
Apr 28 01:14:06.906829 ignition[700]: op(1): [started] loading QEMU firmware config module
Apr 28 01:14:06.906832 ignition[700]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 28 01:14:06.913765 ignition[700]: op(1): [finished] loading QEMU firmware config module
Apr 28 01:14:06.925233 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 28 01:14:06.936498 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 28 01:14:06.953033 systemd-networkd[785]: lo: Link UP
Apr 28 01:14:06.953057 systemd-networkd[785]: lo: Gained carrier
Apr 28 01:14:06.953884 systemd-networkd[785]: Enumeration completed
Apr 28 01:14:06.954289 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 28 01:14:06.954384 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 28 01:14:06.954386 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 28 01:14:06.955162 systemd-networkd[785]: eth0: Link UP
Apr 28 01:14:06.955164 systemd-networkd[785]: eth0: Gained carrier
Apr 28 01:14:06.955171 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 28 01:14:06.956628 systemd[1]: Reached target network.target - Network.
Apr 28 01:14:06.988410 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.153/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 28 01:14:07.039396 ignition[700]: parsing config with SHA512: 3465695f64117088f3be10329f4b89ad39d7866be64dba9ba9337156f77abf09fd8451f7285f522a1bb194b186b00a62444636e9cdf87d01f1cd0fba243ff7e0
Apr 28 01:14:07.042887 unknown[700]: fetched base config from "system"
Apr 28 01:14:07.042896 unknown[700]: fetched user config from "qemu"
Apr 28 01:14:07.043272 ignition[700]: fetch-offline: fetch-offline passed
Apr 28 01:14:07.043379 ignition[700]: Ignition finished successfully
Apr 28 01:14:07.049092 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 28 01:14:07.049911 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 28 01:14:07.067494 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 28 01:14:07.082534 ignition[789]: Ignition 2.19.0
Apr 28 01:14:07.082539 ignition[789]: Stage: kargs
Apr 28 01:14:07.082656 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Apr 28 01:14:07.082663 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 28 01:14:07.083215 ignition[789]: kargs: kargs passed
Apr 28 01:14:07.083239 ignition[789]: Ignition finished successfully
Apr 28 01:14:07.089710 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 28 01:14:07.098577 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 28 01:14:07.111734 ignition[797]: Ignition 2.19.0
Apr 28 01:14:07.111749 ignition[797]: Stage: disks
Apr 28 01:14:07.111880 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Apr 28 01:14:07.111886 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 28 01:14:07.112558 ignition[797]: disks: disks passed
Apr 28 01:14:07.112585 ignition[797]: Ignition finished successfully
Apr 28 01:14:07.118943 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 28 01:14:07.120926 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 28 01:14:07.123773 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 28 01:14:07.124393 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 28 01:14:07.128586 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 28 01:14:07.131867 systemd[1]: Reached target basic.target - Basic System.
Apr 28 01:14:07.142476 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 28 01:14:07.153393 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 28 01:14:07.157178 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 28 01:14:07.158539 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 28 01:14:07.235376 kernel: EXT4-fs (vda9): mounted filesystem f590d1f8-5181-4682-9e04-fe65400dca5c r/w with ordered data mode. Quota mode: none.
Apr 28 01:14:07.235683 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 28 01:14:07.237075 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 28 01:14:07.250499 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 28 01:14:07.254068 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 28 01:14:07.258027 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (814)
Apr 28 01:14:07.254908 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 28 01:14:07.254934 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 28 01:14:07.267525 kernel: BTRFS info (device vda6): first mount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f
Apr 28 01:14:07.267539 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 28 01:14:07.267548 kernel: BTRFS info (device vda6): using free space tree
Apr 28 01:14:07.254950 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 28 01:14:07.273538 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 28 01:14:07.276691 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 28 01:14:07.276452 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 28 01:14:07.281128 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 28 01:14:07.307194 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory
Apr 28 01:14:07.310631 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory
Apr 28 01:14:07.314922 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory
Apr 28 01:14:07.318206 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 28 01:14:07.381927 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 28 01:14:07.395451 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 28 01:14:07.398618 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 28 01:14:07.404402 kernel: BTRFS info (device vda6): last unmount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f
Apr 28 01:14:07.418495 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 28 01:14:07.423502 ignition[928]: INFO : Ignition 2.19.0
Apr 28 01:14:07.423502 ignition[928]: INFO : Stage: mount
Apr 28 01:14:07.427575 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 28 01:14:07.427575 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 28 01:14:07.427575 ignition[928]: INFO : mount: mount passed
Apr 28 01:14:07.427575 ignition[928]: INFO : Ignition finished successfully
Apr 28 01:14:07.425638 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 28 01:14:07.441492 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 28 01:14:07.808801 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 28 01:14:07.822550 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 28 01:14:07.831377 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (941)
Apr 28 01:14:07.834469 kernel: BTRFS info (device vda6): first mount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f
Apr 28 01:14:07.834488 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 28 01:14:07.834496 kernel: BTRFS info (device vda6): using free space tree
Apr 28 01:14:07.838372 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 28 01:14:07.839486 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 28 01:14:07.858386 ignition[958]: INFO : Ignition 2.19.0
Apr 28 01:14:07.858386 ignition[958]: INFO : Stage: files
Apr 28 01:14:07.861015 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 28 01:14:07.861015 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 28 01:14:07.861015 ignition[958]: DEBUG : files: compiled without relabeling support, skipping
Apr 28 01:14:07.861015 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 28 01:14:07.861015 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 28 01:14:07.870703 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 28 01:14:07.870703 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 28 01:14:07.870703 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 28 01:14:07.870703 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 28 01:14:07.870703 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 28 01:14:07.861884 unknown[958]: wrote ssh authorized keys file for user: core
Apr 28 01:14:07.924319 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 28 01:14:08.024160 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 28 01:14:08.027305 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 28 01:14:08.027305 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 28 01:14:08.027305 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 28 01:14:08.027305 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 28 01:14:08.027305 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 28 01:14:08.027305 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 28 01:14:08.027305 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 28 01:14:08.027305 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 28 01:14:08.027305 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 28 01:14:08.027305 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 28 01:14:08.027305 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 28 01:14:08.027305 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 28 01:14:08.027305 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 28 01:14:08.027305 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Apr 28 01:14:08.285635 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 28 01:14:08.392543 systemd-networkd[785]: eth0: Gained IPv6LL
Apr 28 01:14:08.745685 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 28 01:14:08.745685 ignition[958]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 28 01:14:08.751214 ignition[958]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 28 01:14:08.751214 ignition[958]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 28 01:14:08.751214 ignition[958]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 28 01:14:08.751214 ignition[958]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 28 01:14:08.751214 ignition[958]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 28 01:14:08.751214 ignition[958]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 28 01:14:08.751214 ignition[958]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 28 01:14:08.751214 ignition[958]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Apr 28 01:14:08.773516 ignition[958]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 28 01:14:08.773516 ignition[958]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 28 01:14:08.773516 ignition[958]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 28 01:14:08.773516 ignition[958]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Apr 28 01:14:08.773516 ignition[958]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Apr 28 01:14:08.773516 ignition[958]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 28 01:14:08.773516 ignition[958]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 28 01:14:08.773516 ignition[958]: INFO : files: files passed
Apr 28 01:14:08.773516 ignition[958]: INFO : Ignition finished successfully
Apr 28 01:14:08.767883 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 28 01:14:08.789504 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 28 01:14:08.792633 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 28 01:14:08.795154 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 28 01:14:08.808610 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 28 01:14:08.795239 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 28 01:14:08.819659 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 28 01:14:08.819659 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 28 01:14:08.802470 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 28 01:14:08.827656 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 28 01:14:08.805713 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 28 01:14:08.817479 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 28 01:14:08.846229 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 28 01:14:08.846329 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 28 01:14:08.849845 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 28 01:14:08.853194 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 28 01:14:08.856156 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 28 01:14:08.859962 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 28 01:14:08.876036 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 28 01:14:08.878833 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 28 01:14:08.889190 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 28 01:14:08.889949 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 28 01:14:08.893924 systemd[1]: Stopped target timers.target - Timer Units.
Apr 28 01:14:08.897042 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 28 01:14:08.897126 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 28 01:14:08.902700 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 28 01:14:08.905808 systemd[1]: Stopped target basic.target - Basic System.
Apr 28 01:14:08.908626 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 28 01:14:08.911444 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 28 01:14:08.914649 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 28 01:14:08.917892 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 28 01:14:08.920945 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 28 01:14:08.924115 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 28 01:14:08.924995 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 28 01:14:08.929434 systemd[1]: Stopped target swap.target - Swaps.
Apr 28 01:14:08.932198 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 28 01:14:08.932312 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 28 01:14:08.936882 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 28 01:14:08.939921 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 28 01:14:08.943121 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 28 01:14:08.943204 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 28 01:14:08.943938 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 28 01:14:08.944016 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 28 01:14:08.951578 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 28 01:14:08.951664 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 28 01:14:08.954699 systemd[1]: Stopped target paths.target - Path Units.
Apr 28 01:14:08.957387 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 28 01:14:08.961410 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 28 01:14:08.962439 systemd[1]: Stopped target slices.target - Slice Units.
Apr 28 01:14:08.965672 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 28 01:14:08.969131 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 28 01:14:08.969215 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 28 01:14:08.971744 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 28 01:14:08.971815 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 28 01:14:08.974421 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 28 01:14:08.974505 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 28 01:14:08.977076 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 28 01:14:08.977167 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 28 01:14:08.996545 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 28 01:14:08.998251 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 28 01:14:08.998375 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 28 01:14:09.005693 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 28 01:14:09.007091 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 28 01:14:09.013716 ignition[1012]: INFO : Ignition 2.19.0
Apr 28 01:14:09.013716 ignition[1012]: INFO : Stage: umount
Apr 28 01:14:09.013716 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 28 01:14:09.013716 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 28 01:14:09.013716 ignition[1012]: INFO : umount: umount passed
Apr 28 01:14:09.013716 ignition[1012]: INFO : Ignition finished successfully
Apr 28 01:14:09.007174 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 28 01:14:09.010320 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 28 01:14:09.010466 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 28 01:14:09.016002 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 28 01:14:09.016080 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 28 01:14:09.019847 systemd[1]: Stopped target network.target - Network.
Apr 28 01:14:09.022460 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 28 01:14:09.022537 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 28 01:14:09.025583 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 28 01:14:09.025612 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 28 01:14:09.028258 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 28 01:14:09.028288 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 28 01:14:09.029064 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 28 01:14:09.029086 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 28 01:14:09.033880 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 28 01:14:09.036781 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 28 01:14:09.040465 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 28 01:14:09.040929 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 28 01:14:09.041001 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 28 01:14:09.044916 systemd-networkd[785]: eth0: DHCPv6 lease lost
Apr 28 01:14:09.049690 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 28 01:14:09.049783 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 28 01:14:09.051734 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 28 01:14:09.051768 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 28 01:14:09.054277 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 28 01:14:09.054414 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 28 01:14:09.058967 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 28 01:14:09.058999 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 28 01:14:09.072563 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 28 01:14:09.075450 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 28 01:14:09.078617 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 28 01:14:09.082524 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 28 01:14:09.082565 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 28 01:14:09.085408 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 28 01:14:09.085437 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 28 01:14:09.088595 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 28 01:14:09.091638 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 28 01:14:09.091718 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 28 01:14:09.103011 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 28 01:14:09.103048 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 28 01:14:09.105889 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 28 01:14:09.105964 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 28 01:14:09.120858 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 28 01:14:09.121009 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 28 01:14:09.124323 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 28 01:14:09.124405 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 28 01:14:09.127816 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 28 01:14:09.127841 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 28 01:14:09.130764 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 28 01:14:09.130795 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 28 01:14:09.135214 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 28 01:14:09.135247 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 28 01:14:09.139580 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 28 01:14:09.139619 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 28 01:14:09.157755 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 28 01:14:09.161576 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 28 01:14:09.161635 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 28 01:14:09.165136 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 28 01:14:09.165163 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 28 01:14:09.166117 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 28 01:14:09.166169 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 28 01:14:09.173494 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 28 01:14:09.177063 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 28 01:14:09.186656 systemd[1]: Switching root.
Apr 28 01:14:09.208553 systemd-journald[194]: Journal stopped
Apr 28 01:14:09.873027 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Apr 28 01:14:09.873077 kernel: SELinux: policy capability network_peer_controls=1
Apr 28 01:14:09.873088 kernel: SELinux: policy capability open_perms=1
Apr 28 01:14:09.873096 kernel: SELinux: policy capability extended_socket_class=1
Apr 28 01:14:09.873106 kernel: SELinux: policy capability always_check_network=0
Apr 28 01:14:09.873116 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 28 01:14:09.873123 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 28 01:14:09.873130 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 28 01:14:09.873140 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 28 01:14:09.873147 kernel: audit: type=1403 audit(1777338849.312:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 28 01:14:09.873159 systemd[1]: Successfully loaded SELinux policy in 33.918ms.
Apr 28 01:14:09.873175 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.459ms.
Apr 28 01:14:09.873184 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 28 01:14:09.873193 systemd[1]: Detected virtualization kvm.
Apr 28 01:14:09.873201 systemd[1]: Detected architecture x86-64.
Apr 28 01:14:09.873209 systemd[1]: Detected first boot.
Apr 28 01:14:09.873217 systemd[1]: Initializing machine ID from VM UUID.
Apr 28 01:14:09.873225 zram_generator::config[1055]: No configuration found.
Apr 28 01:14:09.873233 systemd[1]: Populated /etc with preset unit settings.
Apr 28 01:14:09.873241 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 28 01:14:09.873250 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 28 01:14:09.873258 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 28 01:14:09.873268 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 28 01:14:09.873275 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 28 01:14:09.873283 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 28 01:14:09.873291 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 28 01:14:09.873299 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 28 01:14:09.873307 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 28 01:14:09.873315 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 28 01:14:09.873322 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 28 01:14:09.873332 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 28 01:14:09.873387 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 28 01:14:09.873396 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 28 01:14:09.873404 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 28 01:14:09.873411 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 28 01:14:09.873419 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 28 01:14:09.873427 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 28 01:14:09.873434 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 28 01:14:09.873442 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 28 01:14:09.873451 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 28 01:14:09.873459 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 28 01:14:09.873467 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 28 01:14:09.873475 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 28 01:14:09.873483 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 28 01:14:09.873491 systemd[1]: Reached target slices.target - Slice Units.
Apr 28 01:14:09.873498 systemd[1]: Reached target swap.target - Swaps.
Apr 28 01:14:09.873506 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 28 01:14:09.873515 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 28 01:14:09.873522 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 28 01:14:09.873530 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 28 01:14:09.873538 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 28 01:14:09.873545 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 28 01:14:09.873553 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 28 01:14:09.873561 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 28 01:14:09.873569 systemd[1]: Mounting media.mount - External Media Directory...
Apr 28 01:14:09.873577 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 28 01:14:09.873586 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 28 01:14:09.873593 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 28 01:14:09.873600 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 28 01:14:09.873608 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 28 01:14:09.873616 systemd[1]: Reached target machines.target - Containers.
Apr 28 01:14:09.873623 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 28 01:14:09.873631 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 28 01:14:09.873640 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 28 01:14:09.873649 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 28 01:14:09.873657 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 28 01:14:09.873664 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 28 01:14:09.873672 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 28 01:14:09.873679 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 28 01:14:09.873687 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 28 01:14:09.873694 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 28 01:14:09.873702 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 28 01:14:09.873709 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 28 01:14:09.873721 kernel: fuse: init (API version 7.39)
Apr 28 01:14:09.873728 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 28 01:14:09.873736 kernel: loop: module loaded
Apr 28 01:14:09.873743 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 28 01:14:09.873751 kernel: ACPI: bus type drm_connector registered
Apr 28 01:14:09.873758 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 28 01:14:09.873765 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 28 01:14:09.873773 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 28 01:14:09.873791 systemd-journald[1139]: Collecting audit messages is disabled.
Apr 28 01:14:09.873810 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 28 01:14:09.873818 systemd-journald[1139]: Journal started
Apr 28 01:14:09.873836 systemd-journald[1139]: Runtime Journal (/run/log/journal/7882038278d94a688ac4e3cd9157e8ab) is 6.0M, max 48.3M, 42.2M free.
Apr 28 01:14:09.604284 systemd[1]: Queued start job for default target multi-user.target.
Apr 28 01:14:09.623835 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 28 01:14:09.624167 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 28 01:14:09.877549 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 28 01:14:09.881121 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 28 01:14:09.881143 systemd[1]: Stopped verity-setup.service.
Apr 28 01:14:09.885381 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 28 01:14:09.888613 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 28 01:14:09.889462 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 28 01:14:09.891133 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 28 01:14:09.892876 systemd[1]: Mounted media.mount - External Media Directory.
Apr 28 01:14:09.894454 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 28 01:14:09.896169 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 28 01:14:09.897923 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 28 01:14:09.899587 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 28 01:14:09.901586 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 28 01:14:09.903638 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 28 01:14:09.903764 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 28 01:14:09.905859 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 28 01:14:09.905984 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 28 01:14:09.907854 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 28 01:14:09.907971 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 28 01:14:09.909801 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 28 01:14:09.909917 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 28 01:14:09.911978 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 28 01:14:09.912092 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 28 01:14:09.913946 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 28 01:14:09.914064 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 28 01:14:09.915893 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 28 01:14:09.917785 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 28 01:14:09.919835 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 28 01:14:09.925919 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 28 01:14:09.930879 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 28 01:14:09.937469 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 28 01:14:09.940148 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 28 01:14:09.941863 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 28 01:14:09.941900 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 28 01:14:09.944085 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 28 01:14:09.946712 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 28 01:14:09.949129 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 28 01:14:09.950744 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 28 01:14:09.952448 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 28 01:14:09.954841 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 28 01:14:09.956696 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 28 01:14:09.958149 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 28 01:14:09.960493 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 28 01:14:09.962537 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 28 01:14:09.963390 systemd-journald[1139]: Time spent on flushing to /var/log/journal/7882038278d94a688ac4e3cd9157e8ab is 10.498ms for 993 entries.
Apr 28 01:14:09.963390 systemd-journald[1139]: System Journal (/var/log/journal/7882038278d94a688ac4e3cd9157e8ab) is 8.0M, max 195.6M, 187.6M free.
Apr 28 01:14:09.988223 systemd-journald[1139]: Received client request to flush runtime journal.
Apr 28 01:14:09.988248 kernel: loop0: detected capacity change from 0 to 140768
Apr 28 01:14:09.966407 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 28 01:14:09.972623 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 28 01:14:09.976604 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 28 01:14:09.981510 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 28 01:14:09.986144 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 28 01:14:09.988602 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 28 01:14:09.991811 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 28 01:14:09.994312 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 28 01:14:09.998303 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 28 01:14:10.003992 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 28 01:14:10.011747 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 28 01:14:10.014432 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 28 01:14:10.020654 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 28 01:14:10.023214 udevadm[1173]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 28 01:14:10.027417 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 28 01:14:10.036063 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 28 01:14:10.036853 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 28 01:14:10.043694 systemd-tmpfiles[1187]: ACLs are not supported, ignoring.
Apr 28 01:14:10.043893 systemd-tmpfiles[1187]: ACLs are not supported, ignoring.
Apr 28 01:14:10.046398 kernel: loop1: detected capacity change from 0 to 219192
Apr 28 01:14:10.047266 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 28 01:14:10.077460 kernel: loop2: detected capacity change from 0 to 142488
Apr 28 01:14:10.103387 kernel: loop3: detected capacity change from 0 to 140768
Apr 28 01:14:10.113491 kernel: loop4: detected capacity change from 0 to 219192
Apr 28 01:14:10.122389 kernel: loop5: detected capacity change from 0 to 142488
Apr 28 01:14:10.130598 (sd-merge)[1194]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 28 01:14:10.130917 (sd-merge)[1194]: Merged extensions into '/usr'.
Apr 28 01:14:10.134063 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 28 01:14:10.134084 systemd[1]: Reloading...
Apr 28 01:14:10.183401 zram_generator::config[1218]: No configuration found.
Apr 28 01:14:10.211224 ldconfig[1165]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 28 01:14:10.254984 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 28 01:14:10.283869 systemd[1]: Reloading finished in 149 ms.
Apr 28 01:14:10.316552 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 28 01:14:10.318661 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 28 01:14:10.320804 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 28 01:14:10.342512 systemd[1]: Starting ensure-sysext.service...
Apr 28 01:14:10.345118 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 28 01:14:10.348530 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 28 01:14:10.353430 systemd[1]: Reloading requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)...
Apr 28 01:14:10.353452 systemd[1]: Reloading...
Apr 28 01:14:10.358653 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 28 01:14:10.358844 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 28 01:14:10.359309 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 28 01:14:10.359585 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Apr 28 01:14:10.359638 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Apr 28 01:14:10.361177 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot.
Apr 28 01:14:10.361197 systemd-tmpfiles[1259]: Skipping /boot
Apr 28 01:14:10.366448 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot.
Apr 28 01:14:10.366507 systemd-tmpfiles[1259]: Skipping /boot
Apr 28 01:14:10.367588 systemd-udevd[1260]: Using default interface naming scheme 'v255'.
Apr 28 01:14:10.378396 zram_generator::config[1286]: No configuration found.
Apr 28 01:14:10.425570 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1296)
Apr 28 01:14:10.450388 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 28 01:14:10.457411 kernel: ACPI: button: Power Button [PWRF]
Apr 28 01:14:10.470171 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 28 01:14:10.476393 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Apr 28 01:14:10.476837 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 28 01:14:10.485136 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 28 01:14:10.486171 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 28 01:14:10.488573 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Apr 28 01:14:10.523399 kernel: mousedev: PS/2 mouse device common for all mice
Apr 28 01:14:10.531327 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 28 01:14:10.533537 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 28 01:14:10.533727 systemd[1]: Reloading finished in 180 ms.
Apr 28 01:14:10.546582 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 28 01:14:10.575158 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 28 01:14:10.612330 systemd[1]: Finished ensure-sysext.service.
Apr 28 01:14:10.629398 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 28 01:14:10.639609 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 28 01:14:10.642923 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 28 01:14:10.644825 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 28 01:14:10.645553 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 28 01:14:10.646830 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 28 01:14:10.651902 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 28 01:14:10.654576 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 28 01:14:10.656472 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 28 01:14:10.657116 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 28 01:14:10.659740 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 28 01:14:10.663384 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 28 01:14:10.666660 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 28 01:14:10.669715 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 28 01:14:10.672716 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 28 01:14:10.675759 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 28 01:14:10.677794 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 28 01:14:10.678436 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 28 01:14:10.682665 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 28 01:14:10.682800 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 28 01:14:10.685124 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 28 01:14:10.685216 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 28 01:14:10.688204 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 28 01:14:10.688767 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 28 01:14:10.691244 augenrules[1386]: No rules
Apr 28 01:14:10.691563 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 28 01:14:10.691719 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 28 01:14:10.694186 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 28 01:14:10.696596 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 28 01:14:10.699138 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 28 01:14:10.701661 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 28 01:14:10.710374 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 28 01:14:10.722581 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 28 01:14:10.724848 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 28 01:14:10.724905 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 28 01:14:10.725851 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 28 01:14:10.728473 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 28 01:14:10.729989 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 28 01:14:10.730813 lvm[1404]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 28 01:14:10.730257 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 28 01:14:10.737177 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 28 01:14:10.755625 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 28 01:14:10.757858 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 28 01:14:10.759638 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 28 01:14:10.775847 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 28 01:14:10.782414 lvm[1416]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 28 01:14:10.804735 systemd-networkd[1373]: lo: Link UP
Apr 28 01:14:10.804755 systemd-networkd[1373]: lo: Gained carrier
Apr 28 01:14:10.805572 systemd-networkd[1373]: Enumeration completed
Apr 28 01:14:10.806033 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 28 01:14:10.806050 systemd-networkd[1373]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 28 01:14:10.806569 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 28 01:14:10.807450 systemd-networkd[1373]: eth0: Link UP
Apr 28 01:14:10.807735 systemd-networkd[1373]: eth0: Gained carrier
Apr 28 01:14:10.807754 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 28 01:14:10.808603 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 28 01:14:10.820219 systemd-resolved[1375]: Positive Trust Anchors:
Apr 28 01:14:10.820253 systemd-resolved[1375]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 28 01:14:10.820279 systemd-resolved[1375]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 28 01:14:10.822499 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 28 01:14:10.823115 systemd-resolved[1375]: Defaulting to hostname 'linux'.
Apr 28 01:14:10.824453 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 28 01:14:10.826390 systemd-networkd[1373]: eth0: DHCPv4 address 10.0.0.153/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 28 01:14:10.826428 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 28 01:14:10.827252 systemd-timesyncd[1377]: Network configuration changed, trying to establish connection.
Apr 28 01:14:11.770452 systemd-resolved[1375]: Clock change detected. Flushing caches.
Apr 28 01:14:11.770474 systemd-timesyncd[1377]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 28 01:14:11.770499 systemd-timesyncd[1377]: Initial clock synchronization to Tue 2026-04-28 01:14:11.770405 UTC.
Apr 28 01:14:11.771074 systemd[1]: Reached target network.target - Network.
Apr 28 01:14:11.772538 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 28 01:14:11.774388 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 28 01:14:11.776137 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 28 01:14:11.778090 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 28 01:14:11.780089 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 28 01:14:11.782139 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 28 01:14:11.782178 systemd[1]: Reached target paths.target - Path Units.
Apr 28 01:14:11.783649 systemd[1]: Reached target time-set.target - System Time Set.
Apr 28 01:14:11.785378 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 28 01:14:11.787121 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 28 01:14:11.789034 systemd[1]: Reached target timers.target - Timer Units.
Apr 28 01:14:11.791152 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 28 01:14:11.793700 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 28 01:14:11.802465 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 28 01:14:11.804508 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 28 01:14:11.806218 systemd[1]: Reached target sockets.target - Socket Units.
Apr 28 01:14:11.807689 systemd[1]: Reached target basic.target - Basic System.
Apr 28 01:14:11.809138 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 28 01:14:11.809168 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 28 01:14:11.809889 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 28 01:14:11.812191 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 28 01:14:11.813747 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 28 01:14:11.816050 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 28 01:14:11.817573 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 28 01:14:11.818344 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 28 01:14:11.821839 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 28 01:14:11.824607 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 28 01:14:11.827230 jq[1426]: false
Apr 28 01:14:11.827745 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 28 01:14:11.831258 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 28 01:14:11.835843 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 28 01:14:11.836175 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 28 01:14:11.836662 systemd[1]: Starting update-engine.service - Update Engine...
Apr 28 01:14:11.840310 extend-filesystems[1427]: Found loop3
Apr 28 01:14:11.840310 extend-filesystems[1427]: Found loop4
Apr 28 01:14:11.840310 extend-filesystems[1427]: Found loop5
Apr 28 01:14:11.840310 extend-filesystems[1427]: Found sr0
Apr 28 01:14:11.840310 extend-filesystems[1427]: Found vda
Apr 28 01:14:11.840310 extend-filesystems[1427]: Found vda1
Apr 28 01:14:11.840310 extend-filesystems[1427]: Found vda2
Apr 28 01:14:11.840310 extend-filesystems[1427]: Found vda3
Apr 28 01:14:11.840310 extend-filesystems[1427]: Found usr
Apr 28 01:14:11.840310 extend-filesystems[1427]: Found vda4
Apr 28 01:14:11.840310 extend-filesystems[1427]: Found vda6
Apr 28 01:14:11.839424 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 28 01:14:11.847585 dbus-daemon[1425]: [system] SELinux support is enabled
Apr 28 01:14:11.866061 extend-filesystems[1427]: Found vda7
Apr 28 01:14:11.866061 extend-filesystems[1427]: Found vda9
Apr 28 01:14:11.866061 extend-filesystems[1427]: Checking size of /dev/vda9
Apr 28 01:14:11.866061 extend-filesystems[1427]: Resized partition /dev/vda9
Apr 28 01:14:11.883077 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1296)
Apr 28 01:14:11.883108 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 28 01:14:11.846273 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 28 01:14:11.883175 extend-filesystems[1451]: resize2fs 1.47.1 (20-May-2024)
Apr 28 01:14:11.886294 update_engine[1441]: I20260428 01:14:11.859261 1441 main.cc:92] Flatcar Update Engine starting
Apr 28 01:14:11.886294 update_engine[1441]: I20260428 01:14:11.861286 1441 update_check_scheduler.cc:74] Next update check in 9m14s
Apr 28 01:14:11.846384 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 28 01:14:11.886677 jq[1442]: true
Apr 28 01:14:11.846568 systemd[1]: motdgen.service: Deactivated successfully.
Apr 28 01:14:11.846660 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 28 01:14:11.886864 tar[1448]: linux-amd64/LICENSE
Apr 28 01:14:11.886864 tar[1448]: linux-amd64/helm
Apr 28 01:14:11.849008 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 28 01:14:11.862317 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 28 01:14:11.862434 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 28 01:14:11.872697 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 28 01:14:11.872713 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 28 01:14:11.878012 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 28 01:14:11.878027 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 28 01:14:11.884232 systemd[1]: Started update-engine.service - Update Engine.
Apr 28 01:14:11.893098 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 28 01:14:11.894554 (ntainerd)[1456]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 28 01:14:11.898488 jq[1450]: true
Apr 28 01:14:11.899759 systemd-logind[1435]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 28 01:14:11.900087 systemd-logind[1435]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 28 01:14:11.902484 systemd-logind[1435]: New seat seat0.
Apr 28 01:14:11.903672 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 28 01:14:11.917654 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 28 01:14:11.923191 locksmithd[1459]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 28 01:14:11.932428 extend-filesystems[1451]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 28 01:14:11.932428 extend-filesystems[1451]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 28 01:14:11.932428 extend-filesystems[1451]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 28 01:14:11.941287 extend-filesystems[1427]: Resized filesystem in /dev/vda9
Apr 28 01:14:11.934504 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 28 01:14:11.934636 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 28 01:14:11.945177 bash[1484]: Updated "/home/core/.ssh/authorized_keys"
Apr 28 01:14:11.948034 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 28 01:14:11.950480 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 28 01:14:12.042186 containerd[1456]: time="2026-04-28T01:14:12.042066280Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 28 01:14:12.058760 sshd_keygen[1444]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 28 01:14:12.059299 containerd[1456]: time="2026-04-28T01:14:12.059262156Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 28 01:14:12.060812 containerd[1456]: time="2026-04-28T01:14:12.060788267Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 28 01:14:12.061551 containerd[1456]: time="2026-04-28T01:14:12.060862524Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 28 01:14:12.061551 containerd[1456]: time="2026-04-28T01:14:12.060877828Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 28 01:14:12.061551 containerd[1456]: time="2026-04-28T01:14:12.061027847Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 28 01:14:12.061551 containerd[1456]: time="2026-04-28T01:14:12.061041055Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 28 01:14:12.061551 containerd[1456]: time="2026-04-28T01:14:12.061076731Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 28 01:14:12.061551 containerd[1456]: time="2026-04-28T01:14:12.061084896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 28 01:14:12.061551 containerd[1456]: time="2026-04-28T01:14:12.061202108Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 28 01:14:12.061551 containerd[1456]: time="2026-04-28T01:14:12.061212781Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 28 01:14:12.061551 containerd[1456]: time="2026-04-28T01:14:12.061221092Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 28 01:14:12.061551 containerd[1456]: time="2026-04-28T01:14:12.061228289Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 28 01:14:12.061551 containerd[1456]: time="2026-04-28T01:14:12.061272476Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 28 01:14:12.061551 containerd[1456]: time="2026-04-28T01:14:12.061392186Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 28 01:14:12.061727 containerd[1456]: time="2026-04-28T01:14:12.061451988Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 28 01:14:12.061727 containerd[1456]: time="2026-04-28T01:14:12.061460121Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 28 01:14:12.061727 containerd[1456]: time="2026-04-28T01:14:12.061503217Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 28 01:14:12.061727 containerd[1456]: time="2026-04-28T01:14:12.061529109Z" level=info msg="metadata content store policy set" policy=shared
Apr 28 01:14:12.066661 containerd[1456]: time="2026-04-28T01:14:12.066645196Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 28 01:14:12.066783 containerd[1456]: time="2026-04-28T01:14:12.066773196Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 28 01:14:12.066819 containerd[1456]: time="2026-04-28T01:14:12.066813295Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 28 01:14:12.066891 containerd[1456]: time="2026-04-28T01:14:12.066883914Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 28 01:14:12.066920 containerd[1456]: time="2026-04-28T01:14:12.066914419Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 28 01:14:12.067076 containerd[1456]: time="2026-04-28T01:14:12.067060605Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 28 01:14:12.067285 containerd[1456]: time="2026-04-28T01:14:12.067274452Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 28 01:14:12.067384 containerd[1456]: time="2026-04-28T01:14:12.067375803Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 28 01:14:12.067419 containerd[1456]: time="2026-04-28T01:14:12.067413255Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 28 01:14:12.067455 containerd[1456]: time="2026-04-28T01:14:12.067448305Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 28 01:14:12.067484 containerd[1456]: time="2026-04-28T01:14:12.067477921Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 28 01:14:12.067510 containerd[1456]: time="2026-04-28T01:14:12.067504291Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 28 01:14:12.067536 containerd[1456]: time="2026-04-28T01:14:12.067530772Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 28 01:14:12.067563 containerd[1456]: time="2026-04-28T01:14:12.067557948Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 28 01:14:12.067595 containerd[1456]: time="2026-04-28T01:14:12.067588861Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 28 01:14:12.067628 containerd[1456]: time="2026-04-28T01:14:12.067621655Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 28 01:14:12.067655 containerd[1456]: time="2026-04-28T01:14:12.067649252Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 28 01:14:12.067680 containerd[1456]: time="2026-04-28T01:14:12.067675323Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 28 01:14:12.067714 containerd[1456]: time="2026-04-28T01:14:12.067707245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 28 01:14:12.067926 containerd[1456]: time="2026-04-28T01:14:12.067913854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 28 01:14:12.068032 containerd[1456]: time="2026-04-28T01:14:12.068023117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 28 01:14:12.068069 containerd[1456]: time="2026-04-28T01:14:12.068063489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 28 01:14:12.068100 containerd[1456]: time="2026-04-28T01:14:12.068094660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 28 01:14:12.068128 containerd[1456]: time="2026-04-28T01:14:12.068123236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 28 01:14:12.068157 containerd[1456]: time="2026-04-28T01:14:12.068150881Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 28 01:14:12.068325 containerd[1456]: time="2026-04-28T01:14:12.068313236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 28 01:14:12.068816 containerd[1456]: time="2026-04-28T01:14:12.068801866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 28 01:14:12.068870 containerd[1456]: time="2026-04-28T01:14:12.068862512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 28 01:14:12.068902 containerd[1456]: time="2026-04-28T01:14:12.068896051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 28 01:14:12.068932 containerd[1456]: time="2026-04-28T01:14:12.068926682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 28 01:14:12.069028 containerd[1456]: time="2026-04-28T01:14:12.069020575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 28 01:14:12.069066 containerd[1456]: time="2026-04-28T01:14:12.069059108Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 28 01:14:12.069140 containerd[1456]: time="2026-04-28T01:14:12.069114697Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 28 01:14:12.069180 containerd[1456]: time="2026-04-28T01:14:12.069173353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 28 01:14:12.069211 containerd[1456]: time="2026-04-28T01:14:12.069203190Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 28 01:14:12.069269 containerd[1456]: time="2026-04-28T01:14:12.069261564Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 28 01:14:12.069386 containerd[1456]: time="2026-04-28T01:14:12.069298405Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 28 01:14:12.069386 containerd[1456]: time="2026-04-28T01:14:12.069378735Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 28 01:14:12.069418 containerd[1456]: time="2026-04-28T01:14:12.069400956Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 28 01:14:12.069418 containerd[1456]: time="2026-04-28T01:14:12.069412843Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 28 01:14:12.069449 containerd[1456]: time="2026-04-28T01:14:12.069433146Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 28 01:14:12.069449 containerd[1456]: time="2026-04-28T01:14:12.069442322Z" level=info msg="NRI interface is disabled by configuration."
Apr 28 01:14:12.069496 containerd[1456]: time="2026-04-28T01:14:12.069453192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 28 01:14:12.069757 containerd[1456]: time="2026-04-28T01:14:12.069696267Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 28 01:14:12.069872 containerd[1456]: time="2026-04-28T01:14:12.069757895Z" level=info msg="Connect containerd service"
Apr 28 01:14:12.069872 containerd[1456]: time="2026-04-28T01:14:12.069790004Z" level=info msg="using legacy CRI server"
Apr 28 01:14:12.069872 containerd[1456]: time="2026-04-28T01:14:12.069799060Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 28 01:14:12.069872 containerd[1456]: time="2026-04-28T01:14:12.069868451Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 28 01:14:12.070631 containerd[1456]: time="2026-04-28T01:14:12.070582263Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 28 01:14:12.071841 containerd[1456]: time="2026-04-28T01:14:12.070761629Z" level=info msg="Start subscribing containerd event"
Apr 28 01:14:12.071841 containerd[1456]: time="2026-04-28T01:14:12.070797527Z" level=info msg="Start recovering state"
Apr 28 01:14:12.071841 containerd[1456]: time="2026-04-28T01:14:12.070823270Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 28 01:14:12.071841 containerd[1456]: time="2026-04-28T01:14:12.070840742Z" level=info msg="Start event monitor"
Apr 28 01:14:12.071841 containerd[1456]: time="2026-04-28T01:14:12.070848534Z" level=info msg="Start snapshots syncer"
Apr 28 01:14:12.071841 containerd[1456]: time="2026-04-28T01:14:12.070854233Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 28 01:14:12.071841 containerd[1456]: time="2026-04-28T01:14:12.070855373Z" level=info msg="Start cni network conf syncer for default"
Apr 28 01:14:12.071841 containerd[1456]: time="2026-04-28T01:14:12.070924932Z" level=info msg="Start streaming server"
Apr 28 01:14:12.071841 containerd[1456]: time="2026-04-28T01:14:12.071009488Z" level=info msg="containerd successfully booted in 0.029939s"
Apr 28 01:14:12.071182 systemd[1]: Started containerd.service - containerd container runtime.
Apr 28 01:14:12.079183 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 28 01:14:12.097194 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 28 01:14:12.103809 systemd[1]: issuegen.service: Deactivated successfully.
Apr 28 01:14:12.103938 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 28 01:14:12.109166 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 28 01:14:12.118476 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 28 01:14:12.130364 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 28 01:14:12.132994 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 28 01:14:12.134783 systemd[1]: Reached target getty.target - Login Prompts.
Apr 28 01:14:12.296518 tar[1448]: linux-amd64/README.md
Apr 28 01:14:12.310439 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 28 01:14:12.983411 systemd-networkd[1373]: eth0: Gained IPv6LL
Apr 28 01:14:12.985815 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 28 01:14:12.988265 systemd[1]: Reached target network-online.target - Network is Online.
Apr 28 01:14:13.002182 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Apr 28 01:14:13.005181 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 01:14:13.007610 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 28 01:14:13.020017 systemd[1]: coreos-metadata.service: Deactivated successfully.
Apr 28 01:14:13.020164 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Apr 28 01:14:13.022165 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 28 01:14:13.024160 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 28 01:14:13.616740 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 01:14:13.618827 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 28 01:14:13.619787 systemd[1]: Startup finished in 864ms (kernel) + 4.624s (initrd) + 3.397s (userspace) = 8.886s.
Apr 28 01:14:13.620825 (kubelet)[1537]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 28 01:14:13.965994 kubelet[1537]: E0428 01:14:13.965820 1537 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 28 01:14:13.967679 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 28 01:14:13.967817 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 28 01:14:18.004394 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 28 01:14:18.005336 systemd[1]: Started sshd@0-10.0.0.153:22-10.0.0.1:46214.service - OpenSSH per-connection server daemon (10.0.0.1:46214).
Apr 28 01:14:18.042106 sshd[1550]: Accepted publickey for core from 10.0.0.1 port 46214 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps
Apr 28 01:14:18.043693 sshd[1550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 01:14:18.052423 systemd-logind[1435]: New session 1 of user core.
Apr 28 01:14:18.053503 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 28 01:14:18.064307 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 28 01:14:18.074039 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 28 01:14:18.076277 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 28 01:14:18.082081 (systemd)[1554]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 28 01:14:18.144613 systemd[1554]: Queued start job for default target default.target.
Apr 28 01:14:18.163221 systemd[1554]: Created slice app.slice - User Application Slice.
Apr 28 01:14:18.163259 systemd[1554]: Reached target paths.target - Paths.
Apr 28 01:14:18.163268 systemd[1554]: Reached target timers.target - Timers.
Apr 28 01:14:18.164213 systemd[1554]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 28 01:14:18.172234 systemd[1554]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 28 01:14:18.172291 systemd[1554]: Reached target sockets.target - Sockets.
Apr 28 01:14:18.172300 systemd[1554]: Reached target basic.target - Basic System.
Apr 28 01:14:18.172324 systemd[1554]: Reached target default.target - Main User Target.
Apr 28 01:14:18.172343 systemd[1554]: Startup finished in 85ms.
Apr 28 01:14:18.172579 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 28 01:14:18.173755 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 28 01:14:18.233745 systemd[1]: Started sshd@1-10.0.0.153:22-10.0.0.1:46216.service - OpenSSH per-connection server daemon (10.0.0.1:46216).
Apr 28 01:14:18.262830 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 46216 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps
Apr 28 01:14:18.263781 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 01:14:18.266830 systemd-logind[1435]: New session 2 of user core.
Apr 28 01:14:18.281130 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 28 01:14:18.332402 sshd[1565]: pam_unix(sshd:session): session closed for user core
Apr 28 01:14:18.342178 systemd[1]: sshd@1-10.0.0.153:22-10.0.0.1:46216.service: Deactivated successfully.
Apr 28 01:14:18.343249 systemd[1]: session-2.scope: Deactivated successfully.
Apr 28 01:14:18.344228 systemd-logind[1435]: Session 2 logged out. Waiting for processes to exit.
Apr 28 01:14:18.351744 systemd[1]: Started sshd@2-10.0.0.153:22-10.0.0.1:46228.service - OpenSSH per-connection server daemon (10.0.0.1:46228).
Apr 28 01:14:18.352392 systemd-logind[1435]: Removed session 2.
Apr 28 01:14:18.375764 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 46228 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps
Apr 28 01:14:18.376729 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 01:14:18.379734 systemd-logind[1435]: New session 3 of user core.
Apr 28 01:14:18.392184 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 28 01:14:18.439078 sshd[1572]: pam_unix(sshd:session): session closed for user core
Apr 28 01:14:18.452569 systemd[1]: sshd@2-10.0.0.153:22-10.0.0.1:46228.service: Deactivated successfully.
Apr 28 01:14:18.453597 systemd[1]: session-3.scope: Deactivated successfully.
Apr 28 01:14:18.454527 systemd-logind[1435]: Session 3 logged out. Waiting for processes to exit.
Apr 28 01:14:18.455372 systemd[1]: Started sshd@3-10.0.0.153:22-10.0.0.1:46244.service - OpenSSH per-connection server daemon (10.0.0.1:46244).
Apr 28 01:14:18.455898 systemd-logind[1435]: Removed session 3.
Apr 28 01:14:18.482453 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 46244 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps
Apr 28 01:14:18.483429 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 01:14:18.486316 systemd-logind[1435]: New session 4 of user core.
Apr 28 01:14:18.492101 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 28 01:14:18.542311 sshd[1579]: pam_unix(sshd:session): session closed for user core
Apr 28 01:14:18.553250 systemd[1]: sshd@3-10.0.0.153:22-10.0.0.1:46244.service: Deactivated successfully.
Apr 28 01:14:18.554271 systemd[1]: session-4.scope: Deactivated successfully.
Apr 28 01:14:18.555174 systemd-logind[1435]: Session 4 logged out. Waiting for processes to exit.
Apr 28 01:14:18.555920 systemd[1]: Started sshd@4-10.0.0.153:22-10.0.0.1:46254.service - OpenSSH per-connection server daemon (10.0.0.1:46254).
Apr 28 01:14:18.556544 systemd-logind[1435]: Removed session 4.
Apr 28 01:14:18.581897 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 46254 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps
Apr 28 01:14:18.582737 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 01:14:18.585678 systemd-logind[1435]: New session 5 of user core.
Apr 28 01:14:18.599120 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 28 01:14:18.654230 sudo[1589]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 28 01:14:18.654438 sudo[1589]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 28 01:14:18.668715 sudo[1589]: pam_unix(sudo:session): session closed for user root
Apr 28 01:14:18.670114 sshd[1586]: pam_unix(sshd:session): session closed for user core
Apr 28 01:14:18.678181 systemd[1]: sshd@4-10.0.0.153:22-10.0.0.1:46254.service: Deactivated successfully.
Apr 28 01:14:18.679236 systemd[1]: session-5.scope: Deactivated successfully.
Apr 28 01:14:18.680145 systemd-logind[1435]: Session 5 logged out. Waiting for processes to exit.
Apr 28 01:14:18.681106 systemd[1]: Started sshd@5-10.0.0.153:22-10.0.0.1:46270.service - OpenSSH per-connection server daemon (10.0.0.1:46270).
Apr 28 01:14:18.681662 systemd-logind[1435]: Removed session 5.
Apr 28 01:14:18.707395 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 46270 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps
Apr 28 01:14:18.708286 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 01:14:18.711266 systemd-logind[1435]: New session 6 of user core.
Apr 28 01:14:18.731122 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 28 01:14:18.781030 sudo[1598]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 28 01:14:18.781237 sudo[1598]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 28 01:14:18.784332 sudo[1598]: pam_unix(sudo:session): session closed for user root
Apr 28 01:14:18.788144 sudo[1597]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 28 01:14:18.788342 sudo[1597]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 28 01:14:18.803257 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 28 01:14:18.804731 auditctl[1601]: No rules
Apr 28 01:14:18.805482 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 28 01:14:18.805652 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 28 01:14:18.807036 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 28 01:14:18.831151 augenrules[1619]: No rules
Apr 28 01:14:18.832065 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 28 01:14:18.832688 sudo[1597]: pam_unix(sudo:session): session closed for user root
Apr 28 01:14:18.833892 sshd[1594]: pam_unix(sshd:session): session closed for user core
Apr 28 01:14:18.847861 systemd[1]: sshd@5-10.0.0.153:22-10.0.0.1:46270.service: Deactivated successfully.
Apr 28 01:14:18.848917 systemd[1]: session-6.scope: Deactivated successfully.
Apr 28 01:14:18.849861 systemd-logind[1435]: Session 6 logged out. Waiting for processes to exit.
Apr 28 01:14:18.850746 systemd[1]: Started sshd@6-10.0.0.153:22-10.0.0.1:46274.service - OpenSSH per-connection server daemon (10.0.0.1:46274).
Apr 28 01:14:18.851432 systemd-logind[1435]: Removed session 6.
Apr 28 01:14:18.878391 sshd[1627]: Accepted publickey for core from 10.0.0.1 port 46274 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps
Apr 28 01:14:18.879308 sshd[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 01:14:18.882715 systemd-logind[1435]: New session 7 of user core.
Apr 28 01:14:18.892122 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 28 01:14:18.943775 sudo[1630]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 28 01:14:18.944138 sudo[1630]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 28 01:14:19.163200 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 28 01:14:19.163251 (dockerd)[1649]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 28 01:14:19.377002 dockerd[1649]: time="2026-04-28T01:14:19.376890617Z" level=info msg="Starting up"
Apr 28 01:14:19.483468 dockerd[1649]: time="2026-04-28T01:14:19.483343531Z" level=info msg="Loading containers: start."
Apr 28 01:14:19.578005 kernel: Initializing XFRM netlink socket
Apr 28 01:14:19.648288 systemd-networkd[1373]: docker0: Link UP
Apr 28 01:14:19.671074 dockerd[1649]: time="2026-04-28T01:14:19.671021749Z" level=info msg="Loading containers: done."
Apr 28 01:14:19.680884 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2475586654-merged.mount: Deactivated successfully.
Apr 28 01:14:19.682591 dockerd[1649]: time="2026-04-28T01:14:19.682538296Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 28 01:14:19.682648 dockerd[1649]: time="2026-04-28T01:14:19.682632109Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 28 01:14:19.682743 dockerd[1649]: time="2026-04-28T01:14:19.682701170Z" level=info msg="Daemon has completed initialization"
Apr 28 01:14:19.708663 dockerd[1649]: time="2026-04-28T01:14:19.708588747Z" level=info msg="API listen on /run/docker.sock"
Apr 28 01:14:19.708755 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 28 01:14:20.085518 containerd[1456]: time="2026-04-28T01:14:20.085481653Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\""
Apr 28 01:14:20.560244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3416311387.mount: Deactivated successfully.
Apr 28 01:14:21.138136 containerd[1456]: time="2026-04-28T01:14:21.138069526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 01:14:21.138664 containerd[1456]: time="2026-04-28T01:14:21.138626225Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.7: active requests=0, bytes read=27099952"
Apr 28 01:14:21.139728 containerd[1456]: time="2026-04-28T01:14:21.139658569Z" level=info msg="ImageCreate event name:\"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 01:14:21.141753 containerd[1456]: time="2026-04-28T01:14:21.141711191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 01:14:21.143817 containerd[1456]: time="2026-04-28T01:14:21.143735006Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.7\" with image id \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\", size \"27097113\" in 1.058216335s"
Apr 28 01:14:21.143817 containerd[1456]: time="2026-04-28T01:14:21.143798341Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\" returns image reference \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\""
Apr 28 01:14:21.144481 containerd[1456]: time="2026-04-28T01:14:21.144453732Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\""
Apr 28 01:14:21.827623 containerd[1456]: time="2026-04-28T01:14:21.827544871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 01:14:21.828191 containerd[1456]: time="2026-04-28T01:14:21.828139930Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.7: active requests=0, bytes read=21252670"
Apr 28 01:14:21.829014 containerd[1456]: time="2026-04-28T01:14:21.828938929Z" level=info msg="ImageCreate event name:\"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 01:14:21.831217 containerd[1456]: time="2026-04-28T01:14:21.831177891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 01:14:21.832152 containerd[1456]: time="2026-04-28T01:14:21.832122903Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.7\" with image id \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\", size \"22819085\" in 687.635327ms"
Apr 28 01:14:21.832194 containerd[1456]: time="2026-04-28T01:14:21.832154254Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\" returns image reference \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\""
Apr 28 01:14:21.832700 containerd[1456]: time="2026-04-28T01:14:21.832681122Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\""
Apr 28 01:14:22.476051 containerd[1456]: time="2026-04-28T01:14:22.475982371Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 01:14:22.476525 containerd[1456]: time="2026-04-28T01:14:22.476488275Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.7: active requests=0, bytes read=15810823"
Apr 28 01:14:22.477325 containerd[1456]: time="2026-04-28T01:14:22.477283471Z" level=info msg="ImageCreate event name:\"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 01:14:22.479562 containerd[1456]: time="2026-04-28T01:14:22.479530359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 01:14:22.480450 containerd[1456]: time="2026-04-28T01:14:22.480408787Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.7\" with image id \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\", size \"17377256\" in 647.704807ms"
Apr 28 01:14:22.480492 containerd[1456]: time="2026-04-28T01:14:22.480452244Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\" returns image reference \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\""
Apr 28 01:14:22.481069 containerd[1456]: time="2026-04-28T01:14:22.480931321Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\""
Apr 28 01:14:23.233070 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3995017012.mount: Deactivated successfully.
Apr 28 01:14:23.419741 containerd[1456]: time="2026-04-28T01:14:23.419675370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 01:14:23.420282 containerd[1456]: time="2026-04-28T01:14:23.420229838Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.7: active requests=0, bytes read=25972848"
Apr 28 01:14:23.421138 containerd[1456]: time="2026-04-28T01:14:23.421102146Z" level=info msg="ImageCreate event name:\"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 01:14:23.422615 containerd[1456]: time="2026-04-28T01:14:23.422576981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 01:14:23.423166 containerd[1456]: time="2026-04-28T01:14:23.423132644Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.7\" with image id \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\", repo tag \"registry.k8s.io/kube-proxy:v1.34.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\", size \"25971973\" in 942.06804ms"
Apr 28 01:14:23.423166 containerd[1456]: time="2026-04-28T01:14:23.423166648Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\" returns image reference \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\""
Apr 28 01:14:23.423667 containerd[1456]: time="2026-04-28T01:14:23.423622536Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Apr 28 01:14:23.805653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4128752021.mount: Deactivated successfully.
Apr 28 01:14:24.218273 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 28 01:14:24.227210 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 01:14:24.356351 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 01:14:24.360016 (kubelet)[1934]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 28 01:14:24.368041 containerd[1456]: time="2026-04-28T01:14:24.367978889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 01:14:24.369063 containerd[1456]: time="2026-04-28T01:14:24.368851313Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22387483"
Apr 28 01:14:24.370181 containerd[1456]: time="2026-04-28T01:14:24.370153002Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 01:14:24.373621 containerd[1456]: time="2026-04-28T01:14:24.373486481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 01:14:24.374231 containerd[1456]: time="2026-04-28T01:14:24.374083137Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 950.429403ms"
Apr 28 01:14:24.374231 containerd[1456]: time="2026-04-28T01:14:24.374112423Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Apr 28 01:14:24.374531 containerd[1456]: time="2026-04-28T01:14:24.374517521Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 28 01:14:24.395041 kubelet[1934]: E0428 01:14:24.394941 1934 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 28 01:14:24.398335 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 28 01:14:24.398483 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 28 01:14:24.786320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount972753926.mount: Deactivated successfully.
Apr 28 01:14:24.791493 containerd[1456]: time="2026-04-28T01:14:24.791447052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 01:14:24.792014 containerd[1456]: time="2026-04-28T01:14:24.791966112Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321150"
Apr 28 01:14:24.793034 containerd[1456]: time="2026-04-28T01:14:24.792992033Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 01:14:24.794660 containerd[1456]: time="2026-04-28T01:14:24.794614001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 01:14:24.795188 containerd[1456]: time="2026-04-28T01:14:24.795160187Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 420.589618ms"
Apr 28 01:14:24.795229 containerd[1456]: time="2026-04-28T01:14:24.795193391Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Apr 28 01:14:24.795707 containerd[1456]: time="2026-04-28T01:14:24.795677050Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Apr 28 01:14:25.167150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3095512383.mount: Deactivated successfully.
Apr 28 01:14:25.722896 containerd[1456]: time="2026-04-28T01:14:25.722844215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 01:14:25.723610 containerd[1456]: time="2026-04-28T01:14:25.723577072Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22874255"
Apr 28 01:14:25.724561 containerd[1456]: time="2026-04-28T01:14:25.724521227Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 01:14:25.726653 containerd[1456]: time="2026-04-28T01:14:25.726615764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 01:14:25.728131 containerd[1456]: time="2026-04-28T01:14:25.728095390Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 932.396697ms"
Apr 28 01:14:25.728163 containerd[1456]: time="2026-04-28T01:14:25.728129845Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\""
Apr 28 01:14:28.244117 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 01:14:28.258236 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 01:14:28.277220 systemd[1]: Reloading requested from client PID 2038 ('systemctl') (unit session-7.scope)...
Apr 28 01:14:28.277245 systemd[1]: Reloading...
Apr 28 01:14:28.326094 zram_generator::config[2078]: No configuration found.
Apr 28 01:14:28.400798 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 28 01:14:28.445557 systemd[1]: Reloading finished in 168 ms.
Apr 28 01:14:28.480335 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 01:14:28.482390 systemd[1]: kubelet.service: Deactivated successfully.
Apr 28 01:14:28.482553 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 01:14:28.483719 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 01:14:28.595771 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 01:14:28.599293 (kubelet)[2127]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 28 01:14:28.636880 kubelet[2127]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 28 01:14:28.636880 kubelet[2127]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 28 01:14:28.637221 kubelet[2127]: I0428 01:14:28.636885 2127 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 28 01:14:29.281815 kubelet[2127]: I0428 01:14:29.281762 2127 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 28 01:14:29.281815 kubelet[2127]: I0428 01:14:29.281804 2127 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 28 01:14:29.283618 kubelet[2127]: I0428 01:14:29.283588 2127 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 28 01:14:29.283644 kubelet[2127]: I0428 01:14:29.283619 2127 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 28 01:14:29.283837 kubelet[2127]: I0428 01:14:29.283806 2127 server.go:956] "Client rotation is on, will bootstrap in background" Apr 28 01:14:29.308317 kubelet[2127]: I0428 01:14:29.308267 2127 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 28 01:14:29.308411 kubelet[2127]: E0428 01:14:29.308381 2127 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.153:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.153:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 01:14:29.313101 kubelet[2127]: E0428 01:14:29.313034 2127 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 28 01:14:29.313101 kubelet[2127]: I0428 01:14:29.313110 2127 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 28 01:14:29.316507 kubelet[2127]: I0428 01:14:29.316469 2127 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 28 01:14:29.317338 kubelet[2127]: I0428 01:14:29.317027 2127 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 28 01:14:29.317525 kubelet[2127]: I0428 01:14:29.317348 2127 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 28 01:14:29.317649 kubelet[2127]: I0428 01:14:29.317531 2127 topology_manager.go:138] "Creating topology manager with none policy" Apr 28 01:14:29.317649 
kubelet[2127]: I0428 01:14:29.317542 2127 container_manager_linux.go:306] "Creating device plugin manager" Apr 28 01:14:29.317649 kubelet[2127]: I0428 01:14:29.317635 2127 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 28 01:14:29.341225 kubelet[2127]: I0428 01:14:29.341155 2127 state_mem.go:36] "Initialized new in-memory state store" Apr 28 01:14:29.341374 kubelet[2127]: I0428 01:14:29.341346 2127 kubelet.go:475] "Attempting to sync node with API server" Apr 28 01:14:29.341374 kubelet[2127]: I0428 01:14:29.341356 2127 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 28 01:14:29.341374 kubelet[2127]: I0428 01:14:29.341373 2127 kubelet.go:387] "Adding apiserver pod source" Apr 28 01:14:29.341429 kubelet[2127]: I0428 01:14:29.341386 2127 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 28 01:14:29.343678 kubelet[2127]: E0428 01:14:29.343646 2127 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.153:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.153:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 01:14:29.343678 kubelet[2127]: E0428 01:14:29.343650 2127 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.153:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.153:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 01:14:29.344589 kubelet[2127]: I0428 01:14:29.343793 2127 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 28 01:14:29.344589 kubelet[2127]: I0428 01:14:29.344200 2127 kubelet.go:940] "Not starting ClusterTrustBundle informer 
because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 28 01:14:29.344589 kubelet[2127]: I0428 01:14:29.344221 2127 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 28 01:14:29.344589 kubelet[2127]: W0428 01:14:29.344263 2127 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 28 01:14:29.346707 kubelet[2127]: I0428 01:14:29.346685 2127 server.go:1262] "Started kubelet" Apr 28 01:14:29.347167 kubelet[2127]: I0428 01:14:29.346863 2127 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 28 01:14:29.347167 kubelet[2127]: I0428 01:14:29.346901 2127 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 28 01:14:29.347445 kubelet[2127]: I0428 01:14:29.347415 2127 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 28 01:14:29.347555 kubelet[2127]: I0428 01:14:29.347543 2127 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 28 01:14:29.347683 kubelet[2127]: I0428 01:14:29.347658 2127 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 28 01:14:29.350117 kubelet[2127]: I0428 01:14:29.350095 2127 server.go:310] "Adding debug handlers to kubelet server" Apr 28 01:14:29.350853 kubelet[2127]: I0428 01:14:29.350718 2127 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 28 01:14:29.352601 kubelet[2127]: I0428 01:14:29.351990 2127 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 28 01:14:29.352822 kubelet[2127]: E0428 01:14:29.352705 2127 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:14:29.353087 
kubelet[2127]: E0428 01:14:29.353052 2127 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.153:6443: connect: connection refused" interval="200ms" Apr 28 01:14:29.353304 kubelet[2127]: I0428 01:14:29.353277 2127 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 28 01:14:29.353359 kubelet[2127]: E0428 01:14:29.353326 2127 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.153:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.153:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 01:14:29.353386 kubelet[2127]: I0428 01:14:29.353364 2127 reconciler.go:29] "Reconciler: start to sync state" Apr 28 01:14:29.355449 kubelet[2127]: E0428 01:14:29.351600 2127 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.153:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.153:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa6048f0e6c19d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 01:14:29.346648477 +0000 UTC m=+0.744098630,LastTimestamp:2026-04-28 01:14:29.346648477 +0000 UTC m=+0.744098630,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 01:14:29.355449 kubelet[2127]: E0428 01:14:29.354834 2127 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 28 01:14:29.355449 kubelet[2127]: I0428 01:14:29.355271 2127 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 28 01:14:29.356621 kubelet[2127]: I0428 01:14:29.356591 2127 factory.go:223] Registration of the containerd container factory successfully Apr 28 01:14:29.356621 kubelet[2127]: I0428 01:14:29.356616 2127 factory.go:223] Registration of the systemd container factory successfully Apr 28 01:14:29.368924 kubelet[2127]: I0428 01:14:29.368243 2127 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 28 01:14:29.368924 kubelet[2127]: I0428 01:14:29.368254 2127 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 28 01:14:29.368924 kubelet[2127]: I0428 01:14:29.368267 2127 state_mem.go:36] "Initialized new in-memory state store" Apr 28 01:14:29.373455 kubelet[2127]: I0428 01:14:29.373399 2127 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 28 01:14:29.375817 kubelet[2127]: I0428 01:14:29.375324 2127 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 28 01:14:29.375817 kubelet[2127]: I0428 01:14:29.375355 2127 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 28 01:14:29.375817 kubelet[2127]: I0428 01:14:29.375395 2127 kubelet.go:2428] "Starting kubelet main sync loop" Apr 28 01:14:29.375817 kubelet[2127]: E0428 01:14:29.375423 2127 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 28 01:14:29.376220 kubelet[2127]: E0428 01:14:29.376183 2127 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.153:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.153:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 01:14:29.409860 kubelet[2127]: I0428 01:14:29.409798 2127 policy_none.go:49] "None policy: Start" Apr 28 01:14:29.410056 kubelet[2127]: I0428 01:14:29.409883 2127 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 28 01:14:29.410056 kubelet[2127]: I0428 01:14:29.409898 2127 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 28 01:14:29.411431 kubelet[2127]: I0428 01:14:29.411374 2127 policy_none.go:47] "Start" Apr 28 01:14:29.414932 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 28 01:14:29.424332 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 28 01:14:29.436377 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Apr 28 01:14:29.437324 kubelet[2127]: E0428 01:14:29.437263 2127 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 28 01:14:29.437420 kubelet[2127]: I0428 01:14:29.437401 2127 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 28 01:14:29.437444 kubelet[2127]: I0428 01:14:29.437413 2127 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 28 01:14:29.437883 kubelet[2127]: I0428 01:14:29.437547 2127 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 28 01:14:29.438419 kubelet[2127]: E0428 01:14:29.438398 2127 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 28 01:14:29.438498 kubelet[2127]: E0428 01:14:29.438434 2127 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 01:14:29.484046 systemd[1]: Created slice kubepods-burstable-podac637d7a1076bc49b90434afdf2bc5f7.slice - libcontainer container kubepods-burstable-podac637d7a1076bc49b90434afdf2bc5f7.slice. Apr 28 01:14:29.498635 kubelet[2127]: E0428 01:14:29.498577 2127 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 01:14:29.501209 systemd[1]: Created slice kubepods-burstable-podc6bb8708a026256e82ca4c5631a78b5a.slice - libcontainer container kubepods-burstable-podc6bb8708a026256e82ca4c5631a78b5a.slice. 
Apr 28 01:14:29.502426 kubelet[2127]: E0428 01:14:29.502366 2127 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 01:14:29.509931 systemd[1]: Created slice kubepods-burstable-pod824fd89300514e351ed3b68d82c665c6.slice - libcontainer container kubepods-burstable-pod824fd89300514e351ed3b68d82c665c6.slice. Apr 28 01:14:29.511050 kubelet[2127]: E0428 01:14:29.511029 2127 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 01:14:29.539236 kubelet[2127]: I0428 01:14:29.539138 2127 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 01:14:29.539532 kubelet[2127]: E0428 01:14:29.539465 2127 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.153:6443/api/v1/nodes\": dial tcp 10.0.0.153:6443: connect: connection refused" node="localhost" Apr 28 01:14:29.554177 kubelet[2127]: E0428 01:14:29.554119 2127 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.153:6443: connect: connection refused" interval="400ms" Apr 28 01:14:29.654228 kubelet[2127]: I0428 01:14:29.654158 2127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 01:14:29.654228 kubelet[2127]: I0428 01:14:29.654200 2127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 01:14:29.654562 kubelet[2127]: I0428 01:14:29.654265 2127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 01:14:29.654562 kubelet[2127]: I0428 01:14:29.654288 2127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ac637d7a1076bc49b90434afdf2bc5f7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ac637d7a1076bc49b90434afdf2bc5f7\") " pod="kube-system/kube-apiserver-localhost" Apr 28 01:14:29.654562 kubelet[2127]: I0428 01:14:29.654302 2127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ac637d7a1076bc49b90434afdf2bc5f7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ac637d7a1076bc49b90434afdf2bc5f7\") " pod="kube-system/kube-apiserver-localhost" Apr 28 01:14:29.654562 kubelet[2127]: I0428 01:14:29.654316 2127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ac637d7a1076bc49b90434afdf2bc5f7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ac637d7a1076bc49b90434afdf2bc5f7\") " pod="kube-system/kube-apiserver-localhost" Apr 28 01:14:29.654562 kubelet[2127]: I0428 01:14:29.654329 2127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 01:14:29.654644 kubelet[2127]: I0428 01:14:29.654362 2127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/824fd89300514e351ed3b68d82c665c6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"824fd89300514e351ed3b68d82c665c6\") " pod="kube-system/kube-scheduler-localhost" Apr 28 01:14:29.654644 kubelet[2127]: I0428 01:14:29.654435 2127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 01:14:29.741742 kubelet[2127]: I0428 01:14:29.741650 2127 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 01:14:29.742037 kubelet[2127]: E0428 01:14:29.741937 2127 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.153:6443/api/v1/nodes\": dial tcp 10.0.0.153:6443: connect: connection refused" node="localhost" Apr 28 01:14:29.801830 kubelet[2127]: E0428 01:14:29.801570 2127 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:14:29.802481 containerd[1456]: time="2026-04-28T01:14:29.802426626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ac637d7a1076bc49b90434afdf2bc5f7,Namespace:kube-system,Attempt:0,}" Apr 28 01:14:29.803751 kubelet[2127]: E0428 01:14:29.803729 2127 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:14:29.804349 containerd[1456]: time="2026-04-28T01:14:29.804079345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c6bb8708a026256e82ca4c5631a78b5a,Namespace:kube-system,Attempt:0,}" Apr 28 01:14:29.813579 kubelet[2127]: E0428 01:14:29.813538 2127 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:14:29.813916 containerd[1456]: time="2026-04-28T01:14:29.813879863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:824fd89300514e351ed3b68d82c665c6,Namespace:kube-system,Attempt:0,}" Apr 28 01:14:29.954758 kubelet[2127]: E0428 01:14:29.954664 2127 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.153:6443: connect: connection refused" interval="800ms" Apr 28 01:14:30.144071 kubelet[2127]: I0428 01:14:30.143922 2127 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 01:14:30.144305 kubelet[2127]: E0428 01:14:30.144266 2127 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.153:6443/api/v1/nodes\": dial tcp 10.0.0.153:6443: connect: connection refused" node="localhost" Apr 28 01:14:30.154698 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2015461821.mount: Deactivated successfully. 
Apr 28 01:14:30.159729 containerd[1456]: time="2026-04-28T01:14:30.159664117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 01:14:30.162030 containerd[1456]: time="2026-04-28T01:14:30.161973984Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 28 01:14:30.162636 containerd[1456]: time="2026-04-28T01:14:30.162578756Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 01:14:30.163400 containerd[1456]: time="2026-04-28T01:14:30.163369127Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 01:14:30.164006 containerd[1456]: time="2026-04-28T01:14:30.163872012Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 28 01:14:30.164569 containerd[1456]: time="2026-04-28T01:14:30.164542595Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 01:14:30.165154 containerd[1456]: time="2026-04-28T01:14:30.165082202Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 28 01:14:30.167577 containerd[1456]: time="2026-04-28T01:14:30.167305263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 01:14:30.169108 
containerd[1456]: time="2026-04-28T01:14:30.169071143Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 364.947327ms" Apr 28 01:14:30.169571 containerd[1456]: time="2026-04-28T01:14:30.169517583Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 355.58451ms" Apr 28 01:14:30.170678 containerd[1456]: time="2026-04-28T01:14:30.170598486Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 368.091335ms" Apr 28 01:14:30.229547 kubelet[2127]: E0428 01:14:30.229486 2127 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.153:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.153:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 01:14:30.257381 containerd[1456]: time="2026-04-28T01:14:30.257307235Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 01:14:30.257575 containerd[1456]: time="2026-04-28T01:14:30.257360756Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 01:14:30.257575 containerd[1456]: time="2026-04-28T01:14:30.257424745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:14:30.257575 containerd[1456]: time="2026-04-28T01:14:30.257479596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:14:30.258248 containerd[1456]: time="2026-04-28T01:14:30.258132944Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 01:14:30.258248 containerd[1456]: time="2026-04-28T01:14:30.258180856Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 01:14:30.258706 containerd[1456]: time="2026-04-28T01:14:30.258666871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:14:30.258795 containerd[1456]: time="2026-04-28T01:14:30.258754077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:14:30.261009 containerd[1456]: time="2026-04-28T01:14:30.260899491Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 01:14:30.261009 containerd[1456]: time="2026-04-28T01:14:30.260973833Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 01:14:30.261009 containerd[1456]: time="2026-04-28T01:14:30.260987228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:14:30.262291 containerd[1456]: time="2026-04-28T01:14:30.262196172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:14:30.275783 kubelet[2127]: E0428 01:14:30.275621 2127 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.153:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.153:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 01:14:30.279171 systemd[1]: Started cri-containerd-c21b563569e803256e4a2318fd04c32a4fe8a02829c29f687dfb5cea5d102462.scope - libcontainer container c21b563569e803256e4a2318fd04c32a4fe8a02829c29f687dfb5cea5d102462. Apr 28 01:14:30.282339 systemd[1]: Started cri-containerd-509ac1ec0d49b899fd2c75fed09fa3d995e3941a634f6b730c39d6e3ab74a2be.scope - libcontainer container 509ac1ec0d49b899fd2c75fed09fa3d995e3941a634f6b730c39d6e3ab74a2be. Apr 28 01:14:30.283088 systemd[1]: Started cri-containerd-770a3bfe49facee9d1d24946d450af6440bf50bbbc1283703c8be807c9af1ef6.scope - libcontainer container 770a3bfe49facee9d1d24946d450af6440bf50bbbc1283703c8be807c9af1ef6. 
Apr 28 01:14:30.315794 containerd[1456]: time="2026-04-28T01:14:30.315727216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:824fd89300514e351ed3b68d82c665c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"c21b563569e803256e4a2318fd04c32a4fe8a02829c29f687dfb5cea5d102462\"" Apr 28 01:14:30.316573 kubelet[2127]: E0428 01:14:30.316532 2127 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:14:30.323571 containerd[1456]: time="2026-04-28T01:14:30.323528809Z" level=info msg="CreateContainer within sandbox \"c21b563569e803256e4a2318fd04c32a4fe8a02829c29f687dfb5cea5d102462\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 28 01:14:30.323733 containerd[1456]: time="2026-04-28T01:14:30.323676510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ac637d7a1076bc49b90434afdf2bc5f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"770a3bfe49facee9d1d24946d450af6440bf50bbbc1283703c8be807c9af1ef6\"" Apr 28 01:14:30.324193 kubelet[2127]: E0428 01:14:30.324150 2127 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:14:30.326213 containerd[1456]: time="2026-04-28T01:14:30.326129536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c6bb8708a026256e82ca4c5631a78b5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"509ac1ec0d49b899fd2c75fed09fa3d995e3941a634f6b730c39d6e3ab74a2be\"" Apr 28 01:14:30.326903 kubelet[2127]: E0428 01:14:30.326868 2127 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:14:30.328029 containerd[1456]: 
time="2026-04-28T01:14:30.327897624Z" level=info msg="CreateContainer within sandbox \"770a3bfe49facee9d1d24946d450af6440bf50bbbc1283703c8be807c9af1ef6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 28 01:14:30.330226 containerd[1456]: time="2026-04-28T01:14:30.330206001Z" level=info msg="CreateContainer within sandbox \"509ac1ec0d49b899fd2c75fed09fa3d995e3941a634f6b730c39d6e3ab74a2be\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 28 01:14:30.340786 containerd[1456]: time="2026-04-28T01:14:30.340730829Z" level=info msg="CreateContainer within sandbox \"c21b563569e803256e4a2318fd04c32a4fe8a02829c29f687dfb5cea5d102462\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9b35ab09db32f5741c903efd1becf246b3e72579ddc8d856df313c6b34309dd8\"" Apr 28 01:14:30.342004 containerd[1456]: time="2026-04-28T01:14:30.341233188Z" level=info msg="StartContainer for \"9b35ab09db32f5741c903efd1becf246b3e72579ddc8d856df313c6b34309dd8\"" Apr 28 01:14:30.345407 containerd[1456]: time="2026-04-28T01:14:30.345368473Z" level=info msg="CreateContainer within sandbox \"770a3bfe49facee9d1d24946d450af6440bf50bbbc1283703c8be807c9af1ef6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5060c7f4b1a70dadeed20a799421c68f8fba331a2a23f5bfa7e75f9323629f83\"" Apr 28 01:14:30.346995 containerd[1456]: time="2026-04-28T01:14:30.346918020Z" level=info msg="StartContainer for \"5060c7f4b1a70dadeed20a799421c68f8fba331a2a23f5bfa7e75f9323629f83\"" Apr 28 01:14:30.348056 containerd[1456]: time="2026-04-28T01:14:30.348036310Z" level=info msg="CreateContainer within sandbox \"509ac1ec0d49b899fd2c75fed09fa3d995e3941a634f6b730c39d6e3ab74a2be\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9887f5e57d300593ba44c754e3815829aef4d2690dea8836593af83529bc43a0\"" Apr 28 01:14:30.348733 containerd[1456]: time="2026-04-28T01:14:30.348702686Z" level=info msg="StartContainer 
for \"9887f5e57d300593ba44c754e3815829aef4d2690dea8836593af83529bc43a0\"" Apr 28 01:14:30.363376 systemd[1]: Started cri-containerd-9b35ab09db32f5741c903efd1becf246b3e72579ddc8d856df313c6b34309dd8.scope - libcontainer container 9b35ab09db32f5741c903efd1becf246b3e72579ddc8d856df313c6b34309dd8. Apr 28 01:14:30.375156 systemd[1]: Started cri-containerd-5060c7f4b1a70dadeed20a799421c68f8fba331a2a23f5bfa7e75f9323629f83.scope - libcontainer container 5060c7f4b1a70dadeed20a799421c68f8fba331a2a23f5bfa7e75f9323629f83. Apr 28 01:14:30.377616 systemd[1]: Started cri-containerd-9887f5e57d300593ba44c754e3815829aef4d2690dea8836593af83529bc43a0.scope - libcontainer container 9887f5e57d300593ba44c754e3815829aef4d2690dea8836593af83529bc43a0. Apr 28 01:14:30.418134 containerd[1456]: time="2026-04-28T01:14:30.417556254Z" level=info msg="StartContainer for \"9b35ab09db32f5741c903efd1becf246b3e72579ddc8d856df313c6b34309dd8\" returns successfully" Apr 28 01:14:30.424851 containerd[1456]: time="2026-04-28T01:14:30.424765132Z" level=info msg="StartContainer for \"9887f5e57d300593ba44c754e3815829aef4d2690dea8836593af83529bc43a0\" returns successfully" Apr 28 01:14:30.425126 containerd[1456]: time="2026-04-28T01:14:30.424846214Z" level=info msg="StartContainer for \"5060c7f4b1a70dadeed20a799421c68f8fba331a2a23f5bfa7e75f9323629f83\" returns successfully" Apr 28 01:14:30.440555 kubelet[2127]: E0428 01:14:30.440508 2127 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.153:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.153:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 01:14:30.946135 kubelet[2127]: I0428 01:14:30.946065 2127 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 01:14:31.137999 kubelet[2127]: E0428 01:14:31.137891 2127 nodelease.go:49] "Failed to get node when 
trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 28 01:14:31.232162 kubelet[2127]: I0428 01:14:31.231797 2127 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 28 01:14:31.232162 kubelet[2127]: E0428 01:14:31.231846 2127 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 28 01:14:31.239197 kubelet[2127]: E0428 01:14:31.239173 2127 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:14:31.340207 kubelet[2127]: E0428 01:14:31.340101 2127 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:14:31.385808 kubelet[2127]: E0428 01:14:31.385759 2127 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 01:14:31.385912 kubelet[2127]: E0428 01:14:31.385846 2127 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:14:31.386881 kubelet[2127]: E0428 01:14:31.386859 2127 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 01:14:31.386988 kubelet[2127]: E0428 01:14:31.386931 2127 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:14:31.387651 kubelet[2127]: E0428 01:14:31.387623 2127 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 01:14:31.387731 kubelet[2127]: E0428 01:14:31.387718 2127 dns.go:154] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:14:31.441314 kubelet[2127]: E0428 01:14:31.441263 2127 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:14:31.542296 kubelet[2127]: E0428 01:14:31.542092 2127 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:14:31.642584 kubelet[2127]: E0428 01:14:31.642534 2127 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:14:31.743015 kubelet[2127]: E0428 01:14:31.742932 2127 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:14:31.844054 kubelet[2127]: E0428 01:14:31.843881 2127 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:14:31.944965 kubelet[2127]: E0428 01:14:31.944910 2127 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:14:32.046089 kubelet[2127]: E0428 01:14:32.045819 2127 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:14:32.147014 kubelet[2127]: E0428 01:14:32.146869 2127 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:14:32.247930 kubelet[2127]: E0428 01:14:32.247858 2127 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:14:32.348402 kubelet[2127]: E0428 01:14:32.348360 2127 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:14:32.390160 kubelet[2127]: E0428 01:14:32.390134 2127 kubelet.go:3216] "No need to create a mirror pod, since failed 
to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 01:14:32.390291 kubelet[2127]: E0428 01:14:32.390231 2127 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 01:14:32.390291 kubelet[2127]: E0428 01:14:32.390278 2127 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:14:32.390387 kubelet[2127]: E0428 01:14:32.390311 2127 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:14:32.390387 kubelet[2127]: E0428 01:14:32.390369 2127 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 01:14:32.390481 kubelet[2127]: E0428 01:14:32.390468 2127 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:14:32.453734 kubelet[2127]: I0428 01:14:32.453610 2127 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 28 01:14:32.460568 kubelet[2127]: I0428 01:14:32.460488 2127 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 28 01:14:32.464444 kubelet[2127]: I0428 01:14:32.464320 2127 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 28 01:14:33.041496 systemd[1]: Reloading requested from client PID 2418 ('systemctl') (unit session-7.scope)... Apr 28 01:14:33.041528 systemd[1]: Reloading... Apr 28 01:14:33.102285 zram_generator::config[2457]: No configuration found. 
Apr 28 01:14:33.172980 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 28 01:14:33.225631 systemd[1]: Reloading finished in 183 ms. Apr 28 01:14:33.274878 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 01:14:33.290325 systemd[1]: kubelet.service: Deactivated successfully. Apr 28 01:14:33.290875 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 01:14:33.290928 systemd[1]: kubelet.service: Consumed 1.044s CPU time, 129.7M memory peak, 0B memory swap peak. Apr 28 01:14:33.299291 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 01:14:33.448391 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 01:14:33.457595 (kubelet)[2502]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 28 01:14:33.531248 kubelet[2502]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 28 01:14:33.531248 kubelet[2502]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 28 01:14:33.531248 kubelet[2502]: I0428 01:14:33.531140 2502 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 28 01:14:33.539935 kubelet[2502]: I0428 01:14:33.539827 2502 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 28 01:14:33.540158 kubelet[2502]: I0428 01:14:33.540086 2502 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 28 01:14:33.540271 kubelet[2502]: I0428 01:14:33.540211 2502 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 28 01:14:33.540343 kubelet[2502]: I0428 01:14:33.540286 2502 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 28 01:14:33.540892 kubelet[2502]: I0428 01:14:33.540798 2502 server.go:956] "Client rotation is on, will bootstrap in background" Apr 28 01:14:33.544865 kubelet[2502]: I0428 01:14:33.544777 2502 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 28 01:14:33.551163 kubelet[2502]: I0428 01:14:33.551076 2502 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 28 01:14:33.557436 kubelet[2502]: E0428 01:14:33.557383 2502 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 28 01:14:33.557436 kubelet[2502]: I0428 01:14:33.557414 2502 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 28 01:14:33.560403 kubelet[2502]: I0428 01:14:33.560308 2502 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 28 01:14:33.560603 kubelet[2502]: I0428 01:14:33.560437 2502 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 28 01:14:33.561485 kubelet[2502]: I0428 01:14:33.561367 2502 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 28 01:14:33.561485 kubelet[2502]: I0428 01:14:33.561486 2502 topology_manager.go:138] "Creating topology manager with none policy" Apr 28 01:14:33.561640 
kubelet[2502]: I0428 01:14:33.561493 2502 container_manager_linux.go:306] "Creating device plugin manager" Apr 28 01:14:33.561640 kubelet[2502]: I0428 01:14:33.561510 2502 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 28 01:14:33.561640 kubelet[2502]: I0428 01:14:33.561620 2502 state_mem.go:36] "Initialized new in-memory state store" Apr 28 01:14:33.561775 kubelet[2502]: I0428 01:14:33.561746 2502 kubelet.go:475] "Attempting to sync node with API server" Apr 28 01:14:33.561775 kubelet[2502]: I0428 01:14:33.561761 2502 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 28 01:14:33.561775 kubelet[2502]: I0428 01:14:33.561775 2502 kubelet.go:387] "Adding apiserver pod source" Apr 28 01:14:33.561825 kubelet[2502]: I0428 01:14:33.561781 2502 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 28 01:14:33.562494 kubelet[2502]: I0428 01:14:33.562482 2502 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 28 01:14:33.562856 kubelet[2502]: I0428 01:14:33.562820 2502 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 28 01:14:33.562880 kubelet[2502]: I0428 01:14:33.562857 2502 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 28 01:14:33.566761 kubelet[2502]: I0428 01:14:33.566655 2502 server.go:1262] "Started kubelet" Apr 28 01:14:33.568395 kubelet[2502]: I0428 01:14:33.568238 2502 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 28 01:14:33.568395 kubelet[2502]: I0428 01:14:33.568266 2502 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 28 01:14:33.568493 kubelet[2502]: I0428 01:14:33.568408 2502 server.go:249] 
"Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 28 01:14:33.568493 kubelet[2502]: I0428 01:14:33.568448 2502 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 28 01:14:33.568699 kubelet[2502]: I0428 01:14:33.568622 2502 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 28 01:14:33.569186 kubelet[2502]: I0428 01:14:33.569171 2502 server.go:310] "Adding debug handlers to kubelet server" Apr 28 01:14:33.570614 kubelet[2502]: E0428 01:14:33.570602 2502 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 28 01:14:33.570883 kubelet[2502]: I0428 01:14:33.570876 2502 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 28 01:14:33.571317 kubelet[2502]: E0428 01:14:33.571045 2502 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 01:14:33.571317 kubelet[2502]: I0428 01:14:33.571066 2502 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 28 01:14:33.571521 kubelet[2502]: I0428 01:14:33.571489 2502 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 28 01:14:33.571609 kubelet[2502]: I0428 01:14:33.571579 2502 reconciler.go:29] "Reconciler: start to sync state" Apr 28 01:14:33.572093 kubelet[2502]: I0428 01:14:33.572066 2502 factory.go:223] Registration of the systemd container factory successfully Apr 28 01:14:33.572156 kubelet[2502]: I0428 01:14:33.572133 2502 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 28 01:14:33.573479 kubelet[2502]: I0428 01:14:33.573428 2502 factory.go:223] Registration of the containerd container factory 
successfully Apr 28 01:14:33.578222 kubelet[2502]: I0428 01:14:33.578187 2502 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 28 01:14:33.583865 kubelet[2502]: I0428 01:14:33.583422 2502 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 28 01:14:33.583865 kubelet[2502]: I0428 01:14:33.583435 2502 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 28 01:14:33.583865 kubelet[2502]: I0428 01:14:33.583448 2502 kubelet.go:2428] "Starting kubelet main sync loop" Apr 28 01:14:33.583865 kubelet[2502]: E0428 01:14:33.583481 2502 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 28 01:14:33.607728 kubelet[2502]: I0428 01:14:33.607697 2502 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 28 01:14:33.607728 kubelet[2502]: I0428 01:14:33.607720 2502 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 28 01:14:33.607728 kubelet[2502]: I0428 01:14:33.607732 2502 state_mem.go:36] "Initialized new in-memory state store" Apr 28 01:14:33.607818 kubelet[2502]: I0428 01:14:33.607803 2502 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 28 01:14:33.607818 kubelet[2502]: I0428 01:14:33.607810 2502 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 28 01:14:33.607818 kubelet[2502]: I0428 01:14:33.607819 2502 policy_none.go:49] "None policy: Start" Apr 28 01:14:33.607860 kubelet[2502]: I0428 01:14:33.607825 2502 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 28 01:14:33.607860 kubelet[2502]: I0428 01:14:33.607831 2502 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 28 01:14:33.607890 kubelet[2502]: I0428 01:14:33.607885 2502 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 28 01:14:33.607910 kubelet[2502]: I0428 01:14:33.607890 2502 
policy_none.go:47] "Start" Apr 28 01:14:33.613442 kubelet[2502]: E0428 01:14:33.613358 2502 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 28 01:14:33.614305 kubelet[2502]: I0428 01:14:33.614192 2502 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 28 01:14:33.614383 kubelet[2502]: I0428 01:14:33.614300 2502 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 28 01:14:33.614578 kubelet[2502]: I0428 01:14:33.614557 2502 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 28 01:14:33.616399 kubelet[2502]: E0428 01:14:33.616291 2502 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 28 01:14:33.685744 kubelet[2502]: I0428 01:14:33.685705 2502 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 28 01:14:33.685826 kubelet[2502]: I0428 01:14:33.685797 2502 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 28 01:14:33.686568 kubelet[2502]: I0428 01:14:33.686477 2502 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 28 01:14:33.693413 kubelet[2502]: E0428 01:14:33.693084 2502 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 28 01:14:33.693459 kubelet[2502]: E0428 01:14:33.693444 2502 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 28 01:14:33.693710 kubelet[2502]: E0428 01:14:33.693478 2502 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" 
pod="kube-system/kube-scheduler-localhost" Apr 28 01:14:33.722907 kubelet[2502]: I0428 01:14:33.722871 2502 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 28 01:14:33.730910 kubelet[2502]: I0428 01:14:33.730890 2502 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Apr 28 01:14:33.730997 kubelet[2502]: I0428 01:14:33.730973 2502 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 28 01:14:33.772934 kubelet[2502]: I0428 01:14:33.772873 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ac637d7a1076bc49b90434afdf2bc5f7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ac637d7a1076bc49b90434afdf2bc5f7\") " pod="kube-system/kube-apiserver-localhost" Apr 28 01:14:33.772934 kubelet[2502]: I0428 01:14:33.772908 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ac637d7a1076bc49b90434afdf2bc5f7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ac637d7a1076bc49b90434afdf2bc5f7\") " pod="kube-system/kube-apiserver-localhost" Apr 28 01:14:33.772934 kubelet[2502]: I0428 01:14:33.772929 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 01:14:33.773109 kubelet[2502]: I0428 01:14:33.772978 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " 
pod="kube-system/kube-controller-manager-localhost" Apr 28 01:14:33.773109 kubelet[2502]: I0428 01:14:33.773072 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 01:14:33.773109 kubelet[2502]: I0428 01:14:33.773087 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ac637d7a1076bc49b90434afdf2bc5f7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ac637d7a1076bc49b90434afdf2bc5f7\") " pod="kube-system/kube-apiserver-localhost" Apr 28 01:14:33.773164 kubelet[2502]: I0428 01:14:33.773109 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 01:14:33.773164 kubelet[2502]: I0428 01:14:33.773122 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 01:14:33.773458 kubelet[2502]: I0428 01:14:33.773430 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/824fd89300514e351ed3b68d82c665c6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: 
\"824fd89300514e351ed3b68d82c665c6\") " pod="kube-system/kube-scheduler-localhost" Apr 28 01:14:33.995083 kubelet[2502]: E0428 01:14:33.993838 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:14:33.995083 kubelet[2502]: E0428 01:14:33.993980 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:14:33.995083 kubelet[2502]: E0428 01:14:33.993989 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:14:34.562385 kubelet[2502]: I0428 01:14:34.562331 2502 apiserver.go:52] "Watching apiserver" Apr 28 01:14:34.572085 kubelet[2502]: I0428 01:14:34.572017 2502 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 28 01:14:34.600570 kubelet[2502]: I0428 01:14:34.600536 2502 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 28 01:14:34.600644 kubelet[2502]: I0428 01:14:34.600591 2502 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 28 01:14:34.600899 kubelet[2502]: I0428 01:14:34.600847 2502 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 28 01:14:34.606133 kubelet[2502]: E0428 01:14:34.606014 2502 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 28 01:14:34.606839 kubelet[2502]: E0428 01:14:34.606065 2502 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" 
Apr 28 01:14:34.606839 kubelet[2502]: E0428 01:14:34.606478 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:14:34.606839 kubelet[2502]: E0428 01:14:34.606580 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:14:34.606839 kubelet[2502]: E0428 01:14:34.606083 2502 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 28 01:14:34.606839 kubelet[2502]: E0428 01:14:34.606707 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:14:34.621601 kubelet[2502]: I0428 01:14:34.621512 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.6215021309999997 podStartE2EDuration="2.621502131s" podCreationTimestamp="2026-04-28 01:14:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 01:14:34.61588388 +0000 UTC m=+1.146245951" watchObservedRunningTime="2026-04-28 01:14:34.621502131 +0000 UTC m=+1.151864243" Apr 28 01:14:34.621694 kubelet[2502]: I0428 01:14:34.621638 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.621634314 podStartE2EDuration="2.621634314s" podCreationTimestamp="2026-04-28 01:14:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 01:14:34.621399085 +0000 UTC m=+1.151761156" 
watchObservedRunningTime="2026-04-28 01:14:34.621634314 +0000 UTC m=+1.151996373" Apr 28 01:14:35.602300 kubelet[2502]: E0428 01:14:35.602228 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:14:35.602300 kubelet[2502]: E0428 01:14:35.602285 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:14:35.602696 kubelet[2502]: E0428 01:14:35.602427 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:14:39.274527 kubelet[2502]: I0428 01:14:39.274486 2502 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 28 01:14:39.275018 containerd[1456]: time="2026-04-28T01:14:39.274906113Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 28 01:14:39.275231 kubelet[2502]: I0428 01:14:39.275215 2502 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 28 01:14:39.585932 kubelet[2502]: E0428 01:14:39.585807 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:14:39.949413 kubelet[2502]: I0428 01:14:39.949025 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=7.949012893 podStartE2EDuration="7.949012893s" podCreationTimestamp="2026-04-28 01:14:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 01:14:34.62721928 +0000 UTC m=+1.157581361" watchObservedRunningTime="2026-04-28 01:14:39.949012893 +0000 UTC m=+6.479375027" Apr 28 01:14:39.959489 systemd[1]: Created slice kubepods-besteffort-pod0c41005f_60b1_4353_a3ac_4d0cfe96728b.slice - libcontainer container kubepods-besteffort-pod0c41005f_60b1_4353_a3ac_4d0cfe96728b.slice. 
Apr 28 01:14:40.012772 kubelet[2502]: I0428 01:14:40.012678 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c41005f-60b1-4353-a3ac-4d0cfe96728b-lib-modules\") pod \"kube-proxy-z4crn\" (UID: \"0c41005f-60b1-4353-a3ac-4d0cfe96728b\") " pod="kube-system/kube-proxy-z4crn" Apr 28 01:14:40.012772 kubelet[2502]: I0428 01:14:40.012717 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h28q7\" (UniqueName: \"kubernetes.io/projected/0c41005f-60b1-4353-a3ac-4d0cfe96728b-kube-api-access-h28q7\") pod \"kube-proxy-z4crn\" (UID: \"0c41005f-60b1-4353-a3ac-4d0cfe96728b\") " pod="kube-system/kube-proxy-z4crn" Apr 28 01:14:40.012772 kubelet[2502]: I0428 01:14:40.012737 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0c41005f-60b1-4353-a3ac-4d0cfe96728b-kube-proxy\") pod \"kube-proxy-z4crn\" (UID: \"0c41005f-60b1-4353-a3ac-4d0cfe96728b\") " pod="kube-system/kube-proxy-z4crn" Apr 28 01:14:40.012772 kubelet[2502]: I0428 01:14:40.012750 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c41005f-60b1-4353-a3ac-4d0cfe96728b-xtables-lock\") pod \"kube-proxy-z4crn\" (UID: \"0c41005f-60b1-4353-a3ac-4d0cfe96728b\") " pod="kube-system/kube-proxy-z4crn" Apr 28 01:14:40.118121 kubelet[2502]: E0428 01:14:40.118083 2502 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Apr 28 01:14:40.118121 kubelet[2502]: E0428 01:14:40.118117 2502 projected.go:196] Error preparing data for projected volume kube-api-access-h28q7 for pod kube-system/kube-proxy-z4crn: configmap "kube-root-ca.crt" not found Apr 28 01:14:40.118298 kubelet[2502]: E0428 01:14:40.118166 2502 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0c41005f-60b1-4353-a3ac-4d0cfe96728b-kube-api-access-h28q7 podName:0c41005f-60b1-4353-a3ac-4d0cfe96728b nodeName:}" failed. No retries permitted until 2026-04-28 01:14:40.618145414 +0000 UTC m=+7.148507474 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-h28q7" (UniqueName: "kubernetes.io/projected/0c41005f-60b1-4353-a3ac-4d0cfe96728b-kube-api-access-h28q7") pod "kube-proxy-z4crn" (UID: "0c41005f-60b1-4353-a3ac-4d0cfe96728b") : configmap "kube-root-ca.crt" not found Apr 28 01:14:40.415302 kubelet[2502]: I0428 01:14:40.414884 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cc4d8aa7-71fc-4ac5-8848-d2660c72a0f7-var-lib-calico\") pod \"tigera-operator-6fb8d665dd-mtqx5\" (UID: \"cc4d8aa7-71fc-4ac5-8848-d2660c72a0f7\") " pod="tigera-operator/tigera-operator-6fb8d665dd-mtqx5" Apr 28 01:14:40.415302 kubelet[2502]: I0428 01:14:40.414916 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xxc2\" (UniqueName: \"kubernetes.io/projected/cc4d8aa7-71fc-4ac5-8848-d2660c72a0f7-kube-api-access-8xxc2\") pod \"tigera-operator-6fb8d665dd-mtqx5\" (UID: \"cc4d8aa7-71fc-4ac5-8848-d2660c72a0f7\") " pod="tigera-operator/tigera-operator-6fb8d665dd-mtqx5" Apr 28 01:14:40.416700 systemd[1]: Created slice kubepods-besteffort-podcc4d8aa7_71fc_4ac5_8848_d2660c72a0f7.slice - libcontainer container kubepods-besteffort-podcc4d8aa7_71fc_4ac5_8848_d2660c72a0f7.slice. 
Apr 28 01:14:40.722834 containerd[1456]: time="2026-04-28T01:14:40.722739967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6fb8d665dd-mtqx5,Uid:cc4d8aa7-71fc-4ac5-8848-d2660c72a0f7,Namespace:tigera-operator,Attempt:0,}" Apr 28 01:14:40.744317 containerd[1456]: time="2026-04-28T01:14:40.744198435Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 01:14:40.744317 containerd[1456]: time="2026-04-28T01:14:40.744244905Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 01:14:40.744317 containerd[1456]: time="2026-04-28T01:14:40.744259748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:14:40.744461 containerd[1456]: time="2026-04-28T01:14:40.744312284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:14:40.771217 systemd[1]: Started cri-containerd-b742a90a43cfdc9bd3f98cf1977440b4644b78e2fe0b7c18cf67866d66b1a972.scope - libcontainer container b742a90a43cfdc9bd3f98cf1977440b4644b78e2fe0b7c18cf67866d66b1a972. 
Apr 28 01:14:40.801412 containerd[1456]: time="2026-04-28T01:14:40.801371161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6fb8d665dd-mtqx5,Uid:cc4d8aa7-71fc-4ac5-8848-d2660c72a0f7,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b742a90a43cfdc9bd3f98cf1977440b4644b78e2fe0b7c18cf67866d66b1a972\"" Apr 28 01:14:40.802904 containerd[1456]: time="2026-04-28T01:14:40.802887173Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.8\"" Apr 28 01:14:40.877130 kubelet[2502]: E0428 01:14:40.877092 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:14:40.877743 containerd[1456]: time="2026-04-28T01:14:40.877709214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z4crn,Uid:0c41005f-60b1-4353-a3ac-4d0cfe96728b,Namespace:kube-system,Attempt:0,}" Apr 28 01:14:40.897092 containerd[1456]: time="2026-04-28T01:14:40.896931213Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 01:14:40.897092 containerd[1456]: time="2026-04-28T01:14:40.897012886Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 01:14:40.897092 containerd[1456]: time="2026-04-28T01:14:40.897025521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:14:40.898297 containerd[1456]: time="2026-04-28T01:14:40.898236253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:14:40.921168 systemd[1]: Started cri-containerd-3872394702ef87a4b3c871d27e48741ede0fb06495f4f9c17cfa83a066a9d47d.scope - libcontainer container 3872394702ef87a4b3c871d27e48741ede0fb06495f4f9c17cfa83a066a9d47d. Apr 28 01:14:40.936473 containerd[1456]: time="2026-04-28T01:14:40.936427668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z4crn,Uid:0c41005f-60b1-4353-a3ac-4d0cfe96728b,Namespace:kube-system,Attempt:0,} returns sandbox id \"3872394702ef87a4b3c871d27e48741ede0fb06495f4f9c17cfa83a066a9d47d\"" Apr 28 01:14:40.938102 kubelet[2502]: E0428 01:14:40.936920 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:14:40.944820 containerd[1456]: time="2026-04-28T01:14:40.944756307Z" level=info msg="CreateContainer within sandbox \"3872394702ef87a4b3c871d27e48741ede0fb06495f4f9c17cfa83a066a9d47d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 28 01:14:40.956577 containerd[1456]: time="2026-04-28T01:14:40.956520548Z" level=info msg="CreateContainer within sandbox \"3872394702ef87a4b3c871d27e48741ede0fb06495f4f9c17cfa83a066a9d47d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6bd44317adfdc6a94237cae2ed6f7a8a5dbf87df7f2b8775cd8fda8cf1615f3e\"" Apr 28 01:14:40.956964 containerd[1456]: time="2026-04-28T01:14:40.956926507Z" level=info msg="StartContainer for \"6bd44317adfdc6a94237cae2ed6f7a8a5dbf87df7f2b8775cd8fda8cf1615f3e\"" Apr 28 01:14:40.982156 systemd[1]: Started cri-containerd-6bd44317adfdc6a94237cae2ed6f7a8a5dbf87df7f2b8775cd8fda8cf1615f3e.scope - libcontainer container 6bd44317adfdc6a94237cae2ed6f7a8a5dbf87df7f2b8775cd8fda8cf1615f3e. 
Apr 28 01:14:41.002610 containerd[1456]: time="2026-04-28T01:14:41.002541048Z" level=info msg="StartContainer for \"6bd44317adfdc6a94237cae2ed6f7a8a5dbf87df7f2b8775cd8fda8cf1615f3e\" returns successfully" Apr 28 01:14:41.356149 kubelet[2502]: E0428 01:14:41.355798 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:14:41.391740 kubelet[2502]: E0428 01:14:41.391708 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:14:41.613045 kubelet[2502]: E0428 01:14:41.612837 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:14:41.614039 kubelet[2502]: E0428 01:14:41.613908 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:14:41.614151 kubelet[2502]: E0428 01:14:41.614039 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:14:41.621776 kubelet[2502]: I0428 01:14:41.621736 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-z4crn" podStartSLOduration=2.621725447 podStartE2EDuration="2.621725447s" podCreationTimestamp="2026-04-28 01:14:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 01:14:41.620869916 +0000 UTC m=+8.151231986" watchObservedRunningTime="2026-04-28 01:14:41.621725447 +0000 UTC m=+8.152087518" Apr 28 01:14:42.291038 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2203474356.mount: Deactivated successfully. Apr 28 01:14:42.791473 containerd[1456]: time="2026-04-28T01:14:42.791396269Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:14:42.792155 containerd[1456]: time="2026-04-28T01:14:42.792113233Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.8: active requests=0, bytes read=41007543" Apr 28 01:14:42.793222 containerd[1456]: time="2026-04-28T01:14:42.793188403Z" level=info msg="ImageCreate event name:\"sha256:31fe9f73b19b5c10bcbd8f050af2f52293dfee5571cebbb6e816bf013505b9cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:14:42.795061 containerd[1456]: time="2026-04-28T01:14:42.795022920Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:ce8eeaa3e60794610f3851ee06d296575f7c2efef1e3e1f8ac751a1d87ab979c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:14:42.795660 containerd[1456]: time="2026-04-28T01:14:42.795621655Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.8\" with image id \"sha256:31fe9f73b19b5c10bcbd8f050af2f52293dfee5571cebbb6e816bf013505b9cb\", repo tag \"quay.io/tigera/operator:v1.40.8\", repo digest \"quay.io/tigera/operator@sha256:ce8eeaa3e60794610f3851ee06d296575f7c2efef1e3e1f8ac751a1d87ab979c\", size \"41003538\" in 1.99247251s" Apr 28 01:14:42.795660 containerd[1456]: time="2026-04-28T01:14:42.795654730Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.8\" returns image reference \"sha256:31fe9f73b19b5c10bcbd8f050af2f52293dfee5571cebbb6e816bf013505b9cb\"" Apr 28 01:14:42.799024 containerd[1456]: time="2026-04-28T01:14:42.798984064Z" level=info msg="CreateContainer within sandbox \"b742a90a43cfdc9bd3f98cf1977440b4644b78e2fe0b7c18cf67866d66b1a972\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 28 01:14:42.808138 containerd[1456]: 
time="2026-04-28T01:14:42.808094108Z" level=info msg="CreateContainer within sandbox \"b742a90a43cfdc9bd3f98cf1977440b4644b78e2fe0b7c18cf67866d66b1a972\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d5c678b7204bfca5b3b5764756bab1b5c0c7fe3f89a5dd9e88d36660aa2d0f1c\"" Apr 28 01:14:42.808677 containerd[1456]: time="2026-04-28T01:14:42.808649642Z" level=info msg="StartContainer for \"d5c678b7204bfca5b3b5764756bab1b5c0c7fe3f89a5dd9e88d36660aa2d0f1c\"" Apr 28 01:14:42.837158 systemd[1]: Started cri-containerd-d5c678b7204bfca5b3b5764756bab1b5c0c7fe3f89a5dd9e88d36660aa2d0f1c.scope - libcontainer container d5c678b7204bfca5b3b5764756bab1b5c0c7fe3f89a5dd9e88d36660aa2d0f1c. Apr 28 01:14:42.856307 containerd[1456]: time="2026-04-28T01:14:42.856202785Z" level=info msg="StartContainer for \"d5c678b7204bfca5b3b5764756bab1b5c0c7fe3f89a5dd9e88d36660aa2d0f1c\" returns successfully" Apr 28 01:14:43.632599 kubelet[2502]: I0428 01:14:43.632460 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6fb8d665dd-mtqx5" podStartSLOduration=1.638670455 podStartE2EDuration="3.632445867s" podCreationTimestamp="2026-04-28 01:14:40 +0000 UTC" firstStartedPulling="2026-04-28 01:14:40.802553896 +0000 UTC m=+7.332915964" lastFinishedPulling="2026-04-28 01:14:42.796329316 +0000 UTC m=+9.326691376" observedRunningTime="2026-04-28 01:14:43.632063885 +0000 UTC m=+10.162425945" watchObservedRunningTime="2026-04-28 01:14:43.632445867 +0000 UTC m=+10.162807937" Apr 28 01:14:47.636343 sudo[1630]: pam_unix(sudo:session): session closed for user root Apr 28 01:14:47.637782 sshd[1627]: pam_unix(sshd:session): session closed for user core Apr 28 01:14:47.640398 systemd-logind[1435]: Session 7 logged out. Waiting for processes to exit. Apr 28 01:14:47.643270 systemd[1]: sshd@6-10.0.0.153:22-10.0.0.1:46274.service: Deactivated successfully. Apr 28 01:14:47.646002 systemd[1]: session-7.scope: Deactivated successfully. 
Apr 28 01:14:47.646549 systemd[1]: session-7.scope: Consumed 4.425s CPU time, 159.6M memory peak, 0B memory swap peak. Apr 28 01:14:47.650270 systemd-logind[1435]: Removed session 7. Apr 28 01:14:49.066536 systemd[1]: Created slice kubepods-besteffort-pod47f54d0b_825d_4cd9_831e_cf181673024c.slice - libcontainer container kubepods-besteffort-pod47f54d0b_825d_4cd9_831e_cf181673024c.slice. Apr 28 01:14:49.074680 kubelet[2502]: I0428 01:14:49.074598 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vflm8\" (UniqueName: \"kubernetes.io/projected/47f54d0b-825d-4cd9-831e-cf181673024c-kube-api-access-vflm8\") pod \"calico-typha-8cdbb4b44-8lpgf\" (UID: \"47f54d0b-825d-4cd9-831e-cf181673024c\") " pod="calico-system/calico-typha-8cdbb4b44-8lpgf" Apr 28 01:14:49.074934 kubelet[2502]: I0428 01:14:49.074850 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47f54d0b-825d-4cd9-831e-cf181673024c-tigera-ca-bundle\") pod \"calico-typha-8cdbb4b44-8lpgf\" (UID: \"47f54d0b-825d-4cd9-831e-cf181673024c\") " pod="calico-system/calico-typha-8cdbb4b44-8lpgf" Apr 28 01:14:49.076988 kubelet[2502]: I0428 01:14:49.075306 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/47f54d0b-825d-4cd9-831e-cf181673024c-typha-certs\") pod \"calico-typha-8cdbb4b44-8lpgf\" (UID: \"47f54d0b-825d-4cd9-831e-cf181673024c\") " pod="calico-system/calico-typha-8cdbb4b44-8lpgf" Apr 28 01:14:49.123020 systemd[1]: Created slice kubepods-besteffort-podca184153_6f42_4d1e_901d_d3894ee9e4af.slice - libcontainer container kubepods-besteffort-podca184153_6f42_4d1e_901d_d3894ee9e4af.slice. 
Apr 28 01:14:49.175972 kubelet[2502]: I0428 01:14:49.175901 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ca184153-6f42-4d1e-901d-d3894ee9e4af-var-lib-calico\") pod \"calico-node-rbn78\" (UID: \"ca184153-6f42-4d1e-901d-d3894ee9e4af\") " pod="calico-system/calico-node-rbn78" Apr 28 01:14:49.175972 kubelet[2502]: I0428 01:14:49.175970 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ca184153-6f42-4d1e-901d-d3894ee9e4af-policysync\") pod \"calico-node-rbn78\" (UID: \"ca184153-6f42-4d1e-901d-d3894ee9e4af\") " pod="calico-system/calico-node-rbn78" Apr 28 01:14:49.176110 kubelet[2502]: I0428 01:14:49.176006 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ca184153-6f42-4d1e-901d-d3894ee9e4af-var-run-calico\") pod \"calico-node-rbn78\" (UID: \"ca184153-6f42-4d1e-901d-d3894ee9e4af\") " pod="calico-system/calico-node-rbn78" Apr 28 01:14:49.176110 kubelet[2502]: I0428 01:14:49.176028 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ca184153-6f42-4d1e-901d-d3894ee9e4af-cni-bin-dir\") pod \"calico-node-rbn78\" (UID: \"ca184153-6f42-4d1e-901d-d3894ee9e4af\") " pod="calico-system/calico-node-rbn78" Apr 28 01:14:49.176110 kubelet[2502]: I0428 01:14:49.176039 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ca184153-6f42-4d1e-901d-d3894ee9e4af-cni-log-dir\") pod \"calico-node-rbn78\" (UID: \"ca184153-6f42-4d1e-901d-d3894ee9e4af\") " pod="calico-system/calico-node-rbn78" Apr 28 01:14:49.176110 kubelet[2502]: I0428 01:14:49.176079 2502 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/ca184153-6f42-4d1e-901d-d3894ee9e4af-nodeproc\") pod \"calico-node-rbn78\" (UID: \"ca184153-6f42-4d1e-901d-d3894ee9e4af\") " pod="calico-system/calico-node-rbn78" Apr 28 01:14:49.176110 kubelet[2502]: I0428 01:14:49.176104 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca184153-6f42-4d1e-901d-d3894ee9e4af-xtables-lock\") pod \"calico-node-rbn78\" (UID: \"ca184153-6f42-4d1e-901d-d3894ee9e4af\") " pod="calico-system/calico-node-rbn78" Apr 28 01:14:49.176229 kubelet[2502]: I0428 01:14:49.176120 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca184153-6f42-4d1e-901d-d3894ee9e4af-lib-modules\") pod \"calico-node-rbn78\" (UID: \"ca184153-6f42-4d1e-901d-d3894ee9e4af\") " pod="calico-system/calico-node-rbn78" Apr 28 01:14:49.176229 kubelet[2502]: I0428 01:14:49.176143 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/ca184153-6f42-4d1e-901d-d3894ee9e4af-sys-fs\") pod \"calico-node-rbn78\" (UID: \"ca184153-6f42-4d1e-901d-d3894ee9e4af\") " pod="calico-system/calico-node-rbn78" Apr 28 01:14:49.176229 kubelet[2502]: I0428 01:14:49.176156 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlsgr\" (UniqueName: \"kubernetes.io/projected/ca184153-6f42-4d1e-901d-d3894ee9e4af-kube-api-access-qlsgr\") pod \"calico-node-rbn78\" (UID: \"ca184153-6f42-4d1e-901d-d3894ee9e4af\") " pod="calico-system/calico-node-rbn78" Apr 28 01:14:49.176229 kubelet[2502]: I0428 01:14:49.176167 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ca184153-6f42-4d1e-901d-d3894ee9e4af-node-certs\") pod \"calico-node-rbn78\" (UID: \"ca184153-6f42-4d1e-901d-d3894ee9e4af\") " pod="calico-system/calico-node-rbn78" Apr 28 01:14:49.176229 kubelet[2502]: I0428 01:14:49.176188 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/ca184153-6f42-4d1e-901d-d3894ee9e4af-bpffs\") pod \"calico-node-rbn78\" (UID: \"ca184153-6f42-4d1e-901d-d3894ee9e4af\") " pod="calico-system/calico-node-rbn78" Apr 28 01:14:49.176303 kubelet[2502]: I0428 01:14:49.176199 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ca184153-6f42-4d1e-901d-d3894ee9e4af-flexvol-driver-host\") pod \"calico-node-rbn78\" (UID: \"ca184153-6f42-4d1e-901d-d3894ee9e4af\") " pod="calico-system/calico-node-rbn78" Apr 28 01:14:49.176303 kubelet[2502]: I0428 01:14:49.176209 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca184153-6f42-4d1e-901d-d3894ee9e4af-tigera-ca-bundle\") pod \"calico-node-rbn78\" (UID: \"ca184153-6f42-4d1e-901d-d3894ee9e4af\") " pod="calico-system/calico-node-rbn78" Apr 28 01:14:49.176303 kubelet[2502]: I0428 01:14:49.176221 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ca184153-6f42-4d1e-901d-d3894ee9e4af-cni-net-dir\") pod \"calico-node-rbn78\" (UID: \"ca184153-6f42-4d1e-901d-d3894ee9e4af\") " pod="calico-system/calico-node-rbn78" Apr 28 01:14:49.221787 kubelet[2502]: E0428 01:14:49.221720 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7dx44" podUID="22c17e9a-2b9c-4268-bd82-cc9430b54e6f" Apr 28 01:14:49.276514 kubelet[2502]: I0428 01:14:49.276434 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/22c17e9a-2b9c-4268-bd82-cc9430b54e6f-socket-dir\") pod \"csi-node-driver-7dx44\" (UID: \"22c17e9a-2b9c-4268-bd82-cc9430b54e6f\") " pod="calico-system/csi-node-driver-7dx44" Apr 28 01:14:49.276514 kubelet[2502]: I0428 01:14:49.276510 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22c17e9a-2b9c-4268-bd82-cc9430b54e6f-kubelet-dir\") pod \"csi-node-driver-7dx44\" (UID: \"22c17e9a-2b9c-4268-bd82-cc9430b54e6f\") " pod="calico-system/csi-node-driver-7dx44" Apr 28 01:14:49.276696 kubelet[2502]: I0428 01:14:49.276533 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/22c17e9a-2b9c-4268-bd82-cc9430b54e6f-varrun\") pod \"csi-node-driver-7dx44\" (UID: \"22c17e9a-2b9c-4268-bd82-cc9430b54e6f\") " pod="calico-system/csi-node-driver-7dx44" Apr 28 01:14:49.276696 kubelet[2502]: I0428 01:14:49.276586 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/22c17e9a-2b9c-4268-bd82-cc9430b54e6f-registration-dir\") pod \"csi-node-driver-7dx44\" (UID: \"22c17e9a-2b9c-4268-bd82-cc9430b54e6f\") " pod="calico-system/csi-node-driver-7dx44" Apr 28 01:14:49.276696 kubelet[2502]: I0428 01:14:49.276649 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dmhs\" (UniqueName: \"kubernetes.io/projected/22c17e9a-2b9c-4268-bd82-cc9430b54e6f-kube-api-access-4dmhs\") pod 
\"csi-node-driver-7dx44\" (UID: \"22c17e9a-2b9c-4268-bd82-cc9430b54e6f\") " pod="calico-system/csi-node-driver-7dx44" Apr 28 01:14:49.278447 kubelet[2502]: E0428 01:14:49.278431 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:49.278603 kubelet[2502]: W0428 01:14:49.278528 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:49.278603 kubelet[2502]: E0428 01:14:49.278592 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 01:14:49.280857 kubelet[2502]: E0428 01:14:49.280828 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:49.280857 kubelet[2502]: W0428 01:14:49.280851 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:49.280919 kubelet[2502]: E0428 01:14:49.280862 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 01:14:49.283317 kubelet[2502]: E0428 01:14:49.283297 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:49.283317 kubelet[2502]: W0428 01:14:49.283315 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:49.283388 kubelet[2502]: E0428 01:14:49.283325 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 01:14:49.371930 kubelet[2502]: E0428 01:14:49.371803 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:14:49.372432 containerd[1456]: time="2026-04-28T01:14:49.372372573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8cdbb4b44-8lpgf,Uid:47f54d0b-825d-4cd9-831e-cf181673024c,Namespace:calico-system,Attempt:0,}" Apr 28 01:14:49.378007 kubelet[2502]: E0428 01:14:49.377932 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:49.378007 kubelet[2502]: W0428 01:14:49.378006 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:49.378138 kubelet[2502]: E0428 01:14:49.378019 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 01:14:49.378294 kubelet[2502]: E0428 01:14:49.378272 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:49.378294 kubelet[2502]: W0428 01:14:49.378292 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:49.378370 kubelet[2502]: E0428 01:14:49.378304 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 01:14:49.378568 kubelet[2502]: E0428 01:14:49.378553 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:49.378568 kubelet[2502]: W0428 01:14:49.378567 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:49.378648 kubelet[2502]: E0428 01:14:49.378576 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 01:14:49.378875 kubelet[2502]: E0428 01:14:49.378855 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:49.378875 kubelet[2502]: W0428 01:14:49.378866 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:49.378875 kubelet[2502]: E0428 01:14:49.378875 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 01:14:49.379125 kubelet[2502]: E0428 01:14:49.379109 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:49.379148 kubelet[2502]: W0428 01:14:49.379125 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:49.379148 kubelet[2502]: E0428 01:14:49.379134 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 01:14:49.379409 kubelet[2502]: E0428 01:14:49.379393 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:49.379432 kubelet[2502]: W0428 01:14:49.379409 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:49.379432 kubelet[2502]: E0428 01:14:49.379417 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 01:14:49.379656 kubelet[2502]: E0428 01:14:49.379631 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:49.379656 kubelet[2502]: W0428 01:14:49.379650 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:49.379736 kubelet[2502]: E0428 01:14:49.379660 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 01:14:49.379879 kubelet[2502]: E0428 01:14:49.379861 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:49.379879 kubelet[2502]: W0428 01:14:49.379878 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:49.379879 kubelet[2502]: E0428 01:14:49.379888 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 01:14:49.380175 kubelet[2502]: E0428 01:14:49.380158 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:49.380175 kubelet[2502]: W0428 01:14:49.380174 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:49.380269 kubelet[2502]: E0428 01:14:49.380180 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 01:14:49.394783 containerd[1456]: time="2026-04-28T01:14:49.394521792Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 01:14:49.394783 containerd[1456]: time="2026-04-28T01:14:49.394763862Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 01:14:49.394844 containerd[1456]: time="2026-04-28T01:14:49.394782973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:14:49.395148 containerd[1456]: time="2026-04-28T01:14:49.395046726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:14:49.413157 systemd[1]: Started cri-containerd-4ec1015670ba35cf93c0f13fba2db91017fe082e34d0635611f7399ce8b7eeb4.scope - libcontainer container 4ec1015670ba35cf93c0f13fba2db91017fe082e34d0635611f7399ce8b7eeb4.
Apr 28 01:14:49.429743 containerd[1456]: time="2026-04-28T01:14:49.429597514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rbn78,Uid:ca184153-6f42-4d1e-901d-d3894ee9e4af,Namespace:calico-system,Attempt:0,}" Apr 28 01:14:49.444672 containerd[1456]: time="2026-04-28T01:14:49.444622581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8cdbb4b44-8lpgf,Uid:47f54d0b-825d-4cd9-831e-cf181673024c,Namespace:calico-system,Attempt:0,} returns sandbox id \"4ec1015670ba35cf93c0f13fba2db91017fe082e34d0635611f7399ce8b7eeb4\"" Apr 28 01:14:49.445584 kubelet[2502]: E0428 01:14:49.445341 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:14:49.446591 containerd[1456]: time="2026-04-28T01:14:49.446565454Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.5\"" Apr 28 01:14:49.455550 containerd[1456]: time="2026-04-28T01:14:49.455303433Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 01:14:49.455550 containerd[1456]: time="2026-04-28T01:14:49.455366241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 01:14:49.455550 containerd[1456]: time="2026-04-28T01:14:49.455377305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:14:49.455746 containerd[1456]: time="2026-04-28T01:14:49.455542139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:14:49.479263 systemd[1]: Started cri-containerd-8589a5a5bf39ce62636f5b6392bb332c0cddeee7b341bb0ea6cb51fce8000c25.scope - libcontainer container 8589a5a5bf39ce62636f5b6392bb332c0cddeee7b341bb0ea6cb51fce8000c25. Apr 28 01:14:49.498733 containerd[1456]: time="2026-04-28T01:14:49.498565490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rbn78,Uid:ca184153-6f42-4d1e-901d-d3894ee9e4af,Namespace:calico-system,Attempt:0,} returns sandbox id \"8589a5a5bf39ce62636f5b6392bb332c0cddeee7b341bb0ea6cb51fce8000c25\"" Apr 28 01:14:49.589794 kubelet[2502]: E0428 01:14:49.589674 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:14:49.675359 kubelet[2502]: E0428 01:14:49.675264 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:49.675359 kubelet[2502]: W0428 01:14:49.675287 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:49.675359 kubelet[2502]: E0428 01:14:49.675301 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 01:14:51.232315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2058741113.mount: Deactivated successfully.
Apr 28 01:14:51.585006 kubelet[2502]: E0428 01:14:51.584794 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7dx44" podUID="22c17e9a-2b9c-4268-bd82-cc9430b54e6f" Apr 28 01:14:51.903769 containerd[1456]: time="2026-04-28T01:14:51.903627985Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:14:51.904486 containerd[1456]: time="2026-04-28T01:14:51.904420754Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.5: active requests=0, bytes read=35813139" Apr 28 01:14:51.905273 containerd[1456]: time="2026-04-28T01:14:51.905223048Z" level=info msg="ImageCreate event name:\"sha256:20cad3a3c174ee02dd6e103e3a7e314ada245d5e414fef6d049c10829d8856dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:14:51.907442 containerd[1456]: time="2026-04-28T01:14:51.907375425Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:76afd8f80569b3bf783991ce5348294319cefa6d6cca127710d0e068096048a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:14:51.908002 containerd[1456]: time="2026-04-28T01:14:51.907904976Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.5\" with image id \"sha256:20cad3a3c174ee02dd6e103e3a7e314ada245d5e414fef6d049c10829d8856dc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:76afd8f80569b3bf783991ce5348294319cefa6d6cca127710d0e068096048a6\", size \"35812993\" in 2.461301762s" Apr 28 01:14:51.908002 containerd[1456]: time="2026-04-28T01:14:51.907984216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.5\" returns image reference 
\"sha256:20cad3a3c174ee02dd6e103e3a7e314ada245d5e414fef6d049c10829d8856dc\"" Apr 28 01:14:51.909139 containerd[1456]: time="2026-04-28T01:14:51.909083552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\"" Apr 28 01:14:51.918152 containerd[1456]: time="2026-04-28T01:14:51.918111998Z" level=info msg="CreateContainer within sandbox \"4ec1015670ba35cf93c0f13fba2db91017fe082e34d0635611f7399ce8b7eeb4\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 28 01:14:51.930025 containerd[1456]: time="2026-04-28T01:14:51.929893685Z" level=info msg="CreateContainer within sandbox \"4ec1015670ba35cf93c0f13fba2db91017fe082e34d0635611f7399ce8b7eeb4\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"10a78f1ac86a96e396197ee5452c9fde2645806e03d03b9b69e7a9c224a13c7d\"" Apr 28 01:14:51.930468 containerd[1456]: time="2026-04-28T01:14:51.930445809Z" level=info msg="StartContainer for \"10a78f1ac86a96e396197ee5452c9fde2645806e03d03b9b69e7a9c224a13c7d\"" Apr 28 01:14:51.965160 systemd[1]: Started cri-containerd-10a78f1ac86a96e396197ee5452c9fde2645806e03d03b9b69e7a9c224a13c7d.scope - libcontainer container 10a78f1ac86a96e396197ee5452c9fde2645806e03d03b9b69e7a9c224a13c7d. 
Apr 28 01:14:51.997638 containerd[1456]: time="2026-04-28T01:14:51.997555001Z" level=info msg="StartContainer for \"10a78f1ac86a96e396197ee5452c9fde2645806e03d03b9b69e7a9c224a13c7d\" returns successfully" Apr 28 01:14:52.639007 kubelet[2502]: E0428 01:14:52.638889 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:14:52.649504 kubelet[2502]: I0428 01:14:52.649450 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8cdbb4b44-8lpgf" podStartSLOduration=1.186658146 podStartE2EDuration="3.64943389s" podCreationTimestamp="2026-04-28 01:14:49 +0000 UTC" firstStartedPulling="2026-04-28 01:14:49.446119744 +0000 UTC m=+15.976481806" lastFinishedPulling="2026-04-28 01:14:51.90889549 +0000 UTC m=+18.439257550" observedRunningTime="2026-04-28 01:14:52.648771559 +0000 UTC m=+19.179133631" watchObservedRunningTime="2026-04-28 01:14:52.64943389 +0000 UTC m=+19.179795966" Apr 28 01:14:52.701632 kubelet[2502]: E0428 01:14:52.701597 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:52.701632 kubelet[2502]: W0428 01:14:52.701624 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:52.701733 kubelet[2502]: E0428 01:14:52.701640 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 01:14:52.701828 kubelet[2502]: E0428 01:14:52.701805 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:52.701828 kubelet[2502]: W0428 01:14:52.701819 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:52.701828 kubelet[2502]: E0428 01:14:52.701827 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 01:14:52.702074 kubelet[2502]: E0428 01:14:52.702021 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:52.702074 kubelet[2502]: W0428 01:14:52.702065 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:52.702113 kubelet[2502]: E0428 01:14:52.702075 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 01:14:52.702252 kubelet[2502]: E0428 01:14:52.702235 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:52.702252 kubelet[2502]: W0428 01:14:52.702248 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:52.702286 kubelet[2502]: E0428 01:14:52.702253 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 01:14:52.702502 kubelet[2502]: E0428 01:14:52.702483 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:52.702522 kubelet[2502]: W0428 01:14:52.702503 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:52.702522 kubelet[2502]: E0428 01:14:52.702515 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 01:14:52.702715 kubelet[2502]: E0428 01:14:52.702698 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:52.702715 kubelet[2502]: W0428 01:14:52.702711 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:52.702783 kubelet[2502]: E0428 01:14:52.702717 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 01:14:52.703013 kubelet[2502]: E0428 01:14:52.702998 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:52.703091 kubelet[2502]: W0428 01:14:52.703013 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:52.703091 kubelet[2502]: E0428 01:14:52.703020 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 01:14:52.703218 kubelet[2502]: E0428 01:14:52.703201 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:52.703218 kubelet[2502]: W0428 01:14:52.703216 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:52.703252 kubelet[2502]: E0428 01:14:52.703223 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 01:14:52.703421 kubelet[2502]: E0428 01:14:52.703405 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:52.703421 kubelet[2502]: W0428 01:14:52.703419 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:52.703453 kubelet[2502]: E0428 01:14:52.703426 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 01:14:52.703656 kubelet[2502]: E0428 01:14:52.703643 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:52.703676 kubelet[2502]: W0428 01:14:52.703656 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:52.703676 kubelet[2502]: E0428 01:14:52.703661 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 01:14:52.703826 kubelet[2502]: E0428 01:14:52.703811 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:52.703826 kubelet[2502]: W0428 01:14:52.703825 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:52.703856 kubelet[2502]: E0428 01:14:52.703830 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 01:14:52.704108 kubelet[2502]: E0428 01:14:52.704094 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:52.704108 kubelet[2502]: W0428 01:14:52.704108 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:52.704171 kubelet[2502]: E0428 01:14:52.704113 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 01:14:52.704339 kubelet[2502]: E0428 01:14:52.704324 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:52.704357 kubelet[2502]: W0428 01:14:52.704339 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:52.704357 kubelet[2502]: E0428 01:14:52.704344 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 01:14:52.704516 kubelet[2502]: E0428 01:14:52.704488 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:52.704516 kubelet[2502]: W0428 01:14:52.704504 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:52.704516 kubelet[2502]: E0428 01:14:52.704509 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 01:14:52.704734 kubelet[2502]: E0428 01:14:52.704712 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:52.704734 kubelet[2502]: W0428 01:14:52.704725 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:52.704734 kubelet[2502]: E0428 01:14:52.704730 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 01:14:52.705041 kubelet[2502]: E0428 01:14:52.705024 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:52.705041 kubelet[2502]: W0428 01:14:52.705038 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:52.705041 kubelet[2502]: E0428 01:14:52.705061 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 01:14:52.705253 kubelet[2502]: E0428 01:14:52.705229 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:52.705253 kubelet[2502]: W0428 01:14:52.705234 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:52.705253 kubelet[2502]: E0428 01:14:52.705239 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 01:14:52.705449 kubelet[2502]: E0428 01:14:52.705430 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:52.705449 kubelet[2502]: W0428 01:14:52.705439 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:52.705449 kubelet[2502]: E0428 01:14:52.705445 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 01:14:52.705709 kubelet[2502]: E0428 01:14:52.705686 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:52.705709 kubelet[2502]: W0428 01:14:52.705707 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:52.705776 kubelet[2502]: E0428 01:14:52.705718 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 01:14:52.706036 kubelet[2502]: E0428 01:14:52.706019 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:52.706036 kubelet[2502]: W0428 01:14:52.706034 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:52.706099 kubelet[2502]: E0428 01:14:52.706042 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 01:14:52.706259 kubelet[2502]: E0428 01:14:52.706236 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:52.706259 kubelet[2502]: W0428 01:14:52.706245 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:52.706259 kubelet[2502]: E0428 01:14:52.706251 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 01:14:52.706624 kubelet[2502]: E0428 01:14:52.706582 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:52.706624 kubelet[2502]: W0428 01:14:52.706623 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:52.706678 kubelet[2502]: E0428 01:14:52.706632 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 01:14:52.706979 kubelet[2502]: E0428 01:14:52.706935 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:52.707014 kubelet[2502]: W0428 01:14:52.706994 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:52.707031 kubelet[2502]: E0428 01:14:52.707016 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 01:14:52.707230 kubelet[2502]: E0428 01:14:52.707215 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:52.707230 kubelet[2502]: W0428 01:14:52.707230 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:52.707264 kubelet[2502]: E0428 01:14:52.707236 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 01:14:52.707429 kubelet[2502]: E0428 01:14:52.707416 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:52.707450 kubelet[2502]: W0428 01:14:52.707429 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:52.707450 kubelet[2502]: E0428 01:14:52.707435 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 01:14:52.707648 kubelet[2502]: E0428 01:14:52.707632 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:52.707648 kubelet[2502]: W0428 01:14:52.707645 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:52.707680 kubelet[2502]: E0428 01:14:52.707651 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 01:14:52.707820 kubelet[2502]: E0428 01:14:52.707805 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:52.707840 kubelet[2502]: W0428 01:14:52.707819 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:52.707840 kubelet[2502]: E0428 01:14:52.707825 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 01:14:52.708100 kubelet[2502]: E0428 01:14:52.708080 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:52.708100 kubelet[2502]: W0428 01:14:52.708097 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:52.708154 kubelet[2502]: E0428 01:14:52.708106 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 01:14:52.708293 kubelet[2502]: E0428 01:14:52.708275 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:52.708293 kubelet[2502]: W0428 01:14:52.708289 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:52.708328 kubelet[2502]: E0428 01:14:52.708295 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 01:14:52.708500 kubelet[2502]: E0428 01:14:52.708483 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:52.708500 kubelet[2502]: W0428 01:14:52.708495 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:52.708532 kubelet[2502]: E0428 01:14:52.708501 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 01:14:52.708660 kubelet[2502]: E0428 01:14:52.708644 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:52.708660 kubelet[2502]: W0428 01:14:52.708657 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:52.708693 kubelet[2502]: E0428 01:14:52.708662 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 01:14:52.708858 kubelet[2502]: E0428 01:14:52.708843 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:52.708858 kubelet[2502]: W0428 01:14:52.708856 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:52.708896 kubelet[2502]: E0428 01:14:52.708861 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 01:14:52.709469 kubelet[2502]: E0428 01:14:52.709435 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:52.709469 kubelet[2502]: W0428 01:14:52.709455 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:52.709469 kubelet[2502]: E0428 01:14:52.709462 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 01:14:53.584417 kubelet[2502]: E0428 01:14:53.584362 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7dx44" podUID="22c17e9a-2b9c-4268-bd82-cc9430b54e6f" Apr 28 01:14:53.640596 kubelet[2502]: I0428 01:14:53.640535 2502 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 28 01:14:53.640927 kubelet[2502]: E0428 01:14:53.640813 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:14:53.712391 kubelet[2502]: E0428 01:14:53.712345 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:53.712391 kubelet[2502]: W0428 01:14:53.712404 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:53.712391 kubelet[2502]: E0428 01:14:53.712421 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 01:14:53.712655 kubelet[2502]: E0428 01:14:53.712572 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:53.712655 kubelet[2502]: W0428 01:14:53.712577 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:53.712655 kubelet[2502]: E0428 01:14:53.712584 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 01:14:53.712972 kubelet[2502]: E0428 01:14:53.712922 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:53.713002 kubelet[2502]: W0428 01:14:53.712981 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:53.713002 kubelet[2502]: E0428 01:14:53.712990 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 01:14:53.713197 kubelet[2502]: E0428 01:14:53.713174 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:53.713197 kubelet[2502]: W0428 01:14:53.713180 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:53.713197 kubelet[2502]: E0428 01:14:53.713186 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 01:14:53.713392 kubelet[2502]: E0428 01:14:53.713364 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:53.713392 kubelet[2502]: W0428 01:14:53.713370 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:53.713392 kubelet[2502]: E0428 01:14:53.713375 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 01:14:53.713619 kubelet[2502]: E0428 01:14:53.713599 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:53.713619 kubelet[2502]: W0428 01:14:53.713614 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:53.713619 kubelet[2502]: E0428 01:14:53.713620 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 01:14:53.713871 kubelet[2502]: E0428 01:14:53.713849 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:53.713897 kubelet[2502]: W0428 01:14:53.713870 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:53.713897 kubelet[2502]: E0428 01:14:53.713882 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 01:14:53.714149 kubelet[2502]: E0428 01:14:53.714133 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:53.714149 kubelet[2502]: W0428 01:14:53.714148 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:53.714193 kubelet[2502]: E0428 01:14:53.714155 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 01:14:53.714360 kubelet[2502]: E0428 01:14:53.714345 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 01:14:53.714382 kubelet[2502]: W0428 01:14:53.714360 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 01:14:53.714382 kubelet[2502]: E0428 01:14:53.714367 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 28 01:14:53.738617 containerd[1456]: time="2026-04-28T01:14:53.738551531Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 01:14:53.739143 containerd[1456]: time="2026-04-28T01:14:53.739106441Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5: active requests=0, bytes read=4601981"
Apr 28 01:14:53.740027 containerd[1456]: time="2026-04-28T01:14:53.739998777Z" level=info msg="ImageCreate event name:\"sha256:a8eb0feebda3c272a6a24ff173b5058ff04cbc78cfbf08befb26f6548ef76625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 01:14:53.741729 containerd[1456]: time="2026-04-28T01:14:53.741661802Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:df00fee6895ac073066d91243f29733e71f479317cacef49d50c244bb2d21ea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 01:14:53.742339 containerd[1456]: time="2026-04-28T01:14:53.742285436Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\" with image id \"sha256:a8eb0feebda3c272a6a24ff173b5058ff04cbc78cfbf08befb26f6548ef76625\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:df00fee6895ac073066d91243f29733e71f479317cacef49d50c244bb2d21ea1\", size \"7563366\" in 1.833174305s"
Apr 28 01:14:53.742339 containerd[1456]: time="2026-04-28T01:14:53.742325755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\" returns image reference \"sha256:a8eb0feebda3c272a6a24ff173b5058ff04cbc78cfbf08befb26f6548ef76625\""
Apr 28 01:14:53.745915 containerd[1456]: time="2026-04-28T01:14:53.745872457Z" level=info msg="CreateContainer within sandbox \"8589a5a5bf39ce62636f5b6392bb332c0cddeee7b341bb0ea6cb51fce8000c25\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Apr 28 01:14:53.757497 containerd[1456]: time="2026-04-28T01:14:53.757436400Z" level=info msg="CreateContainer within sandbox \"8589a5a5bf39ce62636f5b6392bb332c0cddeee7b341bb0ea6cb51fce8000c25\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"86a13c85c8a2139c7b8f12187d5d9cf8d5104426f6db8ebd8e5ed863e7d607f9\""
Apr 28 01:14:53.758091 containerd[1456]: time="2026-04-28T01:14:53.758076235Z" level=info msg="StartContainer for \"86a13c85c8a2139c7b8f12187d5d9cf8d5104426f6db8ebd8e5ed863e7d607f9\""
Apr 28 01:14:53.778351 systemd[1]: run-containerd-runc-k8s.io-86a13c85c8a2139c7b8f12187d5d9cf8d5104426f6db8ebd8e5ed863e7d607f9-runc.dDmQpI.mount: Deactivated successfully.
Apr 28 01:14:53.785198 systemd[1]: Started cri-containerd-86a13c85c8a2139c7b8f12187d5d9cf8d5104426f6db8ebd8e5ed863e7d607f9.scope - libcontainer container 86a13c85c8a2139c7b8f12187d5d9cf8d5104426f6db8ebd8e5ed863e7d607f9.
Apr 28 01:14:53.806152 containerd[1456]: time="2026-04-28T01:14:53.806081281Z" level=info msg="StartContainer for \"86a13c85c8a2139c7b8f12187d5d9cf8d5104426f6db8ebd8e5ed863e7d607f9\" returns successfully"
Apr 28 01:14:53.812485 systemd[1]: cri-containerd-86a13c85c8a2139c7b8f12187d5d9cf8d5104426f6db8ebd8e5ed863e7d607f9.scope: Deactivated successfully.
Apr 28 01:14:53.830473 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86a13c85c8a2139c7b8f12187d5d9cf8d5104426f6db8ebd8e5ed863e7d607f9-rootfs.mount: Deactivated successfully.
Apr 28 01:14:53.890872 containerd[1456]: time="2026-04-28T01:14:53.890665830Z" level=info msg="shim disconnected" id=86a13c85c8a2139c7b8f12187d5d9cf8d5104426f6db8ebd8e5ed863e7d607f9 namespace=k8s.io
Apr 28 01:14:53.890872 containerd[1456]: time="2026-04-28T01:14:53.890725044Z" level=warning msg="cleaning up after shim disconnected" id=86a13c85c8a2139c7b8f12187d5d9cf8d5104426f6db8ebd8e5ed863e7d607f9 namespace=k8s.io
Apr 28 01:14:53.890872 containerd[1456]: time="2026-04-28T01:14:53.890732316Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 28 01:14:54.645134 containerd[1456]: time="2026-04-28T01:14:54.645083483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.5\""
Apr 28 01:14:55.584388 kubelet[2502]: E0428 01:14:55.584341 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7dx44" podUID="22c17e9a-2b9c-4268-bd82-cc9430b54e6f"
Apr 28 01:14:56.734737 update_engine[1441]: I20260428 01:14:56.734609 1441 update_attempter.cc:509] Updating boot flags...
Apr 28 01:14:56.756001 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (3267)
Apr 28 01:14:56.782232 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (3268)
Apr 28 01:14:57.584726 kubelet[2502]: E0428 01:14:57.584624 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7dx44" podUID="22c17e9a-2b9c-4268-bd82-cc9430b54e6f"
Apr 28 01:14:59.584692 kubelet[2502]: E0428 01:14:59.584590 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7dx44" podUID="22c17e9a-2b9c-4268-bd82-cc9430b54e6f"
Apr 28 01:15:01.584522 kubelet[2502]: E0428 01:15:01.584444 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7dx44" podUID="22c17e9a-2b9c-4268-bd82-cc9430b54e6f"
Apr 28 01:15:02.356533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3099888846.mount: Deactivated successfully.
Apr 28 01:15:02.525567 containerd[1456]: time="2026-04-28T01:15:02.525387858Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 01:15:02.526716 containerd[1456]: time="2026-04-28T01:15:02.526577032Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.5: active requests=0, bytes read=159374404"
Apr 28 01:15:02.528147 containerd[1456]: time="2026-04-28T01:15:02.528062527Z" level=info msg="ImageCreate event name:\"sha256:cfa3bb2488693bde06ff066d7e0912d23ef7e2aa2c2778dfcd5591694d840c19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 01:15:02.530134 containerd[1456]: time="2026-04-28T01:15:02.530084636Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e2426b97a645ed620e0f4035d594f2f3344b0547cd3dc3458f45e06d5cebdad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 01:15:02.530516 containerd[1456]: time="2026-04-28T01:15:02.530460294Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.5\" with image id \"sha256:cfa3bb2488693bde06ff066d7e0912d23ef7e2aa2c2778dfcd5591694d840c19\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e2426b97a645ed620e0f4035d594f2f3344b0547cd3dc3458f45e06d5cebdad7\", size \"159374266\" in 7.885326172s"
Apr 28 01:15:02.530516 containerd[1456]: time="2026-04-28T01:15:02.530498154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.5\" returns image reference \"sha256:cfa3bb2488693bde06ff066d7e0912d23ef7e2aa2c2778dfcd5591694d840c19\""
Apr 28 01:15:02.541481 containerd[1456]: time="2026-04-28T01:15:02.541368925Z" level=info msg="CreateContainer within sandbox \"8589a5a5bf39ce62636f5b6392bb332c0cddeee7b341bb0ea6cb51fce8000c25\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Apr 28 01:15:02.558264 containerd[1456]: time="2026-04-28T01:15:02.558213868Z" level=info msg="CreateContainer within sandbox \"8589a5a5bf39ce62636f5b6392bb332c0cddeee7b341bb0ea6cb51fce8000c25\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"2b60915318f4721b95f1c458aacfc6834665739c3e61fe15bc12e778fce2cff4\""
Apr 28 01:15:02.558642 containerd[1456]: time="2026-04-28T01:15:02.558610135Z" level=info msg="StartContainer for \"2b60915318f4721b95f1c458aacfc6834665739c3e61fe15bc12e778fce2cff4\""
Apr 28 01:15:02.602276 systemd[1]: Started cri-containerd-2b60915318f4721b95f1c458aacfc6834665739c3e61fe15bc12e778fce2cff4.scope - libcontainer container 2b60915318f4721b95f1c458aacfc6834665739c3e61fe15bc12e778fce2cff4.
Apr 28 01:15:02.672145 containerd[1456]: time="2026-04-28T01:15:02.671929734Z" level=info msg="StartContainer for \"2b60915318f4721b95f1c458aacfc6834665739c3e61fe15bc12e778fce2cff4\" returns successfully"
Apr 28 01:15:02.695742 systemd[1]: cri-containerd-2b60915318f4721b95f1c458aacfc6834665739c3e61fe15bc12e778fce2cff4.scope: Deactivated successfully.
Apr 28 01:15:02.717133 containerd[1456]: time="2026-04-28T01:15:02.716918551Z" level=info msg="shim disconnected" id=2b60915318f4721b95f1c458aacfc6834665739c3e61fe15bc12e778fce2cff4 namespace=k8s.io
Apr 28 01:15:02.717133 containerd[1456]: time="2026-04-28T01:15:02.717104402Z" level=warning msg="cleaning up after shim disconnected" id=2b60915318f4721b95f1c458aacfc6834665739c3e61fe15bc12e778fce2cff4 namespace=k8s.io
Apr 28 01:15:02.717133 containerd[1456]: time="2026-04-28T01:15:02.717112703Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 28 01:15:03.357464 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b60915318f4721b95f1c458aacfc6834665739c3e61fe15bc12e778fce2cff4-rootfs.mount: Deactivated successfully.
Apr 28 01:15:03.584032 kubelet[2502]: E0428 01:15:03.583978 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7dx44" podUID="22c17e9a-2b9c-4268-bd82-cc9430b54e6f"
Apr 28 01:15:03.680289 containerd[1456]: time="2026-04-28T01:15:03.680162785Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.5\""
Apr 28 01:15:05.585212 kubelet[2502]: E0428 01:15:05.584618 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7dx44" podUID="22c17e9a-2b9c-4268-bd82-cc9430b54e6f"
Apr 28 01:15:07.460858 containerd[1456]: time="2026-04-28T01:15:07.460783667Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 01:15:07.461620 containerd[1456]: time="2026-04-28T01:15:07.461578116Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.5: active requests=0, bytes read=67713351"
Apr 28 01:15:07.462527 containerd[1456]: time="2026-04-28T01:15:07.462490442Z" level=info msg="ImageCreate event name:\"sha256:f2487068e96f7fdaaf9d02dc114f17cdae3737bb42f1ba06d079d2d2068734b6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 01:15:07.466065 containerd[1456]: time="2026-04-28T01:15:07.465989373Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:ea8a6b721af629c1dab2e1559b93cd843d9a4b640726115380fc23cf47e83232\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 01:15:07.466572 containerd[1456]: time="2026-04-28T01:15:07.466536006Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.5\" with image id \"sha256:f2487068e96f7fdaaf9d02dc114f17cdae3737bb42f1ba06d079d2d2068734b6\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:ea8a6b721af629c1dab2e1559b93cd843d9a4b640726115380fc23cf47e83232\", size \"70674776\" in 3.786318796s"
Apr 28 01:15:07.466572 containerd[1456]: time="2026-04-28T01:15:07.466560671Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.5\" returns image reference \"sha256:f2487068e96f7fdaaf9d02dc114f17cdae3737bb42f1ba06d079d2d2068734b6\""
Apr 28 01:15:07.472001 containerd[1456]: time="2026-04-28T01:15:07.471936315Z" level=info msg="CreateContainer within sandbox \"8589a5a5bf39ce62636f5b6392bb332c0cddeee7b341bb0ea6cb51fce8000c25\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Apr 28 01:15:07.483566 containerd[1456]: time="2026-04-28T01:15:07.483523440Z" level=info msg="CreateContainer within sandbox \"8589a5a5bf39ce62636f5b6392bb332c0cddeee7b341bb0ea6cb51fce8000c25\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ac0a61ff0c451a9602e94f9b437a6765ed535ce95b440d7e5bdd68b7034c3b18\""
Apr 28 01:15:07.484212 containerd[1456]: time="2026-04-28T01:15:07.484160319Z" level=info msg="StartContainer for \"ac0a61ff0c451a9602e94f9b437a6765ed535ce95b440d7e5bdd68b7034c3b18\""
Apr 28 01:15:07.509785 systemd[1]: run-containerd-runc-k8s.io-ac0a61ff0c451a9602e94f9b437a6765ed535ce95b440d7e5bdd68b7034c3b18-runc.FbOqLs.mount: Deactivated successfully.
Apr 28 01:15:07.519425 systemd[1]: Started cri-containerd-ac0a61ff0c451a9602e94f9b437a6765ed535ce95b440d7e5bdd68b7034c3b18.scope - libcontainer container ac0a61ff0c451a9602e94f9b437a6765ed535ce95b440d7e5bdd68b7034c3b18.
Apr 28 01:15:07.542335 containerd[1456]: time="2026-04-28T01:15:07.542275499Z" level=info msg="StartContainer for \"ac0a61ff0c451a9602e94f9b437a6765ed535ce95b440d7e5bdd68b7034c3b18\" returns successfully"
Apr 28 01:15:07.585133 kubelet[2502]: E0428 01:15:07.585013 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7dx44" podUID="22c17e9a-2b9c-4268-bd82-cc9430b54e6f"
Apr 28 01:15:08.004663 systemd[1]: cri-containerd-ac0a61ff0c451a9602e94f9b437a6765ed535ce95b440d7e5bdd68b7034c3b18.scope: Deactivated successfully.
Apr 28 01:15:08.024775 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac0a61ff0c451a9602e94f9b437a6765ed535ce95b440d7e5bdd68b7034c3b18-rootfs.mount: Deactivated successfully.
Apr 28 01:15:08.030661 kubelet[2502]: I0428 01:15:08.030633 2502 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Apr 28 01:15:08.049914 containerd[1456]: time="2026-04-28T01:15:08.049744678Z" level=info msg="shim disconnected" id=ac0a61ff0c451a9602e94f9b437a6765ed535ce95b440d7e5bdd68b7034c3b18 namespace=k8s.io
Apr 28 01:15:08.049914 containerd[1456]: time="2026-04-28T01:15:08.049803461Z" level=warning msg="cleaning up after shim disconnected" id=ac0a61ff0c451a9602e94f9b437a6765ed535ce95b440d7e5bdd68b7034c3b18 namespace=k8s.io
Apr 28 01:15:08.049914 containerd[1456]: time="2026-04-28T01:15:08.049816055Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 28 01:15:08.082913 systemd[1]: Created slice kubepods-besteffort-pod25037838_9eb1_455f_8e2a_2a6ebd0fe4f3.slice - libcontainer container kubepods-besteffort-pod25037838_9eb1_455f_8e2a_2a6ebd0fe4f3.slice.
Apr 28 01:15:08.094447 systemd[1]: Created slice kubepods-besteffort-pod3581e29b_6797_4b71_bcef_fde26f9a5731.slice - libcontainer container kubepods-besteffort-pod3581e29b_6797_4b71_bcef_fde26f9a5731.slice.
Apr 28 01:15:08.099784 systemd[1]: Created slice kubepods-besteffort-podadbc0340_d6da_4e72_94e2_4bd33f757201.slice - libcontainer container kubepods-besteffort-podadbc0340_d6da_4e72_94e2_4bd33f757201.slice.
Apr 28 01:15:08.105258 systemd[1]: Created slice kubepods-besteffort-podd6b226a2_c761_4a32_8ac6_230b897faf20.slice - libcontainer container kubepods-besteffort-podd6b226a2_c761_4a32_8ac6_230b897faf20.slice.
Apr 28 01:15:08.110299 systemd[1]: Created slice kubepods-burstable-pod1a56c2f2_44ed_42f1_8584_fb82c3e57985.slice - libcontainer container kubepods-burstable-pod1a56c2f2_44ed_42f1_8584_fb82c3e57985.slice.
Apr 28 01:15:08.115590 systemd[1]: Created slice kubepods-burstable-podadb579ea_e7d0_4c18_a435_e777162b9b49.slice - libcontainer container kubepods-burstable-podadb579ea_e7d0_4c18_a435_e777162b9b49.slice.
Apr 28 01:15:08.119352 systemd[1]: Created slice kubepods-besteffort-pod7a7fa1aa_06eb_4cb9_b502_88326017cac4.slice - libcontainer container kubepods-besteffort-pod7a7fa1aa_06eb_4cb9_b502_88326017cac4.slice.
Apr 28 01:15:08.124288 kubelet[2502]: I0428 01:15:08.124232 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a56c2f2-44ed-42f1-8584-fb82c3e57985-config-volume\") pod \"coredns-66bc5c9577-srt2r\" (UID: \"1a56c2f2-44ed-42f1-8584-fb82c3e57985\") " pod="kube-system/coredns-66bc5c9577-srt2r"
Apr 28 01:15:08.124402 kubelet[2502]: I0428 01:15:08.124292 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a7fa1aa-06eb-4cb9-b502-88326017cac4-goldmane-ca-bundle\") pod \"goldmane-6b4b7f4496-47t8q\" (UID: \"7a7fa1aa-06eb-4cb9-b502-88326017cac4\") " pod="calico-system/goldmane-6b4b7f4496-47t8q"
Apr 28 01:15:08.124402 kubelet[2502]: I0428 01:15:08.124306 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adbc0340-d6da-4e72-94e2-4bd33f757201-whisker-ca-bundle\") pod \"whisker-569f54558-r54gv\" (UID: \"adbc0340-d6da-4e72-94e2-4bd33f757201\") " pod="calico-system/whisker-569f54558-r54gv"
Apr 28 01:15:08.124402 kubelet[2502]: I0428 01:15:08.124318 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brd4t\" (UniqueName: \"kubernetes.io/projected/25037838-9eb1-455f-8e2a-2a6ebd0fe4f3-kube-api-access-brd4t\") pod \"calico-kube-controllers-94c7fdbdd-4ls56\" (UID: \"25037838-9eb1-455f-8e2a-2a6ebd0fe4f3\") " pod="calico-system/calico-kube-controllers-94c7fdbdd-4ls56"
Apr 28 01:15:08.124402 kubelet[2502]: I0428 01:15:08.124333 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a7fa1aa-06eb-4cb9-b502-88326017cac4-config\") pod \"goldmane-6b4b7f4496-47t8q\" (UID: \"7a7fa1aa-06eb-4cb9-b502-88326017cac4\") " pod="calico-system/goldmane-6b4b7f4496-47t8q"
Apr 28 01:15:08.124402 kubelet[2502]: I0428 01:15:08.124346 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/adb579ea-e7d0-4c18-a435-e777162b9b49-config-volume\") pod \"coredns-66bc5c9577-jkglw\" (UID: \"adb579ea-e7d0-4c18-a435-e777162b9b49\") " pod="kube-system/coredns-66bc5c9577-jkglw"
Apr 28 01:15:08.124514 kubelet[2502]: I0428 01:15:08.124358 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzknl\" (UniqueName: \"kubernetes.io/projected/d6b226a2-c761-4a32-8ac6-230b897faf20-kube-api-access-jzknl\") pod \"calico-apiserver-59c6777c8b-mc98s\" (UID: \"d6b226a2-c761-4a32-8ac6-230b897faf20\") " pod="calico-system/calico-apiserver-59c6777c8b-mc98s"
Apr 28 01:15:08.124514 kubelet[2502]: I0428 01:15:08.124432 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqzhn\" (UniqueName: \"kubernetes.io/projected/adbc0340-d6da-4e72-94e2-4bd33f757201-kube-api-access-lqzhn\") pod \"whisker-569f54558-r54gv\" (UID: \"adbc0340-d6da-4e72-94e2-4bd33f757201\") " pod="calico-system/whisker-569f54558-r54gv"
Apr 28 01:15:08.124514 kubelet[2502]: I0428 01:15:08.124456 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25037838-9eb1-455f-8e2a-2a6ebd0fe4f3-tigera-ca-bundle\") pod \"calico-kube-controllers-94c7fdbdd-4ls56\" (UID: \"25037838-9eb1-455f-8e2a-2a6ebd0fe4f3\") " pod="calico-system/calico-kube-controllers-94c7fdbdd-4ls56"
Apr 28 01:15:08.124514 kubelet[2502]: I0428 01:15:08.124472 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/adbc0340-d6da-4e72-94e2-4bd33f757201-nginx-config\") pod \"whisker-569f54558-r54gv\" (UID: \"adbc0340-d6da-4e72-94e2-4bd33f757201\") " pod="calico-system/whisker-569f54558-r54gv"
Apr 28 01:15:08.124514 kubelet[2502]: I0428 01:15:08.124489 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7a7fa1aa-06eb-4cb9-b502-88326017cac4-goldmane-key-pair\") pod \"goldmane-6b4b7f4496-47t8q\" (UID: \"7a7fa1aa-06eb-4cb9-b502-88326017cac4\") " pod="calico-system/goldmane-6b4b7f4496-47t8q"
Apr 28 01:15:08.124612 kubelet[2502]: I0428 01:15:08.124504 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsb44\" (UniqueName: \"kubernetes.io/projected/1a56c2f2-44ed-42f1-8584-fb82c3e57985-kube-api-access-zsb44\") pod \"coredns-66bc5c9577-srt2r\" (UID: \"1a56c2f2-44ed-42f1-8584-fb82c3e57985\") " pod="kube-system/coredns-66bc5c9577-srt2r"
Apr 28 01:15:08.124612 kubelet[2502]: I0428 01:15:08.124515 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4w8b\" (UniqueName: \"kubernetes.io/projected/adb579ea-e7d0-4c18-a435-e777162b9b49-kube-api-access-h4w8b\") pod \"coredns-66bc5c9577-jkglw\" (UID: \"adb579ea-e7d0-4c18-a435-e777162b9b49\") " pod="kube-system/coredns-66bc5c9577-jkglw"
Apr 28 01:15:08.124612 kubelet[2502]: I0428 01:15:08.124526 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/adbc0340-d6da-4e72-94e2-4bd33f757201-whisker-backend-key-pair\") pod \"whisker-569f54558-r54gv\" (UID: \"adbc0340-d6da-4e72-94e2-4bd33f757201\") " pod="calico-system/whisker-569f54558-r54gv"
Apr 28 01:15:08.124612 kubelet[2502]: I0428 01:15:08.124538 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cn4s\" (UniqueName:
\"kubernetes.io/projected/7a7fa1aa-06eb-4cb9-b502-88326017cac4-kube-api-access-7cn4s\") pod \"goldmane-6b4b7f4496-47t8q\" (UID: \"7a7fa1aa-06eb-4cb9-b502-88326017cac4\") " pod="calico-system/goldmane-6b4b7f4496-47t8q" Apr 28 01:15:08.124612 kubelet[2502]: I0428 01:15:08.124551 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3581e29b-6797-4b71-bcef-fde26f9a5731-calico-apiserver-certs\") pod \"calico-apiserver-59c6777c8b-8xfxq\" (UID: \"3581e29b-6797-4b71-bcef-fde26f9a5731\") " pod="calico-system/calico-apiserver-59c6777c8b-8xfxq" Apr 28 01:15:08.124712 kubelet[2502]: I0428 01:15:08.124561 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frpgb\" (UniqueName: \"kubernetes.io/projected/3581e29b-6797-4b71-bcef-fde26f9a5731-kube-api-access-frpgb\") pod \"calico-apiserver-59c6777c8b-8xfxq\" (UID: \"3581e29b-6797-4b71-bcef-fde26f9a5731\") " pod="calico-system/calico-apiserver-59c6777c8b-8xfxq" Apr 28 01:15:08.124712 kubelet[2502]: I0428 01:15:08.124575 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d6b226a2-c761-4a32-8ac6-230b897faf20-calico-apiserver-certs\") pod \"calico-apiserver-59c6777c8b-mc98s\" (UID: \"d6b226a2-c761-4a32-8ac6-230b897faf20\") " pod="calico-system/calico-apiserver-59c6777c8b-mc98s" Apr 28 01:15:08.396901 containerd[1456]: time="2026-04-28T01:15:08.396753109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-94c7fdbdd-4ls56,Uid:25037838-9eb1-455f-8e2a-2a6ebd0fe4f3,Namespace:calico-system,Attempt:0,}" Apr 28 01:15:08.403746 containerd[1456]: time="2026-04-28T01:15:08.403661889Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-59c6777c8b-8xfxq,Uid:3581e29b-6797-4b71-bcef-fde26f9a5731,Namespace:calico-system,Attempt:0,}" Apr 28 01:15:08.406562 containerd[1456]: time="2026-04-28T01:15:08.406436003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-569f54558-r54gv,Uid:adbc0340-d6da-4e72-94e2-4bd33f757201,Namespace:calico-system,Attempt:0,}" Apr 28 01:15:08.412297 containerd[1456]: time="2026-04-28T01:15:08.412269563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59c6777c8b-mc98s,Uid:d6b226a2-c761-4a32-8ac6-230b897faf20,Namespace:calico-system,Attempt:0,}" Apr 28 01:15:08.413878 kubelet[2502]: E0428 01:15:08.413819 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:15:08.414367 containerd[1456]: time="2026-04-28T01:15:08.414235163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-srt2r,Uid:1a56c2f2-44ed-42f1-8584-fb82c3e57985,Namespace:kube-system,Attempt:0,}" Apr 28 01:15:08.419679 kubelet[2502]: E0428 01:15:08.419644 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:15:08.420086 containerd[1456]: time="2026-04-28T01:15:08.420029751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-jkglw,Uid:adb579ea-e7d0-4c18-a435-e777162b9b49,Namespace:kube-system,Attempt:0,}" Apr 28 01:15:08.425300 containerd[1456]: time="2026-04-28T01:15:08.425256179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-6b4b7f4496-47t8q,Uid:7a7fa1aa-06eb-4cb9-b502-88326017cac4,Namespace:calico-system,Attempt:0,}" Apr 28 01:15:08.536921 containerd[1456]: time="2026-04-28T01:15:08.536845207Z" level=error msg="Failed to destroy network for sandbox 
\"aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 01:15:08.537299 containerd[1456]: time="2026-04-28T01:15:08.537255315Z" level=error msg="encountered an error cleaning up failed sandbox \"aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 01:15:08.537323 containerd[1456]: time="2026-04-28T01:15:08.537295034Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59c6777c8b-8xfxq,Uid:3581e29b-6797-4b71-bcef-fde26f9a5731,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 01:15:08.539735 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127-shm.mount: Deactivated successfully. 
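The kubelet entries nearby also warn "Nameserver limits exceeded": the glibc resolver honors at most three `nameserver` entries, so when the node's resolv.conf lists more, kubelet keeps the first three and logs the rest as omitted. A minimal sketch of that truncation (the constant and helper names are illustrative, not kubelet code):

```python
# Illustrative sketch of kubelet's nameserver truncation; MAX_NS mirrors the
# glibc resolver's three-server limit. Names here are ours, not kubelet's.
MAX_NS = 3

def apply_nameserver_limit(nameservers):
    """Return (applied, omitted) after enforcing the 3-server resolver limit."""
    return nameservers[:MAX_NS], nameservers[MAX_NS:]

applied, omitted = apply_nameserver_limit(
    ["1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"]
)
# The first three match the "applied nameserver line" in the log above.
print("applied nameserver line is:", " ".join(applied))
```

Under this reading, the host resolv.conf carried at least one extra nameserver beyond the "1.1.1.1 1.0.0.1 8.8.8.8" line the kubelet reports applying.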
Apr 28 01:15:08.549899 kubelet[2502]: E0428 01:15:08.549839 2502 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 01:15:08.549899 kubelet[2502]: E0428 01:15:08.549914 2502 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-59c6777c8b-8xfxq" Apr 28 01:15:08.549899 kubelet[2502]: E0428 01:15:08.549933 2502 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-59c6777c8b-8xfxq" Apr 28 01:15:08.550143 kubelet[2502]: E0428 01:15:08.550027 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59c6777c8b-8xfxq_calico-system(3581e29b-6797-4b71-bcef-fde26f9a5731)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59c6777c8b-8xfxq_calico-system(3581e29b-6797-4b71-bcef-fde26f9a5731)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-59c6777c8b-8xfxq" podUID="3581e29b-6797-4b71-bcef-fde26f9a5731" Apr 28 01:15:08.565113 containerd[1456]: time="2026-04-28T01:15:08.564988152Z" level=error msg="Failed to destroy network for sandbox \"7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 01:15:08.567660 containerd[1456]: time="2026-04-28T01:15:08.565335127Z" level=error msg="encountered an error cleaning up failed sandbox \"7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 01:15:08.567660 containerd[1456]: time="2026-04-28T01:15:08.565390274Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-6b4b7f4496-47t8q,Uid:7a7fa1aa-06eb-4cb9-b502-88326017cac4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 01:15:08.567070 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d-shm.mount: Deactivated successfully. 
Apr 28 01:15:08.567866 kubelet[2502]: E0428 01:15:08.565634 2502 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 01:15:08.567866 kubelet[2502]: E0428 01:15:08.565674 2502 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-6b4b7f4496-47t8q" Apr 28 01:15:08.567866 kubelet[2502]: E0428 01:15:08.565733 2502 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-6b4b7f4496-47t8q" Apr 28 01:15:08.567932 kubelet[2502]: E0428 01:15:08.565804 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-6b4b7f4496-47t8q_calico-system(7a7fa1aa-06eb-4cb9-b502-88326017cac4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-6b4b7f4496-47t8q_calico-system(7a7fa1aa-06eb-4cb9-b502-88326017cac4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d\\\": plugin type=\\\"calico\\\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-6b4b7f4496-47t8q" podUID="7a7fa1aa-06eb-4cb9-b502-88326017cac4" Apr 28 01:15:08.574240 containerd[1456]: time="2026-04-28T01:15:08.574160938Z" level=error msg="Failed to destroy network for sandbox \"4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 01:15:08.575685 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c-shm.mount: Deactivated successfully. Apr 28 01:15:08.576275 containerd[1456]: time="2026-04-28T01:15:08.576123073Z" level=error msg="encountered an error cleaning up failed sandbox \"4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 01:15:08.576275 containerd[1456]: time="2026-04-28T01:15:08.576180395Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-94c7fdbdd-4ls56,Uid:25037838-9eb1-455f-8e2a-2a6ebd0fe4f3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 01:15:08.576437 kubelet[2502]: E0428 01:15:08.576393 2502 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 01:15:08.576478 kubelet[2502]: E0428 01:15:08.576451 2502 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-94c7fdbdd-4ls56" Apr 28 01:15:08.576478 kubelet[2502]: E0428 01:15:08.576468 2502 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-94c7fdbdd-4ls56" Apr 28 01:15:08.576590 kubelet[2502]: E0428 01:15:08.576512 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-94c7fdbdd-4ls56_calico-system(25037838-9eb1-455f-8e2a-2a6ebd0fe4f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-94c7fdbdd-4ls56_calico-system(25037838-9eb1-455f-8e2a-2a6ebd0fe4f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-94c7fdbdd-4ls56" podUID="25037838-9eb1-455f-8e2a-2a6ebd0fe4f3" Apr 28 01:15:08.586103 containerd[1456]: time="2026-04-28T01:15:08.586016199Z" level=error msg="Failed to destroy network for sandbox \"8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 01:15:08.587331 containerd[1456]: time="2026-04-28T01:15:08.587295262Z" level=error msg="Failed to destroy network for sandbox \"1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 01:15:08.587811 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6-shm.mount: Deactivated successfully. 
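Every RunPodSandbox failure in this stretch shares one root-cause string: the Calico CNI plugin cannot stat `/var/lib/calico/nodename` because calico-node has not started yet. When triaging a log like this, it helps to collapse the repeated cascades down to the set of affected workloads. A hypothetical helper (not part of kubelet or containerd) that groups the kubelet's "Error syncing pod" entries by pod:

```python
import re

# Hypothetical triage helper: pull pod name and podUID out of kubelet
# "Error syncing pod" lines so repeated CNI failures group by workload.
POD_RE = re.compile(r'pod="(?P<pod>[^"]+)" podUID="(?P<uid>[^"]+)"')

def failing_pods(log_text):
    """Return {pod: podUID} for every 'Error syncing pod' line in log_text."""
    pods = {}
    for line in log_text.splitlines():
        if "Error syncing pod" not in line:
            continue
        m = POD_RE.search(line)
        if m:
            pods[m.group("pod")] = m.group("uid")
    return pods

sample = (
    'kubelet[2502]: E0428 01:15:08.550027 2502 pod_workers.go:1324] '
    '"Error syncing pod, skipping" err="..." '
    'pod="calico-system/calico-apiserver-59c6777c8b-8xfxq" '
    'podUID="3581e29b-6797-4b71-bcef-fde26f9a5731"'
)
print(failing_pods(sample))
```

Run over the full journal above, this would reduce the cascades to the handful of pods (calico-apiserver, goldmane, whisker, coredns, calico-kube-controllers) all blocked on the same missing nodename file.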
Apr 28 01:15:08.588128 containerd[1456]: time="2026-04-28T01:15:08.588028019Z" level=error msg="Failed to destroy network for sandbox \"f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 01:15:08.588817 containerd[1456]: time="2026-04-28T01:15:08.588775494Z" level=error msg="encountered an error cleaning up failed sandbox \"f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 01:15:08.588863 containerd[1456]: time="2026-04-28T01:15:08.588836988Z" level=error msg="encountered an error cleaning up failed sandbox \"1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 01:15:08.588907 containerd[1456]: time="2026-04-28T01:15:08.588878695Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-jkglw,Uid:adb579ea-e7d0-4c18-a435-e777162b9b49,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 01:15:08.589012 containerd[1456]: time="2026-04-28T01:15:08.588849990Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-569f54558-r54gv,Uid:adbc0340-d6da-4e72-94e2-4bd33f757201,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 01:15:08.589036 containerd[1456]: time="2026-04-28T01:15:08.588786080Z" level=error msg="encountered an error cleaning up failed sandbox \"8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 01:15:08.589098 containerd[1456]: time="2026-04-28T01:15:08.589075005Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59c6777c8b-mc98s,Uid:d6b226a2-c761-4a32-8ac6-230b897faf20,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 01:15:08.589294 kubelet[2502]: E0428 01:15:08.589240 2502 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 01:15:08.589482 kubelet[2502]: E0428 01:15:08.589303 2502 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-59c6777c8b-mc98s" Apr 28 01:15:08.589482 kubelet[2502]: E0428 01:15:08.589313 2502 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 01:15:08.589482 kubelet[2502]: E0428 01:15:08.589322 2502 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-59c6777c8b-mc98s" Apr 28 01:15:08.589482 kubelet[2502]: E0428 01:15:08.589260 2502 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 01:15:08.589553 kubelet[2502]: E0428 01:15:08.589368 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-59c6777c8b-mc98s_calico-system(d6b226a2-c761-4a32-8ac6-230b897faf20)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59c6777c8b-mc98s_calico-system(d6b226a2-c761-4a32-8ac6-230b897faf20)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-59c6777c8b-mc98s" podUID="d6b226a2-c761-4a32-8ac6-230b897faf20" Apr 28 01:15:08.589553 kubelet[2502]: E0428 01:15:08.589388 2502 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-569f54558-r54gv" Apr 28 01:15:08.589553 kubelet[2502]: E0428 01:15:08.589403 2502 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-569f54558-r54gv" Apr 28 01:15:08.589649 kubelet[2502]: E0428 01:15:08.589438 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-569f54558-r54gv_calico-system(adbc0340-d6da-4e72-94e2-4bd33f757201)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"whisker-569f54558-r54gv_calico-system(adbc0340-d6da-4e72-94e2-4bd33f757201)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-569f54558-r54gv" podUID="adbc0340-d6da-4e72-94e2-4bd33f757201" Apr 28 01:15:08.589649 kubelet[2502]: E0428 01:15:08.589467 2502 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-jkglw" Apr 28 01:15:08.589649 kubelet[2502]: E0428 01:15:08.589478 2502 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-jkglw" Apr 28 01:15:08.589735 kubelet[2502]: E0428 01:15:08.589496 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-jkglw_kube-system(adb579ea-e7d0-4c18-a435-e777162b9b49)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-jkglw_kube-system(adb579ea-e7d0-4c18-a435-e777162b9b49)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-jkglw" podUID="adb579ea-e7d0-4c18-a435-e777162b9b49" Apr 28 01:15:08.590031 containerd[1456]: time="2026-04-28T01:15:08.589908299Z" level=error msg="Failed to destroy network for sandbox \"55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 01:15:08.590262 containerd[1456]: time="2026-04-28T01:15:08.590231926Z" level=error msg="encountered an error cleaning up failed sandbox \"55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 01:15:08.590317 containerd[1456]: time="2026-04-28T01:15:08.590279158Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-srt2r,Uid:1a56c2f2-44ed-42f1-8584-fb82c3e57985,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 01:15:08.590491 kubelet[2502]: E0428 01:15:08.590426 2502 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 01:15:08.590491 kubelet[2502]: E0428 01:15:08.590468 2502 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-srt2r" Apr 28 01:15:08.590491 kubelet[2502]: E0428 01:15:08.590480 2502 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-srt2r" Apr 28 01:15:08.590601 kubelet[2502]: E0428 01:15:08.590507 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-srt2r_kube-system(1a56c2f2-44ed-42f1-8584-fb82c3e57985)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-srt2r_kube-system(1a56c2f2-44ed-42f1-8584-fb82c3e57985)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-srt2r" podUID="1a56c2f2-44ed-42f1-8584-fb82c3e57985" Apr 28 01:15:08.694678 kubelet[2502]: I0428 
01:15:08.694261 2502 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9" Apr 28 01:15:08.695666 kubelet[2502]: I0428 01:15:08.695652 2502 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91" Apr 28 01:15:08.701994 kubelet[2502]: I0428 01:15:08.699279 2502 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127" Apr 28 01:15:08.701994 kubelet[2502]: I0428 01:15:08.700893 2502 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c" Apr 28 01:15:08.702100 containerd[1456]: time="2026-04-28T01:15:08.701757015Z" level=info msg="StopPodSandbox for \"55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9\"" Apr 28 01:15:08.702100 containerd[1456]: time="2026-04-28T01:15:08.701985843Z" level=info msg="StopPodSandbox for \"aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127\"" Apr 28 01:15:08.702847 containerd[1456]: time="2026-04-28T01:15:08.702814419Z" level=info msg="StopPodSandbox for \"f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91\"" Apr 28 01:15:08.705566 containerd[1456]: time="2026-04-28T01:15:08.704647537Z" level=info msg="Ensure that sandbox aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127 in task-service has been cleanup successfully" Apr 28 01:15:08.705566 containerd[1456]: time="2026-04-28T01:15:08.704831191Z" level=info msg="Ensure that sandbox f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91 in task-service has been cleanup successfully" Apr 28 01:15:08.705566 containerd[1456]: time="2026-04-28T01:15:08.704650348Z" level=info msg="Ensure that sandbox 
55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9 in task-service has been cleanup successfully" Apr 28 01:15:08.708201 containerd[1456]: time="2026-04-28T01:15:08.708182164Z" level=info msg="CreateContainer within sandbox \"8589a5a5bf39ce62636f5b6392bb332c0cddeee7b341bb0ea6cb51fce8000c25\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 28 01:15:08.708359 kubelet[2502]: I0428 01:15:08.708349 2502 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d" Apr 28 01:15:08.709967 containerd[1456]: time="2026-04-28T01:15:08.709854330Z" level=info msg="StopPodSandbox for \"4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c\"" Apr 28 01:15:08.710425 containerd[1456]: time="2026-04-28T01:15:08.710377454Z" level=info msg="Ensure that sandbox 4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c in task-service has been cleanup successfully" Apr 28 01:15:08.712085 containerd[1456]: time="2026-04-28T01:15:08.711884114Z" level=info msg="StopPodSandbox for \"7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d\"" Apr 28 01:15:08.712162 containerd[1456]: time="2026-04-28T01:15:08.712129951Z" level=info msg="Ensure that sandbox 7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d in task-service has been cleanup successfully" Apr 28 01:15:08.719435 kubelet[2502]: I0428 01:15:08.719223 2502 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551" Apr 28 01:15:08.723509 containerd[1456]: time="2026-04-28T01:15:08.723319560Z" level=info msg="StopPodSandbox for \"1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551\"" Apr 28 01:15:08.732562 kubelet[2502]: I0428 01:15:08.732520 2502 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6" Apr 28 01:15:08.733456 containerd[1456]: time="2026-04-28T01:15:08.733404521Z" level=info msg="StopPodSandbox for \"8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6\"" Apr 28 01:15:08.733664 containerd[1456]: time="2026-04-28T01:15:08.733563451Z" level=info msg="Ensure that sandbox 8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6 in task-service has been cleanup successfully" Apr 28 01:15:08.742359 containerd[1456]: time="2026-04-28T01:15:08.742282278Z" level=info msg="Ensure that sandbox 1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551 in task-service has been cleanup successfully" Apr 28 01:15:08.763681 containerd[1456]: time="2026-04-28T01:15:08.763578503Z" level=error msg="StopPodSandbox for \"55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9\" failed" error="failed to destroy network for sandbox \"55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 01:15:08.764010 kubelet[2502]: E0428 01:15:08.763922 2502 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9" Apr 28 01:15:08.764103 kubelet[2502]: E0428 01:15:08.764022 2502 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9"} Apr 28 01:15:08.764103 kubelet[2502]: 
E0428 01:15:08.764089 2502 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1a56c2f2-44ed-42f1-8584-fb82c3e57985\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 28 01:15:08.764220 kubelet[2502]: E0428 01:15:08.764111 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1a56c2f2-44ed-42f1-8584-fb82c3e57985\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-srt2r" podUID="1a56c2f2-44ed-42f1-8584-fb82c3e57985" Apr 28 01:15:08.764320 containerd[1456]: time="2026-04-28T01:15:08.764251679Z" level=error msg="StopPodSandbox for \"f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91\" failed" error="failed to destroy network for sandbox \"f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 01:15:08.764472 kubelet[2502]: E0428 01:15:08.764417 2502 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91" Apr 28 01:15:08.764472 kubelet[2502]: E0428 01:15:08.764447 2502 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91"} Apr 28 01:15:08.764472 kubelet[2502]: E0428 01:15:08.764466 2502 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"adbc0340-d6da-4e72-94e2-4bd33f757201\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 28 01:15:08.764558 kubelet[2502]: E0428 01:15:08.764491 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"adbc0340-d6da-4e72-94e2-4bd33f757201\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-569f54558-r54gv" podUID="adbc0340-d6da-4e72-94e2-4bd33f757201" Apr 28 01:15:08.769413 containerd[1456]: time="2026-04-28T01:15:08.769098014Z" level=error msg="StopPodSandbox for \"4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c\" failed" error="failed to destroy network for sandbox \"4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 01:15:08.769468 kubelet[2502]: E0428 01:15:08.769400 2502 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c" Apr 28 01:15:08.769503 kubelet[2502]: E0428 01:15:08.769471 2502 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c"} Apr 28 01:15:08.769503 kubelet[2502]: E0428 01:15:08.769495 2502 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"25037838-9eb1-455f-8e2a-2a6ebd0fe4f3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 28 01:15:08.769582 kubelet[2502]: E0428 01:15:08.769512 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"25037838-9eb1-455f-8e2a-2a6ebd0fe4f3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-94c7fdbdd-4ls56" podUID="25037838-9eb1-455f-8e2a-2a6ebd0fe4f3" Apr 28 01:15:08.772876 containerd[1456]: time="2026-04-28T01:15:08.772804444Z" level=info msg="CreateContainer within sandbox \"8589a5a5bf39ce62636f5b6392bb332c0cddeee7b341bb0ea6cb51fce8000c25\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"abf9bcc4552ca009cceefeaded287134ab9ac1cba29d848b6548469ebb38f445\"" Apr 28 01:15:08.773239 containerd[1456]: time="2026-04-28T01:15:08.773189981Z" level=error msg="StopPodSandbox for \"aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127\" failed" error="failed to destroy network for sandbox \"aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 01:15:08.773724 containerd[1456]: time="2026-04-28T01:15:08.773678737Z" level=info msg="StartContainer for \"abf9bcc4552ca009cceefeaded287134ab9ac1cba29d848b6548469ebb38f445\"" Apr 28 01:15:08.774001 kubelet[2502]: E0428 01:15:08.773860 2502 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127" Apr 28 01:15:08.774001 kubelet[2502]: E0428 01:15:08.773928 2502 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127"} Apr 28 01:15:08.774001 kubelet[2502]: E0428 01:15:08.773977 2502 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" 
err="failed to \"KillPodSandbox\" for \"3581e29b-6797-4b71-bcef-fde26f9a5731\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 28 01:15:08.774001 kubelet[2502]: E0428 01:15:08.773995 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3581e29b-6797-4b71-bcef-fde26f9a5731\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-59c6777c8b-8xfxq" podUID="3581e29b-6797-4b71-bcef-fde26f9a5731" Apr 28 01:15:08.778791 containerd[1456]: time="2026-04-28T01:15:08.778708535Z" level=error msg="StopPodSandbox for \"7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d\" failed" error="failed to destroy network for sandbox \"7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 01:15:08.779024 kubelet[2502]: E0428 01:15:08.778877 2502 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" podSandboxID="7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d" Apr 28 01:15:08.779024 kubelet[2502]: E0428 01:15:08.779015 2502 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d"} Apr 28 01:15:08.779112 kubelet[2502]: E0428 01:15:08.779034 2502 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7a7fa1aa-06eb-4cb9-b502-88326017cac4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 28 01:15:08.779112 kubelet[2502]: E0428 01:15:08.779071 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7a7fa1aa-06eb-4cb9-b502-88326017cac4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-6b4b7f4496-47t8q" podUID="7a7fa1aa-06eb-4cb9-b502-88326017cac4" Apr 28 01:15:08.787505 containerd[1456]: time="2026-04-28T01:15:08.787434127Z" level=error msg="StopPodSandbox for \"1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551\" failed" error="failed to destroy network for sandbox \"1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Apr 28 01:15:08.788406 kubelet[2502]: E0428 01:15:08.787543 2502 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551" Apr 28 01:15:08.788406 kubelet[2502]: E0428 01:15:08.787570 2502 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551"} Apr 28 01:15:08.788406 kubelet[2502]: E0428 01:15:08.787591 2502 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"adb579ea-e7d0-4c18-a435-e777162b9b49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 28 01:15:08.788406 kubelet[2502]: E0428 01:15:08.787607 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"adb579ea-e7d0-4c18-a435-e777162b9b49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-jkglw" podUID="adb579ea-e7d0-4c18-a435-e777162b9b49" Apr 28 
01:15:08.790501 containerd[1456]: time="2026-04-28T01:15:08.790432004Z" level=error msg="StopPodSandbox for \"8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6\" failed" error="failed to destroy network for sandbox \"8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 01:15:08.790765 kubelet[2502]: E0428 01:15:08.790700 2502 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6" Apr 28 01:15:08.790765 kubelet[2502]: E0428 01:15:08.790739 2502 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6"} Apr 28 01:15:08.790765 kubelet[2502]: E0428 01:15:08.790754 2502 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d6b226a2-c761-4a32-8ac6-230b897faf20\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 28 01:15:08.790884 kubelet[2502]: E0428 01:15:08.790771 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d6b226a2-c761-4a32-8ac6-230b897faf20\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-59c6777c8b-mc98s" podUID="d6b226a2-c761-4a32-8ac6-230b897faf20" Apr 28 01:15:08.800180 systemd[1]: Started cri-containerd-abf9bcc4552ca009cceefeaded287134ab9ac1cba29d848b6548469ebb38f445.scope - libcontainer container abf9bcc4552ca009cceefeaded287134ab9ac1cba29d848b6548469ebb38f445. Apr 28 01:15:08.822628 containerd[1456]: time="2026-04-28T01:15:08.822593717Z" level=info msg="StartContainer for \"abf9bcc4552ca009cceefeaded287134ab9ac1cba29d848b6548469ebb38f445\" returns successfully" Apr 28 01:15:09.482762 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551-shm.mount: Deactivated successfully. Apr 28 01:15:09.482850 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9-shm.mount: Deactivated successfully. Apr 28 01:15:09.482895 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91-shm.mount: Deactivated successfully. Apr 28 01:15:09.588530 systemd[1]: Created slice kubepods-besteffort-pod22c17e9a_2b9c_4268_bd82_cc9430b54e6f.slice - libcontainer container kubepods-besteffort-pod22c17e9a_2b9c_4268_bd82_cc9430b54e6f.slice. 
Apr 28 01:15:09.593843 containerd[1456]: time="2026-04-28T01:15:09.593793276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7dx44,Uid:22c17e9a-2b9c-4268-bd82-cc9430b54e6f,Namespace:calico-system,Attempt:0,}" Apr 28 01:15:09.721735 systemd-networkd[1373]: calib42a03d7c95: Link UP Apr 28 01:15:09.722168 systemd-networkd[1373]: calib42a03d7c95: Gained carrier Apr 28 01:15:09.735613 containerd[1456]: 2026-04-28 01:15:09.625 [ERROR][3836] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 28 01:15:09.735613 containerd[1456]: 2026-04-28 01:15:09.650 [INFO][3836] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--7dx44-eth0 csi-node-driver- calico-system 22c17e9a-2b9c-4268-bd82-cc9430b54e6f 721 0 2026-04-28 01:14:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:95f96f7df k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-7dx44 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib42a03d7c95 [] [] }} ContainerID="9635502dfdc3634270ccdb6083a50fe62325ea79ef5649a638eddd881871a50b" Namespace="calico-system" Pod="csi-node-driver-7dx44" WorkloadEndpoint="localhost-k8s-csi--node--driver--7dx44-" Apr 28 01:15:09.735613 containerd[1456]: 2026-04-28 01:15:09.650 [INFO][3836] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9635502dfdc3634270ccdb6083a50fe62325ea79ef5649a638eddd881871a50b" Namespace="calico-system" Pod="csi-node-driver-7dx44" WorkloadEndpoint="localhost-k8s-csi--node--driver--7dx44-eth0" Apr 28 01:15:09.735613 containerd[1456]: 2026-04-28 
01:15:09.681 [INFO][3849] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9635502dfdc3634270ccdb6083a50fe62325ea79ef5649a638eddd881871a50b" HandleID="k8s-pod-network.9635502dfdc3634270ccdb6083a50fe62325ea79ef5649a638eddd881871a50b" Workload="localhost-k8s-csi--node--driver--7dx44-eth0" Apr 28 01:15:09.735613 containerd[1456]: 2026-04-28 01:15:09.688 [INFO][3849] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9635502dfdc3634270ccdb6083a50fe62325ea79ef5649a638eddd881871a50b" HandleID="k8s-pod-network.9635502dfdc3634270ccdb6083a50fe62325ea79ef5649a638eddd881871a50b" Workload="localhost-k8s-csi--node--driver--7dx44-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001b5bb0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-7dx44", "timestamp":"2026-04-28 01:15:09.681905781 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00017edc0)} Apr 28 01:15:09.735613 containerd[1456]: 2026-04-28 01:15:09.688 [INFO][3849] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 01:15:09.735613 containerd[1456]: 2026-04-28 01:15:09.688 [INFO][3849] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 28 01:15:09.735613 containerd[1456]: 2026-04-28 01:15:09.688 [INFO][3849] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 28 01:15:09.735613 containerd[1456]: 2026-04-28 01:15:09.690 [INFO][3849] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9635502dfdc3634270ccdb6083a50fe62325ea79ef5649a638eddd881871a50b" host="localhost" Apr 28 01:15:09.735613 containerd[1456]: 2026-04-28 01:15:09.693 [INFO][3849] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 28 01:15:09.735613 containerd[1456]: 2026-04-28 01:15:09.696 [INFO][3849] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 28 01:15:09.735613 containerd[1456]: 2026-04-28 01:15:09.698 [INFO][3849] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 28 01:15:09.735613 containerd[1456]: 2026-04-28 01:15:09.700 [INFO][3849] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 28 01:15:09.735613 containerd[1456]: 2026-04-28 01:15:09.700 [INFO][3849] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9635502dfdc3634270ccdb6083a50fe62325ea79ef5649a638eddd881871a50b" host="localhost" Apr 28 01:15:09.735613 containerd[1456]: 2026-04-28 01:15:09.703 [INFO][3849] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9635502dfdc3634270ccdb6083a50fe62325ea79ef5649a638eddd881871a50b Apr 28 01:15:09.735613 containerd[1456]: 2026-04-28 01:15:09.708 [INFO][3849] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9635502dfdc3634270ccdb6083a50fe62325ea79ef5649a638eddd881871a50b" host="localhost" Apr 28 01:15:09.735613 containerd[1456]: 2026-04-28 01:15:09.711 [INFO][3849] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.9635502dfdc3634270ccdb6083a50fe62325ea79ef5649a638eddd881871a50b" host="localhost" Apr 28 01:15:09.735613 containerd[1456]: 2026-04-28 01:15:09.711 [INFO][3849] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.9635502dfdc3634270ccdb6083a50fe62325ea79ef5649a638eddd881871a50b" host="localhost" Apr 28 01:15:09.735613 containerd[1456]: 2026-04-28 01:15:09.711 [INFO][3849] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 01:15:09.735613 containerd[1456]: 2026-04-28 01:15:09.711 [INFO][3849] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="9635502dfdc3634270ccdb6083a50fe62325ea79ef5649a638eddd881871a50b" HandleID="k8s-pod-network.9635502dfdc3634270ccdb6083a50fe62325ea79ef5649a638eddd881871a50b" Workload="localhost-k8s-csi--node--driver--7dx44-eth0" Apr 28 01:15:09.736130 containerd[1456]: 2026-04-28 01:15:09.714 [INFO][3836] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9635502dfdc3634270ccdb6083a50fe62325ea79ef5649a638eddd881871a50b" Namespace="calico-system" Pod="csi-node-driver-7dx44" WorkloadEndpoint="localhost-k8s-csi--node--driver--7dx44-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7dx44-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"22c17e9a-2b9c-4268-bd82-cc9430b54e6f", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 1, 14, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"95f96f7df", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-7dx44", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib42a03d7c95", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 01:15:09.736130 containerd[1456]: 2026-04-28 01:15:09.714 [INFO][3836] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="9635502dfdc3634270ccdb6083a50fe62325ea79ef5649a638eddd881871a50b" Namespace="calico-system" Pod="csi-node-driver-7dx44" WorkloadEndpoint="localhost-k8s-csi--node--driver--7dx44-eth0" Apr 28 01:15:09.736130 containerd[1456]: 2026-04-28 01:15:09.714 [INFO][3836] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib42a03d7c95 ContainerID="9635502dfdc3634270ccdb6083a50fe62325ea79ef5649a638eddd881871a50b" Namespace="calico-system" Pod="csi-node-driver-7dx44" WorkloadEndpoint="localhost-k8s-csi--node--driver--7dx44-eth0" Apr 28 01:15:09.736130 containerd[1456]: 2026-04-28 01:15:09.722 [INFO][3836] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9635502dfdc3634270ccdb6083a50fe62325ea79ef5649a638eddd881871a50b" Namespace="calico-system" Pod="csi-node-driver-7dx44" WorkloadEndpoint="localhost-k8s-csi--node--driver--7dx44-eth0" Apr 28 01:15:09.736130 containerd[1456]: 2026-04-28 01:15:09.723 [INFO][3836] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9635502dfdc3634270ccdb6083a50fe62325ea79ef5649a638eddd881871a50b" 
Namespace="calico-system" Pod="csi-node-driver-7dx44" WorkloadEndpoint="localhost-k8s-csi--node--driver--7dx44-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7dx44-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"22c17e9a-2b9c-4268-bd82-cc9430b54e6f", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 1, 14, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"95f96f7df", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9635502dfdc3634270ccdb6083a50fe62325ea79ef5649a638eddd881871a50b", Pod:"csi-node-driver-7dx44", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib42a03d7c95", MAC:"e6:8f:bc:5f:79:d3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 01:15:09.736130 containerd[1456]: 2026-04-28 01:15:09.733 [INFO][3836] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9635502dfdc3634270ccdb6083a50fe62325ea79ef5649a638eddd881871a50b" Namespace="calico-system" Pod="csi-node-driver-7dx44" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--7dx44-eth0" Apr 28 01:15:09.739931 containerd[1456]: time="2026-04-28T01:15:09.738994177Z" level=info msg="StopPodSandbox for \"f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91\"" Apr 28 01:15:09.758201 containerd[1456]: time="2026-04-28T01:15:09.757662162Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 01:15:09.758201 containerd[1456]: time="2026-04-28T01:15:09.757716856Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 01:15:09.758201 containerd[1456]: time="2026-04-28T01:15:09.757749286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:15:09.760504 containerd[1456]: time="2026-04-28T01:15:09.759921423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:15:09.783220 systemd[1]: Started cri-containerd-9635502dfdc3634270ccdb6083a50fe62325ea79ef5649a638eddd881871a50b.scope - libcontainer container 9635502dfdc3634270ccdb6083a50fe62325ea79ef5649a638eddd881871a50b. 
Apr 28 01:15:09.793505 systemd-resolved[1375]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 28 01:15:09.798314 kubelet[2502]: I0428 01:15:09.798278 2502 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 28 01:15:09.798892 kubelet[2502]: E0428 01:15:09.798876 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:15:09.810415 containerd[1456]: time="2026-04-28T01:15:09.810333320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7dx44,Uid:22c17e9a-2b9c-4268-bd82-cc9430b54e6f,Namespace:calico-system,Attempt:0,} returns sandbox id \"9635502dfdc3634270ccdb6083a50fe62325ea79ef5649a638eddd881871a50b\"" Apr 28 01:15:09.811861 containerd[1456]: time="2026-04-28T01:15:09.811842228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.5\"" Apr 28 01:15:09.816108 kubelet[2502]: I0428 01:15:09.814585 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rbn78" podStartSLOduration=2.847473323 podStartE2EDuration="20.814573033s" podCreationTimestamp="2026-04-28 01:14:49 +0000 UTC" firstStartedPulling="2026-04-28 01:14:49.500336261 +0000 UTC m=+16.030698322" lastFinishedPulling="2026-04-28 01:15:07.467435972 +0000 UTC m=+33.997798032" observedRunningTime="2026-04-28 01:15:09.772716535 +0000 UTC m=+36.303078607" watchObservedRunningTime="2026-04-28 01:15:09.814573033 +0000 UTC m=+36.344935105" Apr 28 01:15:09.875584 containerd[1456]: 2026-04-28 01:15:09.816 [INFO][3889] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91" Apr 28 01:15:09.875584 containerd[1456]: 2026-04-28 01:15:09.816 [INFO][3889] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91" iface="eth0" netns="/var/run/netns/cni-49ca6b93-0103-f253-4b79-ffd320be2aa0" Apr 28 01:15:09.875584 containerd[1456]: 2026-04-28 01:15:09.820 [INFO][3889] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91" iface="eth0" netns="/var/run/netns/cni-49ca6b93-0103-f253-4b79-ffd320be2aa0" Apr 28 01:15:09.875584 containerd[1456]: 2026-04-28 01:15:09.820 [INFO][3889] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91" iface="eth0" netns="/var/run/netns/cni-49ca6b93-0103-f253-4b79-ffd320be2aa0" Apr 28 01:15:09.875584 containerd[1456]: 2026-04-28 01:15:09.820 [INFO][3889] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91" Apr 28 01:15:09.875584 containerd[1456]: 2026-04-28 01:15:09.820 [INFO][3889] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91" Apr 28 01:15:09.875584 containerd[1456]: 2026-04-28 01:15:09.862 [INFO][3944] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91" HandleID="k8s-pod-network.f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91" Workload="localhost-k8s-whisker--569f54558--r54gv-eth0" Apr 28 01:15:09.875584 containerd[1456]: 2026-04-28 01:15:09.862 [INFO][3944] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 01:15:09.875584 containerd[1456]: 2026-04-28 01:15:09.862 [INFO][3944] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 01:15:09.875584 containerd[1456]: 2026-04-28 01:15:09.869 [WARNING][3944] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91" HandleID="k8s-pod-network.f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91" Workload="localhost-k8s-whisker--569f54558--r54gv-eth0" Apr 28 01:15:09.875584 containerd[1456]: 2026-04-28 01:15:09.869 [INFO][3944] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91" HandleID="k8s-pod-network.f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91" Workload="localhost-k8s-whisker--569f54558--r54gv-eth0" Apr 28 01:15:09.875584 containerd[1456]: 2026-04-28 01:15:09.872 [INFO][3944] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 01:15:09.875584 containerd[1456]: 2026-04-28 01:15:09.873 [INFO][3889] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91" Apr 28 01:15:09.876272 containerd[1456]: time="2026-04-28T01:15:09.876216483Z" level=info msg="TearDown network for sandbox \"f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91\" successfully" Apr 28 01:15:09.876272 containerd[1456]: time="2026-04-28T01:15:09.876259416Z" level=info msg="StopPodSandbox for \"f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91\" returns successfully" Apr 28 01:15:09.944911 kubelet[2502]: I0428 01:15:09.944832 2502 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqzhn\" (UniqueName: \"kubernetes.io/projected/adbc0340-d6da-4e72-94e2-4bd33f757201-kube-api-access-lqzhn\") pod \"adbc0340-d6da-4e72-94e2-4bd33f757201\" (UID: \"adbc0340-d6da-4e72-94e2-4bd33f757201\") " Apr 28 01:15:09.944911 kubelet[2502]: I0428 01:15:09.944903 2502 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/adbc0340-d6da-4e72-94e2-4bd33f757201-nginx-config\") pod 
\"adbc0340-d6da-4e72-94e2-4bd33f757201\" (UID: \"adbc0340-d6da-4e72-94e2-4bd33f757201\") " Apr 28 01:15:09.944911 kubelet[2502]: I0428 01:15:09.944934 2502 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/adbc0340-d6da-4e72-94e2-4bd33f757201-whisker-backend-key-pair\") pod \"adbc0340-d6da-4e72-94e2-4bd33f757201\" (UID: \"adbc0340-d6da-4e72-94e2-4bd33f757201\") " Apr 28 01:15:09.945208 kubelet[2502]: I0428 01:15:09.944981 2502 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adbc0340-d6da-4e72-94e2-4bd33f757201-whisker-ca-bundle\") pod \"adbc0340-d6da-4e72-94e2-4bd33f757201\" (UID: \"adbc0340-d6da-4e72-94e2-4bd33f757201\") " Apr 28 01:15:09.945477 kubelet[2502]: I0428 01:15:09.945263 2502 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adbc0340-d6da-4e72-94e2-4bd33f757201-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "adbc0340-d6da-4e72-94e2-4bd33f757201" (UID: "adbc0340-d6da-4e72-94e2-4bd33f757201"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 28 01:15:09.945477 kubelet[2502]: I0428 01:15:09.945338 2502 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adbc0340-d6da-4e72-94e2-4bd33f757201-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "adbc0340-d6da-4e72-94e2-4bd33f757201" (UID: "adbc0340-d6da-4e72-94e2-4bd33f757201"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 28 01:15:09.947837 kubelet[2502]: I0428 01:15:09.947798 2502 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adbc0340-d6da-4e72-94e2-4bd33f757201-kube-api-access-lqzhn" (OuterVolumeSpecName: "kube-api-access-lqzhn") pod "adbc0340-d6da-4e72-94e2-4bd33f757201" (UID: "adbc0340-d6da-4e72-94e2-4bd33f757201"). InnerVolumeSpecName "kube-api-access-lqzhn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 28 01:15:09.947837 kubelet[2502]: I0428 01:15:09.947828 2502 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adbc0340-d6da-4e72-94e2-4bd33f757201-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "adbc0340-d6da-4e72-94e2-4bd33f757201" (UID: "adbc0340-d6da-4e72-94e2-4bd33f757201"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 28 01:15:10.045782 kubelet[2502]: I0428 01:15:10.045589 2502 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lqzhn\" (UniqueName: \"kubernetes.io/projected/adbc0340-d6da-4e72-94e2-4bd33f757201-kube-api-access-lqzhn\") on node \"localhost\" DevicePath \"\"" Apr 28 01:15:10.045782 kubelet[2502]: I0428 01:15:10.045649 2502 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/adbc0340-d6da-4e72-94e2-4bd33f757201-nginx-config\") on node \"localhost\" DevicePath \"\"" Apr 28 01:15:10.045782 kubelet[2502]: I0428 01:15:10.045657 2502 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/adbc0340-d6da-4e72-94e2-4bd33f757201-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Apr 28 01:15:10.045782 kubelet[2502]: I0428 01:15:10.045664 2502 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/adbc0340-d6da-4e72-94e2-4bd33f757201-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Apr 28 01:15:10.393004 kernel: calico-node[4038]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 28 01:15:10.481537 systemd[1]: run-netns-cni\x2d49ca6b93\x2d0103\x2df253\x2d4b79\x2dffd320be2aa0.mount: Deactivated successfully. Apr 28 01:15:10.481627 systemd[1]: var-lib-kubelet-pods-adbc0340\x2dd6da\x2d4e72\x2d94e2\x2d4bd33f757201-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlqzhn.mount: Deactivated successfully. Apr 28 01:15:10.481673 systemd[1]: var-lib-kubelet-pods-adbc0340\x2dd6da\x2d4e72\x2d94e2\x2d4bd33f757201-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 28 01:15:10.732675 systemd-networkd[1373]: vxlan.calico: Link UP Apr 28 01:15:10.732682 systemd-networkd[1373]: vxlan.calico: Gained carrier Apr 28 01:15:10.742489 kubelet[2502]: E0428 01:15:10.742125 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:15:10.749878 systemd[1]: Removed slice kubepods-besteffort-podadbc0340_d6da_4e72_94e2_4bd33f757201.slice - libcontainer container kubepods-besteffort-podadbc0340_d6da_4e72_94e2_4bd33f757201.slice. Apr 28 01:15:10.841707 systemd[1]: Created slice kubepods-besteffort-pod0fb6b957_f41e_47c7_bb98_83d9726a333b.slice - libcontainer container kubepods-besteffort-pod0fb6b957_f41e_47c7_bb98_83d9726a333b.slice. 
Apr 28 01:15:10.853495 kubelet[2502]: I0428 01:15:10.853471 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/0fb6b957-f41e-47c7-bb98-83d9726a333b-nginx-config\") pod \"whisker-549c7b7876-9tshx\" (UID: \"0fb6b957-f41e-47c7-bb98-83d9726a333b\") " pod="calico-system/whisker-549c7b7876-9tshx" Apr 28 01:15:10.854088 kubelet[2502]: I0428 01:15:10.854036 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksh6w\" (UniqueName: \"kubernetes.io/projected/0fb6b957-f41e-47c7-bb98-83d9726a333b-kube-api-access-ksh6w\") pod \"whisker-549c7b7876-9tshx\" (UID: \"0fb6b957-f41e-47c7-bb98-83d9726a333b\") " pod="calico-system/whisker-549c7b7876-9tshx" Apr 28 01:15:10.854240 kubelet[2502]: I0428 01:15:10.854186 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0fb6b957-f41e-47c7-bb98-83d9726a333b-whisker-backend-key-pair\") pod \"whisker-549c7b7876-9tshx\" (UID: \"0fb6b957-f41e-47c7-bb98-83d9726a333b\") " pod="calico-system/whisker-549c7b7876-9tshx" Apr 28 01:15:10.854240 kubelet[2502]: I0428 01:15:10.854206 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0fb6b957-f41e-47c7-bb98-83d9726a333b-whisker-ca-bundle\") pod \"whisker-549c7b7876-9tshx\" (UID: \"0fb6b957-f41e-47c7-bb98-83d9726a333b\") " pod="calico-system/whisker-549c7b7876-9tshx" Apr 28 01:15:11.150000 containerd[1456]: time="2026-04-28T01:15:11.149843900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-549c7b7876-9tshx,Uid:0fb6b957-f41e-47c7-bb98-83d9726a333b,Namespace:calico-system,Attempt:0,}" Apr 28 01:15:11.266698 systemd-networkd[1373]: calia1e36eb8f4e: Link UP Apr 28 01:15:11.267638 systemd-networkd[1373]: 
calia1e36eb8f4e: Gained carrier Apr 28 01:15:11.279990 containerd[1456]: 2026-04-28 01:15:11.197 [INFO][4200] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--549c7b7876--9tshx-eth0 whisker-549c7b7876- calico-system 0fb6b957-f41e-47c7-bb98-83d9726a333b 946 0 2026-04-28 01:15:10 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:549c7b7876 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-549c7b7876-9tshx eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia1e36eb8f4e [] [] }} ContainerID="e11eadfd65761a6541501313b0745f93fb3cb4460ed13708c61bf377fb99d7bf" Namespace="calico-system" Pod="whisker-549c7b7876-9tshx" WorkloadEndpoint="localhost-k8s-whisker--549c7b7876--9tshx-" Apr 28 01:15:11.279990 containerd[1456]: 2026-04-28 01:15:11.197 [INFO][4200] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e11eadfd65761a6541501313b0745f93fb3cb4460ed13708c61bf377fb99d7bf" Namespace="calico-system" Pod="whisker-549c7b7876-9tshx" WorkloadEndpoint="localhost-k8s-whisker--549c7b7876--9tshx-eth0" Apr 28 01:15:11.279990 containerd[1456]: 2026-04-28 01:15:11.224 [INFO][4215] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e11eadfd65761a6541501313b0745f93fb3cb4460ed13708c61bf377fb99d7bf" HandleID="k8s-pod-network.e11eadfd65761a6541501313b0745f93fb3cb4460ed13708c61bf377fb99d7bf" Workload="localhost-k8s-whisker--549c7b7876--9tshx-eth0" Apr 28 01:15:11.279990 containerd[1456]: 2026-04-28 01:15:11.231 [INFO][4215] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e11eadfd65761a6541501313b0745f93fb3cb4460ed13708c61bf377fb99d7bf" HandleID="k8s-pod-network.e11eadfd65761a6541501313b0745f93fb3cb4460ed13708c61bf377fb99d7bf" Workload="localhost-k8s-whisker--549c7b7876--9tshx-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f7700), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-549c7b7876-9tshx", "timestamp":"2026-04-28 01:15:11.224519288 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004651e0)} Apr 28 01:15:11.279990 containerd[1456]: 2026-04-28 01:15:11.231 [INFO][4215] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 01:15:11.279990 containerd[1456]: 2026-04-28 01:15:11.231 [INFO][4215] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 01:15:11.279990 containerd[1456]: 2026-04-28 01:15:11.231 [INFO][4215] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 28 01:15:11.279990 containerd[1456]: 2026-04-28 01:15:11.234 [INFO][4215] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e11eadfd65761a6541501313b0745f93fb3cb4460ed13708c61bf377fb99d7bf" host="localhost" Apr 28 01:15:11.279990 containerd[1456]: 2026-04-28 01:15:11.240 [INFO][4215] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 28 01:15:11.279990 containerd[1456]: 2026-04-28 01:15:11.246 [INFO][4215] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 28 01:15:11.279990 containerd[1456]: 2026-04-28 01:15:11.247 [INFO][4215] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 28 01:15:11.279990 containerd[1456]: 2026-04-28 01:15:11.249 [INFO][4215] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 28 01:15:11.279990 containerd[1456]: 2026-04-28 01:15:11.249 [INFO][4215] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.e11eadfd65761a6541501313b0745f93fb3cb4460ed13708c61bf377fb99d7bf" host="localhost" Apr 28 01:15:11.279990 containerd[1456]: 2026-04-28 01:15:11.251 [INFO][4215] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e11eadfd65761a6541501313b0745f93fb3cb4460ed13708c61bf377fb99d7bf Apr 28 01:15:11.279990 containerd[1456]: 2026-04-28 01:15:11.255 [INFO][4215] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e11eadfd65761a6541501313b0745f93fb3cb4460ed13708c61bf377fb99d7bf" host="localhost" Apr 28 01:15:11.279990 containerd[1456]: 2026-04-28 01:15:11.261 [INFO][4215] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.e11eadfd65761a6541501313b0745f93fb3cb4460ed13708c61bf377fb99d7bf" host="localhost" Apr 28 01:15:11.279990 containerd[1456]: 2026-04-28 01:15:11.262 [INFO][4215] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.e11eadfd65761a6541501313b0745f93fb3cb4460ed13708c61bf377fb99d7bf" host="localhost" Apr 28 01:15:11.279990 containerd[1456]: 2026-04-28 01:15:11.262 [INFO][4215] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 28 01:15:11.279990 containerd[1456]: 2026-04-28 01:15:11.262 [INFO][4215] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="e11eadfd65761a6541501313b0745f93fb3cb4460ed13708c61bf377fb99d7bf" HandleID="k8s-pod-network.e11eadfd65761a6541501313b0745f93fb3cb4460ed13708c61bf377fb99d7bf" Workload="localhost-k8s-whisker--549c7b7876--9tshx-eth0" Apr 28 01:15:11.280682 containerd[1456]: 2026-04-28 01:15:11.264 [INFO][4200] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e11eadfd65761a6541501313b0745f93fb3cb4460ed13708c61bf377fb99d7bf" Namespace="calico-system" Pod="whisker-549c7b7876-9tshx" WorkloadEndpoint="localhost-k8s-whisker--549c7b7876--9tshx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--549c7b7876--9tshx-eth0", GenerateName:"whisker-549c7b7876-", Namespace:"calico-system", SelfLink:"", UID:"0fb6b957-f41e-47c7-bb98-83d9726a333b", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 1, 15, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"549c7b7876", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-549c7b7876-9tshx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia1e36eb8f4e", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 01:15:11.280682 containerd[1456]: 2026-04-28 01:15:11.264 [INFO][4200] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="e11eadfd65761a6541501313b0745f93fb3cb4460ed13708c61bf377fb99d7bf" Namespace="calico-system" Pod="whisker-549c7b7876-9tshx" WorkloadEndpoint="localhost-k8s-whisker--549c7b7876--9tshx-eth0" Apr 28 01:15:11.280682 containerd[1456]: 2026-04-28 01:15:11.264 [INFO][4200] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia1e36eb8f4e ContainerID="e11eadfd65761a6541501313b0745f93fb3cb4460ed13708c61bf377fb99d7bf" Namespace="calico-system" Pod="whisker-549c7b7876-9tshx" WorkloadEndpoint="localhost-k8s-whisker--549c7b7876--9tshx-eth0" Apr 28 01:15:11.280682 containerd[1456]: 2026-04-28 01:15:11.268 [INFO][4200] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e11eadfd65761a6541501313b0745f93fb3cb4460ed13708c61bf377fb99d7bf" Namespace="calico-system" Pod="whisker-549c7b7876-9tshx" WorkloadEndpoint="localhost-k8s-whisker--549c7b7876--9tshx-eth0" Apr 28 01:15:11.280682 containerd[1456]: 2026-04-28 01:15:11.268 [INFO][4200] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e11eadfd65761a6541501313b0745f93fb3cb4460ed13708c61bf377fb99d7bf" Namespace="calico-system" Pod="whisker-549c7b7876-9tshx" WorkloadEndpoint="localhost-k8s-whisker--549c7b7876--9tshx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--549c7b7876--9tshx-eth0", GenerateName:"whisker-549c7b7876-", Namespace:"calico-system", SelfLink:"", UID:"0fb6b957-f41e-47c7-bb98-83d9726a333b", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 1, 15, 10, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"549c7b7876", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e11eadfd65761a6541501313b0745f93fb3cb4460ed13708c61bf377fb99d7bf", Pod:"whisker-549c7b7876-9tshx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia1e36eb8f4e", MAC:"de:55:f6:7e:03:80", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 01:15:11.280682 containerd[1456]: 2026-04-28 01:15:11.276 [INFO][4200] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e11eadfd65761a6541501313b0745f93fb3cb4460ed13708c61bf377fb99d7bf" Namespace="calico-system" Pod="whisker-549c7b7876-9tshx" WorkloadEndpoint="localhost-k8s-whisker--549c7b7876--9tshx-eth0" Apr 28 01:15:11.299234 containerd[1456]: time="2026-04-28T01:15:11.298282731Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 01:15:11.299234 containerd[1456]: time="2026-04-28T01:15:11.299209375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 01:15:11.299234 containerd[1456]: time="2026-04-28T01:15:11.299229559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:15:11.299482 containerd[1456]: time="2026-04-28T01:15:11.299292398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:15:11.318173 systemd[1]: Started cri-containerd-e11eadfd65761a6541501313b0745f93fb3cb4460ed13708c61bf377fb99d7bf.scope - libcontainer container e11eadfd65761a6541501313b0745f93fb3cb4460ed13708c61bf377fb99d7bf. Apr 28 01:15:11.327274 systemd-resolved[1375]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 28 01:15:11.352161 containerd[1456]: time="2026-04-28T01:15:11.352133125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-549c7b7876-9tshx,Uid:0fb6b957-f41e-47c7-bb98-83d9726a333b,Namespace:calico-system,Attempt:0,} returns sandbox id \"e11eadfd65761a6541501313b0745f93fb3cb4460ed13708c61bf377fb99d7bf\"" Apr 28 01:15:11.587326 kubelet[2502]: I0428 01:15:11.587274 2502 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adbc0340-d6da-4e72-94e2-4bd33f757201" path="/var/lib/kubelet/pods/adbc0340-d6da-4e72-94e2-4bd33f757201/volumes" Apr 28 01:15:11.607189 systemd-networkd[1373]: calib42a03d7c95: Gained IPv6LL Apr 28 01:15:11.696004 containerd[1456]: time="2026-04-28T01:15:11.695900004Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:15:11.696528 containerd[1456]: time="2026-04-28T01:15:11.696472221Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.5: active requests=0, bytes read=8535421" Apr 28 01:15:11.697427 containerd[1456]: time="2026-04-28T01:15:11.697366938Z" level=info msg="ImageCreate event name:\"sha256:94e17390bb55c802657312c601a05da4abfb9d9311bef8a389a19fd8a5388a96\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:15:11.701709 containerd[1456]: 
time="2026-04-28T01:15:11.701658787Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e8a5b44388a309910946072582b1a1f283c52cf73e9825179235d934447c8b7d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:15:11.702223 containerd[1456]: time="2026-04-28T01:15:11.702186460Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.5\" with image id \"sha256:94e17390bb55c802657312c601a05da4abfb9d9311bef8a389a19fd8a5388a96\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e8a5b44388a309910946072582b1a1f283c52cf73e9825179235d934447c8b7d\", size \"11496846\" in 1.890239285s" Apr 28 01:15:11.702296 containerd[1456]: time="2026-04-28T01:15:11.702230173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.5\" returns image reference \"sha256:94e17390bb55c802657312c601a05da4abfb9d9311bef8a389a19fd8a5388a96\"" Apr 28 01:15:11.703323 containerd[1456]: time="2026-04-28T01:15:11.703260984Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.5\"" Apr 28 01:15:11.706892 containerd[1456]: time="2026-04-28T01:15:11.706858439Z" level=info msg="CreateContainer within sandbox \"9635502dfdc3634270ccdb6083a50fe62325ea79ef5649a638eddd881871a50b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 28 01:15:11.720355 containerd[1456]: time="2026-04-28T01:15:11.720304411Z" level=info msg="CreateContainer within sandbox \"9635502dfdc3634270ccdb6083a50fe62325ea79ef5649a638eddd881871a50b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1d705f4ad509a9c487ea177d5a566867e2d31e5f792322fc420fa83118a97426\"" Apr 28 01:15:11.720957 containerd[1456]: time="2026-04-28T01:15:11.720927585Z" level=info msg="StartContainer for \"1d705f4ad509a9c487ea177d5a566867e2d31e5f792322fc420fa83118a97426\"" Apr 28 01:15:11.760560 systemd[1]: Started cri-containerd-1d705f4ad509a9c487ea177d5a566867e2d31e5f792322fc420fa83118a97426.scope - libcontainer 
container 1d705f4ad509a9c487ea177d5a566867e2d31e5f792322fc420fa83118a97426. Apr 28 01:15:11.822378 containerd[1456]: time="2026-04-28T01:15:11.822291664Z" level=info msg="StartContainer for \"1d705f4ad509a9c487ea177d5a566867e2d31e5f792322fc420fa83118a97426\" returns successfully" Apr 28 01:15:12.311234 systemd-networkd[1373]: vxlan.calico: Gained IPv6LL Apr 28 01:15:12.482586 systemd[1]: run-containerd-runc-k8s.io-abf9bcc4552ca009cceefeaded287134ab9ac1cba29d848b6548469ebb38f445-runc.ItHGxt.mount: Deactivated successfully. Apr 28 01:15:12.503315 systemd-networkd[1373]: calia1e36eb8f4e: Gained IPv6LL Apr 28 01:15:13.469203 containerd[1456]: time="2026-04-28T01:15:13.469088806Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:15:13.469610 containerd[1456]: time="2026-04-28T01:15:13.469553636Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.5: active requests=0, bytes read=6050387" Apr 28 01:15:13.470827 containerd[1456]: time="2026-04-28T01:15:13.470774968Z" level=info msg="ImageCreate event name:\"sha256:50f42a8b70f740407562ef3a08c005eb77150af95c21140e6080af9e61c8f197\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:15:13.472892 containerd[1456]: time="2026-04-28T01:15:13.472840679Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:b143cf26c347546feabb95cec04a2349f5ae297830cc54fdc2578b89d1a3e021\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:15:13.473626 containerd[1456]: time="2026-04-28T01:15:13.473582096Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.5\" with image id \"sha256:50f42a8b70f740407562ef3a08c005eb77150af95c21140e6080af9e61c8f197\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:b143cf26c347546feabb95cec04a2349f5ae297830cc54fdc2578b89d1a3e021\", size \"9011804\" in 
1.770280597s" Apr 28 01:15:13.473676 containerd[1456]: time="2026-04-28T01:15:13.473631026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.5\" returns image reference \"sha256:50f42a8b70f740407562ef3a08c005eb77150af95c21140e6080af9e61c8f197\"" Apr 28 01:15:13.475305 containerd[1456]: time="2026-04-28T01:15:13.475262362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\"" Apr 28 01:15:13.478561 containerd[1456]: time="2026-04-28T01:15:13.478504800Z" level=info msg="CreateContainer within sandbox \"e11eadfd65761a6541501313b0745f93fb3cb4460ed13708c61bf377fb99d7bf\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 28 01:15:13.490647 containerd[1456]: time="2026-04-28T01:15:13.490605737Z" level=info msg="CreateContainer within sandbox \"e11eadfd65761a6541501313b0745f93fb3cb4460ed13708c61bf377fb99d7bf\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"c71b9a3a291662de2b974565330dff279f679f5f2d462366941910867d9693cc\"" Apr 28 01:15:13.491121 containerd[1456]: time="2026-04-28T01:15:13.491104783Z" level=info msg="StartContainer for \"c71b9a3a291662de2b974565330dff279f679f5f2d462366941910867d9693cc\"" Apr 28 01:15:13.522434 systemd[1]: Started cri-containerd-c71b9a3a291662de2b974565330dff279f679f5f2d462366941910867d9693cc.scope - libcontainer container c71b9a3a291662de2b974565330dff279f679f5f2d462366941910867d9693cc. 
Apr 28 01:15:13.557919 containerd[1456]: time="2026-04-28T01:15:13.557872315Z" level=info msg="StartContainer for \"c71b9a3a291662de2b974565330dff279f679f5f2d462366941910867d9693cc\" returns successfully" Apr 28 01:15:15.427409 containerd[1456]: time="2026-04-28T01:15:15.427307577Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:15:15.428202 containerd[1456]: time="2026-04-28T01:15:15.428152738Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5: active requests=0, bytes read=13498053" Apr 28 01:15:15.429208 containerd[1456]: time="2026-04-28T01:15:15.429160004Z" level=info msg="ImageCreate event name:\"sha256:c4d89610d9eecf5b8a3542441aa9a40814ec45484688b6f68d6fe8aee64beb80\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:15:15.431190 containerd[1456]: time="2026-04-28T01:15:15.431148795Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:26849483b0c4d797a8ff818d988924bdf696996ca559c8c56b647aaaf70a448a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:15:15.431711 containerd[1456]: time="2026-04-28T01:15:15.431670898Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\" with image id \"sha256:c4d89610d9eecf5b8a3542441aa9a40814ec45484688b6f68d6fe8aee64beb80\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:26849483b0c4d797a8ff818d988924bdf696996ca559c8c56b647aaaf70a448a\", size \"16459430\" in 1.956373638s" Apr 28 01:15:15.431735 containerd[1456]: time="2026-04-28T01:15:15.431707959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\" returns image reference \"sha256:c4d89610d9eecf5b8a3542441aa9a40814ec45484688b6f68d6fe8aee64beb80\"" Apr 28 01:15:15.433038 containerd[1456]: 
time="2026-04-28T01:15:15.433010193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\"" Apr 28 01:15:15.436349 containerd[1456]: time="2026-04-28T01:15:15.436295634Z" level=info msg="CreateContainer within sandbox \"9635502dfdc3634270ccdb6083a50fe62325ea79ef5649a638eddd881871a50b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 28 01:15:15.448252 containerd[1456]: time="2026-04-28T01:15:15.448205323Z" level=info msg="CreateContainer within sandbox \"9635502dfdc3634270ccdb6083a50fe62325ea79ef5649a638eddd881871a50b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ece7b65dc4ac3b313c6cafeee3b2937bf97758f1c12f0bed6f31635955ef1bdb\"" Apr 28 01:15:15.448717 containerd[1456]: time="2026-04-28T01:15:15.448671119Z" level=info msg="StartContainer for \"ece7b65dc4ac3b313c6cafeee3b2937bf97758f1c12f0bed6f31635955ef1bdb\"" Apr 28 01:15:15.469885 systemd[1]: run-containerd-runc-k8s.io-ece7b65dc4ac3b313c6cafeee3b2937bf97758f1c12f0bed6f31635955ef1bdb-runc.kmzDWz.mount: Deactivated successfully. Apr 28 01:15:15.480165 systemd[1]: Started cri-containerd-ece7b65dc4ac3b313c6cafeee3b2937bf97758f1c12f0bed6f31635955ef1bdb.scope - libcontainer container ece7b65dc4ac3b313c6cafeee3b2937bf97758f1c12f0bed6f31635955ef1bdb. 
Apr 28 01:15:15.503303 containerd[1456]: time="2026-04-28T01:15:15.503231758Z" level=info msg="StartContainer for \"ece7b65dc4ac3b313c6cafeee3b2937bf97758f1c12f0bed6f31635955ef1bdb\" returns successfully" Apr 28 01:15:15.648052 kubelet[2502]: I0428 01:15:15.647916 2502 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 28 01:15:15.648931 kubelet[2502]: I0428 01:15:15.648883 2502 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 28 01:15:15.953751 systemd[1]: Started sshd@7-10.0.0.153:22-10.0.0.1:57326.service - OpenSSH per-connection server daemon (10.0.0.1:57326). Apr 28 01:15:15.998692 sshd[4427]: Accepted publickey for core from 10.0.0.1 port 57326 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps Apr 28 01:15:15.999817 sshd[4427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:15:16.004459 systemd-logind[1435]: New session 8 of user core. Apr 28 01:15:16.010330 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 28 01:15:16.167718 sshd[4427]: pam_unix(sshd:session): session closed for user core Apr 28 01:15:16.170914 systemd[1]: sshd@7-10.0.0.153:22-10.0.0.1:57326.service: Deactivated successfully. Apr 28 01:15:16.174277 systemd[1]: session-8.scope: Deactivated successfully. Apr 28 01:15:16.175325 systemd-logind[1435]: Session 8 logged out. Waiting for processes to exit. Apr 28 01:15:16.176426 systemd-logind[1435]: Removed session 8. Apr 28 01:15:17.729273 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3262181425.mount: Deactivated successfully. 
Apr 28 01:15:17.754546 containerd[1456]: time="2026-04-28T01:15:17.754427792Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:15:17.755568 containerd[1456]: time="2026-04-28T01:15:17.755307650Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.5: active requests=0, bytes read=17000660" Apr 28 01:15:17.756430 containerd[1456]: time="2026-04-28T01:15:17.756360765Z" level=info msg="ImageCreate event name:\"sha256:32cfe8e323c5b51d8f6311b045681721ff6e6745a1c5b74bf0f0a3cdc1a7b5d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:15:17.759360 containerd[1456]: time="2026-04-28T01:15:17.759143317Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:0bec142ebaa70bcdda5553c7316abcef9cb60a35c2e3ed16b75f26313de91eed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:15:17.760292 containerd[1456]: time="2026-04-28T01:15:17.759938499Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\" with image id \"sha256:32cfe8e323c5b51d8f6311b045681721ff6e6745a1c5b74bf0f0a3cdc1a7b5d7\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:0bec142ebaa70bcdda5553c7316abcef9cb60a35c2e3ed16b75f26313de91eed\", size \"17000490\" in 2.326892685s" Apr 28 01:15:17.760292 containerd[1456]: time="2026-04-28T01:15:17.760011023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\" returns image reference \"sha256:32cfe8e323c5b51d8f6311b045681721ff6e6745a1c5b74bf0f0a3cdc1a7b5d7\"" Apr 28 01:15:17.765360 containerd[1456]: time="2026-04-28T01:15:17.765194460Z" level=info msg="CreateContainer within sandbox \"e11eadfd65761a6541501313b0745f93fb3cb4460ed13708c61bf377fb99d7bf\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 28 01:15:17.779805 
containerd[1456]: time="2026-04-28T01:15:17.779469130Z" level=info msg="CreateContainer within sandbox \"e11eadfd65761a6541501313b0745f93fb3cb4460ed13708c61bf377fb99d7bf\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"d479f0d92f2a5287c7d17a62f1124b8740b49c6594526d8a64a1109d7da8842e\"" Apr 28 01:15:17.782798 containerd[1456]: time="2026-04-28T01:15:17.782680151Z" level=info msg="StartContainer for \"d479f0d92f2a5287c7d17a62f1124b8740b49c6594526d8a64a1109d7da8842e\"" Apr 28 01:15:17.827266 systemd[1]: Started cri-containerd-d479f0d92f2a5287c7d17a62f1124b8740b49c6594526d8a64a1109d7da8842e.scope - libcontainer container d479f0d92f2a5287c7d17a62f1124b8740b49c6594526d8a64a1109d7da8842e. Apr 28 01:15:17.904890 containerd[1456]: time="2026-04-28T01:15:17.904328756Z" level=info msg="StartContainer for \"d479f0d92f2a5287c7d17a62f1124b8740b49c6594526d8a64a1109d7da8842e\" returns successfully" Apr 28 01:15:18.846316 kubelet[2502]: I0428 01:15:18.844422 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-7dx44" podStartSLOduration=24.223218058 podStartE2EDuration="29.844407461s" podCreationTimestamp="2026-04-28 01:14:49 +0000 UTC" firstStartedPulling="2026-04-28 01:15:09.811437061 +0000 UTC m=+36.341799129" lastFinishedPulling="2026-04-28 01:15:15.432626464 +0000 UTC m=+41.962988532" observedRunningTime="2026-04-28 01:15:15.836173546 +0000 UTC m=+42.366535617" watchObservedRunningTime="2026-04-28 01:15:18.844407461 +0000 UTC m=+45.374769538" Apr 28 01:15:18.846316 kubelet[2502]: I0428 01:15:18.844647 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-549c7b7876-9tshx" podStartSLOduration=2.437253741 podStartE2EDuration="8.844643201s" podCreationTimestamp="2026-04-28 01:15:10 +0000 UTC" firstStartedPulling="2026-04-28 01:15:11.353902626 +0000 UTC m=+37.884264686" lastFinishedPulling="2026-04-28 01:15:17.761292086 +0000 UTC m=+44.291654146" 
observedRunningTime="2026-04-28 01:15:18.844237524 +0000 UTC m=+45.374599596" watchObservedRunningTime="2026-04-28 01:15:18.844643201 +0000 UTC m=+45.375005273" Apr 28 01:15:19.585836 containerd[1456]: time="2026-04-28T01:15:19.584494262Z" level=info msg="StopPodSandbox for \"8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6\"" Apr 28 01:15:19.657841 containerd[1456]: 2026-04-28 01:15:19.625 [INFO][4511] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6" Apr 28 01:15:19.657841 containerd[1456]: 2026-04-28 01:15:19.625 [INFO][4511] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6" iface="eth0" netns="/var/run/netns/cni-e7336f1f-45e1-a069-9f1f-3372220db22a" Apr 28 01:15:19.657841 containerd[1456]: 2026-04-28 01:15:19.626 [INFO][4511] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6" iface="eth0" netns="/var/run/netns/cni-e7336f1f-45e1-a069-9f1f-3372220db22a" Apr 28 01:15:19.657841 containerd[1456]: 2026-04-28 01:15:19.626 [INFO][4511] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6" iface="eth0" netns="/var/run/netns/cni-e7336f1f-45e1-a069-9f1f-3372220db22a" Apr 28 01:15:19.657841 containerd[1456]: 2026-04-28 01:15:19.626 [INFO][4511] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6" Apr 28 01:15:19.657841 containerd[1456]: 2026-04-28 01:15:19.626 [INFO][4511] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6" Apr 28 01:15:19.657841 containerd[1456]: 2026-04-28 01:15:19.645 [INFO][4519] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6" HandleID="k8s-pod-network.8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6" Workload="localhost-k8s-calico--apiserver--59c6777c8b--mc98s-eth0" Apr 28 01:15:19.657841 containerd[1456]: 2026-04-28 01:15:19.645 [INFO][4519] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 01:15:19.657841 containerd[1456]: 2026-04-28 01:15:19.645 [INFO][4519] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 01:15:19.657841 containerd[1456]: 2026-04-28 01:15:19.653 [WARNING][4519] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6" HandleID="k8s-pod-network.8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6" Workload="localhost-k8s-calico--apiserver--59c6777c8b--mc98s-eth0" Apr 28 01:15:19.657841 containerd[1456]: 2026-04-28 01:15:19.653 [INFO][4519] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6" HandleID="k8s-pod-network.8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6" Workload="localhost-k8s-calico--apiserver--59c6777c8b--mc98s-eth0" Apr 28 01:15:19.657841 containerd[1456]: 2026-04-28 01:15:19.655 [INFO][4519] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 01:15:19.657841 containerd[1456]: 2026-04-28 01:15:19.656 [INFO][4511] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6" Apr 28 01:15:19.659661 systemd[1]: run-netns-cni\x2de7336f1f\x2d45e1\x2da069\x2d9f1f\x2d3372220db22a.mount: Deactivated successfully. 
Apr 28 01:15:19.659876 containerd[1456]: time="2026-04-28T01:15:19.659770845Z" level=info msg="TearDown network for sandbox \"8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6\" successfully" Apr 28 01:15:19.659876 containerd[1456]: time="2026-04-28T01:15:19.659794667Z" level=info msg="StopPodSandbox for \"8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6\" returns successfully" Apr 28 01:15:19.663298 containerd[1456]: time="2026-04-28T01:15:19.663197663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59c6777c8b-mc98s,Uid:d6b226a2-c761-4a32-8ac6-230b897faf20,Namespace:calico-system,Attempt:1,}" Apr 28 01:15:19.765031 systemd-networkd[1373]: calic25d340f709: Link UP Apr 28 01:15:19.765249 systemd-networkd[1373]: calic25d340f709: Gained carrier Apr 28 01:15:19.778176 containerd[1456]: 2026-04-28 01:15:19.702 [INFO][4528] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--59c6777c8b--mc98s-eth0 calico-apiserver-59c6777c8b- calico-system d6b226a2-c761-4a32-8ac6-230b897faf20 1039 0 2026-04-28 01:14:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59c6777c8b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-59c6777c8b-mc98s eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calic25d340f709 [] [] }} ContainerID="fb4883db6b5dc58af62fa07923e44145449c66544e8294ac80a4af2313a468c5" Namespace="calico-system" Pod="calico-apiserver-59c6777c8b-mc98s" WorkloadEndpoint="localhost-k8s-calico--apiserver--59c6777c8b--mc98s-" Apr 28 01:15:19.778176 containerd[1456]: 2026-04-28 01:15:19.702 [INFO][4528] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="fb4883db6b5dc58af62fa07923e44145449c66544e8294ac80a4af2313a468c5" Namespace="calico-system" Pod="calico-apiserver-59c6777c8b-mc98s" WorkloadEndpoint="localhost-k8s-calico--apiserver--59c6777c8b--mc98s-eth0" Apr 28 01:15:19.778176 containerd[1456]: 2026-04-28 01:15:19.726 [INFO][4542] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fb4883db6b5dc58af62fa07923e44145449c66544e8294ac80a4af2313a468c5" HandleID="k8s-pod-network.fb4883db6b5dc58af62fa07923e44145449c66544e8294ac80a4af2313a468c5" Workload="localhost-k8s-calico--apiserver--59c6777c8b--mc98s-eth0" Apr 28 01:15:19.778176 containerd[1456]: 2026-04-28 01:15:19.733 [INFO][4542] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="fb4883db6b5dc58af62fa07923e44145449c66544e8294ac80a4af2313a468c5" HandleID="k8s-pod-network.fb4883db6b5dc58af62fa07923e44145449c66544e8294ac80a4af2313a468c5" Workload="localhost-k8s-calico--apiserver--59c6777c8b--mc98s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000585b60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-59c6777c8b-mc98s", "timestamp":"2026-04-28 01:15:19.726794117 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00041cdc0)} Apr 28 01:15:19.778176 containerd[1456]: 2026-04-28 01:15:19.733 [INFO][4542] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 01:15:19.778176 containerd[1456]: 2026-04-28 01:15:19.733 [INFO][4542] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 28 01:15:19.778176 containerd[1456]: 2026-04-28 01:15:19.733 [INFO][4542] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 28 01:15:19.778176 containerd[1456]: 2026-04-28 01:15:19.736 [INFO][4542] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.fb4883db6b5dc58af62fa07923e44145449c66544e8294ac80a4af2313a468c5" host="localhost" Apr 28 01:15:19.778176 containerd[1456]: 2026-04-28 01:15:19.740 [INFO][4542] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 28 01:15:19.778176 containerd[1456]: 2026-04-28 01:15:19.745 [INFO][4542] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 28 01:15:19.778176 containerd[1456]: 2026-04-28 01:15:19.747 [INFO][4542] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 28 01:15:19.778176 containerd[1456]: 2026-04-28 01:15:19.749 [INFO][4542] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 28 01:15:19.778176 containerd[1456]: 2026-04-28 01:15:19.749 [INFO][4542] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fb4883db6b5dc58af62fa07923e44145449c66544e8294ac80a4af2313a468c5" host="localhost" Apr 28 01:15:19.778176 containerd[1456]: 2026-04-28 01:15:19.751 [INFO][4542] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.fb4883db6b5dc58af62fa07923e44145449c66544e8294ac80a4af2313a468c5 Apr 28 01:15:19.778176 containerd[1456]: 2026-04-28 01:15:19.755 [INFO][4542] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fb4883db6b5dc58af62fa07923e44145449c66544e8294ac80a4af2313a468c5" host="localhost" Apr 28 01:15:19.778176 containerd[1456]: 2026-04-28 01:15:19.760 [INFO][4542] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.fb4883db6b5dc58af62fa07923e44145449c66544e8294ac80a4af2313a468c5" host="localhost" Apr 28 01:15:19.778176 containerd[1456]: 2026-04-28 01:15:19.760 [INFO][4542] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.fb4883db6b5dc58af62fa07923e44145449c66544e8294ac80a4af2313a468c5" host="localhost" Apr 28 01:15:19.778176 containerd[1456]: 2026-04-28 01:15:19.760 [INFO][4542] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 01:15:19.778176 containerd[1456]: 2026-04-28 01:15:19.760 [INFO][4542] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="fb4883db6b5dc58af62fa07923e44145449c66544e8294ac80a4af2313a468c5" HandleID="k8s-pod-network.fb4883db6b5dc58af62fa07923e44145449c66544e8294ac80a4af2313a468c5" Workload="localhost-k8s-calico--apiserver--59c6777c8b--mc98s-eth0" Apr 28 01:15:19.778651 containerd[1456]: 2026-04-28 01:15:19.762 [INFO][4528] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fb4883db6b5dc58af62fa07923e44145449c66544e8294ac80a4af2313a468c5" Namespace="calico-system" Pod="calico-apiserver-59c6777c8b-mc98s" WorkloadEndpoint="localhost-k8s-calico--apiserver--59c6777c8b--mc98s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59c6777c8b--mc98s-eth0", GenerateName:"calico-apiserver-59c6777c8b-", Namespace:"calico-system", SelfLink:"", UID:"d6b226a2-c761-4a32-8ac6-230b897faf20", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 1, 14, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59c6777c8b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-59c6777c8b-mc98s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calic25d340f709", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 01:15:19.778651 containerd[1456]: 2026-04-28 01:15:19.762 [INFO][4528] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="fb4883db6b5dc58af62fa07923e44145449c66544e8294ac80a4af2313a468c5" Namespace="calico-system" Pod="calico-apiserver-59c6777c8b-mc98s" WorkloadEndpoint="localhost-k8s-calico--apiserver--59c6777c8b--mc98s-eth0" Apr 28 01:15:19.778651 containerd[1456]: 2026-04-28 01:15:19.762 [INFO][4528] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic25d340f709 ContainerID="fb4883db6b5dc58af62fa07923e44145449c66544e8294ac80a4af2313a468c5" Namespace="calico-system" Pod="calico-apiserver-59c6777c8b-mc98s" WorkloadEndpoint="localhost-k8s-calico--apiserver--59c6777c8b--mc98s-eth0" Apr 28 01:15:19.778651 containerd[1456]: 2026-04-28 01:15:19.764 [INFO][4528] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fb4883db6b5dc58af62fa07923e44145449c66544e8294ac80a4af2313a468c5" Namespace="calico-system" Pod="calico-apiserver-59c6777c8b-mc98s" WorkloadEndpoint="localhost-k8s-calico--apiserver--59c6777c8b--mc98s-eth0" Apr 28 01:15:19.778651 containerd[1456]: 2026-04-28 01:15:19.766 [INFO][4528] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="fb4883db6b5dc58af62fa07923e44145449c66544e8294ac80a4af2313a468c5" Namespace="calico-system" Pod="calico-apiserver-59c6777c8b-mc98s" WorkloadEndpoint="localhost-k8s-calico--apiserver--59c6777c8b--mc98s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59c6777c8b--mc98s-eth0", GenerateName:"calico-apiserver-59c6777c8b-", Namespace:"calico-system", SelfLink:"", UID:"d6b226a2-c761-4a32-8ac6-230b897faf20", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 1, 14, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59c6777c8b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fb4883db6b5dc58af62fa07923e44145449c66544e8294ac80a4af2313a468c5", Pod:"calico-apiserver-59c6777c8b-mc98s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calic25d340f709", MAC:"7e:2d:a5:f1:ec:d4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 01:15:19.778651 containerd[1456]: 2026-04-28 01:15:19.775 [INFO][4528] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="fb4883db6b5dc58af62fa07923e44145449c66544e8294ac80a4af2313a468c5" Namespace="calico-system" Pod="calico-apiserver-59c6777c8b-mc98s" WorkloadEndpoint="localhost-k8s-calico--apiserver--59c6777c8b--mc98s-eth0" Apr 28 01:15:19.794432 containerd[1456]: time="2026-04-28T01:15:19.794323994Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 01:15:19.794678 containerd[1456]: time="2026-04-28T01:15:19.794572467Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 01:15:19.795488 containerd[1456]: time="2026-04-28T01:15:19.795279044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:15:19.795488 containerd[1456]: time="2026-04-28T01:15:19.795362949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:15:19.816371 systemd[1]: Started cri-containerd-fb4883db6b5dc58af62fa07923e44145449c66544e8294ac80a4af2313a468c5.scope - libcontainer container fb4883db6b5dc58af62fa07923e44145449c66544e8294ac80a4af2313a468c5. 
Apr 28 01:15:19.825264 systemd-resolved[1375]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 28 01:15:19.850178 containerd[1456]: time="2026-04-28T01:15:19.849188931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59c6777c8b-mc98s,Uid:d6b226a2-c761-4a32-8ac6-230b897faf20,Namespace:calico-system,Attempt:1,} returns sandbox id \"fb4883db6b5dc58af62fa07923e44145449c66544e8294ac80a4af2313a468c5\"" Apr 28 01:15:19.851357 containerd[1456]: time="2026-04-28T01:15:19.851319201Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.5\"" Apr 28 01:15:20.585125 containerd[1456]: time="2026-04-28T01:15:20.584666370Z" level=info msg="StopPodSandbox for \"1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551\"" Apr 28 01:15:20.585125 containerd[1456]: time="2026-04-28T01:15:20.584730282Z" level=info msg="StopPodSandbox for \"aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127\"" Apr 28 01:15:20.680018 containerd[1456]: 2026-04-28 01:15:20.642 [INFO][4624] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551" Apr 28 01:15:20.680018 containerd[1456]: 2026-04-28 01:15:20.642 [INFO][4624] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551" iface="eth0" netns="/var/run/netns/cni-f5d5ff4d-8b13-0e51-c5de-128348daac5d" Apr 28 01:15:20.680018 containerd[1456]: 2026-04-28 01:15:20.642 [INFO][4624] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551" iface="eth0" netns="/var/run/netns/cni-f5d5ff4d-8b13-0e51-c5de-128348daac5d" Apr 28 01:15:20.680018 containerd[1456]: 2026-04-28 01:15:20.642 [INFO][4624] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551" iface="eth0" netns="/var/run/netns/cni-f5d5ff4d-8b13-0e51-c5de-128348daac5d" Apr 28 01:15:20.680018 containerd[1456]: 2026-04-28 01:15:20.642 [INFO][4624] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551" Apr 28 01:15:20.680018 containerd[1456]: 2026-04-28 01:15:20.642 [INFO][4624] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551" Apr 28 01:15:20.680018 containerd[1456]: 2026-04-28 01:15:20.666 [INFO][4640] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551" HandleID="k8s-pod-network.1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551" Workload="localhost-k8s-coredns--66bc5c9577--jkglw-eth0" Apr 28 01:15:20.680018 containerd[1456]: 2026-04-28 01:15:20.666 [INFO][4640] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 01:15:20.680018 containerd[1456]: 2026-04-28 01:15:20.666 [INFO][4640] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 01:15:20.680018 containerd[1456]: 2026-04-28 01:15:20.675 [WARNING][4640] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551" HandleID="k8s-pod-network.1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551" Workload="localhost-k8s-coredns--66bc5c9577--jkglw-eth0" Apr 28 01:15:20.680018 containerd[1456]: 2026-04-28 01:15:20.675 [INFO][4640] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551" HandleID="k8s-pod-network.1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551" Workload="localhost-k8s-coredns--66bc5c9577--jkglw-eth0" Apr 28 01:15:20.680018 containerd[1456]: 2026-04-28 01:15:20.676 [INFO][4640] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 01:15:20.680018 containerd[1456]: 2026-04-28 01:15:20.678 [INFO][4624] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551" Apr 28 01:15:20.682254 containerd[1456]: time="2026-04-28T01:15:20.682141775Z" level=info msg="TearDown network for sandbox \"1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551\" successfully" Apr 28 01:15:20.682254 containerd[1456]: time="2026-04-28T01:15:20.682182551Z" level=info msg="StopPodSandbox for \"1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551\" returns successfully" Apr 28 01:15:20.683658 systemd[1]: run-netns-cni\x2df5d5ff4d\x2d8b13\x2d0e51\x2dc5de\x2d128348daac5d.mount: Deactivated successfully. 
Apr 28 01:15:20.687015 kubelet[2502]: E0428 01:15:20.686933 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:15:20.687632 containerd[1456]: time="2026-04-28T01:15:20.687579993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-jkglw,Uid:adb579ea-e7d0-4c18-a435-e777162b9b49,Namespace:kube-system,Attempt:1,}" Apr 28 01:15:20.692905 containerd[1456]: 2026-04-28 01:15:20.642 [INFO][4620] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127" Apr 28 01:15:20.692905 containerd[1456]: 2026-04-28 01:15:20.642 [INFO][4620] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127" iface="eth0" netns="/var/run/netns/cni-4dca6370-91bd-6362-754d-9bd2b616c956" Apr 28 01:15:20.692905 containerd[1456]: 2026-04-28 01:15:20.642 [INFO][4620] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127" iface="eth0" netns="/var/run/netns/cni-4dca6370-91bd-6362-754d-9bd2b616c956" Apr 28 01:15:20.692905 containerd[1456]: 2026-04-28 01:15:20.642 [INFO][4620] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127" iface="eth0" netns="/var/run/netns/cni-4dca6370-91bd-6362-754d-9bd2b616c956" Apr 28 01:15:20.692905 containerd[1456]: 2026-04-28 01:15:20.642 [INFO][4620] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127" Apr 28 01:15:20.692905 containerd[1456]: 2026-04-28 01:15:20.642 [INFO][4620] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127" Apr 28 01:15:20.692905 containerd[1456]: 2026-04-28 01:15:20.672 [INFO][4642] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127" HandleID="k8s-pod-network.aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127" Workload="localhost-k8s-calico--apiserver--59c6777c8b--8xfxq-eth0" Apr 28 01:15:20.692905 containerd[1456]: 2026-04-28 01:15:20.672 [INFO][4642] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 01:15:20.692905 containerd[1456]: 2026-04-28 01:15:20.676 [INFO][4642] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 01:15:20.692905 containerd[1456]: 2026-04-28 01:15:20.686 [WARNING][4642] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127" HandleID="k8s-pod-network.aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127" Workload="localhost-k8s-calico--apiserver--59c6777c8b--8xfxq-eth0" Apr 28 01:15:20.692905 containerd[1456]: 2026-04-28 01:15:20.686 [INFO][4642] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127" HandleID="k8s-pod-network.aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127" Workload="localhost-k8s-calico--apiserver--59c6777c8b--8xfxq-eth0" Apr 28 01:15:20.692905 containerd[1456]: 2026-04-28 01:15:20.688 [INFO][4642] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 01:15:20.692905 containerd[1456]: 2026-04-28 01:15:20.690 [INFO][4620] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127" Apr 28 01:15:20.693639 containerd[1456]: time="2026-04-28T01:15:20.693464618Z" level=info msg="TearDown network for sandbox \"aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127\" successfully" Apr 28 01:15:20.693639 containerd[1456]: time="2026-04-28T01:15:20.693495425Z" level=info msg="StopPodSandbox for \"aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127\" returns successfully" Apr 28 01:15:20.696843 containerd[1456]: time="2026-04-28T01:15:20.696777229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59c6777c8b-8xfxq,Uid:3581e29b-6797-4b71-bcef-fde26f9a5731,Namespace:calico-system,Attempt:1,}" Apr 28 01:15:20.697617 systemd[1]: run-netns-cni\x2d4dca6370\x2d91bd\x2d6362\x2d754d\x2d9bd2b616c956.mount: Deactivated successfully. 
Apr 28 01:15:20.800778 systemd-networkd[1373]: cali3c24f376b9e: Link UP Apr 28 01:15:20.802065 systemd-networkd[1373]: cali3c24f376b9e: Gained carrier Apr 28 01:15:20.814145 containerd[1456]: 2026-04-28 01:15:20.734 [INFO][4658] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--jkglw-eth0 coredns-66bc5c9577- kube-system adb579ea-e7d0-4c18-a435-e777162b9b49 1053 0 2026-04-28 01:14:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-jkglw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3c24f376b9e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="0c203d0a31637961fdae9cfdf99c18c8a9ef8e4f7cd5bc201d2927fa400ca179" Namespace="kube-system" Pod="coredns-66bc5c9577-jkglw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--jkglw-" Apr 28 01:15:20.814145 containerd[1456]: 2026-04-28 01:15:20.734 [INFO][4658] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0c203d0a31637961fdae9cfdf99c18c8a9ef8e4f7cd5bc201d2927fa400ca179" Namespace="kube-system" Pod="coredns-66bc5c9577-jkglw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--jkglw-eth0" Apr 28 01:15:20.814145 containerd[1456]: 2026-04-28 01:15:20.761 [INFO][4687] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0c203d0a31637961fdae9cfdf99c18c8a9ef8e4f7cd5bc201d2927fa400ca179" HandleID="k8s-pod-network.0c203d0a31637961fdae9cfdf99c18c8a9ef8e4f7cd5bc201d2927fa400ca179" Workload="localhost-k8s-coredns--66bc5c9577--jkglw-eth0" Apr 28 01:15:20.814145 containerd[1456]: 2026-04-28 01:15:20.771 [INFO][4687] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="0c203d0a31637961fdae9cfdf99c18c8a9ef8e4f7cd5bc201d2927fa400ca179" HandleID="k8s-pod-network.0c203d0a31637961fdae9cfdf99c18c8a9ef8e4f7cd5bc201d2927fa400ca179" Workload="localhost-k8s-coredns--66bc5c9577--jkglw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00013db00), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-jkglw", "timestamp":"2026-04-28 01:15:20.761123671 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000310f20)} Apr 28 01:15:20.814145 containerd[1456]: 2026-04-28 01:15:20.771 [INFO][4687] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 01:15:20.814145 containerd[1456]: 2026-04-28 01:15:20.771 [INFO][4687] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 28 01:15:20.814145 containerd[1456]: 2026-04-28 01:15:20.771 [INFO][4687] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 28 01:15:20.814145 containerd[1456]: 2026-04-28 01:15:20.774 [INFO][4687] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0c203d0a31637961fdae9cfdf99c18c8a9ef8e4f7cd5bc201d2927fa400ca179" host="localhost" Apr 28 01:15:20.814145 containerd[1456]: 2026-04-28 01:15:20.777 [INFO][4687] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 28 01:15:20.814145 containerd[1456]: 2026-04-28 01:15:20.781 [INFO][4687] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 28 01:15:20.814145 containerd[1456]: 2026-04-28 01:15:20.782 [INFO][4687] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 28 01:15:20.814145 containerd[1456]: 2026-04-28 01:15:20.784 [INFO][4687] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 28 01:15:20.814145 containerd[1456]: 2026-04-28 01:15:20.784 [INFO][4687] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0c203d0a31637961fdae9cfdf99c18c8a9ef8e4f7cd5bc201d2927fa400ca179" host="localhost" Apr 28 01:15:20.814145 containerd[1456]: 2026-04-28 01:15:20.786 [INFO][4687] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0c203d0a31637961fdae9cfdf99c18c8a9ef8e4f7cd5bc201d2927fa400ca179 Apr 28 01:15:20.814145 containerd[1456]: 2026-04-28 01:15:20.790 [INFO][4687] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0c203d0a31637961fdae9cfdf99c18c8a9ef8e4f7cd5bc201d2927fa400ca179" host="localhost" Apr 28 01:15:20.814145 containerd[1456]: 2026-04-28 01:15:20.795 [INFO][4687] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.0c203d0a31637961fdae9cfdf99c18c8a9ef8e4f7cd5bc201d2927fa400ca179" host="localhost" Apr 28 01:15:20.814145 containerd[1456]: 2026-04-28 01:15:20.795 [INFO][4687] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.0c203d0a31637961fdae9cfdf99c18c8a9ef8e4f7cd5bc201d2927fa400ca179" host="localhost" Apr 28 01:15:20.814145 containerd[1456]: 2026-04-28 01:15:20.795 [INFO][4687] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 01:15:20.814145 containerd[1456]: 2026-04-28 01:15:20.795 [INFO][4687] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="0c203d0a31637961fdae9cfdf99c18c8a9ef8e4f7cd5bc201d2927fa400ca179" HandleID="k8s-pod-network.0c203d0a31637961fdae9cfdf99c18c8a9ef8e4f7cd5bc201d2927fa400ca179" Workload="localhost-k8s-coredns--66bc5c9577--jkglw-eth0" Apr 28 01:15:20.814587 containerd[1456]: 2026-04-28 01:15:20.797 [INFO][4658] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0c203d0a31637961fdae9cfdf99c18c8a9ef8e4f7cd5bc201d2927fa400ca179" Namespace="kube-system" Pod="coredns-66bc5c9577-jkglw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--jkglw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--jkglw-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"adb579ea-e7d0-4c18-a435-e777162b9b49", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 1, 14, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-jkglw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c24f376b9e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 01:15:20.814587 containerd[1456]: 2026-04-28 01:15:20.798 [INFO][4658] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="0c203d0a31637961fdae9cfdf99c18c8a9ef8e4f7cd5bc201d2927fa400ca179" Namespace="kube-system" Pod="coredns-66bc5c9577-jkglw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--jkglw-eth0" Apr 28 01:15:20.814587 containerd[1456]: 2026-04-28 01:15:20.798 [INFO][4658] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3c24f376b9e ContainerID="0c203d0a31637961fdae9cfdf99c18c8a9ef8e4f7cd5bc201d2927fa400ca179" Namespace="kube-system" Pod="coredns-66bc5c9577-jkglw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--jkglw-eth0" Apr 28 
01:15:20.814587 containerd[1456]: 2026-04-28 01:15:20.801 [INFO][4658] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0c203d0a31637961fdae9cfdf99c18c8a9ef8e4f7cd5bc201d2927fa400ca179" Namespace="kube-system" Pod="coredns-66bc5c9577-jkglw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--jkglw-eth0" Apr 28 01:15:20.814587 containerd[1456]: 2026-04-28 01:15:20.802 [INFO][4658] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0c203d0a31637961fdae9cfdf99c18c8a9ef8e4f7cd5bc201d2927fa400ca179" Namespace="kube-system" Pod="coredns-66bc5c9577-jkglw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--jkglw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--jkglw-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"adb579ea-e7d0-4c18-a435-e777162b9b49", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 1, 14, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0c203d0a31637961fdae9cfdf99c18c8a9ef8e4f7cd5bc201d2927fa400ca179", Pod:"coredns-66bc5c9577-jkglw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c24f376b9e", 
MAC:"c6:3e:d1:69:52:2c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 01:15:20.814587 containerd[1456]: 2026-04-28 01:15:20.810 [INFO][4658] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0c203d0a31637961fdae9cfdf99c18c8a9ef8e4f7cd5bc201d2927fa400ca179" Namespace="kube-system" Pod="coredns-66bc5c9577-jkglw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--jkglw-eth0" Apr 28 01:15:20.832565 containerd[1456]: time="2026-04-28T01:15:20.832473882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 01:15:20.832565 containerd[1456]: time="2026-04-28T01:15:20.832532361Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 01:15:20.832565 containerd[1456]: time="2026-04-28T01:15:20.832547294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:15:20.832914 containerd[1456]: time="2026-04-28T01:15:20.832605017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:15:20.851158 systemd[1]: Started cri-containerd-0c203d0a31637961fdae9cfdf99c18c8a9ef8e4f7cd5bc201d2927fa400ca179.scope - libcontainer container 0c203d0a31637961fdae9cfdf99c18c8a9ef8e4f7cd5bc201d2927fa400ca179. Apr 28 01:15:20.861458 systemd-resolved[1375]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 28 01:15:20.889003 containerd[1456]: time="2026-04-28T01:15:20.888845852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-jkglw,Uid:adb579ea-e7d0-4c18-a435-e777162b9b49,Namespace:kube-system,Attempt:1,} returns sandbox id \"0c203d0a31637961fdae9cfdf99c18c8a9ef8e4f7cd5bc201d2927fa400ca179\"" Apr 28 01:15:20.890333 kubelet[2502]: E0428 01:15:20.890012 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:15:20.895512 containerd[1456]: time="2026-04-28T01:15:20.895446996Z" level=info msg="CreateContainer within sandbox \"0c203d0a31637961fdae9cfdf99c18c8a9ef8e4f7cd5bc201d2927fa400ca179\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 28 01:15:20.910175 systemd-networkd[1373]: cali15c4e0f483d: Link UP Apr 28 01:15:20.911305 systemd-networkd[1373]: cali15c4e0f483d: Gained carrier Apr 28 01:15:20.914378 containerd[1456]: time="2026-04-28T01:15:20.914323377Z" level=info msg="CreateContainer within sandbox \"0c203d0a31637961fdae9cfdf99c18c8a9ef8e4f7cd5bc201d2927fa400ca179\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"872f03f5154a3fdbc8cd410a5da8e370eb5dc7e98d80c5dcc5f7fe707e9c6931\"" Apr 28 01:15:20.916183 containerd[1456]: time="2026-04-28T01:15:20.915815145Z" level=info msg="StartContainer for \"872f03f5154a3fdbc8cd410a5da8e370eb5dc7e98d80c5dcc5f7fe707e9c6931\"" Apr 28 01:15:20.923174 containerd[1456]: 2026-04-28 01:15:20.744 [INFO][4672] 
cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--59c6777c8b--8xfxq-eth0 calico-apiserver-59c6777c8b- calico-system 3581e29b-6797-4b71-bcef-fde26f9a5731 1052 0 2026-04-28 01:14:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59c6777c8b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-59c6777c8b-8xfxq eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali15c4e0f483d [] [] }} ContainerID="468e9a94a7ac43e0242f7e2ddb8760faa2eb2a721afec0cc5a25d429d2b4e135" Namespace="calico-system" Pod="calico-apiserver-59c6777c8b-8xfxq" WorkloadEndpoint="localhost-k8s-calico--apiserver--59c6777c8b--8xfxq-" Apr 28 01:15:20.923174 containerd[1456]: 2026-04-28 01:15:20.744 [INFO][4672] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="468e9a94a7ac43e0242f7e2ddb8760faa2eb2a721afec0cc5a25d429d2b4e135" Namespace="calico-system" Pod="calico-apiserver-59c6777c8b-8xfxq" WorkloadEndpoint="localhost-k8s-calico--apiserver--59c6777c8b--8xfxq-eth0" Apr 28 01:15:20.923174 containerd[1456]: 2026-04-28 01:15:20.768 [INFO][4695] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="468e9a94a7ac43e0242f7e2ddb8760faa2eb2a721afec0cc5a25d429d2b4e135" HandleID="k8s-pod-network.468e9a94a7ac43e0242f7e2ddb8760faa2eb2a721afec0cc5a25d429d2b4e135" Workload="localhost-k8s-calico--apiserver--59c6777c8b--8xfxq-eth0" Apr 28 01:15:20.923174 containerd[1456]: 2026-04-28 01:15:20.772 [INFO][4695] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="468e9a94a7ac43e0242f7e2ddb8760faa2eb2a721afec0cc5a25d429d2b4e135" HandleID="k8s-pod-network.468e9a94a7ac43e0242f7e2ddb8760faa2eb2a721afec0cc5a25d429d2b4e135" 
Workload="localhost-k8s-calico--apiserver--59c6777c8b--8xfxq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000197ba0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-59c6777c8b-8xfxq", "timestamp":"2026-04-28 01:15:20.768158124 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000502dc0)} Apr 28 01:15:20.923174 containerd[1456]: 2026-04-28 01:15:20.773 [INFO][4695] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 01:15:20.923174 containerd[1456]: 2026-04-28 01:15:20.795 [INFO][4695] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 01:15:20.923174 containerd[1456]: 2026-04-28 01:15:20.795 [INFO][4695] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 28 01:15:20.923174 containerd[1456]: 2026-04-28 01:15:20.876 [INFO][4695] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.468e9a94a7ac43e0242f7e2ddb8760faa2eb2a721afec0cc5a25d429d2b4e135" host="localhost" Apr 28 01:15:20.923174 containerd[1456]: 2026-04-28 01:15:20.882 [INFO][4695] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 28 01:15:20.923174 containerd[1456]: 2026-04-28 01:15:20.887 [INFO][4695] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 28 01:15:20.923174 containerd[1456]: 2026-04-28 01:15:20.890 [INFO][4695] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 28 01:15:20.923174 containerd[1456]: 2026-04-28 01:15:20.892 [INFO][4695] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 28 01:15:20.923174 containerd[1456]: 2026-04-28 01:15:20.892 [INFO][4695] ipam/ipam.go 1245: 
Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.468e9a94a7ac43e0242f7e2ddb8760faa2eb2a721afec0cc5a25d429d2b4e135" host="localhost" Apr 28 01:15:20.923174 containerd[1456]: 2026-04-28 01:15:20.894 [INFO][4695] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.468e9a94a7ac43e0242f7e2ddb8760faa2eb2a721afec0cc5a25d429d2b4e135 Apr 28 01:15:20.923174 containerd[1456]: 2026-04-28 01:15:20.898 [INFO][4695] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.468e9a94a7ac43e0242f7e2ddb8760faa2eb2a721afec0cc5a25d429d2b4e135" host="localhost" Apr 28 01:15:20.923174 containerd[1456]: 2026-04-28 01:15:20.906 [INFO][4695] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.468e9a94a7ac43e0242f7e2ddb8760faa2eb2a721afec0cc5a25d429d2b4e135" host="localhost" Apr 28 01:15:20.923174 containerd[1456]: 2026-04-28 01:15:20.906 [INFO][4695] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.468e9a94a7ac43e0242f7e2ddb8760faa2eb2a721afec0cc5a25d429d2b4e135" host="localhost" Apr 28 01:15:20.923174 containerd[1456]: 2026-04-28 01:15:20.906 [INFO][4695] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 28 01:15:20.923174 containerd[1456]: 2026-04-28 01:15:20.906 [INFO][4695] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="468e9a94a7ac43e0242f7e2ddb8760faa2eb2a721afec0cc5a25d429d2b4e135" HandleID="k8s-pod-network.468e9a94a7ac43e0242f7e2ddb8760faa2eb2a721afec0cc5a25d429d2b4e135" Workload="localhost-k8s-calico--apiserver--59c6777c8b--8xfxq-eth0" Apr 28 01:15:20.924280 containerd[1456]: 2026-04-28 01:15:20.908 [INFO][4672] cni-plugin/k8s.go 418: Populated endpoint ContainerID="468e9a94a7ac43e0242f7e2ddb8760faa2eb2a721afec0cc5a25d429d2b4e135" Namespace="calico-system" Pod="calico-apiserver-59c6777c8b-8xfxq" WorkloadEndpoint="localhost-k8s-calico--apiserver--59c6777c8b--8xfxq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59c6777c8b--8xfxq-eth0", GenerateName:"calico-apiserver-59c6777c8b-", Namespace:"calico-system", SelfLink:"", UID:"3581e29b-6797-4b71-bcef-fde26f9a5731", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 1, 14, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59c6777c8b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-59c6777c8b-8xfxq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali15c4e0f483d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 01:15:20.924280 containerd[1456]: 2026-04-28 01:15:20.908 [INFO][4672] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="468e9a94a7ac43e0242f7e2ddb8760faa2eb2a721afec0cc5a25d429d2b4e135" Namespace="calico-system" Pod="calico-apiserver-59c6777c8b-8xfxq" WorkloadEndpoint="localhost-k8s-calico--apiserver--59c6777c8b--8xfxq-eth0" Apr 28 01:15:20.924280 containerd[1456]: 2026-04-28 01:15:20.908 [INFO][4672] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali15c4e0f483d ContainerID="468e9a94a7ac43e0242f7e2ddb8760faa2eb2a721afec0cc5a25d429d2b4e135" Namespace="calico-system" Pod="calico-apiserver-59c6777c8b-8xfxq" WorkloadEndpoint="localhost-k8s-calico--apiserver--59c6777c8b--8xfxq-eth0" Apr 28 01:15:20.924280 containerd[1456]: 2026-04-28 01:15:20.911 [INFO][4672] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="468e9a94a7ac43e0242f7e2ddb8760faa2eb2a721afec0cc5a25d429d2b4e135" Namespace="calico-system" Pod="calico-apiserver-59c6777c8b-8xfxq" WorkloadEndpoint="localhost-k8s-calico--apiserver--59c6777c8b--8xfxq-eth0" Apr 28 01:15:20.924280 containerd[1456]: 2026-04-28 01:15:20.911 [INFO][4672] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="468e9a94a7ac43e0242f7e2ddb8760faa2eb2a721afec0cc5a25d429d2b4e135" Namespace="calico-system" Pod="calico-apiserver-59c6777c8b-8xfxq" WorkloadEndpoint="localhost-k8s-calico--apiserver--59c6777c8b--8xfxq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59c6777c8b--8xfxq-eth0", GenerateName:"calico-apiserver-59c6777c8b-", Namespace:"calico-system", 
SelfLink:"", UID:"3581e29b-6797-4b71-bcef-fde26f9a5731", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 1, 14, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59c6777c8b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"468e9a94a7ac43e0242f7e2ddb8760faa2eb2a721afec0cc5a25d429d2b4e135", Pod:"calico-apiserver-59c6777c8b-8xfxq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali15c4e0f483d", MAC:"2e:6c:13:1e:7a:f1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 01:15:20.924280 containerd[1456]: 2026-04-28 01:15:20.920 [INFO][4672] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="468e9a94a7ac43e0242f7e2ddb8760faa2eb2a721afec0cc5a25d429d2b4e135" Namespace="calico-system" Pod="calico-apiserver-59c6777c8b-8xfxq" WorkloadEndpoint="localhost-k8s-calico--apiserver--59c6777c8b--8xfxq-eth0" Apr 28 01:15:20.949066 containerd[1456]: time="2026-04-28T01:15:20.948866994Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 01:15:20.949066 containerd[1456]: time="2026-04-28T01:15:20.948937159Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 01:15:20.949066 containerd[1456]: time="2026-04-28T01:15:20.949010344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:15:20.949594 containerd[1456]: time="2026-04-28T01:15:20.949418773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:15:20.955185 systemd[1]: Started cri-containerd-872f03f5154a3fdbc8cd410a5da8e370eb5dc7e98d80c5dcc5f7fe707e9c6931.scope - libcontainer container 872f03f5154a3fdbc8cd410a5da8e370eb5dc7e98d80c5dcc5f7fe707e9c6931. Apr 28 01:15:20.969187 systemd[1]: Started cri-containerd-468e9a94a7ac43e0242f7e2ddb8760faa2eb2a721afec0cc5a25d429d2b4e135.scope - libcontainer container 468e9a94a7ac43e0242f7e2ddb8760faa2eb2a721afec0cc5a25d429d2b4e135. Apr 28 01:15:20.980914 systemd-resolved[1375]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 28 01:15:20.984812 containerd[1456]: time="2026-04-28T01:15:20.984663421Z" level=info msg="StartContainer for \"872f03f5154a3fdbc8cd410a5da8e370eb5dc7e98d80c5dcc5f7fe707e9c6931\" returns successfully" Apr 28 01:15:21.019255 containerd[1456]: time="2026-04-28T01:15:21.018304006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59c6777c8b-8xfxq,Uid:3581e29b-6797-4b71-bcef-fde26f9a5731,Namespace:calico-system,Attempt:1,} returns sandbox id \"468e9a94a7ac43e0242f7e2ddb8760faa2eb2a721afec0cc5a25d429d2b4e135\"" Apr 28 01:15:21.185388 systemd[1]: Started sshd@8-10.0.0.153:22-10.0.0.1:57332.service - OpenSSH per-connection server daemon (10.0.0.1:57332). 
Apr 28 01:15:21.226439 sshd[4853]: Accepted publickey for core from 10.0.0.1 port 57332 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps Apr 28 01:15:21.227732 sshd[4853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:15:21.231616 systemd-logind[1435]: New session 9 of user core. Apr 28 01:15:21.241511 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 28 01:15:21.397347 sshd[4853]: pam_unix(sshd:session): session closed for user core Apr 28 01:15:21.400432 systemd[1]: sshd@8-10.0.0.153:22-10.0.0.1:57332.service: Deactivated successfully. Apr 28 01:15:21.401739 systemd[1]: session-9.scope: Deactivated successfully. Apr 28 01:15:21.402480 systemd-logind[1435]: Session 9 logged out. Waiting for processes to exit. Apr 28 01:15:21.403813 systemd-logind[1435]: Removed session 9. Apr 28 01:15:21.592206 systemd-networkd[1373]: calic25d340f709: Gained IPv6LL Apr 28 01:15:21.848865 kubelet[2502]: E0428 01:15:21.848651 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:15:21.905213 kubelet[2502]: I0428 01:15:21.894930 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-jkglw" podStartSLOduration=41.894917617 podStartE2EDuration="41.894917617s" podCreationTimestamp="2026-04-28 01:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 01:15:21.893450754 +0000 UTC m=+48.423812817" watchObservedRunningTime="2026-04-28 01:15:21.894917617 +0000 UTC m=+48.425279687" Apr 28 01:15:21.912706 systemd-networkd[1373]: cali3c24f376b9e: Gained IPv6LL Apr 28 01:15:22.167220 systemd-networkd[1373]: cali15c4e0f483d: Gained IPv6LL Apr 28 01:15:22.584912 containerd[1456]: time="2026-04-28T01:15:22.584816394Z" level=info msg="StopPodSandbox 
for \"55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9\"" Apr 28 01:15:22.687830 containerd[1456]: 2026-04-28 01:15:22.641 [INFO][4886] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9" Apr 28 01:15:22.687830 containerd[1456]: 2026-04-28 01:15:22.641 [INFO][4886] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9" iface="eth0" netns="/var/run/netns/cni-95f02e13-df53-665c-baa4-c14bad40f2bc" Apr 28 01:15:22.687830 containerd[1456]: 2026-04-28 01:15:22.641 [INFO][4886] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9" iface="eth0" netns="/var/run/netns/cni-95f02e13-df53-665c-baa4-c14bad40f2bc" Apr 28 01:15:22.687830 containerd[1456]: 2026-04-28 01:15:22.642 [INFO][4886] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9" iface="eth0" netns="/var/run/netns/cni-95f02e13-df53-665c-baa4-c14bad40f2bc" Apr 28 01:15:22.687830 containerd[1456]: 2026-04-28 01:15:22.642 [INFO][4886] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9" Apr 28 01:15:22.687830 containerd[1456]: 2026-04-28 01:15:22.642 [INFO][4886] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9" Apr 28 01:15:22.687830 containerd[1456]: 2026-04-28 01:15:22.676 [INFO][4894] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9" HandleID="k8s-pod-network.55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9" Workload="localhost-k8s-coredns--66bc5c9577--srt2r-eth0" Apr 28 01:15:22.687830 containerd[1456]: 2026-04-28 01:15:22.676 [INFO][4894] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 01:15:22.687830 containerd[1456]: 2026-04-28 01:15:22.676 [INFO][4894] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 01:15:22.687830 containerd[1456]: 2026-04-28 01:15:22.682 [WARNING][4894] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9" HandleID="k8s-pod-network.55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9" Workload="localhost-k8s-coredns--66bc5c9577--srt2r-eth0" Apr 28 01:15:22.687830 containerd[1456]: 2026-04-28 01:15:22.683 [INFO][4894] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9" HandleID="k8s-pod-network.55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9" Workload="localhost-k8s-coredns--66bc5c9577--srt2r-eth0" Apr 28 01:15:22.687830 containerd[1456]: 2026-04-28 01:15:22.684 [INFO][4894] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 01:15:22.687830 containerd[1456]: 2026-04-28 01:15:22.686 [INFO][4886] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9" Apr 28 01:15:22.688569 containerd[1456]: time="2026-04-28T01:15:22.688144722Z" level=info msg="TearDown network for sandbox \"55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9\" successfully" Apr 28 01:15:22.688569 containerd[1456]: time="2026-04-28T01:15:22.688167656Z" level=info msg="StopPodSandbox for \"55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9\" returns successfully" Apr 28 01:15:22.690673 systemd[1]: run-netns-cni\x2d95f02e13\x2ddf53\x2d665c\x2dbaa4\x2dc14bad40f2bc.mount: Deactivated successfully. 
Apr 28 01:15:22.692421 kubelet[2502]: E0428 01:15:22.692378 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:15:22.693079 containerd[1456]: time="2026-04-28T01:15:22.693009455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-srt2r,Uid:1a56c2f2-44ed-42f1-8584-fb82c3e57985,Namespace:kube-system,Attempt:1,}" Apr 28 01:15:22.804384 systemd-networkd[1373]: cali459d3219c08: Link UP Apr 28 01:15:22.806037 systemd-networkd[1373]: cali459d3219c08: Gained carrier Apr 28 01:15:22.818910 containerd[1456]: 2026-04-28 01:15:22.740 [INFO][4907] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--srt2r-eth0 coredns-66bc5c9577- kube-system 1a56c2f2-44ed-42f1-8584-fb82c3e57985 1094 0 2026-04-28 01:14:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-srt2r eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali459d3219c08 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="47c1be337e98c8c1ec49530eded639d854af3f992b93ef4a8b8c36eaddd7187f" Namespace="kube-system" Pod="coredns-66bc5c9577-srt2r" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--srt2r-" Apr 28 01:15:22.818910 containerd[1456]: 2026-04-28 01:15:22.740 [INFO][4907] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="47c1be337e98c8c1ec49530eded639d854af3f992b93ef4a8b8c36eaddd7187f" Namespace="kube-system" Pod="coredns-66bc5c9577-srt2r" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--srt2r-eth0" Apr 28 01:15:22.818910 containerd[1456]: 2026-04-28 01:15:22.765 [INFO][4917] 
ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="47c1be337e98c8c1ec49530eded639d854af3f992b93ef4a8b8c36eaddd7187f" HandleID="k8s-pod-network.47c1be337e98c8c1ec49530eded639d854af3f992b93ef4a8b8c36eaddd7187f" Workload="localhost-k8s-coredns--66bc5c9577--srt2r-eth0" Apr 28 01:15:22.818910 containerd[1456]: 2026-04-28 01:15:22.771 [INFO][4917] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="47c1be337e98c8c1ec49530eded639d854af3f992b93ef4a8b8c36eaddd7187f" HandleID="k8s-pod-network.47c1be337e98c8c1ec49530eded639d854af3f992b93ef4a8b8c36eaddd7187f" Workload="localhost-k8s-coredns--66bc5c9577--srt2r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001b0a60), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-srt2r", "timestamp":"2026-04-28 01:15:22.765595252 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001e4420)} Apr 28 01:15:22.818910 containerd[1456]: 2026-04-28 01:15:22.771 [INFO][4917] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 01:15:22.818910 containerd[1456]: 2026-04-28 01:15:22.771 [INFO][4917] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 28 01:15:22.818910 containerd[1456]: 2026-04-28 01:15:22.771 [INFO][4917] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 28 01:15:22.818910 containerd[1456]: 2026-04-28 01:15:22.773 [INFO][4917] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.47c1be337e98c8c1ec49530eded639d854af3f992b93ef4a8b8c36eaddd7187f" host="localhost" Apr 28 01:15:22.818910 containerd[1456]: 2026-04-28 01:15:22.777 [INFO][4917] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 28 01:15:22.818910 containerd[1456]: 2026-04-28 01:15:22.783 [INFO][4917] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 28 01:15:22.818910 containerd[1456]: 2026-04-28 01:15:22.786 [INFO][4917] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 28 01:15:22.818910 containerd[1456]: 2026-04-28 01:15:22.789 [INFO][4917] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 28 01:15:22.818910 containerd[1456]: 2026-04-28 01:15:22.789 [INFO][4917] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.47c1be337e98c8c1ec49530eded639d854af3f992b93ef4a8b8c36eaddd7187f" host="localhost" Apr 28 01:15:22.818910 containerd[1456]: 2026-04-28 01:15:22.790 [INFO][4917] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.47c1be337e98c8c1ec49530eded639d854af3f992b93ef4a8b8c36eaddd7187f Apr 28 01:15:22.818910 containerd[1456]: 2026-04-28 01:15:22.793 [INFO][4917] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.47c1be337e98c8c1ec49530eded639d854af3f992b93ef4a8b8c36eaddd7187f" host="localhost" Apr 28 01:15:22.818910 containerd[1456]: 2026-04-28 01:15:22.799 [INFO][4917] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.47c1be337e98c8c1ec49530eded639d854af3f992b93ef4a8b8c36eaddd7187f" host="localhost" Apr 28 01:15:22.818910 containerd[1456]: 2026-04-28 01:15:22.800 [INFO][4917] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.47c1be337e98c8c1ec49530eded639d854af3f992b93ef4a8b8c36eaddd7187f" host="localhost" Apr 28 01:15:22.818910 containerd[1456]: 2026-04-28 01:15:22.800 [INFO][4917] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 01:15:22.818910 containerd[1456]: 2026-04-28 01:15:22.800 [INFO][4917] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="47c1be337e98c8c1ec49530eded639d854af3f992b93ef4a8b8c36eaddd7187f" HandleID="k8s-pod-network.47c1be337e98c8c1ec49530eded639d854af3f992b93ef4a8b8c36eaddd7187f" Workload="localhost-k8s-coredns--66bc5c9577--srt2r-eth0" Apr 28 01:15:22.820293 containerd[1456]: 2026-04-28 01:15:22.802 [INFO][4907] cni-plugin/k8s.go 418: Populated endpoint ContainerID="47c1be337e98c8c1ec49530eded639d854af3f992b93ef4a8b8c36eaddd7187f" Namespace="kube-system" Pod="coredns-66bc5c9577-srt2r" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--srt2r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--srt2r-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"1a56c2f2-44ed-42f1-8584-fb82c3e57985", ResourceVersion:"1094", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 1, 14, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-srt2r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali459d3219c08", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 01:15:22.820293 containerd[1456]: 2026-04-28 01:15:22.802 [INFO][4907] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="47c1be337e98c8c1ec49530eded639d854af3f992b93ef4a8b8c36eaddd7187f" Namespace="kube-system" Pod="coredns-66bc5c9577-srt2r" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--srt2r-eth0" Apr 28 01:15:22.820293 containerd[1456]: 2026-04-28 01:15:22.802 [INFO][4907] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali459d3219c08 ContainerID="47c1be337e98c8c1ec49530eded639d854af3f992b93ef4a8b8c36eaddd7187f" Namespace="kube-system" Pod="coredns-66bc5c9577-srt2r" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--srt2r-eth0" Apr 28 
01:15:22.820293 containerd[1456]: 2026-04-28 01:15:22.806 [INFO][4907] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="47c1be337e98c8c1ec49530eded639d854af3f992b93ef4a8b8c36eaddd7187f" Namespace="kube-system" Pod="coredns-66bc5c9577-srt2r" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--srt2r-eth0" Apr 28 01:15:22.820293 containerd[1456]: 2026-04-28 01:15:22.806 [INFO][4907] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="47c1be337e98c8c1ec49530eded639d854af3f992b93ef4a8b8c36eaddd7187f" Namespace="kube-system" Pod="coredns-66bc5c9577-srt2r" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--srt2r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--srt2r-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"1a56c2f2-44ed-42f1-8584-fb82c3e57985", ResourceVersion:"1094", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 1, 14, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"47c1be337e98c8c1ec49530eded639d854af3f992b93ef4a8b8c36eaddd7187f", Pod:"coredns-66bc5c9577-srt2r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali459d3219c08", 
MAC:"a2:e3:4f:36:f8:20", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 01:15:22.820293 containerd[1456]: 2026-04-28 01:15:22.815 [INFO][4907] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="47c1be337e98c8c1ec49530eded639d854af3f992b93ef4a8b8c36eaddd7187f" Namespace="kube-system" Pod="coredns-66bc5c9577-srt2r" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--srt2r-eth0" Apr 28 01:15:22.840285 containerd[1456]: time="2026-04-28T01:15:22.840157258Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 01:15:22.840285 containerd[1456]: time="2026-04-28T01:15:22.840189067Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 01:15:22.840285 containerd[1456]: time="2026-04-28T01:15:22.840196352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:15:22.841625 containerd[1456]: time="2026-04-28T01:15:22.840239867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 01:15:22.850427 kubelet[2502]: E0428 01:15:22.850340 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:15:22.864165 systemd[1]: Started cri-containerd-47c1be337e98c8c1ec49530eded639d854af3f992b93ef4a8b8c36eaddd7187f.scope - libcontainer container 47c1be337e98c8c1ec49530eded639d854af3f992b93ef4a8b8c36eaddd7187f. Apr 28 01:15:22.873668 systemd-resolved[1375]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 28 01:15:22.896871 containerd[1456]: time="2026-04-28T01:15:22.896798485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-srt2r,Uid:1a56c2f2-44ed-42f1-8584-fb82c3e57985,Namespace:kube-system,Attempt:1,} returns sandbox id \"47c1be337e98c8c1ec49530eded639d854af3f992b93ef4a8b8c36eaddd7187f\"" Apr 28 01:15:22.898203 kubelet[2502]: E0428 01:15:22.898186 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:15:22.903574 containerd[1456]: time="2026-04-28T01:15:22.903535557Z" level=info msg="CreateContainer within sandbox \"47c1be337e98c8c1ec49530eded639d854af3f992b93ef4a8b8c36eaddd7187f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 28 01:15:22.919473 containerd[1456]: time="2026-04-28T01:15:22.919411517Z" level=info msg="CreateContainer within sandbox \"47c1be337e98c8c1ec49530eded639d854af3f992b93ef4a8b8c36eaddd7187f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d7dfbe2381dddf9a54840cd7f532a870810448ee12f01a77355eb2f38509d5b5\"" Apr 28 01:15:22.929885 containerd[1456]: time="2026-04-28T01:15:22.929817119Z" level=info msg="StartContainer for \"d7dfbe2381dddf9a54840cd7f532a870810448ee12f01a77355eb2f38509d5b5\"" Apr 
28 01:15:22.964226 systemd[1]: Started cri-containerd-d7dfbe2381dddf9a54840cd7f532a870810448ee12f01a77355eb2f38509d5b5.scope - libcontainer container d7dfbe2381dddf9a54840cd7f532a870810448ee12f01a77355eb2f38509d5b5. Apr 28 01:15:23.008356 containerd[1456]: time="2026-04-28T01:15:23.008314252Z" level=info msg="StartContainer for \"d7dfbe2381dddf9a54840cd7f532a870810448ee12f01a77355eb2f38509d5b5\" returns successfully" Apr 28 01:15:23.105609 containerd[1456]: time="2026-04-28T01:15:23.105480656Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:15:23.106355 containerd[1456]: time="2026-04-28T01:15:23.106277040Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.5: active requests=0, bytes read=46175896" Apr 28 01:15:23.107184 containerd[1456]: time="2026-04-28T01:15:23.107142264Z" level=info msg="ImageCreate event name:\"sha256:3ba7bd8ea381d6c35b8cc8b5250ae89b7e94ecac0c672dca8a449986e5205cb1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:15:23.109153 containerd[1456]: time="2026-04-28T01:15:23.109085748Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:78a11eeba8e8a02ecd6014bc8260180819ee7005f9eacb364b9595d1e4b166e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:15:23.109754 containerd[1456]: time="2026-04-28T01:15:23.109704053Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" with image id \"sha256:3ba7bd8ea381d6c35b8cc8b5250ae89b7e94ecac0c672dca8a449986e5205cb1\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:78a11eeba8e8a02ecd6014bc8260180819ee7005f9eacb364b9595d1e4b166e1\", size \"49137337\" in 3.258336818s" Apr 28 01:15:23.109754 containerd[1456]: time="2026-04-28T01:15:23.109749624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" returns 
image reference \"sha256:3ba7bd8ea381d6c35b8cc8b5250ae89b7e94ecac0c672dca8a449986e5205cb1\"" Apr 28 01:15:23.111042 containerd[1456]: time="2026-04-28T01:15:23.110712715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.5\"" Apr 28 01:15:23.114442 containerd[1456]: time="2026-04-28T01:15:23.114394925Z" level=info msg="CreateContainer within sandbox \"fb4883db6b5dc58af62fa07923e44145449c66544e8294ac80a4af2313a468c5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 28 01:15:23.125212 containerd[1456]: time="2026-04-28T01:15:23.125156656Z" level=info msg="CreateContainer within sandbox \"fb4883db6b5dc58af62fa07923e44145449c66544e8294ac80a4af2313a468c5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"007f1b60d2a75e406f1458dece359394a3994d84438e75ed6f81e26a66c01d11\"" Apr 28 01:15:23.125818 containerd[1456]: time="2026-04-28T01:15:23.125790304Z" level=info msg="StartContainer for \"007f1b60d2a75e406f1458dece359394a3994d84438e75ed6f81e26a66c01d11\"" Apr 28 01:15:23.158338 systemd[1]: Started cri-containerd-007f1b60d2a75e406f1458dece359394a3994d84438e75ed6f81e26a66c01d11.scope - libcontainer container 007f1b60d2a75e406f1458dece359394a3994d84438e75ed6f81e26a66c01d11. 
Apr 28 01:15:23.196284 containerd[1456]: time="2026-04-28T01:15:23.196130280Z" level=info msg="StartContainer for \"007f1b60d2a75e406f1458dece359394a3994d84438e75ed6f81e26a66c01d11\" returns successfully" Apr 28 01:15:23.506994 containerd[1456]: time="2026-04-28T01:15:23.506273224Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:15:23.507207 containerd[1456]: time="2026-04-28T01:15:23.507160626Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.5: active requests=0, bytes read=77" Apr 28 01:15:23.509613 containerd[1456]: time="2026-04-28T01:15:23.509567325Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" with image id \"sha256:3ba7bd8ea381d6c35b8cc8b5250ae89b7e94ecac0c672dca8a449986e5205cb1\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:78a11eeba8e8a02ecd6014bc8260180819ee7005f9eacb364b9595d1e4b166e1\", size \"49137337\" in 398.825003ms" Apr 28 01:15:23.509672 containerd[1456]: time="2026-04-28T01:15:23.509622987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" returns image reference \"sha256:3ba7bd8ea381d6c35b8cc8b5250ae89b7e94ecac0c672dca8a449986e5205cb1\"" Apr 28 01:15:23.514380 containerd[1456]: time="2026-04-28T01:15:23.514337212Z" level=info msg="CreateContainer within sandbox \"468e9a94a7ac43e0242f7e2ddb8760faa2eb2a721afec0cc5a25d429d2b4e135\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 28 01:15:23.526752 containerd[1456]: time="2026-04-28T01:15:23.526686699Z" level=info msg="CreateContainer within sandbox \"468e9a94a7ac43e0242f7e2ddb8760faa2eb2a721afec0cc5a25d429d2b4e135\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"909ea53c56fee501cbb5c566c9f3d92cf248ea085f519d7fbbc220d001a86c93\"" Apr 28 01:15:23.527374 containerd[1456]: 
time="2026-04-28T01:15:23.527299817Z" level=info msg="StartContainer for \"909ea53c56fee501cbb5c566c9f3d92cf248ea085f519d7fbbc220d001a86c93\"" Apr 28 01:15:23.557159 systemd[1]: Started cri-containerd-909ea53c56fee501cbb5c566c9f3d92cf248ea085f519d7fbbc220d001a86c93.scope - libcontainer container 909ea53c56fee501cbb5c566c9f3d92cf248ea085f519d7fbbc220d001a86c93. Apr 28 01:15:23.585919 containerd[1456]: time="2026-04-28T01:15:23.585594802Z" level=info msg="StopPodSandbox for \"7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d\"" Apr 28 01:15:23.587579 containerd[1456]: time="2026-04-28T01:15:23.587325755Z" level=info msg="StopPodSandbox for \"4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c\"" Apr 28 01:15:23.618911 containerd[1456]: time="2026-04-28T01:15:23.618821212Z" level=info msg="StartContainer for \"909ea53c56fee501cbb5c566c9f3d92cf248ea085f519d7fbbc220d001a86c93\" returns successfully" Apr 28 01:15:23.725295 containerd[1456]: 2026-04-28 01:15:23.660 [INFO][5117] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c" Apr 28 01:15:23.725295 containerd[1456]: 2026-04-28 01:15:23.660 [INFO][5117] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c" iface="eth0" netns="/var/run/netns/cni-a33a9c9c-383d-5f8e-f760-853b4ee812b2" Apr 28 01:15:23.725295 containerd[1456]: 2026-04-28 01:15:23.660 [INFO][5117] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c" iface="eth0" netns="/var/run/netns/cni-a33a9c9c-383d-5f8e-f760-853b4ee812b2" Apr 28 01:15:23.725295 containerd[1456]: 2026-04-28 01:15:23.661 [INFO][5117] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c" iface="eth0" netns="/var/run/netns/cni-a33a9c9c-383d-5f8e-f760-853b4ee812b2" Apr 28 01:15:23.725295 containerd[1456]: 2026-04-28 01:15:23.661 [INFO][5117] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c" Apr 28 01:15:23.725295 containerd[1456]: 2026-04-28 01:15:23.661 [INFO][5117] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c" Apr 28 01:15:23.725295 containerd[1456]: 2026-04-28 01:15:23.698 [INFO][5135] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c" HandleID="k8s-pod-network.4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c" Workload="localhost-k8s-calico--kube--controllers--94c7fdbdd--4ls56-eth0" Apr 28 01:15:23.725295 containerd[1456]: 2026-04-28 01:15:23.698 [INFO][5135] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 01:15:23.725295 containerd[1456]: 2026-04-28 01:15:23.698 [INFO][5135] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 01:15:23.725295 containerd[1456]: 2026-04-28 01:15:23.708 [WARNING][5135] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c" HandleID="k8s-pod-network.4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c" Workload="localhost-k8s-calico--kube--controllers--94c7fdbdd--4ls56-eth0" Apr 28 01:15:23.725295 containerd[1456]: 2026-04-28 01:15:23.710 [INFO][5135] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c" HandleID="k8s-pod-network.4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c" Workload="localhost-k8s-calico--kube--controllers--94c7fdbdd--4ls56-eth0" Apr 28 01:15:23.725295 containerd[1456]: 2026-04-28 01:15:23.715 [INFO][5135] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 01:15:23.725295 containerd[1456]: 2026-04-28 01:15:23.721 [INFO][5117] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c" Apr 28 01:15:23.726182 containerd[1456]: time="2026-04-28T01:15:23.726015897Z" level=info msg="TearDown network for sandbox \"4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c\" successfully" Apr 28 01:15:23.726182 containerd[1456]: time="2026-04-28T01:15:23.726036024Z" level=info msg="StopPodSandbox for \"4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c\" returns successfully" Apr 28 01:15:23.732456 systemd[1]: run-netns-cni\x2da33a9c9c\x2d383d\x2d5f8e\x2df760\x2d853b4ee812b2.mount: Deactivated successfully. 
Apr 28 01:15:23.734079 containerd[1456]: time="2026-04-28T01:15:23.734051973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-94c7fdbdd-4ls56,Uid:25037838-9eb1-455f-8e2a-2a6ebd0fe4f3,Namespace:calico-system,Attempt:1,}" Apr 28 01:15:23.749925 containerd[1456]: 2026-04-28 01:15:23.672 [INFO][5102] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d" Apr 28 01:15:23.749925 containerd[1456]: 2026-04-28 01:15:23.672 [INFO][5102] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d" iface="eth0" netns="/var/run/netns/cni-d9539c5a-9c8e-bee0-e97a-aaa2be88f97a" Apr 28 01:15:23.749925 containerd[1456]: 2026-04-28 01:15:23.672 [INFO][5102] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d" iface="eth0" netns="/var/run/netns/cni-d9539c5a-9c8e-bee0-e97a-aaa2be88f97a" Apr 28 01:15:23.749925 containerd[1456]: 2026-04-28 01:15:23.673 [INFO][5102] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d" iface="eth0" netns="/var/run/netns/cni-d9539c5a-9c8e-bee0-e97a-aaa2be88f97a" Apr 28 01:15:23.749925 containerd[1456]: 2026-04-28 01:15:23.673 [INFO][5102] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d" Apr 28 01:15:23.749925 containerd[1456]: 2026-04-28 01:15:23.673 [INFO][5102] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d" Apr 28 01:15:23.749925 containerd[1456]: 2026-04-28 01:15:23.722 [INFO][5141] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d" HandleID="k8s-pod-network.7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d" Workload="localhost-k8s-goldmane--6b4b7f4496--47t8q-eth0" Apr 28 01:15:23.749925 containerd[1456]: 2026-04-28 01:15:23.723 [INFO][5141] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 01:15:23.749925 containerd[1456]: 2026-04-28 01:15:23.723 [INFO][5141] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 01:15:23.749925 containerd[1456]: 2026-04-28 01:15:23.734 [WARNING][5141] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d" HandleID="k8s-pod-network.7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d" Workload="localhost-k8s-goldmane--6b4b7f4496--47t8q-eth0" Apr 28 01:15:23.749925 containerd[1456]: 2026-04-28 01:15:23.734 [INFO][5141] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d" HandleID="k8s-pod-network.7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d" Workload="localhost-k8s-goldmane--6b4b7f4496--47t8q-eth0" Apr 28 01:15:23.749925 containerd[1456]: 2026-04-28 01:15:23.740 [INFO][5141] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 01:15:23.749925 containerd[1456]: 2026-04-28 01:15:23.742 [INFO][5102] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d" Apr 28 01:15:23.749925 containerd[1456]: time="2026-04-28T01:15:23.749292104Z" level=info msg="TearDown network for sandbox \"7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d\" successfully" Apr 28 01:15:23.749925 containerd[1456]: time="2026-04-28T01:15:23.749308385Z" level=info msg="StopPodSandbox for \"7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d\" returns successfully" Apr 28 01:15:23.751731 containerd[1456]: time="2026-04-28T01:15:23.751685697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-6b4b7f4496-47t8q,Uid:7a7fa1aa-06eb-4cb9-b502-88326017cac4,Namespace:calico-system,Attempt:1,}" Apr 28 01:15:23.753458 systemd[1]: run-netns-cni\x2dd9539c5a\x2d9c8e\x2dbee0\x2de97a\x2daaa2be88f97a.mount: Deactivated successfully. 
Apr 28 01:15:23.863812 kubelet[2502]: E0428 01:15:23.863587 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:15:23.883097 kubelet[2502]: E0428 01:15:23.883073 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:15:23.899976 kubelet[2502]: I0428 01:15:23.899838 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-srt2r" podStartSLOduration=43.899821715 podStartE2EDuration="43.899821715s" podCreationTimestamp="2026-04-28 01:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 01:15:23.885610378 +0000 UTC m=+50.415972449" watchObservedRunningTime="2026-04-28 01:15:23.899821715 +0000 UTC m=+50.430183786"
Apr 28 01:15:23.924065 kubelet[2502]: I0428 01:15:23.918636 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-59c6777c8b-8xfxq" podStartSLOduration=33.428106116 podStartE2EDuration="35.918621745s" podCreationTimestamp="2026-04-28 01:14:48 +0000 UTC" firstStartedPulling="2026-04-28 01:15:21.020165426 +0000 UTC m=+47.550527486" lastFinishedPulling="2026-04-28 01:15:23.510681048 +0000 UTC m=+50.041043115" observedRunningTime="2026-04-28 01:15:23.902401061 +0000 UTC m=+50.432763129" watchObservedRunningTime="2026-04-28 01:15:23.918621745 +0000 UTC m=+50.448984129"
Apr 28 01:15:23.941972 kubelet[2502]: I0428 01:15:23.941874 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-59c6777c8b-mc98s" podStartSLOduration=32.68217215 podStartE2EDuration="35.941861887s" podCreationTimestamp="2026-04-28 01:14:48 +0000 UTC" firstStartedPulling="2026-04-28 01:15:19.850890316 +0000 UTC m=+46.381252376" lastFinishedPulling="2026-04-28 01:15:23.110580052 +0000 UTC m=+49.640942113" observedRunningTime="2026-04-28 01:15:23.925231471 +0000 UTC m=+50.455593539" watchObservedRunningTime="2026-04-28 01:15:23.941861887 +0000 UTC m=+50.472223956"
Apr 28 01:15:23.968341 systemd-networkd[1373]: calif321db00996: Link UP
Apr 28 01:15:23.971067 systemd-networkd[1373]: calif321db00996: Gained carrier
Apr 28 01:15:23.998041 containerd[1456]: 2026-04-28 01:15:23.800 [INFO][5157] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--94c7fdbdd--4ls56-eth0 calico-kube-controllers-94c7fdbdd- calico-system 25037838-9eb1-455f-8e2a-2a6ebd0fe4f3 1115 0 2026-04-28 01:14:49 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:94c7fdbdd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-94c7fdbdd-4ls56 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif321db00996 [] [] }} ContainerID="12063ac1f5d0912507992a459ff3c577788d841ec7fcebb877a3eabaa52c6999" Namespace="calico-system" Pod="calico-kube-controllers-94c7fdbdd-4ls56" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--94c7fdbdd--4ls56-"
Apr 28 01:15:23.998041 containerd[1456]: 2026-04-28 01:15:23.800 [INFO][5157] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="12063ac1f5d0912507992a459ff3c577788d841ec7fcebb877a3eabaa52c6999" Namespace="calico-system" Pod="calico-kube-controllers-94c7fdbdd-4ls56" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--94c7fdbdd--4ls56-eth0"
Apr 28 01:15:23.998041 containerd[1456]: 2026-04-28 01:15:23.843 [INFO][5186] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="12063ac1f5d0912507992a459ff3c577788d841ec7fcebb877a3eabaa52c6999" HandleID="k8s-pod-network.12063ac1f5d0912507992a459ff3c577788d841ec7fcebb877a3eabaa52c6999" Workload="localhost-k8s-calico--kube--controllers--94c7fdbdd--4ls56-eth0"
Apr 28 01:15:23.998041 containerd[1456]: 2026-04-28 01:15:23.856 [INFO][5186] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="12063ac1f5d0912507992a459ff3c577788d841ec7fcebb877a3eabaa52c6999" HandleID="k8s-pod-network.12063ac1f5d0912507992a459ff3c577788d841ec7fcebb877a3eabaa52c6999" Workload="localhost-k8s-calico--kube--controllers--94c7fdbdd--4ls56-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051860), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-94c7fdbdd-4ls56", "timestamp":"2026-04-28 01:15:23.843484465 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000386f20)}
Apr 28 01:15:23.998041 containerd[1456]: 2026-04-28 01:15:23.856 [INFO][5186] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 28 01:15:23.998041 containerd[1456]: 2026-04-28 01:15:23.856 [INFO][5186] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 28 01:15:23.998041 containerd[1456]: 2026-04-28 01:15:23.856 [INFO][5186] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Apr 28 01:15:23.998041 containerd[1456]: 2026-04-28 01:15:23.862 [INFO][5186] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.12063ac1f5d0912507992a459ff3c577788d841ec7fcebb877a3eabaa52c6999" host="localhost"
Apr 28 01:15:23.998041 containerd[1456]: 2026-04-28 01:15:23.874 [INFO][5186] ipam/ipam.go 409: Looking up existing affinities for host host="localhost"
Apr 28 01:15:23.998041 containerd[1456]: 2026-04-28 01:15:23.890 [INFO][5186] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost"
Apr 28 01:15:23.998041 containerd[1456]: 2026-04-28 01:15:23.897 [INFO][5186] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Apr 28 01:15:23.998041 containerd[1456]: 2026-04-28 01:15:23.912 [INFO][5186] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Apr 28 01:15:23.998041 containerd[1456]: 2026-04-28 01:15:23.912 [INFO][5186] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.12063ac1f5d0912507992a459ff3c577788d841ec7fcebb877a3eabaa52c6999" host="localhost"
Apr 28 01:15:23.998041 containerd[1456]: 2026-04-28 01:15:23.928 [INFO][5186] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.12063ac1f5d0912507992a459ff3c577788d841ec7fcebb877a3eabaa52c6999
Apr 28 01:15:23.998041 containerd[1456]: 2026-04-28 01:15:23.946 [INFO][5186] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.12063ac1f5d0912507992a459ff3c577788d841ec7fcebb877a3eabaa52c6999" host="localhost"
Apr 28 01:15:23.998041 containerd[1456]: 2026-04-28 01:15:23.956 [INFO][5186] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.12063ac1f5d0912507992a459ff3c577788d841ec7fcebb877a3eabaa52c6999" host="localhost"
Apr 28 01:15:23.998041 containerd[1456]: 2026-04-28 01:15:23.956 [INFO][5186] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.12063ac1f5d0912507992a459ff3c577788d841ec7fcebb877a3eabaa52c6999" host="localhost"
Apr 28 01:15:23.998041 containerd[1456]: 2026-04-28 01:15:23.956 [INFO][5186] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 28 01:15:23.998041 containerd[1456]: 2026-04-28 01:15:23.956 [INFO][5186] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="12063ac1f5d0912507992a459ff3c577788d841ec7fcebb877a3eabaa52c6999" HandleID="k8s-pod-network.12063ac1f5d0912507992a459ff3c577788d841ec7fcebb877a3eabaa52c6999" Workload="localhost-k8s-calico--kube--controllers--94c7fdbdd--4ls56-eth0"
Apr 28 01:15:23.998726 containerd[1456]: 2026-04-28 01:15:23.961 [INFO][5157] cni-plugin/k8s.go 418: Populated endpoint ContainerID="12063ac1f5d0912507992a459ff3c577788d841ec7fcebb877a3eabaa52c6999" Namespace="calico-system" Pod="calico-kube-controllers-94c7fdbdd-4ls56" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--94c7fdbdd--4ls56-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--94c7fdbdd--4ls56-eth0", GenerateName:"calico-kube-controllers-94c7fdbdd-", Namespace:"calico-system", SelfLink:"", UID:"25037838-9eb1-455f-8e2a-2a6ebd0fe4f3", ResourceVersion:"1115", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 1, 14, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"94c7fdbdd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-94c7fdbdd-4ls56", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif321db00996", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 28 01:15:23.998726 containerd[1456]: 2026-04-28 01:15:23.961 [INFO][5157] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="12063ac1f5d0912507992a459ff3c577788d841ec7fcebb877a3eabaa52c6999" Namespace="calico-system" Pod="calico-kube-controllers-94c7fdbdd-4ls56" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--94c7fdbdd--4ls56-eth0"
Apr 28 01:15:23.998726 containerd[1456]: 2026-04-28 01:15:23.961 [INFO][5157] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif321db00996 ContainerID="12063ac1f5d0912507992a459ff3c577788d841ec7fcebb877a3eabaa52c6999" Namespace="calico-system" Pod="calico-kube-controllers-94c7fdbdd-4ls56" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--94c7fdbdd--4ls56-eth0"
Apr 28 01:15:23.998726 containerd[1456]: 2026-04-28 01:15:23.972 [INFO][5157] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="12063ac1f5d0912507992a459ff3c577788d841ec7fcebb877a3eabaa52c6999" Namespace="calico-system" Pod="calico-kube-controllers-94c7fdbdd-4ls56" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--94c7fdbdd--4ls56-eth0"
Apr 28 01:15:23.998726 containerd[1456]: 2026-04-28 01:15:23.976 [INFO][5157] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="12063ac1f5d0912507992a459ff3c577788d841ec7fcebb877a3eabaa52c6999" Namespace="calico-system" Pod="calico-kube-controllers-94c7fdbdd-4ls56" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--94c7fdbdd--4ls56-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--94c7fdbdd--4ls56-eth0", GenerateName:"calico-kube-controllers-94c7fdbdd-", Namespace:"calico-system", SelfLink:"", UID:"25037838-9eb1-455f-8e2a-2a6ebd0fe4f3", ResourceVersion:"1115", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 1, 14, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"94c7fdbdd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"12063ac1f5d0912507992a459ff3c577788d841ec7fcebb877a3eabaa52c6999", Pod:"calico-kube-controllers-94c7fdbdd-4ls56", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif321db00996", MAC:"26:18:4b:ed:4c:50", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 28 01:15:23.998726 containerd[1456]: 2026-04-28 01:15:23.994 [INFO][5157] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="12063ac1f5d0912507992a459ff3c577788d841ec7fcebb877a3eabaa52c6999" Namespace="calico-system" Pod="calico-kube-controllers-94c7fdbdd-4ls56" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--94c7fdbdd--4ls56-eth0"
Apr 28 01:15:24.030718 containerd[1456]: time="2026-04-28T01:15:24.030129170Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 28 01:15:24.030718 containerd[1456]: time="2026-04-28T01:15:24.030167532Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 28 01:15:24.030718 containerd[1456]: time="2026-04-28T01:15:24.030175371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 01:15:24.030718 containerd[1456]: time="2026-04-28T01:15:24.030231893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 01:15:24.042867 systemd-networkd[1373]: calid3632f6c7cb: Link UP
Apr 28 01:15:24.043068 systemd-networkd[1373]: calid3632f6c7cb: Gained carrier
Apr 28 01:15:24.065235 containerd[1456]: 2026-04-28 01:15:23.831 [INFO][5169] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--6b4b7f4496--47t8q-eth0 goldmane-6b4b7f4496- calico-system 7a7fa1aa-06eb-4cb9-b502-88326017cac4 1116 0 2026-04-28 01:14:48 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:6b4b7f4496 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-6b4b7f4496-47t8q eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid3632f6c7cb [] [] }} ContainerID="65a7252e095fd37f77b560b3db5f02475baf8a17b2141c7649999972e87aea26" Namespace="calico-system" Pod="goldmane-6b4b7f4496-47t8q" WorkloadEndpoint="localhost-k8s-goldmane--6b4b7f4496--47t8q-"
Apr 28 01:15:24.065235 containerd[1456]: 2026-04-28 01:15:23.831 [INFO][5169] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="65a7252e095fd37f77b560b3db5f02475baf8a17b2141c7649999972e87aea26" Namespace="calico-system" Pod="goldmane-6b4b7f4496-47t8q" WorkloadEndpoint="localhost-k8s-goldmane--6b4b7f4496--47t8q-eth0"
Apr 28 01:15:24.065235 containerd[1456]: 2026-04-28 01:15:23.891 [INFO][5194] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="65a7252e095fd37f77b560b3db5f02475baf8a17b2141c7649999972e87aea26" HandleID="k8s-pod-network.65a7252e095fd37f77b560b3db5f02475baf8a17b2141c7649999972e87aea26" Workload="localhost-k8s-goldmane--6b4b7f4496--47t8q-eth0"
Apr 28 01:15:24.065235 containerd[1456]: 2026-04-28 01:15:23.899 [INFO][5194] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="65a7252e095fd37f77b560b3db5f02475baf8a17b2141c7649999972e87aea26" HandleID="k8s-pod-network.65a7252e095fd37f77b560b3db5f02475baf8a17b2141c7649999972e87aea26" Workload="localhost-k8s-goldmane--6b4b7f4496--47t8q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000305550), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-6b4b7f4496-47t8q", "timestamp":"2026-04-28 01:15:23.891253553 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0006b4c60)}
Apr 28 01:15:24.065235 containerd[1456]: 2026-04-28 01:15:23.900 [INFO][5194] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 28 01:15:24.065235 containerd[1456]: 2026-04-28 01:15:23.956 [INFO][5194] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 28 01:15:24.065235 containerd[1456]: 2026-04-28 01:15:23.957 [INFO][5194] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Apr 28 01:15:24.065235 containerd[1456]: 2026-04-28 01:15:23.966 [INFO][5194] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.65a7252e095fd37f77b560b3db5f02475baf8a17b2141c7649999972e87aea26" host="localhost"
Apr 28 01:15:24.065235 containerd[1456]: 2026-04-28 01:15:23.985 [INFO][5194] ipam/ipam.go 409: Looking up existing affinities for host host="localhost"
Apr 28 01:15:24.065235 containerd[1456]: 2026-04-28 01:15:23.999 [INFO][5194] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost"
Apr 28 01:15:24.065235 containerd[1456]: 2026-04-28 01:15:24.006 [INFO][5194] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Apr 28 01:15:24.065235 containerd[1456]: 2026-04-28 01:15:24.010 [INFO][5194] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Apr 28 01:15:24.065235 containerd[1456]: 2026-04-28 01:15:24.010 [INFO][5194] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.65a7252e095fd37f77b560b3db5f02475baf8a17b2141c7649999972e87aea26" host="localhost"
Apr 28 01:15:24.065235 containerd[1456]: 2026-04-28 01:15:24.014 [INFO][5194] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.65a7252e095fd37f77b560b3db5f02475baf8a17b2141c7649999972e87aea26
Apr 28 01:15:24.065235 containerd[1456]: 2026-04-28 01:15:24.020 [INFO][5194] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.65a7252e095fd37f77b560b3db5f02475baf8a17b2141c7649999972e87aea26" host="localhost"
Apr 28 01:15:24.065235 containerd[1456]: 2026-04-28 01:15:24.029 [INFO][5194] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.65a7252e095fd37f77b560b3db5f02475baf8a17b2141c7649999972e87aea26" host="localhost"
Apr 28 01:15:24.065235 containerd[1456]: 2026-04-28 01:15:24.029 [INFO][5194] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.65a7252e095fd37f77b560b3db5f02475baf8a17b2141c7649999972e87aea26" host="localhost"
Apr 28 01:15:24.065235 containerd[1456]: 2026-04-28 01:15:24.030 [INFO][5194] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 28 01:15:24.065235 containerd[1456]: 2026-04-28 01:15:24.034 [INFO][5194] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="65a7252e095fd37f77b560b3db5f02475baf8a17b2141c7649999972e87aea26" HandleID="k8s-pod-network.65a7252e095fd37f77b560b3db5f02475baf8a17b2141c7649999972e87aea26" Workload="localhost-k8s-goldmane--6b4b7f4496--47t8q-eth0"
Apr 28 01:15:24.066652 containerd[1456]: 2026-04-28 01:15:24.037 [INFO][5169] cni-plugin/k8s.go 418: Populated endpoint ContainerID="65a7252e095fd37f77b560b3db5f02475baf8a17b2141c7649999972e87aea26" Namespace="calico-system" Pod="goldmane-6b4b7f4496-47t8q" WorkloadEndpoint="localhost-k8s-goldmane--6b4b7f4496--47t8q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--6b4b7f4496--47t8q-eth0", GenerateName:"goldmane-6b4b7f4496-", Namespace:"calico-system", SelfLink:"", UID:"7a7fa1aa-06eb-4cb9-b502-88326017cac4", ResourceVersion:"1116", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 1, 14, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"6b4b7f4496", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-6b4b7f4496-47t8q", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid3632f6c7cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 28 01:15:24.066652 containerd[1456]: 2026-04-28 01:15:24.038 [INFO][5169] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="65a7252e095fd37f77b560b3db5f02475baf8a17b2141c7649999972e87aea26" Namespace="calico-system" Pod="goldmane-6b4b7f4496-47t8q" WorkloadEndpoint="localhost-k8s-goldmane--6b4b7f4496--47t8q-eth0"
Apr 28 01:15:24.066652 containerd[1456]: 2026-04-28 01:15:24.038 [INFO][5169] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid3632f6c7cb ContainerID="65a7252e095fd37f77b560b3db5f02475baf8a17b2141c7649999972e87aea26" Namespace="calico-system" Pod="goldmane-6b4b7f4496-47t8q" WorkloadEndpoint="localhost-k8s-goldmane--6b4b7f4496--47t8q-eth0"
Apr 28 01:15:24.066652 containerd[1456]: 2026-04-28 01:15:24.042 [INFO][5169] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="65a7252e095fd37f77b560b3db5f02475baf8a17b2141c7649999972e87aea26" Namespace="calico-system" Pod="goldmane-6b4b7f4496-47t8q" WorkloadEndpoint="localhost-k8s-goldmane--6b4b7f4496--47t8q-eth0"
Apr 28 01:15:24.066652 containerd[1456]: 2026-04-28 01:15:24.044 [INFO][5169] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="65a7252e095fd37f77b560b3db5f02475baf8a17b2141c7649999972e87aea26" Namespace="calico-system" Pod="goldmane-6b4b7f4496-47t8q" WorkloadEndpoint="localhost-k8s-goldmane--6b4b7f4496--47t8q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--6b4b7f4496--47t8q-eth0", GenerateName:"goldmane-6b4b7f4496-", Namespace:"calico-system", SelfLink:"", UID:"7a7fa1aa-06eb-4cb9-b502-88326017cac4", ResourceVersion:"1116", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 1, 14, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"6b4b7f4496", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"65a7252e095fd37f77b560b3db5f02475baf8a17b2141c7649999972e87aea26", Pod:"goldmane-6b4b7f4496-47t8q", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid3632f6c7cb", MAC:"5a:4b:fc:28:0a:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 28 01:15:24.066652 containerd[1456]: 2026-04-28 01:15:24.061 [INFO][5169] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="65a7252e095fd37f77b560b3db5f02475baf8a17b2141c7649999972e87aea26" Namespace="calico-system" Pod="goldmane-6b4b7f4496-47t8q" WorkloadEndpoint="localhost-k8s-goldmane--6b4b7f4496--47t8q-eth0"
Apr 28 01:15:24.066147 systemd[1]: Started cri-containerd-12063ac1f5d0912507992a459ff3c577788d841ec7fcebb877a3eabaa52c6999.scope - libcontainer container 12063ac1f5d0912507992a459ff3c577788d841ec7fcebb877a3eabaa52c6999.
Apr 28 01:15:24.087034 systemd-resolved[1375]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 28 01:15:24.091306 containerd[1456]: time="2026-04-28T01:15:24.091188211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 28 01:15:24.091306 containerd[1456]: time="2026-04-28T01:15:24.091253111Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 28 01:15:24.091306 containerd[1456]: time="2026-04-28T01:15:24.091264767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 01:15:24.091674 containerd[1456]: time="2026-04-28T01:15:24.091322404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 01:15:24.120280 systemd[1]: Started cri-containerd-65a7252e095fd37f77b560b3db5f02475baf8a17b2141c7649999972e87aea26.scope - libcontainer container 65a7252e095fd37f77b560b3db5f02475baf8a17b2141c7649999972e87aea26.
Apr 28 01:15:24.125887 containerd[1456]: time="2026-04-28T01:15:24.125162085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-94c7fdbdd-4ls56,Uid:25037838-9eb1-455f-8e2a-2a6ebd0fe4f3,Namespace:calico-system,Attempt:1,} returns sandbox id \"12063ac1f5d0912507992a459ff3c577788d841ec7fcebb877a3eabaa52c6999\""
Apr 28 01:15:24.128256 containerd[1456]: time="2026-04-28T01:15:24.128174764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\""
Apr 28 01:15:24.140539 systemd-resolved[1375]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 28 01:15:24.176567 containerd[1456]: time="2026-04-28T01:15:24.176489555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-6b4b7f4496-47t8q,Uid:7a7fa1aa-06eb-4cb9-b502-88326017cac4,Namespace:calico-system,Attempt:1,} returns sandbox id \"65a7252e095fd37f77b560b3db5f02475baf8a17b2141c7649999972e87aea26\""
Apr 28 01:15:24.535357 systemd-networkd[1373]: cali459d3219c08: Gained IPv6LL
Apr 28 01:15:24.905492 kubelet[2502]: I0428 01:15:24.905322 2502 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 28 01:15:24.905492 kubelet[2502]: I0428 01:15:24.905322 2502 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 28 01:15:24.905920 kubelet[2502]: E0428 01:15:24.905889 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:15:25.367336 systemd-networkd[1373]: calif321db00996: Gained IPv6LL
Apr 28 01:15:25.912068 kubelet[2502]: E0428 01:15:25.908262 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 01:15:26.007248 systemd-networkd[1373]: calid3632f6c7cb: Gained IPv6LL
Apr 28 01:15:26.410258 systemd[1]: Started sshd@9-10.0.0.153:22-10.0.0.1:44028.service - OpenSSH per-connection server daemon (10.0.0.1:44028).
Apr 28 01:15:26.457846 sshd[5310]: Accepted publickey for core from 10.0.0.1 port 44028 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps
Apr 28 01:15:26.459369 sshd[5310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 01:15:26.464540 systemd-logind[1435]: New session 10 of user core.
Apr 28 01:15:26.471172 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 28 01:15:26.650475 sshd[5310]: pam_unix(sshd:session): session closed for user core
Apr 28 01:15:26.653479 systemd[1]: sshd@9-10.0.0.153:22-10.0.0.1:44028.service: Deactivated successfully.
Apr 28 01:15:26.655096 systemd[1]: session-10.scope: Deactivated successfully.
Apr 28 01:15:26.655710 systemd-logind[1435]: Session 10 logged out. Waiting for processes to exit.
Apr 28 01:15:26.656772 systemd-logind[1435]: Removed session 10.
Apr 28 01:15:28.371456 containerd[1456]: time="2026-04-28T01:15:28.371279087Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 01:15:28.372215 containerd[1456]: time="2026-04-28T01:15:28.372163297Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.5: active requests=0, bytes read=50078175"
Apr 28 01:15:28.374645 containerd[1456]: time="2026-04-28T01:15:28.374593524Z" level=info msg="ImageCreate event name:\"sha256:d686db0e796dab36cb761ce46b93cabed881d9328bea92a965ad505653a85e37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 01:15:28.376912 containerd[1456]: time="2026-04-28T01:15:28.376846004Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5fa7fb7e707d54479cd5d93cfe42352076b805f36560df457b53701d9e738d72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 01:15:28.377528 containerd[1456]: time="2026-04-28T01:15:28.377399952Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\" with image id \"sha256:d686db0e796dab36cb761ce46b93cabed881d9328bea92a965ad505653a85e37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5fa7fb7e707d54479cd5d93cfe42352076b805f36560df457b53701d9e738d72\", size \"53039568\" in 4.249179936s"
Apr 28 01:15:28.377528 containerd[1456]: time="2026-04-28T01:15:28.377526159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\" returns image reference \"sha256:d686db0e796dab36cb761ce46b93cabed881d9328bea92a965ad505653a85e37\""
Apr 28 01:15:28.379576 containerd[1456]: time="2026-04-28T01:15:28.379477436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.5\""
Apr 28 01:15:28.400640 containerd[1456]: time="2026-04-28T01:15:28.400577564Z" level=info msg="CreateContainer within sandbox \"12063ac1f5d0912507992a459ff3c577788d841ec7fcebb877a3eabaa52c6999\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Apr 28 01:15:28.414898 containerd[1456]: time="2026-04-28T01:15:28.414792258Z" level=info msg="CreateContainer within sandbox \"12063ac1f5d0912507992a459ff3c577788d841ec7fcebb877a3eabaa52c6999\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"2acfb40123755e3f8f2adf35ff2bc9f9f2ad79373bb46262bf8c889db3da7a14\""
Apr 28 01:15:28.416326 containerd[1456]: time="2026-04-28T01:15:28.416096301Z" level=info msg="StartContainer for \"2acfb40123755e3f8f2adf35ff2bc9f9f2ad79373bb46262bf8c889db3da7a14\""
Apr 28 01:15:28.452158 systemd[1]: Started cri-containerd-2acfb40123755e3f8f2adf35ff2bc9f9f2ad79373bb46262bf8c889db3da7a14.scope - libcontainer container 2acfb40123755e3f8f2adf35ff2bc9f9f2ad79373bb46262bf8c889db3da7a14.
Apr 28 01:15:28.498746 containerd[1456]: time="2026-04-28T01:15:28.498640210Z" level=info msg="StartContainer for \"2acfb40123755e3f8f2adf35ff2bc9f9f2ad79373bb46262bf8c889db3da7a14\" returns successfully" Apr 28 01:15:28.933035 kubelet[2502]: I0428 01:15:28.932663 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-94c7fdbdd-4ls56" podStartSLOduration=35.681220651 podStartE2EDuration="39.932647658s" podCreationTimestamp="2026-04-28 01:14:49 +0000 UTC" firstStartedPulling="2026-04-28 01:15:24.127349326 +0000 UTC m=+50.657711386" lastFinishedPulling="2026-04-28 01:15:28.378776334 +0000 UTC m=+54.909138393" observedRunningTime="2026-04-28 01:15:28.932141976 +0000 UTC m=+55.462504045" watchObservedRunningTime="2026-04-28 01:15:28.932647658 +0000 UTC m=+55.463009730" Apr 28 01:15:31.666578 systemd[1]: Started sshd@10-10.0.0.153:22-10.0.0.1:44038.service - OpenSSH per-connection server daemon (10.0.0.1:44038). Apr 28 01:15:31.730216 sshd[5415]: Accepted publickey for core from 10.0.0.1 port 44038 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps Apr 28 01:15:31.732189 sshd[5415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:15:31.738238 systemd-logind[1435]: New session 11 of user core. Apr 28 01:15:31.744151 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 28 01:15:31.803855 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2437570477.mount: Deactivated successfully. Apr 28 01:15:32.002246 sshd[5415]: pam_unix(sshd:session): session closed for user core Apr 28 01:15:32.006435 systemd[1]: sshd@10-10.0.0.153:22-10.0.0.1:44038.service: Deactivated successfully. Apr 28 01:15:32.008065 systemd[1]: session-11.scope: Deactivated successfully. Apr 28 01:15:32.008772 systemd-logind[1435]: Session 11 logged out. Waiting for processes to exit. Apr 28 01:15:32.009886 systemd-logind[1435]: Removed session 11. 
Apr 28 01:15:32.779916 containerd[1456]: time="2026-04-28T01:15:32.779801086Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:15:32.780559 containerd[1456]: time="2026-04-28T01:15:32.780504706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.5: active requests=0, bytes read=53086083" Apr 28 01:15:32.781998 containerd[1456]: time="2026-04-28T01:15:32.781873260Z" level=info msg="ImageCreate event name:\"sha256:c7fd07b105db0e1cb9381872c0af21769c4fad1e0a5dab3a06b15a879b74b421\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:15:32.806535 containerd[1456]: time="2026-04-28T01:15:32.806411088Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:edfd1b6c377013f23afd5e76cb975b6cb59d1bc6554f79c0719d617f8dd0468e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 01:15:32.808206 containerd[1456]: time="2026-04-28T01:15:32.808099949Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.5\" with image id \"sha256:c7fd07b105db0e1cb9381872c0af21769c4fad1e0a5dab3a06b15a879b74b421\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:edfd1b6c377013f23afd5e76cb975b6cb59d1bc6554f79c0719d617f8dd0468e\", size \"53085929\" in 4.428597423s" Apr 28 01:15:32.808206 containerd[1456]: time="2026-04-28T01:15:32.808197020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.5\" returns image reference \"sha256:c7fd07b105db0e1cb9381872c0af21769c4fad1e0a5dab3a06b15a879b74b421\"" Apr 28 01:15:32.815324 containerd[1456]: time="2026-04-28T01:15:32.815244839Z" level=info msg="CreateContainer within sandbox \"65a7252e095fd37f77b560b3db5f02475baf8a17b2141c7649999972e87aea26\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 28 01:15:32.834615 containerd[1456]: time="2026-04-28T01:15:32.834542101Z" 
level=info msg="CreateContainer within sandbox \"65a7252e095fd37f77b560b3db5f02475baf8a17b2141c7649999972e87aea26\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"ec649dcc263104798be73d1f14fdbfe7fab21393d1632db4b1f14ae32146bfeb\"" Apr 28 01:15:32.835425 containerd[1456]: time="2026-04-28T01:15:32.835382469Z" level=info msg="StartContainer for \"ec649dcc263104798be73d1f14fdbfe7fab21393d1632db4b1f14ae32146bfeb\"" Apr 28 01:15:32.956215 systemd[1]: Started cri-containerd-ec649dcc263104798be73d1f14fdbfe7fab21393d1632db4b1f14ae32146bfeb.scope - libcontainer container ec649dcc263104798be73d1f14fdbfe7fab21393d1632db4b1f14ae32146bfeb. Apr 28 01:15:33.000323 containerd[1456]: time="2026-04-28T01:15:33.000194529Z" level=info msg="StartContainer for \"ec649dcc263104798be73d1f14fdbfe7fab21393d1632db4b1f14ae32146bfeb\" returns successfully" Apr 28 01:15:33.574319 containerd[1456]: time="2026-04-28T01:15:33.574276158Z" level=info msg="StopPodSandbox for \"7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d\"" Apr 28 01:15:33.680186 containerd[1456]: 2026-04-28 01:15:33.617 [WARNING][5485] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--6b4b7f4496--47t8q-eth0", GenerateName:"goldmane-6b4b7f4496-", Namespace:"calico-system", SelfLink:"", UID:"7a7fa1aa-06eb-4cb9-b502-88326017cac4", ResourceVersion:"1138", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 1, 14, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"6b4b7f4496", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"65a7252e095fd37f77b560b3db5f02475baf8a17b2141c7649999972e87aea26", Pod:"goldmane-6b4b7f4496-47t8q", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid3632f6c7cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 01:15:33.680186 containerd[1456]: 2026-04-28 01:15:33.618 [INFO][5485] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d" Apr 28 01:15:33.680186 containerd[1456]: 2026-04-28 01:15:33.618 [INFO][5485] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d" iface="eth0" netns="" Apr 28 01:15:33.680186 containerd[1456]: 2026-04-28 01:15:33.618 [INFO][5485] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d" Apr 28 01:15:33.680186 containerd[1456]: 2026-04-28 01:15:33.618 [INFO][5485] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d" Apr 28 01:15:33.680186 containerd[1456]: 2026-04-28 01:15:33.667 [INFO][5496] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d" HandleID="k8s-pod-network.7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d" Workload="localhost-k8s-goldmane--6b4b7f4496--47t8q-eth0" Apr 28 01:15:33.680186 containerd[1456]: 2026-04-28 01:15:33.668 [INFO][5496] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 01:15:33.680186 containerd[1456]: 2026-04-28 01:15:33.668 [INFO][5496] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 01:15:33.680186 containerd[1456]: 2026-04-28 01:15:33.674 [WARNING][5496] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d" HandleID="k8s-pod-network.7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d" Workload="localhost-k8s-goldmane--6b4b7f4496--47t8q-eth0" Apr 28 01:15:33.680186 containerd[1456]: 2026-04-28 01:15:33.674 [INFO][5496] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d" HandleID="k8s-pod-network.7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d" Workload="localhost-k8s-goldmane--6b4b7f4496--47t8q-eth0" Apr 28 01:15:33.680186 containerd[1456]: 2026-04-28 01:15:33.676 [INFO][5496] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 01:15:33.680186 containerd[1456]: 2026-04-28 01:15:33.678 [INFO][5485] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d" Apr 28 01:15:33.687927 containerd[1456]: time="2026-04-28T01:15:33.687848520Z" level=info msg="TearDown network for sandbox \"7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d\" successfully" Apr 28 01:15:33.687927 containerd[1456]: time="2026-04-28T01:15:33.687914966Z" level=info msg="StopPodSandbox for \"7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d\" returns successfully" Apr 28 01:15:33.738452 containerd[1456]: time="2026-04-28T01:15:33.738364422Z" level=info msg="RemovePodSandbox for \"7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d\"" Apr 28 01:15:33.740694 containerd[1456]: time="2026-04-28T01:15:33.740634288Z" level=info msg="Forcibly stopping sandbox \"7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d\"" Apr 28 01:15:33.834780 containerd[1456]: 2026-04-28 01:15:33.779 [WARNING][5513] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--6b4b7f4496--47t8q-eth0", GenerateName:"goldmane-6b4b7f4496-", Namespace:"calico-system", SelfLink:"", UID:"7a7fa1aa-06eb-4cb9-b502-88326017cac4", ResourceVersion:"1138", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 1, 14, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"6b4b7f4496", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"65a7252e095fd37f77b560b3db5f02475baf8a17b2141c7649999972e87aea26", Pod:"goldmane-6b4b7f4496-47t8q", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid3632f6c7cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 01:15:33.834780 containerd[1456]: 2026-04-28 01:15:33.780 [INFO][5513] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d" Apr 28 01:15:33.834780 containerd[1456]: 2026-04-28 01:15:33.780 [INFO][5513] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d" iface="eth0" netns="" Apr 28 01:15:33.834780 containerd[1456]: 2026-04-28 01:15:33.780 [INFO][5513] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d" Apr 28 01:15:33.834780 containerd[1456]: 2026-04-28 01:15:33.780 [INFO][5513] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d" Apr 28 01:15:33.834780 containerd[1456]: 2026-04-28 01:15:33.812 [INFO][5523] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d" HandleID="k8s-pod-network.7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d" Workload="localhost-k8s-goldmane--6b4b7f4496--47t8q-eth0" Apr 28 01:15:33.834780 containerd[1456]: 2026-04-28 01:15:33.813 [INFO][5523] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 01:15:33.834780 containerd[1456]: 2026-04-28 01:15:33.813 [INFO][5523] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 01:15:33.834780 containerd[1456]: 2026-04-28 01:15:33.825 [WARNING][5523] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d" HandleID="k8s-pod-network.7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d" Workload="localhost-k8s-goldmane--6b4b7f4496--47t8q-eth0" Apr 28 01:15:33.834780 containerd[1456]: 2026-04-28 01:15:33.825 [INFO][5523] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d" HandleID="k8s-pod-network.7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d" Workload="localhost-k8s-goldmane--6b4b7f4496--47t8q-eth0" Apr 28 01:15:33.834780 containerd[1456]: 2026-04-28 01:15:33.830 [INFO][5523] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 01:15:33.834780 containerd[1456]: 2026-04-28 01:15:33.832 [INFO][5513] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d" Apr 28 01:15:33.834780 containerd[1456]: time="2026-04-28T01:15:33.834699632Z" level=info msg="TearDown network for sandbox \"7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d\" successfully" Apr 28 01:15:33.865360 containerd[1456]: time="2026-04-28T01:15:33.865288513Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 28 01:15:33.865509 containerd[1456]: time="2026-04-28T01:15:33.865411835Z" level=info msg="RemovePodSandbox \"7237d00fc7b9392bd10921833cb84c027ae94507cfc1031bc9e63919575fcc6d\" returns successfully" Apr 28 01:15:33.873804 containerd[1456]: time="2026-04-28T01:15:33.873746830Z" level=info msg="StopPodSandbox for \"aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127\"" Apr 28 01:15:33.954864 kubelet[2502]: I0428 01:15:33.954655 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-6b4b7f4496-47t8q" podStartSLOduration=37.323310357 podStartE2EDuration="45.95464222s" podCreationTimestamp="2026-04-28 01:14:48 +0000 UTC" firstStartedPulling="2026-04-28 01:15:24.178417865 +0000 UTC m=+50.708779929" lastFinishedPulling="2026-04-28 01:15:32.809749718 +0000 UTC m=+59.340111792" observedRunningTime="2026-04-28 01:15:33.954440919 +0000 UTC m=+60.484802989" watchObservedRunningTime="2026-04-28 01:15:33.95464222 +0000 UTC m=+60.485004287" Apr 28 01:15:33.969572 systemd[1]: run-containerd-runc-k8s.io-ec649dcc263104798be73d1f14fdbfe7fab21393d1632db4b1f14ae32146bfeb-runc.CIAkXr.mount: Deactivated successfully. Apr 28 01:15:33.972894 containerd[1456]: 2026-04-28 01:15:33.914 [WARNING][5540] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59c6777c8b--8xfxq-eth0", GenerateName:"calico-apiserver-59c6777c8b-", Namespace:"calico-system", SelfLink:"", UID:"3581e29b-6797-4b71-bcef-fde26f9a5731", ResourceVersion:"1122", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 1, 14, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59c6777c8b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"468e9a94a7ac43e0242f7e2ddb8760faa2eb2a721afec0cc5a25d429d2b4e135", Pod:"calico-apiserver-59c6777c8b-8xfxq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali15c4e0f483d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 01:15:33.972894 containerd[1456]: 2026-04-28 01:15:33.914 [INFO][5540] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127" Apr 28 01:15:33.972894 containerd[1456]: 2026-04-28 01:15:33.914 [INFO][5540] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127" iface="eth0" netns="" Apr 28 01:15:33.972894 containerd[1456]: 2026-04-28 01:15:33.914 [INFO][5540] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127" Apr 28 01:15:33.972894 containerd[1456]: 2026-04-28 01:15:33.914 [INFO][5540] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127" Apr 28 01:15:33.972894 containerd[1456]: 2026-04-28 01:15:33.941 [INFO][5548] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127" HandleID="k8s-pod-network.aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127" Workload="localhost-k8s-calico--apiserver--59c6777c8b--8xfxq-eth0" Apr 28 01:15:33.972894 containerd[1456]: 2026-04-28 01:15:33.949 [INFO][5548] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 01:15:33.972894 containerd[1456]: 2026-04-28 01:15:33.949 [INFO][5548] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 01:15:33.972894 containerd[1456]: 2026-04-28 01:15:33.958 [WARNING][5548] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127" HandleID="k8s-pod-network.aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127" Workload="localhost-k8s-calico--apiserver--59c6777c8b--8xfxq-eth0" Apr 28 01:15:33.972894 containerd[1456]: 2026-04-28 01:15:33.958 [INFO][5548] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127" HandleID="k8s-pod-network.aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127" Workload="localhost-k8s-calico--apiserver--59c6777c8b--8xfxq-eth0" Apr 28 01:15:33.972894 containerd[1456]: 2026-04-28 01:15:33.963 [INFO][5548] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 01:15:33.972894 containerd[1456]: 2026-04-28 01:15:33.970 [INFO][5540] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127" Apr 28 01:15:33.972894 containerd[1456]: time="2026-04-28T01:15:33.972780043Z" level=info msg="TearDown network for sandbox \"aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127\" successfully" Apr 28 01:15:33.972894 containerd[1456]: time="2026-04-28T01:15:33.972811053Z" level=info msg="StopPodSandbox for \"aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127\" returns successfully" Apr 28 01:15:33.973509 containerd[1456]: time="2026-04-28T01:15:33.973469863Z" level=info msg="RemovePodSandbox for \"aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127\"" Apr 28 01:15:33.973509 containerd[1456]: time="2026-04-28T01:15:33.973492397Z" level=info msg="Forcibly stopping sandbox \"aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127\"" Apr 28 01:15:34.092803 containerd[1456]: 2026-04-28 01:15:34.019 [WARNING][5583] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59c6777c8b--8xfxq-eth0", GenerateName:"calico-apiserver-59c6777c8b-", Namespace:"calico-system", SelfLink:"", UID:"3581e29b-6797-4b71-bcef-fde26f9a5731", ResourceVersion:"1122", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 1, 14, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59c6777c8b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"468e9a94a7ac43e0242f7e2ddb8760faa2eb2a721afec0cc5a25d429d2b4e135", Pod:"calico-apiserver-59c6777c8b-8xfxq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali15c4e0f483d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 01:15:34.092803 containerd[1456]: 2026-04-28 01:15:34.020 [INFO][5583] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127" Apr 28 01:15:34.092803 containerd[1456]: 2026-04-28 01:15:34.020 [INFO][5583] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127" iface="eth0" netns="" Apr 28 01:15:34.092803 containerd[1456]: 2026-04-28 01:15:34.020 [INFO][5583] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127" Apr 28 01:15:34.092803 containerd[1456]: 2026-04-28 01:15:34.020 [INFO][5583] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127" Apr 28 01:15:34.092803 containerd[1456]: 2026-04-28 01:15:34.041 [INFO][5597] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127" HandleID="k8s-pod-network.aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127" Workload="localhost-k8s-calico--apiserver--59c6777c8b--8xfxq-eth0" Apr 28 01:15:34.092803 containerd[1456]: 2026-04-28 01:15:34.041 [INFO][5597] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 01:15:34.092803 containerd[1456]: 2026-04-28 01:15:34.041 [INFO][5597] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 01:15:34.092803 containerd[1456]: 2026-04-28 01:15:34.059 [WARNING][5597] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127" HandleID="k8s-pod-network.aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127" Workload="localhost-k8s-calico--apiserver--59c6777c8b--8xfxq-eth0" Apr 28 01:15:34.092803 containerd[1456]: 2026-04-28 01:15:34.059 [INFO][5597] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127" HandleID="k8s-pod-network.aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127" Workload="localhost-k8s-calico--apiserver--59c6777c8b--8xfxq-eth0" Apr 28 01:15:34.092803 containerd[1456]: 2026-04-28 01:15:34.070 [INFO][5597] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 01:15:34.092803 containerd[1456]: 2026-04-28 01:15:34.091 [INFO][5583] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127" Apr 28 01:15:34.092803 containerd[1456]: time="2026-04-28T01:15:34.092762382Z" level=info msg="TearDown network for sandbox \"aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127\" successfully" Apr 28 01:15:34.097245 containerd[1456]: time="2026-04-28T01:15:34.097148010Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 28 01:15:34.097245 containerd[1456]: time="2026-04-28T01:15:34.097220225Z" level=info msg="RemovePodSandbox \"aeb4a688f40f86108f7453e3a5315c6e9ddbea828d70eb0aab8b026af70c8127\" returns successfully" Apr 28 01:15:34.105421 containerd[1456]: time="2026-04-28T01:15:34.105376197Z" level=info msg="StopPodSandbox for \"4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c\"" Apr 28 01:15:34.191371 containerd[1456]: 2026-04-28 01:15:34.150 [WARNING][5614] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--94c7fdbdd--4ls56-eth0", GenerateName:"calico-kube-controllers-94c7fdbdd-", Namespace:"calico-system", SelfLink:"", UID:"25037838-9eb1-455f-8e2a-2a6ebd0fe4f3", ResourceVersion:"1168", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 1, 14, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"94c7fdbdd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"12063ac1f5d0912507992a459ff3c577788d841ec7fcebb877a3eabaa52c6999", Pod:"calico-kube-controllers-94c7fdbdd-4ls56", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif321db00996", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 01:15:34.191371 containerd[1456]: 2026-04-28 01:15:34.151 [INFO][5614] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c" Apr 28 01:15:34.191371 containerd[1456]: 2026-04-28 01:15:34.151 [INFO][5614] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c" iface="eth0" netns="" Apr 28 01:15:34.191371 containerd[1456]: 2026-04-28 01:15:34.151 [INFO][5614] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c" Apr 28 01:15:34.191371 containerd[1456]: 2026-04-28 01:15:34.151 [INFO][5614] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c" Apr 28 01:15:34.191371 containerd[1456]: 2026-04-28 01:15:34.178 [INFO][5624] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c" HandleID="k8s-pod-network.4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c" Workload="localhost-k8s-calico--kube--controllers--94c7fdbdd--4ls56-eth0" Apr 28 01:15:34.191371 containerd[1456]: 2026-04-28 01:15:34.179 [INFO][5624] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 01:15:34.191371 containerd[1456]: 2026-04-28 01:15:34.179 [INFO][5624] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 01:15:34.191371 containerd[1456]: 2026-04-28 01:15:34.184 [WARNING][5624] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c" HandleID="k8s-pod-network.4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c" Workload="localhost-k8s-calico--kube--controllers--94c7fdbdd--4ls56-eth0" Apr 28 01:15:34.191371 containerd[1456]: 2026-04-28 01:15:34.184 [INFO][5624] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c" HandleID="k8s-pod-network.4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c" Workload="localhost-k8s-calico--kube--controllers--94c7fdbdd--4ls56-eth0" Apr 28 01:15:34.191371 containerd[1456]: 2026-04-28 01:15:34.187 [INFO][5624] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 01:15:34.191371 containerd[1456]: 2026-04-28 01:15:34.189 [INFO][5614] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c" Apr 28 01:15:34.191845 containerd[1456]: time="2026-04-28T01:15:34.191414924Z" level=info msg="TearDown network for sandbox \"4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c\" successfully" Apr 28 01:15:34.191845 containerd[1456]: time="2026-04-28T01:15:34.191445745Z" level=info msg="StopPodSandbox for \"4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c\" returns successfully" Apr 28 01:15:34.192121 containerd[1456]: time="2026-04-28T01:15:34.192101534Z" level=info msg="RemovePodSandbox for \"4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c\"" Apr 28 01:15:34.192247 containerd[1456]: time="2026-04-28T01:15:34.192213225Z" level=info msg="Forcibly stopping sandbox \"4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c\"" Apr 28 01:15:34.281849 containerd[1456]: 2026-04-28 01:15:34.236 [WARNING][5640] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--94c7fdbdd--4ls56-eth0", GenerateName:"calico-kube-controllers-94c7fdbdd-", Namespace:"calico-system", SelfLink:"", UID:"25037838-9eb1-455f-8e2a-2a6ebd0fe4f3", ResourceVersion:"1168", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 1, 14, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"94c7fdbdd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"12063ac1f5d0912507992a459ff3c577788d841ec7fcebb877a3eabaa52c6999", Pod:"calico-kube-controllers-94c7fdbdd-4ls56", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif321db00996", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 01:15:34.281849 containerd[1456]: 2026-04-28 01:15:34.236 [INFO][5640] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c" Apr 28 01:15:34.281849 containerd[1456]: 2026-04-28 01:15:34.236 [INFO][5640] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c" iface="eth0" netns="" Apr 28 01:15:34.281849 containerd[1456]: 2026-04-28 01:15:34.236 [INFO][5640] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c" Apr 28 01:15:34.281849 containerd[1456]: 2026-04-28 01:15:34.236 [INFO][5640] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c" Apr 28 01:15:34.281849 containerd[1456]: 2026-04-28 01:15:34.268 [INFO][5649] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c" HandleID="k8s-pod-network.4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c" Workload="localhost-k8s-calico--kube--controllers--94c7fdbdd--4ls56-eth0" Apr 28 01:15:34.281849 containerd[1456]: 2026-04-28 01:15:34.268 [INFO][5649] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 01:15:34.281849 containerd[1456]: 2026-04-28 01:15:34.268 [INFO][5649] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 01:15:34.281849 containerd[1456]: 2026-04-28 01:15:34.276 [WARNING][5649] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c" HandleID="k8s-pod-network.4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c" Workload="localhost-k8s-calico--kube--controllers--94c7fdbdd--4ls56-eth0" Apr 28 01:15:34.281849 containerd[1456]: 2026-04-28 01:15:34.276 [INFO][5649] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c" HandleID="k8s-pod-network.4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c" Workload="localhost-k8s-calico--kube--controllers--94c7fdbdd--4ls56-eth0" Apr 28 01:15:34.281849 containerd[1456]: 2026-04-28 01:15:34.278 [INFO][5649] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 01:15:34.281849 containerd[1456]: 2026-04-28 01:15:34.280 [INFO][5640] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c" Apr 28 01:15:34.282297 containerd[1456]: time="2026-04-28T01:15:34.281877299Z" level=info msg="TearDown network for sandbox \"4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c\" successfully" Apr 28 01:15:34.286097 containerd[1456]: time="2026-04-28T01:15:34.286025155Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 28 01:15:34.286097 containerd[1456]: time="2026-04-28T01:15:34.286107559Z" level=info msg="RemovePodSandbox \"4ded7607b6f80e97a1e2bd090c72ab2bf1d445053e40515597369a926c12750c\" returns successfully" Apr 28 01:15:34.287483 containerd[1456]: time="2026-04-28T01:15:34.287379191Z" level=info msg="StopPodSandbox for \"55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9\"" Apr 28 01:15:34.373643 containerd[1456]: 2026-04-28 01:15:34.331 [WARNING][5667] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--srt2r-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"1a56c2f2-44ed-42f1-8584-fb82c3e57985", ResourceVersion:"1128", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 1, 14, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"47c1be337e98c8c1ec49530eded639d854af3f992b93ef4a8b8c36eaddd7187f", Pod:"coredns-66bc5c9577-srt2r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali459d3219c08", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 01:15:34.373643 containerd[1456]: 2026-04-28 01:15:34.331 [INFO][5667] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9" Apr 28 01:15:34.373643 containerd[1456]: 2026-04-28 01:15:34.331 [INFO][5667] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9" iface="eth0" netns="" Apr 28 01:15:34.373643 containerd[1456]: 2026-04-28 01:15:34.331 [INFO][5667] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9" Apr 28 01:15:34.373643 containerd[1456]: 2026-04-28 01:15:34.331 [INFO][5667] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9" Apr 28 01:15:34.373643 containerd[1456]: 2026-04-28 01:15:34.358 [INFO][5676] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9" HandleID="k8s-pod-network.55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9" Workload="localhost-k8s-coredns--66bc5c9577--srt2r-eth0" Apr 28 01:15:34.373643 containerd[1456]: 2026-04-28 01:15:34.358 [INFO][5676] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 01:15:34.373643 containerd[1456]: 2026-04-28 01:15:34.358 [INFO][5676] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 01:15:34.373643 containerd[1456]: 2026-04-28 01:15:34.365 [WARNING][5676] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9" HandleID="k8s-pod-network.55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9" Workload="localhost-k8s-coredns--66bc5c9577--srt2r-eth0" Apr 28 01:15:34.373643 containerd[1456]: 2026-04-28 01:15:34.366 [INFO][5676] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9" HandleID="k8s-pod-network.55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9" Workload="localhost-k8s-coredns--66bc5c9577--srt2r-eth0" Apr 28 01:15:34.373643 containerd[1456]: 2026-04-28 01:15:34.369 [INFO][5676] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 01:15:34.373643 containerd[1456]: 2026-04-28 01:15:34.372 [INFO][5667] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9" Apr 28 01:15:34.373643 containerd[1456]: time="2026-04-28T01:15:34.373597382Z" level=info msg="TearDown network for sandbox \"55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9\" successfully" Apr 28 01:15:34.373643 containerd[1456]: time="2026-04-28T01:15:34.373619836Z" level=info msg="StopPodSandbox for \"55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9\" returns successfully" Apr 28 01:15:34.374623 containerd[1456]: time="2026-04-28T01:15:34.374457987Z" level=info msg="RemovePodSandbox for \"55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9\"" Apr 28 01:15:34.374623 containerd[1456]: time="2026-04-28T01:15:34.374483782Z" level=info msg="Forcibly stopping sandbox \"55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9\"" Apr 28 01:15:34.463021 containerd[1456]: 2026-04-28 01:15:34.423 [WARNING][5695] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--srt2r-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"1a56c2f2-44ed-42f1-8584-fb82c3e57985", ResourceVersion:"1128", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 1, 14, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"47c1be337e98c8c1ec49530eded639d854af3f992b93ef4a8b8c36eaddd7187f", Pod:"coredns-66bc5c9577-srt2r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali459d3219c08", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 01:15:34.463021 containerd[1456]: 2026-04-28 01:15:34.424 [INFO][5695] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9" Apr 28 01:15:34.463021 containerd[1456]: 2026-04-28 01:15:34.424 [INFO][5695] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9" iface="eth0" netns="" Apr 28 01:15:34.463021 containerd[1456]: 2026-04-28 01:15:34.424 [INFO][5695] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9" Apr 28 01:15:34.463021 containerd[1456]: 2026-04-28 01:15:34.424 [INFO][5695] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9" Apr 28 01:15:34.463021 containerd[1456]: 2026-04-28 01:15:34.449 [INFO][5704] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9" HandleID="k8s-pod-network.55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9" Workload="localhost-k8s-coredns--66bc5c9577--srt2r-eth0" Apr 28 01:15:34.463021 containerd[1456]: 2026-04-28 01:15:34.449 [INFO][5704] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 01:15:34.463021 containerd[1456]: 2026-04-28 01:15:34.449 [INFO][5704] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 01:15:34.463021 containerd[1456]: 2026-04-28 01:15:34.457 [WARNING][5704] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9" HandleID="k8s-pod-network.55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9" Workload="localhost-k8s-coredns--66bc5c9577--srt2r-eth0" Apr 28 01:15:34.463021 containerd[1456]: 2026-04-28 01:15:34.457 [INFO][5704] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9" HandleID="k8s-pod-network.55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9" Workload="localhost-k8s-coredns--66bc5c9577--srt2r-eth0" Apr 28 01:15:34.463021 containerd[1456]: 2026-04-28 01:15:34.459 [INFO][5704] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 01:15:34.463021 containerd[1456]: 2026-04-28 01:15:34.460 [INFO][5695] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9" Apr 28 01:15:34.463528 containerd[1456]: time="2026-04-28T01:15:34.463035442Z" level=info msg="TearDown network for sandbox \"55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9\" successfully" Apr 28 01:15:34.475443 containerd[1456]: time="2026-04-28T01:15:34.475368438Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 28 01:15:34.475443 containerd[1456]: time="2026-04-28T01:15:34.475452932Z" level=info msg="RemovePodSandbox \"55aed3bf1d0e6e0ea4708ed31c9d451023cf4ce0db272c7e28f6d525d6e56fe9\" returns successfully" Apr 28 01:15:34.476313 containerd[1456]: time="2026-04-28T01:15:34.476220378Z" level=info msg="StopPodSandbox for \"8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6\"" Apr 28 01:15:34.564422 containerd[1456]: 2026-04-28 01:15:34.522 [WARNING][5722] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59c6777c8b--mc98s-eth0", GenerateName:"calico-apiserver-59c6777c8b-", Namespace:"calico-system", SelfLink:"", UID:"d6b226a2-c761-4a32-8ac6-230b897faf20", ResourceVersion:"1125", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 1, 14, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59c6777c8b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fb4883db6b5dc58af62fa07923e44145449c66544e8294ac80a4af2313a468c5", Pod:"calico-apiserver-59c6777c8b-mc98s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calic25d340f709", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 01:15:34.564422 containerd[1456]: 2026-04-28 01:15:34.522 [INFO][5722] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6" Apr 28 01:15:34.564422 containerd[1456]: 2026-04-28 01:15:34.522 [INFO][5722] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6" iface="eth0" netns="" Apr 28 01:15:34.564422 containerd[1456]: 2026-04-28 01:15:34.522 [INFO][5722] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6" Apr 28 01:15:34.564422 containerd[1456]: 2026-04-28 01:15:34.523 [INFO][5722] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6" Apr 28 01:15:34.564422 containerd[1456]: 2026-04-28 01:15:34.551 [INFO][5730] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6" HandleID="k8s-pod-network.8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6" Workload="localhost-k8s-calico--apiserver--59c6777c8b--mc98s-eth0" Apr 28 01:15:34.564422 containerd[1456]: 2026-04-28 01:15:34.551 [INFO][5730] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 01:15:34.564422 containerd[1456]: 2026-04-28 01:15:34.551 [INFO][5730] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 01:15:34.564422 containerd[1456]: 2026-04-28 01:15:34.558 [WARNING][5730] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6" HandleID="k8s-pod-network.8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6" Workload="localhost-k8s-calico--apiserver--59c6777c8b--mc98s-eth0" Apr 28 01:15:34.564422 containerd[1456]: 2026-04-28 01:15:34.558 [INFO][5730] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6" HandleID="k8s-pod-network.8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6" Workload="localhost-k8s-calico--apiserver--59c6777c8b--mc98s-eth0" Apr 28 01:15:34.564422 containerd[1456]: 2026-04-28 01:15:34.560 [INFO][5730] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 01:15:34.564422 containerd[1456]: 2026-04-28 01:15:34.561 [INFO][5722] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6" Apr 28 01:15:34.564422 containerd[1456]: time="2026-04-28T01:15:34.564387495Z" level=info msg="TearDown network for sandbox \"8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6\" successfully" Apr 28 01:15:34.564422 containerd[1456]: time="2026-04-28T01:15:34.564409543Z" level=info msg="StopPodSandbox for \"8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6\" returns successfully" Apr 28 01:15:34.565219 containerd[1456]: time="2026-04-28T01:15:34.565188225Z" level=info msg="RemovePodSandbox for \"8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6\"" Apr 28 01:15:34.565219 containerd[1456]: time="2026-04-28T01:15:34.565213058Z" level=info msg="Forcibly stopping sandbox \"8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6\"" Apr 28 01:15:34.640488 containerd[1456]: 2026-04-28 01:15:34.604 [WARNING][5747] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59c6777c8b--mc98s-eth0", GenerateName:"calico-apiserver-59c6777c8b-", Namespace:"calico-system", SelfLink:"", UID:"d6b226a2-c761-4a32-8ac6-230b897faf20", ResourceVersion:"1125", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 1, 14, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59c6777c8b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fb4883db6b5dc58af62fa07923e44145449c66544e8294ac80a4af2313a468c5", Pod:"calico-apiserver-59c6777c8b-mc98s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calic25d340f709", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 01:15:34.640488 containerd[1456]: 2026-04-28 01:15:34.604 [INFO][5747] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6" Apr 28 01:15:34.640488 containerd[1456]: 2026-04-28 01:15:34.604 [INFO][5747] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6" iface="eth0" netns="" Apr 28 01:15:34.640488 containerd[1456]: 2026-04-28 01:15:34.604 [INFO][5747] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6" Apr 28 01:15:34.640488 containerd[1456]: 2026-04-28 01:15:34.604 [INFO][5747] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6" Apr 28 01:15:34.640488 containerd[1456]: 2026-04-28 01:15:34.628 [INFO][5755] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6" HandleID="k8s-pod-network.8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6" Workload="localhost-k8s-calico--apiserver--59c6777c8b--mc98s-eth0" Apr 28 01:15:34.640488 containerd[1456]: 2026-04-28 01:15:34.628 [INFO][5755] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 01:15:34.640488 containerd[1456]: 2026-04-28 01:15:34.628 [INFO][5755] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 01:15:34.640488 containerd[1456]: 2026-04-28 01:15:34.635 [WARNING][5755] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6" HandleID="k8s-pod-network.8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6" Workload="localhost-k8s-calico--apiserver--59c6777c8b--mc98s-eth0" Apr 28 01:15:34.640488 containerd[1456]: 2026-04-28 01:15:34.635 [INFO][5755] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6" HandleID="k8s-pod-network.8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6" Workload="localhost-k8s-calico--apiserver--59c6777c8b--mc98s-eth0" Apr 28 01:15:34.640488 containerd[1456]: 2026-04-28 01:15:34.637 [INFO][5755] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 01:15:34.640488 containerd[1456]: 2026-04-28 01:15:34.638 [INFO][5747] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6" Apr 28 01:15:34.640488 containerd[1456]: time="2026-04-28T01:15:34.640425118Z" level=info msg="TearDown network for sandbox \"8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6\" successfully" Apr 28 01:15:34.644462 containerd[1456]: time="2026-04-28T01:15:34.644393855Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 28 01:15:34.644628 containerd[1456]: time="2026-04-28T01:15:34.644477651Z" level=info msg="RemovePodSandbox \"8b8b1804cb3be368633947b45b4abf8921f682c444d641033f588519a6ee9ea6\" returns successfully" Apr 28 01:15:34.645573 containerd[1456]: time="2026-04-28T01:15:34.645526079Z" level=info msg="StopPodSandbox for \"1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551\"" Apr 28 01:15:34.711249 containerd[1456]: 2026-04-28 01:15:34.680 [WARNING][5772] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--jkglw-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"adb579ea-e7d0-4c18-a435-e777162b9b49", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 1, 14, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0c203d0a31637961fdae9cfdf99c18c8a9ef8e4f7cd5bc201d2927fa400ca179", Pod:"coredns-66bc5c9577-jkglw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c24f376b9e", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 01:15:34.711249 containerd[1456]: 2026-04-28 01:15:34.680 [INFO][5772] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551" Apr 28 01:15:34.711249 containerd[1456]: 2026-04-28 01:15:34.680 [INFO][5772] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551" iface="eth0" netns="" Apr 28 01:15:34.711249 containerd[1456]: 2026-04-28 01:15:34.680 [INFO][5772] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551" Apr 28 01:15:34.711249 containerd[1456]: 2026-04-28 01:15:34.680 [INFO][5772] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551" Apr 28 01:15:34.711249 containerd[1456]: 2026-04-28 01:15:34.700 [INFO][5781] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551" HandleID="k8s-pod-network.1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551" Workload="localhost-k8s-coredns--66bc5c9577--jkglw-eth0" Apr 28 01:15:34.711249 containerd[1456]: 2026-04-28 01:15:34.700 [INFO][5781] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 01:15:34.711249 containerd[1456]: 2026-04-28 01:15:34.700 [INFO][5781] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 01:15:34.711249 containerd[1456]: 2026-04-28 01:15:34.706 [WARNING][5781] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551" HandleID="k8s-pod-network.1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551" Workload="localhost-k8s-coredns--66bc5c9577--jkglw-eth0" Apr 28 01:15:34.711249 containerd[1456]: 2026-04-28 01:15:34.706 [INFO][5781] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551" HandleID="k8s-pod-network.1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551" Workload="localhost-k8s-coredns--66bc5c9577--jkglw-eth0" Apr 28 01:15:34.711249 containerd[1456]: 2026-04-28 01:15:34.708 [INFO][5781] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 01:15:34.711249 containerd[1456]: 2026-04-28 01:15:34.709 [INFO][5772] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551" Apr 28 01:15:34.711796 containerd[1456]: time="2026-04-28T01:15:34.711287163Z" level=info msg="TearDown network for sandbox \"1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551\" successfully" Apr 28 01:15:34.711796 containerd[1456]: time="2026-04-28T01:15:34.711308063Z" level=info msg="StopPodSandbox for \"1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551\" returns successfully" Apr 28 01:15:34.711856 containerd[1456]: time="2026-04-28T01:15:34.711824836Z" level=info msg="RemovePodSandbox for \"1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551\"" Apr 28 01:15:34.711876 containerd[1456]: time="2026-04-28T01:15:34.711861548Z" level=info msg="Forcibly stopping sandbox \"1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551\"" Apr 28 01:15:34.816752 containerd[1456]: 2026-04-28 01:15:34.750 [WARNING][5798] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--jkglw-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"adb579ea-e7d0-4c18-a435-e777162b9b49", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 1, 14, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0c203d0a31637961fdae9cfdf99c18c8a9ef8e4f7cd5bc201d2927fa400ca179", Pod:"coredns-66bc5c9577-jkglw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c24f376b9e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 01:15:34.816752 containerd[1456]: 2026-04-28 01:15:34.751 [INFO][5798] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551" Apr 28 01:15:34.816752 containerd[1456]: 2026-04-28 01:15:34.751 [INFO][5798] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551" iface="eth0" netns="" Apr 28 01:15:34.816752 containerd[1456]: 2026-04-28 01:15:34.751 [INFO][5798] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551" Apr 28 01:15:34.816752 containerd[1456]: 2026-04-28 01:15:34.751 [INFO][5798] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551" Apr 28 01:15:34.816752 containerd[1456]: 2026-04-28 01:15:34.777 [INFO][5806] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551" HandleID="k8s-pod-network.1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551" Workload="localhost-k8s-coredns--66bc5c9577--jkglw-eth0" Apr 28 01:15:34.816752 containerd[1456]: 2026-04-28 01:15:34.778 [INFO][5806] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 01:15:34.816752 containerd[1456]: 2026-04-28 01:15:34.778 [INFO][5806] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 01:15:34.816752 containerd[1456]: 2026-04-28 01:15:34.809 [WARNING][5806] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551" HandleID="k8s-pod-network.1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551" Workload="localhost-k8s-coredns--66bc5c9577--jkglw-eth0" Apr 28 01:15:34.816752 containerd[1456]: 2026-04-28 01:15:34.809 [INFO][5806] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551" HandleID="k8s-pod-network.1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551" Workload="localhost-k8s-coredns--66bc5c9577--jkglw-eth0" Apr 28 01:15:34.816752 containerd[1456]: 2026-04-28 01:15:34.811 [INFO][5806] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 01:15:34.816752 containerd[1456]: 2026-04-28 01:15:34.813 [INFO][5798] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551" Apr 28 01:15:34.817296 containerd[1456]: time="2026-04-28T01:15:34.816804319Z" level=info msg="TearDown network for sandbox \"1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551\" successfully" Apr 28 01:15:34.823944 containerd[1456]: time="2026-04-28T01:15:34.823883609Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 28 01:15:34.824067 containerd[1456]: time="2026-04-28T01:15:34.823986909Z" level=info msg="RemovePodSandbox \"1b5e5ac5cf20259e031d9691f196db006eb613be1608fb4221becec133ac1551\" returns successfully" Apr 28 01:15:34.824778 containerd[1456]: time="2026-04-28T01:15:34.824720757Z" level=info msg="StopPodSandbox for \"f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91\"" Apr 28 01:15:34.909802 containerd[1456]: 2026-04-28 01:15:34.868 [WARNING][5824] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91" WorkloadEndpoint="localhost-k8s-whisker--569f54558--r54gv-eth0" Apr 28 01:15:34.909802 containerd[1456]: 2026-04-28 01:15:34.868 [INFO][5824] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91" Apr 28 01:15:34.909802 containerd[1456]: 2026-04-28 01:15:34.868 [INFO][5824] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91" iface="eth0" netns="" Apr 28 01:15:34.909802 containerd[1456]: 2026-04-28 01:15:34.868 [INFO][5824] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91" Apr 28 01:15:34.909802 containerd[1456]: 2026-04-28 01:15:34.868 [INFO][5824] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91" Apr 28 01:15:34.909802 containerd[1456]: 2026-04-28 01:15:34.897 [INFO][5832] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91" HandleID="k8s-pod-network.f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91" Workload="localhost-k8s-whisker--569f54558--r54gv-eth0" Apr 28 01:15:34.909802 containerd[1456]: 2026-04-28 01:15:34.897 [INFO][5832] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 01:15:34.909802 containerd[1456]: 2026-04-28 01:15:34.897 [INFO][5832] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 01:15:34.909802 containerd[1456]: 2026-04-28 01:15:34.904 [WARNING][5832] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91" HandleID="k8s-pod-network.f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91" Workload="localhost-k8s-whisker--569f54558--r54gv-eth0" Apr 28 01:15:34.909802 containerd[1456]: 2026-04-28 01:15:34.904 [INFO][5832] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91" HandleID="k8s-pod-network.f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91" Workload="localhost-k8s-whisker--569f54558--r54gv-eth0" Apr 28 01:15:34.909802 containerd[1456]: 2026-04-28 01:15:34.906 [INFO][5832] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 01:15:34.909802 containerd[1456]: 2026-04-28 01:15:34.908 [INFO][5824] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91" Apr 28 01:15:34.909802 containerd[1456]: time="2026-04-28T01:15:34.909753753Z" level=info msg="TearDown network for sandbox \"f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91\" successfully" Apr 28 01:15:34.909802 containerd[1456]: time="2026-04-28T01:15:34.909778457Z" level=info msg="StopPodSandbox for \"f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91\" returns successfully" Apr 28 01:15:34.910852 containerd[1456]: time="2026-04-28T01:15:34.910585397Z" level=info msg="RemovePodSandbox for \"f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91\"" Apr 28 01:15:34.910852 containerd[1456]: time="2026-04-28T01:15:34.910608646Z" level=info msg="Forcibly stopping sandbox \"f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91\"" Apr 28 01:15:34.994890 containerd[1456]: 2026-04-28 01:15:34.950 [WARNING][5849] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91" 
WorkloadEndpoint="localhost-k8s-whisker--569f54558--r54gv-eth0" Apr 28 01:15:34.994890 containerd[1456]: 2026-04-28 01:15:34.950 [INFO][5849] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91" Apr 28 01:15:34.994890 containerd[1456]: 2026-04-28 01:15:34.950 [INFO][5849] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91" iface="eth0" netns="" Apr 28 01:15:34.994890 containerd[1456]: 2026-04-28 01:15:34.950 [INFO][5849] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91" Apr 28 01:15:34.994890 containerd[1456]: 2026-04-28 01:15:34.950 [INFO][5849] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91" Apr 28 01:15:34.994890 containerd[1456]: 2026-04-28 01:15:34.979 [INFO][5858] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91" HandleID="k8s-pod-network.f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91" Workload="localhost-k8s-whisker--569f54558--r54gv-eth0" Apr 28 01:15:34.994890 containerd[1456]: 2026-04-28 01:15:34.980 [INFO][5858] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 01:15:34.994890 containerd[1456]: 2026-04-28 01:15:34.980 [INFO][5858] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 01:15:34.994890 containerd[1456]: 2026-04-28 01:15:34.987 [WARNING][5858] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91" HandleID="k8s-pod-network.f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91" Workload="localhost-k8s-whisker--569f54558--r54gv-eth0" Apr 28 01:15:34.994890 containerd[1456]: 2026-04-28 01:15:34.988 [INFO][5858] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91" HandleID="k8s-pod-network.f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91" Workload="localhost-k8s-whisker--569f54558--r54gv-eth0" Apr 28 01:15:34.994890 containerd[1456]: 2026-04-28 01:15:34.991 [INFO][5858] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 01:15:34.994890 containerd[1456]: 2026-04-28 01:15:34.993 [INFO][5849] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91" Apr 28 01:15:34.995238 containerd[1456]: time="2026-04-28T01:15:34.994908324Z" level=info msg="TearDown network for sandbox \"f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91\" successfully" Apr 28 01:15:34.998717 containerd[1456]: time="2026-04-28T01:15:34.998653538Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 28 01:15:34.998717 containerd[1456]: time="2026-04-28T01:15:34.998721422Z" level=info msg="RemovePodSandbox \"f1b614515eb80a635e30633e8feba558c3ed2b080a2416ae21bf47d7e6880c91\" returns successfully" Apr 28 01:15:37.013800 systemd[1]: Started sshd@11-10.0.0.153:22-10.0.0.1:57944.service - OpenSSH per-connection server daemon (10.0.0.1:57944). 
Apr 28 01:15:37.097416 sshd[5889]: Accepted publickey for core from 10.0.0.1 port 57944 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps Apr 28 01:15:37.098858 sshd[5889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:15:37.103798 systemd-logind[1435]: New session 12 of user core. Apr 28 01:15:37.110242 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 28 01:15:37.347768 sshd[5889]: pam_unix(sshd:session): session closed for user core Apr 28 01:15:37.350709 systemd[1]: sshd@11-10.0.0.153:22-10.0.0.1:57944.service: Deactivated successfully. Apr 28 01:15:37.352274 systemd[1]: session-12.scope: Deactivated successfully. Apr 28 01:15:37.352998 systemd-logind[1435]: Session 12 logged out. Waiting for processes to exit. Apr 28 01:15:37.354207 systemd-logind[1435]: Removed session 12. Apr 28 01:15:42.407616 systemd[1]: Started sshd@12-10.0.0.153:22-10.0.0.1:57946.service - OpenSSH per-connection server daemon (10.0.0.1:57946). Apr 28 01:15:42.448051 sshd[5933]: Accepted publickey for core from 10.0.0.1 port 57946 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps Apr 28 01:15:42.449611 sshd[5933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:15:42.455545 systemd-logind[1435]: New session 13 of user core. Apr 28 01:15:42.467549 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 28 01:15:42.640784 sshd[5933]: pam_unix(sshd:session): session closed for user core Apr 28 01:15:42.647017 systemd[1]: sshd@12-10.0.0.153:22-10.0.0.1:57946.service: Deactivated successfully. Apr 28 01:15:42.648275 systemd[1]: session-13.scope: Deactivated successfully. Apr 28 01:15:42.649386 systemd-logind[1435]: Session 13 logged out. Waiting for processes to exit. Apr 28 01:15:42.657245 systemd[1]: Started sshd@13-10.0.0.153:22-10.0.0.1:57962.service - OpenSSH per-connection server daemon (10.0.0.1:57962). 
Apr 28 01:15:42.658050 systemd-logind[1435]: Removed session 13. Apr 28 01:15:42.686455 sshd[5953]: Accepted publickey for core from 10.0.0.1 port 57962 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps Apr 28 01:15:42.687826 sshd[5953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:15:42.692319 systemd-logind[1435]: New session 14 of user core. Apr 28 01:15:42.702085 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 28 01:15:42.914424 sshd[5953]: pam_unix(sshd:session): session closed for user core Apr 28 01:15:42.929818 systemd[1]: sshd@13-10.0.0.153:22-10.0.0.1:57962.service: Deactivated successfully. Apr 28 01:15:42.933763 systemd[1]: session-14.scope: Deactivated successfully. Apr 28 01:15:42.936788 systemd-logind[1435]: Session 14 logged out. Waiting for processes to exit. Apr 28 01:15:42.946567 systemd[1]: Started sshd@14-10.0.0.153:22-10.0.0.1:57972.service - OpenSSH per-connection server daemon (10.0.0.1:57972). Apr 28 01:15:42.947480 systemd-logind[1435]: Removed session 14. Apr 28 01:15:42.973109 sshd[5965]: Accepted publickey for core from 10.0.0.1 port 57972 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps Apr 28 01:15:42.974413 sshd[5965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:15:42.980132 systemd-logind[1435]: New session 15 of user core. Apr 28 01:15:42.990433 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 28 01:15:43.098003 sshd[5965]: pam_unix(sshd:session): session closed for user core Apr 28 01:15:43.100875 systemd[1]: sshd@14-10.0.0.153:22-10.0.0.1:57972.service: Deactivated successfully. Apr 28 01:15:43.103036 systemd[1]: session-15.scope: Deactivated successfully. Apr 28 01:15:43.103639 systemd-logind[1435]: Session 15 logged out. Waiting for processes to exit. Apr 28 01:15:43.104431 systemd-logind[1435]: Removed session 15. 
Apr 28 01:15:45.585738 kubelet[2502]: E0428 01:15:45.585644 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:15:47.802104 kubelet[2502]: I0428 01:15:47.802018 2502 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 28 01:15:48.108900 systemd[1]: Started sshd@15-10.0.0.153:22-10.0.0.1:55966.service - OpenSSH per-connection server daemon (10.0.0.1:55966). Apr 28 01:15:48.142880 sshd[5984]: Accepted publickey for core from 10.0.0.1 port 55966 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps Apr 28 01:15:48.144812 sshd[5984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:15:48.149372 systemd-logind[1435]: New session 16 of user core. Apr 28 01:15:48.157343 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 28 01:15:48.275861 sshd[5984]: pam_unix(sshd:session): session closed for user core Apr 28 01:15:48.278929 systemd[1]: sshd@15-10.0.0.153:22-10.0.0.1:55966.service: Deactivated successfully. Apr 28 01:15:48.282924 systemd[1]: session-16.scope: Deactivated successfully. Apr 28 01:15:48.284545 systemd-logind[1435]: Session 16 logged out. Waiting for processes to exit. Apr 28 01:15:48.285406 systemd-logind[1435]: Removed session 16. Apr 28 01:15:49.585627 kubelet[2502]: E0428 01:15:49.585487 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:15:53.287301 systemd[1]: Started sshd@16-10.0.0.153:22-10.0.0.1:55968.service - OpenSSH per-connection server daemon (10.0.0.1:55968). 
Apr 28 01:15:53.330671 sshd[6006]: Accepted publickey for core from 10.0.0.1 port 55968 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps Apr 28 01:15:53.332503 sshd[6006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:15:53.336495 systemd-logind[1435]: New session 17 of user core. Apr 28 01:15:53.345195 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 28 01:15:53.503531 sshd[6006]: pam_unix(sshd:session): session closed for user core Apr 28 01:15:53.506581 systemd[1]: sshd@16-10.0.0.153:22-10.0.0.1:55968.service: Deactivated successfully. Apr 28 01:15:53.508036 systemd[1]: session-17.scope: Deactivated successfully. Apr 28 01:15:53.508637 systemd-logind[1435]: Session 17 logged out. Waiting for processes to exit. Apr 28 01:15:53.509475 systemd-logind[1435]: Removed session 17. Apr 28 01:15:53.584695 kubelet[2502]: E0428 01:15:53.584279 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:15:55.584521 kubelet[2502]: E0428 01:15:55.584471 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:15:58.518410 systemd[1]: Started sshd@17-10.0.0.153:22-10.0.0.1:58988.service - OpenSSH per-connection server daemon (10.0.0.1:58988). Apr 28 01:15:58.552359 sshd[6020]: Accepted publickey for core from 10.0.0.1 port 58988 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps Apr 28 01:15:58.553793 sshd[6020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:15:58.557365 systemd-logind[1435]: New session 18 of user core. Apr 28 01:15:58.562164 systemd[1]: Started session-18.scope - Session 18 of User core. 
Apr 28 01:15:58.680863 sshd[6020]: pam_unix(sshd:session): session closed for user core Apr 28 01:15:58.684166 systemd[1]: sshd@17-10.0.0.153:22-10.0.0.1:58988.service: Deactivated successfully. Apr 28 01:15:58.685614 systemd[1]: session-18.scope: Deactivated successfully. Apr 28 01:15:58.686126 systemd-logind[1435]: Session 18 logged out. Waiting for processes to exit. Apr 28 01:15:58.686925 systemd-logind[1435]: Removed session 18. Apr 28 01:15:58.936996 systemd[1]: run-containerd-runc-k8s.io-2acfb40123755e3f8f2adf35ff2bc9f9f2ad79373bb46262bf8c889db3da7a14-runc.H7sCA8.mount: Deactivated successfully. Apr 28 01:16:02.427259 kernel: hrtimer: interrupt took 22413815 ns Apr 28 01:16:03.279924 kubelet[2502]: I0428 01:16:03.279869 2502 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 28 01:16:03.691579 systemd[1]: Started sshd@18-10.0.0.153:22-10.0.0.1:58990.service - OpenSSH per-connection server daemon (10.0.0.1:58990). Apr 28 01:16:03.729625 sshd[6076]: Accepted publickey for core from 10.0.0.1 port 58990 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps Apr 28 01:16:03.730933 sshd[6076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:16:03.735157 systemd-logind[1435]: New session 19 of user core. Apr 28 01:16:03.744196 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 28 01:16:03.904445 sshd[6076]: pam_unix(sshd:session): session closed for user core Apr 28 01:16:03.907930 systemd[1]: sshd@18-10.0.0.153:22-10.0.0.1:58990.service: Deactivated successfully. Apr 28 01:16:03.909753 systemd[1]: session-19.scope: Deactivated successfully. Apr 28 01:16:03.910471 systemd-logind[1435]: Session 19 logged out. Waiting for processes to exit. Apr 28 01:16:03.911306 systemd-logind[1435]: Removed session 19. Apr 28 01:16:08.916661 systemd[1]: Started sshd@19-10.0.0.153:22-10.0.0.1:40814.service - OpenSSH per-connection server daemon (10.0.0.1:40814). 
Apr 28 01:16:08.985206 sshd[6116]: Accepted publickey for core from 10.0.0.1 port 40814 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps Apr 28 01:16:08.986738 sshd[6116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:16:08.991074 systemd-logind[1435]: New session 20 of user core. Apr 28 01:16:09.003330 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 28 01:16:09.120711 sshd[6116]: pam_unix(sshd:session): session closed for user core Apr 28 01:16:09.129174 systemd[1]: sshd@19-10.0.0.153:22-10.0.0.1:40814.service: Deactivated successfully. Apr 28 01:16:09.130602 systemd[1]: session-20.scope: Deactivated successfully. Apr 28 01:16:09.131677 systemd-logind[1435]: Session 20 logged out. Waiting for processes to exit. Apr 28 01:16:09.136226 systemd[1]: Started sshd@20-10.0.0.153:22-10.0.0.1:40830.service - OpenSSH per-connection server daemon (10.0.0.1:40830). Apr 28 01:16:09.137934 systemd-logind[1435]: Removed session 20. Apr 28 01:16:09.178940 sshd[6131]: Accepted publickey for core from 10.0.0.1 port 40830 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps Apr 28 01:16:09.180218 sshd[6131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:16:09.184854 systemd-logind[1435]: New session 21 of user core. Apr 28 01:16:09.191090 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 28 01:16:09.501241 sshd[6131]: pam_unix(sshd:session): session closed for user core Apr 28 01:16:09.508556 systemd[1]: sshd@20-10.0.0.153:22-10.0.0.1:40830.service: Deactivated successfully. Apr 28 01:16:09.510153 systemd[1]: session-21.scope: Deactivated successfully. Apr 28 01:16:09.510815 systemd-logind[1435]: Session 21 logged out. Waiting for processes to exit. Apr 28 01:16:09.518240 systemd[1]: Started sshd@21-10.0.0.153:22-10.0.0.1:40838.service - OpenSSH per-connection server daemon (10.0.0.1:40838). 
Apr 28 01:16:09.519169 systemd-logind[1435]: Removed session 21. Apr 28 01:16:09.580626 sshd[6143]: Accepted publickey for core from 10.0.0.1 port 40838 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps Apr 28 01:16:09.582614 sshd[6143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:16:09.587330 systemd-logind[1435]: New session 22 of user core. Apr 28 01:16:09.593105 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 28 01:16:10.253494 sshd[6143]: pam_unix(sshd:session): session closed for user core Apr 28 01:16:10.263262 systemd[1]: sshd@21-10.0.0.153:22-10.0.0.1:40838.service: Deactivated successfully. Apr 28 01:16:10.266760 systemd[1]: session-22.scope: Deactivated successfully. Apr 28 01:16:10.270251 systemd-logind[1435]: Session 22 logged out. Waiting for processes to exit. Apr 28 01:16:10.281811 systemd[1]: Started sshd@22-10.0.0.153:22-10.0.0.1:40846.service - OpenSSH per-connection server daemon (10.0.0.1:40846). Apr 28 01:16:10.288149 systemd-logind[1435]: Removed session 22. Apr 28 01:16:10.384926 sshd[6161]: Accepted publickey for core from 10.0.0.1 port 40846 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps Apr 28 01:16:10.388014 sshd[6161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:16:10.394408 systemd-logind[1435]: New session 23 of user core. Apr 28 01:16:10.409382 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 28 01:16:10.761669 sshd[6161]: pam_unix(sshd:session): session closed for user core Apr 28 01:16:10.772583 systemd[1]: sshd@22-10.0.0.153:22-10.0.0.1:40846.service: Deactivated successfully. Apr 28 01:16:10.774844 systemd[1]: session-23.scope: Deactivated successfully. Apr 28 01:16:10.777295 systemd-logind[1435]: Session 23 logged out. Waiting for processes to exit. 
Apr 28 01:16:10.785388 systemd[1]: Started sshd@23-10.0.0.153:22-10.0.0.1:40856.service - OpenSSH per-connection server daemon (10.0.0.1:40856). Apr 28 01:16:10.788180 systemd-logind[1435]: Removed session 23. Apr 28 01:16:10.815514 sshd[6173]: Accepted publickey for core from 10.0.0.1 port 40856 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps Apr 28 01:16:10.816924 sshd[6173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:16:10.822175 systemd-logind[1435]: New session 24 of user core. Apr 28 01:16:10.830446 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 28 01:16:10.996615 sshd[6173]: pam_unix(sshd:session): session closed for user core Apr 28 01:16:11.000823 systemd[1]: sshd@23-10.0.0.153:22-10.0.0.1:40856.service: Deactivated successfully. Apr 28 01:16:11.002765 systemd[1]: session-24.scope: Deactivated successfully. Apr 28 01:16:11.003442 systemd-logind[1435]: Session 24 logged out. Waiting for processes to exit. Apr 28 01:16:11.004593 systemd-logind[1435]: Removed session 24. Apr 28 01:16:16.010474 systemd[1]: Started sshd@24-10.0.0.153:22-10.0.0.1:37192.service - OpenSSH per-connection server daemon (10.0.0.1:37192). Apr 28 01:16:16.061469 sshd[6213]: Accepted publickey for core from 10.0.0.1 port 37192 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps Apr 28 01:16:16.064633 sshd[6213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:16:16.111897 systemd-logind[1435]: New session 25 of user core. Apr 28 01:16:16.121440 systemd[1]: Started session-25.scope - Session 25 of User core. Apr 28 01:16:16.301561 sshd[6213]: pam_unix(sshd:session): session closed for user core Apr 28 01:16:16.304508 systemd[1]: sshd@24-10.0.0.153:22-10.0.0.1:37192.service: Deactivated successfully. Apr 28 01:16:16.305880 systemd[1]: session-25.scope: Deactivated successfully. Apr 28 01:16:16.306773 systemd-logind[1435]: Session 25 logged out. 
Waiting for processes to exit. Apr 28 01:16:16.307734 systemd-logind[1435]: Removed session 25. Apr 28 01:16:17.588343 kubelet[2502]: E0428 01:16:17.588298 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 01:16:21.315755 systemd[1]: Started sshd@25-10.0.0.153:22-10.0.0.1:37200.service - OpenSSH per-connection server daemon (10.0.0.1:37200). Apr 28 01:16:21.363913 sshd[6251]: Accepted publickey for core from 10.0.0.1 port 37200 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps Apr 28 01:16:21.365681 sshd[6251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:16:21.373030 systemd-logind[1435]: New session 26 of user core. Apr 28 01:16:21.381639 systemd[1]: Started session-26.scope - Session 26 of User core. Apr 28 01:16:21.714769 sshd[6251]: pam_unix(sshd:session): session closed for user core Apr 28 01:16:21.717668 systemd[1]: sshd@25-10.0.0.153:22-10.0.0.1:37200.service: Deactivated successfully. Apr 28 01:16:21.719092 systemd[1]: session-26.scope: Deactivated successfully. Apr 28 01:16:21.719584 systemd-logind[1435]: Session 26 logged out. Waiting for processes to exit. Apr 28 01:16:21.720479 systemd-logind[1435]: Removed session 26. Apr 28 01:16:26.734482 systemd[1]: Started sshd@26-10.0.0.153:22-10.0.0.1:59504.service - OpenSSH per-connection server daemon (10.0.0.1:59504). Apr 28 01:16:26.801135 sshd[6266]: Accepted publickey for core from 10.0.0.1 port 59504 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps Apr 28 01:16:26.803411 sshd[6266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 01:16:26.810367 systemd-logind[1435]: New session 27 of user core. Apr 28 01:16:26.817351 systemd[1]: Started session-27.scope - Session 27 of User core. 
Apr 28 01:16:27.010058 sshd[6266]: pam_unix(sshd:session): session closed for user core Apr 28 01:16:27.013611 systemd[1]: sshd@26-10.0.0.153:22-10.0.0.1:59504.service: Deactivated successfully. Apr 28 01:16:27.015650 systemd[1]: session-27.scope: Deactivated successfully. Apr 28 01:16:27.017748 systemd-logind[1435]: Session 27 logged out. Waiting for processes to exit. Apr 28 01:16:27.018865 systemd-logind[1435]: Removed session 27.