Apr 16 01:52:33.981732 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Apr 15 22:45:03 -00 2026 Apr 16 01:52:33.981751 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=27643dbc59f658eac8bb37add3a8b4ed010a3c31134319f01549aa493a1f070c Apr 16 01:52:33.981761 kernel: BIOS-provided physical RAM map: Apr 16 01:52:33.981767 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Apr 16 01:52:33.981772 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Apr 16 01:52:33.981777 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Apr 16 01:52:33.981783 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Apr 16 01:52:33.981788 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Apr 16 01:52:33.981793 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Apr 16 01:52:33.981798 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Apr 16 01:52:33.981805 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Apr 16 01:52:33.981810 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Apr 16 01:52:33.981815 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Apr 16 01:52:33.981820 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Apr 16 01:52:33.981826 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Apr 16 01:52:33.981832 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Apr 16 01:52:33.981839 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Apr 
16 01:52:33.981844 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Apr 16 01:52:33.981850 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Apr 16 01:52:33.981855 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Apr 16 01:52:33.981860 kernel: NX (Execute Disable) protection: active Apr 16 01:52:33.981866 kernel: APIC: Static calls initialized Apr 16 01:52:33.981871 kernel: efi: EFI v2.7 by EDK II Apr 16 01:52:33.981877 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198 Apr 16 01:52:33.981882 kernel: SMBIOS 2.8 present. Apr 16 01:52:33.981888 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Apr 16 01:52:33.981893 kernel: Hypervisor detected: KVM Apr 16 01:52:33.981900 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 16 01:52:33.981905 kernel: kvm-clock: using sched offset of 22791997828 cycles Apr 16 01:52:33.981912 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 16 01:52:33.981917 kernel: tsc: Detected 2793.438 MHz processor Apr 16 01:52:33.981924 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 16 01:52:33.981930 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 16 01:52:33.981936 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x10000000000 Apr 16 01:52:33.981941 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Apr 16 01:52:33.981947 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 16 01:52:33.981955 kernel: Using GB pages for direct mapping Apr 16 01:52:33.981960 kernel: Secure boot disabled Apr 16 01:52:33.981966 kernel: ACPI: Early table checksum verification disabled Apr 16 01:52:33.981972 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Apr 16 01:52:33.981980 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Apr 16 01:52:33.981986 
kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Apr 16 01:52:33.981992 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 16 01:52:33.982000 kernel: ACPI: FACS 0x000000009CBDD000 000040 Apr 16 01:52:33.982005 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 16 01:52:33.982011 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 16 01:52:33.982016 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 16 01:52:33.982021 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 16 01:52:33.982026 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Apr 16 01:52:33.982031 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Apr 16 01:52:33.982037 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Apr 16 01:52:33.982042 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Apr 16 01:52:33.982046 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Apr 16 01:52:33.982051 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Apr 16 01:52:33.982056 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Apr 16 01:52:33.982061 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Apr 16 01:52:33.982066 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Apr 16 01:52:33.982071 kernel: No NUMA configuration found Apr 16 01:52:33.982075 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Apr 16 01:52:33.982082 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Apr 16 01:52:33.982087 kernel: Zone ranges: Apr 16 01:52:33.982092 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 16 01:52:33.982096 kernel: DMA32 [mem 
0x0000000001000000-0x000000009cf3ffff] Apr 16 01:52:33.982101 kernel: Normal empty Apr 16 01:52:33.982106 kernel: Movable zone start for each node Apr 16 01:52:33.982111 kernel: Early memory node ranges Apr 16 01:52:33.982116 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Apr 16 01:52:33.982121 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Apr 16 01:52:33.982127 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Apr 16 01:52:33.982131 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Apr 16 01:52:33.982136 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Apr 16 01:52:33.982141 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Apr 16 01:52:33.982146 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Apr 16 01:52:33.982151 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 16 01:52:33.982155 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Apr 16 01:52:33.982160 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Apr 16 01:52:33.982165 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 16 01:52:33.982170 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Apr 16 01:52:33.982177 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Apr 16 01:52:33.982181 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Apr 16 01:52:33.982212 kernel: ACPI: PM-Timer IO Port: 0x608 Apr 16 01:52:33.982217 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 16 01:52:33.982222 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Apr 16 01:52:33.982227 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Apr 16 01:52:33.982232 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 16 01:52:33.982237 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 16 01:52:33.982242 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 16 
01:52:33.982249 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 16 01:52:33.982254 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 16 01:52:33.982258 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 16 01:52:33.982263 kernel: TSC deadline timer available Apr 16 01:52:33.982268 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Apr 16 01:52:33.982273 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Apr 16 01:52:33.982278 kernel: kvm-guest: KVM setup pv remote TLB flush Apr 16 01:52:33.982283 kernel: kvm-guest: setup PV sched yield Apr 16 01:52:33.982288 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Apr 16 01:52:33.982294 kernel: Booting paravirtualized kernel on KVM Apr 16 01:52:33.982299 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 16 01:52:33.982305 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Apr 16 01:52:33.982310 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Apr 16 01:52:33.982315 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Apr 16 01:52:33.982320 kernel: pcpu-alloc: [0] 0 1 2 3 Apr 16 01:52:33.982324 kernel: kvm-guest: PV spinlocks enabled Apr 16 01:52:33.982329 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 16 01:52:33.982335 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=27643dbc59f658eac8bb37add3a8b4ed010a3c31134319f01549aa493a1f070c Apr 16 01:52:33.982342 kernel: random: crng init done Apr 16 01:52:33.982347 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 16 01:52:33.982352 kernel: Inode-cache hash table 
entries: 262144 (order: 9, 2097152 bytes, linear) Apr 16 01:52:33.982357 kernel: Fallback order for Node 0: 0 Apr 16 01:52:33.982362 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Apr 16 01:52:33.982367 kernel: Policy zone: DMA32 Apr 16 01:52:33.982372 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 16 01:52:33.982377 kernel: Memory: 2399660K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 167136K reserved, 0K cma-reserved) Apr 16 01:52:33.982383 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Apr 16 01:52:33.982388 kernel: ftrace: allocating 37996 entries in 149 pages Apr 16 01:52:33.982393 kernel: ftrace: allocated 149 pages with 4 groups Apr 16 01:52:33.982398 kernel: Dynamic Preempt: voluntary Apr 16 01:52:33.982403 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 16 01:52:33.982414 kernel: rcu: RCU event tracing is enabled. Apr 16 01:52:33.982421 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Apr 16 01:52:33.982427 kernel: Trampoline variant of Tasks RCU enabled. Apr 16 01:52:33.982432 kernel: Rude variant of Tasks RCU enabled. Apr 16 01:52:33.982437 kernel: Tracing variant of Tasks RCU enabled. Apr 16 01:52:33.982443 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 16 01:52:33.982448 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Apr 16 01:52:33.982455 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Apr 16 01:52:33.982460 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Apr 16 01:52:33.982466 kernel: Console: colour dummy device 80x25 Apr 16 01:52:33.982471 kernel: printk: console [ttyS0] enabled Apr 16 01:52:33.982477 kernel: ACPI: Core revision 20230628 Apr 16 01:52:33.982484 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Apr 16 01:52:33.982490 kernel: APIC: Switch to symmetric I/O mode setup Apr 16 01:52:33.982546 kernel: x2apic enabled Apr 16 01:52:33.982551 kernel: APIC: Switched APIC routing to: physical x2apic Apr 16 01:52:33.982557 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Apr 16 01:52:33.982562 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Apr 16 01:52:33.982568 kernel: kvm-guest: setup PV IPIs Apr 16 01:52:33.982574 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Apr 16 01:52:33.982579 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns Apr 16 01:52:33.982587 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438) Apr 16 01:52:33.982592 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Apr 16 01:52:33.982598 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Apr 16 01:52:33.982603 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Apr 16 01:52:33.982609 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 16 01:52:33.982614 kernel: Spectre V2 : Mitigation: Retpolines Apr 16 01:52:33.982620 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Apr 16 01:52:33.982625 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Apr 16 01:52:33.982632 kernel: RETBleed: Vulnerable Apr 16 01:52:33.982637 kernel: Speculative Store Bypass: Vulnerable Apr 16 01:52:33.982643 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 16 01:52:33.982648 kernel: GDS: Unknown: Dependent on hypervisor status Apr 16 01:52:33.982654 kernel: active return thunk: its_return_thunk Apr 16 01:52:33.982659 kernel: ITS: Mitigation: Aligned branch/return thunks Apr 16 01:52:33.982665 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 16 01:52:33.982670 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 16 01:52:33.982676 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 16 01:52:33.982683 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Apr 16 01:52:33.982688 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Apr 16 01:52:33.982694 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Apr 16 01:52:33.982699 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 16 01:52:33.982705 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Apr 16 01:52:33.982710 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Apr 16 01:52:33.982715 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Apr 16 01:52:33.982721 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Apr 16 01:52:33.982726 kernel: Freeing SMP alternatives memory: 32K Apr 16 01:52:33.982734 kernel: pid_max: default: 32768 minimum: 301 Apr 16 01:52:33.982739 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 16 01:52:33.982745 kernel: landlock: Up and running. Apr 16 01:52:33.982750 kernel: SELinux: Initializing. 
Apr 16 01:52:33.982756 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 16 01:52:33.982761 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 16 01:52:33.982767 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6) Apr 16 01:52:33.982772 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 16 01:52:33.982778 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 16 01:52:33.982785 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 16 01:52:33.982790 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only. Apr 16 01:52:33.982796 kernel: signal: max sigframe size: 3632 Apr 16 01:52:33.982801 kernel: rcu: Hierarchical SRCU implementation. Apr 16 01:52:33.982807 kernel: rcu: Max phase no-delay instances is 400. Apr 16 01:52:33.982812 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 16 01:52:33.982818 kernel: smp: Bringing up secondary CPUs ... Apr 16 01:52:33.982823 kernel: smpboot: x86: Booting SMP configuration: Apr 16 01:52:33.982828 kernel: .... 
node #0, CPUs: #1 #2 #3 Apr 16 01:52:33.982835 kernel: smp: Brought up 1 node, 4 CPUs Apr 16 01:52:33.982841 kernel: smpboot: Max logical packages: 1 Apr 16 01:52:33.982846 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS) Apr 16 01:52:33.982851 kernel: devtmpfs: initialized Apr 16 01:52:33.982857 kernel: x86/mm: Memory block size: 128MB Apr 16 01:52:33.982862 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Apr 16 01:52:33.982868 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Apr 16 01:52:33.982873 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Apr 16 01:52:33.982879 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Apr 16 01:52:33.982886 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Apr 16 01:52:33.982892 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 16 01:52:33.982897 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Apr 16 01:52:33.982903 kernel: pinctrl core: initialized pinctrl subsystem Apr 16 01:52:33.982908 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 16 01:52:33.982913 kernel: audit: initializing netlink subsys (disabled) Apr 16 01:52:33.982919 kernel: audit: type=2000 audit(1776304348.253:1): state=initialized audit_enabled=0 res=1 Apr 16 01:52:33.982924 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 16 01:52:33.982930 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 16 01:52:33.982937 kernel: cpuidle: using governor menu Apr 16 01:52:33.982942 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 16 01:52:33.982948 kernel: dca service started, version 1.12.1 Apr 16 01:52:33.982953 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Apr 16 
01:52:33.982959 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Apr 16 01:52:33.982964 kernel: PCI: Using configuration type 1 for base access Apr 16 01:52:33.982970 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Apr 16 01:52:33.982975 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 16 01:52:33.982980 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 16 01:52:33.982987 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 16 01:52:33.982992 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 16 01:52:33.982998 kernel: ACPI: Added _OSI(Module Device) Apr 16 01:52:33.983003 kernel: ACPI: Added _OSI(Processor Device) Apr 16 01:52:33.983008 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 16 01:52:33.983014 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 16 01:52:33.983019 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 16 01:52:33.983024 kernel: ACPI: Interpreter enabled Apr 16 01:52:33.983030 kernel: ACPI: PM: (supports S0 S3 S5) Apr 16 01:52:33.983037 kernel: ACPI: Using IOAPIC for interrupt routing Apr 16 01:52:33.983042 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 16 01:52:33.983047 kernel: PCI: Using E820 reservations for host bridge windows Apr 16 01:52:33.983053 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Apr 16 01:52:33.983058 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 16 01:52:33.983161 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 16 01:52:33.983253 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Apr 16 01:52:33.983312 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Apr 16 01:52:33.983319 kernel: PCI host bridge to bus 0000:00 Apr 16 01:52:33.983378 kernel: pci_bus 0000:00: 
root bus resource [io 0x0000-0x0cf7 window] Apr 16 01:52:33.983429 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 16 01:52:33.983478 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 16 01:52:33.983631 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Apr 16 01:52:33.983681 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Apr 16 01:52:33.983731 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Apr 16 01:52:33.983780 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 16 01:52:33.983850 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Apr 16 01:52:33.983911 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Apr 16 01:52:33.983967 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Apr 16 01:52:33.984020 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Apr 16 01:52:33.984074 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Apr 16 01:52:33.984130 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Apr 16 01:52:33.984210 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 16 01:52:33.984274 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Apr 16 01:52:33.984330 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Apr 16 01:52:33.984384 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Apr 16 01:52:33.984439 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Apr 16 01:52:33.984537 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Apr 16 01:52:33.984598 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Apr 16 01:52:33.984653 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Apr 16 01:52:33.984707 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Apr 16 01:52:33.984767 kernel: pci 0000:00:04.0: 
[1af4:1000] type 00 class 0x020000 Apr 16 01:52:33.984821 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Apr 16 01:52:33.984874 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Apr 16 01:52:33.984930 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Apr 16 01:52:33.984984 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Apr 16 01:52:33.985042 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Apr 16 01:52:33.985097 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Apr 16 01:52:33.985155 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Apr 16 01:52:33.985236 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Apr 16 01:52:33.985292 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Apr 16 01:52:33.985354 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Apr 16 01:52:33.985410 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Apr 16 01:52:33.985417 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Apr 16 01:52:33.985423 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 16 01:52:33.985428 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Apr 16 01:52:33.985434 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Apr 16 01:52:33.985439 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Apr 16 01:52:33.985445 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Apr 16 01:52:33.985452 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Apr 16 01:52:33.985457 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Apr 16 01:52:33.985463 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Apr 16 01:52:33.985468 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Apr 16 01:52:33.985473 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Apr 16 01:52:33.985479 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 
Apr 16 01:52:33.985484 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Apr 16 01:52:33.985490 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Apr 16 01:52:33.985539 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Apr 16 01:52:33.985546 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Apr 16 01:52:33.985552 kernel: iommu: Default domain type: Translated Apr 16 01:52:33.985557 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 16 01:52:33.985563 kernel: efivars: Registered efivars operations Apr 16 01:52:33.985568 kernel: PCI: Using ACPI for IRQ routing Apr 16 01:52:33.985574 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 16 01:52:33.985579 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Apr 16 01:52:33.985585 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Apr 16 01:52:33.985590 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Apr 16 01:52:33.985597 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Apr 16 01:52:33.985655 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Apr 16 01:52:33.985709 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Apr 16 01:52:33.985762 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 16 01:52:33.985769 kernel: vgaarb: loaded Apr 16 01:52:33.985775 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Apr 16 01:52:33.985780 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Apr 16 01:52:33.985786 kernel: clocksource: Switched to clocksource kvm-clock Apr 16 01:52:33.985791 kernel: VFS: Disk quotas dquot_6.6.0 Apr 16 01:52:33.985798 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 16 01:52:33.985804 kernel: pnp: PnP ACPI init Apr 16 01:52:33.985865 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Apr 16 01:52:33.985873 kernel: pnp: PnP ACPI: found 6 devices Apr 16 01:52:33.985879 kernel: clocksource: 
acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 16 01:52:33.985884 kernel: NET: Registered PF_INET protocol family Apr 16 01:52:33.985890 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 16 01:52:33.985895 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 16 01:52:33.985902 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 16 01:52:33.985908 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 16 01:52:33.985913 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 16 01:52:33.985919 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 16 01:52:33.985924 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 16 01:52:33.985930 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 16 01:52:33.985935 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 16 01:52:33.985941 kernel: NET: Registered PF_XDP protocol family Apr 16 01:52:33.985995 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Apr 16 01:52:33.986052 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Apr 16 01:52:33.986103 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 16 01:52:33.986152 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 16 01:52:33.986232 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 16 01:52:33.986281 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Apr 16 01:52:33.986329 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Apr 16 01:52:33.986378 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Apr 16 01:52:33.986387 kernel: PCI: CLS 0 bytes, default 64 Apr 16 01:52:33.986393 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 
fixed counters, 10737418240 ms ovfl timer Apr 16 01:52:33.986398 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns Apr 16 01:52:33.986404 kernel: Initialise system trusted keyrings Apr 16 01:52:33.986409 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 16 01:52:33.986415 kernel: Key type asymmetric registered Apr 16 01:52:33.986421 kernel: Asymmetric key parser 'x509' registered Apr 16 01:52:33.986426 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 16 01:52:33.986432 kernel: io scheduler mq-deadline registered Apr 16 01:52:33.986438 kernel: io scheduler kyber registered Apr 16 01:52:33.986444 kernel: io scheduler bfq registered Apr 16 01:52:33.986449 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 16 01:52:33.986455 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Apr 16 01:52:33.986461 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 16 01:52:33.986467 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Apr 16 01:52:33.986472 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 16 01:52:33.986478 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 16 01:52:33.986483 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 16 01:52:33.986491 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 16 01:52:33.986738 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 16 01:52:33.986810 kernel: rtc_cmos 00:04: RTC can wake from S4 Apr 16 01:52:33.986818 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 16 01:52:33.986868 kernel: rtc_cmos 00:04: registered as rtc0 Apr 16 01:52:33.986919 kernel: rtc_cmos 00:04: setting system clock to 2026-04-16T01:52:33 UTC (1776304353) Apr 16 01:52:33.986971 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Apr 16 01:52:33.986978 kernel: intel_pstate: CPU model not supported Apr 16 
01:52:33.986987 kernel: efifb: probing for efifb Apr 16 01:52:33.986992 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Apr 16 01:52:33.986998 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Apr 16 01:52:33.987004 kernel: efifb: scrolling: redraw Apr 16 01:52:33.987009 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Apr 16 01:52:33.987014 kernel: Console: switching to colour frame buffer device 100x37 Apr 16 01:52:33.987020 kernel: fb0: EFI VGA frame buffer device Apr 16 01:52:33.987036 kernel: pstore: Using crash dump compression: deflate Apr 16 01:52:33.987043 kernel: pstore: Registered efi_pstore as persistent store backend Apr 16 01:52:33.987051 kernel: NET: Registered PF_INET6 protocol family Apr 16 01:52:33.987057 kernel: Segment Routing with IPv6 Apr 16 01:52:33.987062 kernel: In-situ OAM (IOAM) with IPv6 Apr 16 01:52:33.987068 kernel: NET: Registered PF_PACKET protocol family Apr 16 01:52:33.987074 kernel: Key type dns_resolver registered Apr 16 01:52:33.987079 kernel: IPI shorthand broadcast: enabled Apr 16 01:52:33.987085 kernel: sched_clock: Marking stable (2373015370, 3336129141)->(7188946037, -1479801526) Apr 16 01:52:33.987091 kernel: registered taskstats version 1 Apr 16 01:52:33.987096 kernel: Loading compiled-in X.509 certificates Apr 16 01:52:33.987104 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 6e6d886174c86dc730e1b14e46a1dab518d9b090' Apr 16 01:52:33.987109 kernel: Key type .fscrypt registered Apr 16 01:52:33.987115 kernel: Key type fscrypt-provisioning registered Apr 16 01:52:33.987120 kernel: ima: No TPM chip found, activating TPM-bypass! 
Apr 16 01:52:33.987126 kernel: ima: Allocated hash algorithm: sha1
Apr 16 01:52:33.987131 kernel: ima: No architecture policies found
Apr 16 01:52:33.987137 kernel: clk: Disabling unused clocks
Apr 16 01:52:33.987143 kernel: Freeing unused kernel image (initmem) memory: 42896K
Apr 16 01:52:33.987149 kernel: Write protecting the kernel read-only data: 36864k
Apr 16 01:52:33.987156 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 16 01:52:33.987161 kernel: Run /init as init process
Apr 16 01:52:33.987167 kernel: with arguments:
Apr 16 01:52:33.987172 kernel: /init
Apr 16 01:52:33.987178 kernel: with environment:
Apr 16 01:52:33.987210 kernel: HOME=/
Apr 16 01:52:33.987216 kernel: TERM=linux
Apr 16 01:52:33.987225 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 16 01:52:33.987235 systemd[1]: Detected virtualization kvm.
Apr 16 01:52:33.987243 systemd[1]: Detected architecture x86-64.
Apr 16 01:52:33.987249 systemd[1]: Running in initrd.
Apr 16 01:52:33.987255 systemd[1]: No hostname configured, using default hostname.
Apr 16 01:52:33.987261 systemd[1]: Hostname set to .
Apr 16 01:52:33.987269 systemd[1]: Initializing machine ID from VM UUID.
Apr 16 01:52:33.987275 systemd[1]: Queued start job for default target initrd.target.
Apr 16 01:52:33.987281 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 01:52:33.987287 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 01:52:33.987293 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 16 01:52:33.987299 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 16 01:52:33.987306 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 16 01:52:33.987312 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 16 01:52:33.987320 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 16 01:52:33.987326 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 16 01:52:33.987332 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 01:52:33.987338 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 16 01:52:33.987344 systemd[1]: Reached target paths.target - Path Units.
Apr 16 01:52:33.987353 systemd[1]: Reached target slices.target - Slice Units.
Apr 16 01:52:33.987359 systemd[1]: Reached target swap.target - Swaps.
Apr 16 01:52:33.987366 systemd[1]: Reached target timers.target - Timer Units.
Apr 16 01:52:33.987372 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 16 01:52:33.987378 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 16 01:52:33.987385 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 16 01:52:33.987391 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 16 01:52:33.987397 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 01:52:33.987403 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 16 01:52:33.987409 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 01:52:33.987415 systemd[1]: Reached target sockets.target - Socket Units.
Apr 16 01:52:33.987422 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 16 01:52:33.987428 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 16 01:52:33.987434 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 16 01:52:33.987440 systemd[1]: Starting systemd-fsck-usr.service...
Apr 16 01:52:33.987446 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 16 01:52:33.987452 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 16 01:52:33.987458 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 01:52:33.987477 systemd-journald[194]: Collecting audit messages is disabled.
Apr 16 01:52:33.987546 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 16 01:52:33.987553 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 01:52:33.987561 systemd-journald[194]: Journal started
Apr 16 01:52:33.987578 systemd-journald[194]: Runtime Journal (/run/log/journal/15bc0f64e9044ba8b73b5f86612fedef) is 6.0M, max 48.3M, 42.2M free.
Apr 16 01:52:33.993731 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 16 01:52:33.995981 systemd-modules-load[195]: Inserted module 'overlay'
Apr 16 01:52:33.997176 systemd[1]: Finished systemd-fsck-usr.service.
Apr 16 01:52:34.008633 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 16 01:52:34.015152 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 16 01:52:34.020843 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 01:52:34.024090 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 16 01:52:34.029778 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 16 01:52:34.037637 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 16 01:52:34.040329 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 01:52:34.055411 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 01:52:34.058535 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 16 01:52:34.057893 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 01:52:34.067668 systemd-modules-load[195]: Inserted module 'br_netfilter'
Apr 16 01:52:34.068954 kernel: Bridge firewalling registered
Apr 16 01:52:34.074682 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 16 01:52:34.076968 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 16 01:52:34.079137 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 16 01:52:34.089855 dracut-cmdline[225]: dracut-dracut-053
Apr 16 01:52:34.091973 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=27643dbc59f658eac8bb37add3a8b4ed010a3c31134319f01549aa493a1f070c
Apr 16 01:52:34.093783 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 16 01:52:34.112646 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 16 01:52:34.135759 systemd-resolved[244]: Positive Trust Anchors:
Apr 16 01:52:34.135789 systemd-resolved[244]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 16 01:52:34.135814 systemd-resolved[244]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 16 01:52:34.137737 systemd-resolved[244]: Defaulting to hostname 'linux'.
Apr 16 01:52:34.138465 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 16 01:52:34.155314 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 16 01:52:34.201562 kernel: SCSI subsystem initialized
Apr 16 01:52:34.210572 kernel: Loading iSCSI transport class v2.0-870.
Apr 16 01:52:34.222607 kernel: iscsi: registered transport (tcp)
Apr 16 01:52:34.241561 kernel: iscsi: registered transport (qla4xxx)
Apr 16 01:52:34.241585 kernel: QLogic iSCSI HBA Driver
Apr 16 01:52:34.273055 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 16 01:52:34.284732 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 16 01:52:34.308140 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 16 01:52:34.308175 kernel: device-mapper: uevent: version 1.0.3
Apr 16 01:52:34.308728 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 16 01:52:34.350585 kernel: raid6: avx512x4 gen() 47052 MB/s
Apr 16 01:52:34.368569 kernel: raid6: avx512x2 gen() 45656 MB/s
Apr 16 01:52:34.386565 kernel: raid6: avx512x1 gen() 45689 MB/s
Apr 16 01:52:34.404564 kernel: raid6: avx2x4 gen() 37229 MB/s
Apr 16 01:52:34.422591 kernel: raid6: avx2x2 gen() 37057 MB/s
Apr 16 01:52:34.441156 kernel: raid6: avx2x1 gen() 25990 MB/s
Apr 16 01:52:34.441212 kernel: raid6: using algorithm avx512x4 gen() 47052 MB/s
Apr 16 01:52:34.460472 kernel: raid6: .... xor() 10393 MB/s, rmw enabled
Apr 16 01:52:34.460539 kernel: raid6: using avx512x2 recovery algorithm
Apr 16 01:52:34.479568 kernel: xor: automatically using best checksumming function avx
Apr 16 01:52:34.624578 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 16 01:52:34.634615 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 16 01:52:34.646710 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 01:52:34.658471 systemd-udevd[416]: Using default interface naming scheme 'v255'.
Apr 16 01:52:34.661244 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 01:52:34.665644 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 16 01:52:34.679598 dracut-pre-trigger[424]: rd.md=0: removing MD RAID activation
Apr 16 01:52:34.704596 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 16 01:52:34.710952 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 16 01:52:34.741839 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 01:52:34.751779 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 16 01:52:34.764319 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 16 01:52:34.771048 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 16 01:52:34.786716 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 16 01:52:34.781744 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 01:52:34.793555 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 16 01:52:34.793656 kernel: cryptd: max_cpu_qlen set to 1000
Apr 16 01:52:34.787590 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 16 01:52:34.806528 kernel: libata version 3.00 loaded.
Apr 16 01:52:34.810542 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 16 01:52:34.810586 kernel: GPT:9289727 != 19775487
Apr 16 01:52:34.810594 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 16 01:52:34.810602 kernel: GPT:9289727 != 19775487
Apr 16 01:52:34.810608 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 16 01:52:34.810615 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 16 01:52:34.814331 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 16 01:52:34.831663 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 16 01:52:34.831685 kernel: ahci 0000:00:1f.2: version 3.0
Apr 16 01:52:34.831858 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 16 01:52:34.831867 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 16 01:52:34.831942 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 16 01:52:34.832061 kernel: AES CTR mode by8 optimization enabled
Apr 16 01:52:34.832630 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 16 01:52:34.840046 kernel: scsi host0: ahci
Apr 16 01:52:34.840163 kernel: scsi host1: ahci
Apr 16 01:52:34.840265 kernel: scsi host2: ahci
Apr 16 01:52:34.832730 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 01:52:34.837604 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 16 01:52:34.846552 kernel: scsi host3: ahci
Apr 16 01:52:34.850079 kernel: scsi host4: ahci
Apr 16 01:52:34.850180 kernel: scsi host5: ahci
Apr 16 01:52:34.849713 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 16 01:52:34.870101 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Apr 16 01:52:34.870117 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Apr 16 01:52:34.870124 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Apr 16 01:52:34.870132 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Apr 16 01:52:34.870139 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Apr 16 01:52:34.870146 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Apr 16 01:52:34.849921 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 01:52:34.862758 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 01:52:34.882751 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (470)
Apr 16 01:52:34.882809 kernel: BTRFS: device fsid 936fcbd8-a8ab-4e87-b115-d77c7a08e984 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (469)
Apr 16 01:52:34.876658 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 01:52:34.885599 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 16 01:52:34.900372 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 16 01:52:34.904849 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 16 01:52:34.912376 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 16 01:52:34.917618 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 16 01:52:34.918432 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 16 01:52:34.935764 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 16 01:52:34.938664 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 16 01:52:34.946993 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 16 01:52:34.938708 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 01:52:34.952565 disk-uuid[559]: Primary Header is updated.
Apr 16 01:52:34.952565 disk-uuid[559]: Secondary Entries is updated.
Apr 16 01:52:34.952565 disk-uuid[559]: Secondary Header is updated.
Apr 16 01:52:34.945779 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 01:52:34.948139 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 01:52:34.969074 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 01:52:34.980657 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 16 01:52:34.993768 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 01:52:35.177568 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 16 01:52:35.177633 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 16 01:52:35.179566 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 16 01:52:35.181582 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 16 01:52:35.183580 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 16 01:52:35.185576 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 16 01:52:35.188180 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 16 01:52:35.188230 kernel: ata3.00: applying bridge limits
Apr 16 01:52:35.189601 kernel: ata3.00: configured for UDMA/100
Apr 16 01:52:35.190536 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 16 01:52:35.248986 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 16 01:52:35.249269 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 16 01:52:35.263571 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 16 01:52:35.962558 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 16 01:52:35.962642 disk-uuid[560]: The operation has completed successfully.
Apr 16 01:52:35.988616 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 16 01:52:35.988731 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 16 01:52:36.011699 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 16 01:52:36.017064 sh[604]: Success
Apr 16 01:52:36.029549 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 16 01:52:36.060655 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 16 01:52:36.085127 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 16 01:52:36.091428 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 16 01:52:36.105927 kernel: BTRFS info (device dm-0): first mount of filesystem 936fcbd8-a8ab-4e87-b115-d77c7a08e984
Apr 16 01:52:36.105949 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 16 01:52:36.105958 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 16 01:52:36.105966 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 16 01:52:36.109768 kernel: BTRFS info (device dm-0): using free space tree
Apr 16 01:52:36.115789 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 16 01:52:36.117784 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 16 01:52:36.132717 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 16 01:52:36.134891 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 16 01:52:36.158187 kernel: BTRFS info (device vda6): first mount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c
Apr 16 01:52:36.158261 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 16 01:52:36.158273 kernel: BTRFS info (device vda6): using free space tree
Apr 16 01:52:36.164602 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 16 01:52:36.171748 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 16 01:52:36.176662 kernel: BTRFS info (device vda6): last unmount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c
Apr 16 01:52:36.185457 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 16 01:52:36.193693 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 16 01:52:36.239813 ignition[721]: Ignition 2.19.0
Apr 16 01:52:36.240566 ignition[721]: Stage: fetch-offline
Apr 16 01:52:36.240601 ignition[721]: no configs at "/usr/lib/ignition/base.d"
Apr 16 01:52:36.240608 ignition[721]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 16 01:52:36.240695 ignition[721]: parsed url from cmdline: ""
Apr 16 01:52:36.240697 ignition[721]: no config URL provided
Apr 16 01:52:36.240701 ignition[721]: reading system config file "/usr/lib/ignition/user.ign"
Apr 16 01:52:36.240707 ignition[721]: no config at "/usr/lib/ignition/user.ign"
Apr 16 01:52:36.240728 ignition[721]: op(1): [started] loading QEMU firmware config module
Apr 16 01:52:36.253632 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 16 01:52:36.240731 ignition[721]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 16 01:52:36.252363 ignition[721]: op(1): [finished] loading QEMU firmware config module
Apr 16 01:52:36.265667 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 16 01:52:36.284562 systemd-networkd[792]: lo: Link UP
Apr 16 01:52:36.284585 systemd-networkd[792]: lo: Gained carrier
Apr 16 01:52:36.285421 systemd-networkd[792]: Enumeration completed
Apr 16 01:52:36.285699 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 16 01:52:36.286165 systemd-networkd[792]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 01:52:36.286167 systemd-networkd[792]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 16 01:52:36.287042 systemd-networkd[792]: eth0: Link UP
Apr 16 01:52:36.287045 systemd-networkd[792]: eth0: Gained carrier
Apr 16 01:52:36.287050 systemd-networkd[792]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 01:52:36.289457 systemd[1]: Reached target network.target - Network.
Apr 16 01:52:36.308584 systemd-networkd[792]: eth0: DHCPv4 address 10.0.0.136/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 16 01:52:36.457384 ignition[721]: parsing config with SHA512: bb932d0b4ac761155bd56c8d812fe62a34b5a0f9d02284a842219752224bf60b2350a9b43f71331e326c9390a38ee414979b51de218e9b7959f2707a63b15d2d
Apr 16 01:52:36.460457 unknown[721]: fetched base config from "system"
Apr 16 01:52:36.460465 unknown[721]: fetched user config from "qemu"
Apr 16 01:52:36.461128 ignition[721]: fetch-offline: fetch-offline passed
Apr 16 01:52:36.462181 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 16 01:52:36.461194 ignition[721]: Ignition finished successfully
Apr 16 01:52:36.465437 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 16 01:52:36.473851 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 16 01:52:36.492097 ignition[796]: Ignition 2.19.0
Apr 16 01:52:36.492121 ignition[796]: Stage: kargs
Apr 16 01:52:36.492280 ignition[796]: no configs at "/usr/lib/ignition/base.d"
Apr 16 01:52:36.492287 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 16 01:52:36.492903 ignition[796]: kargs: kargs passed
Apr 16 01:52:36.492931 ignition[796]: Ignition finished successfully
Apr 16 01:52:36.502848 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 16 01:52:36.518713 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 16 01:52:36.536546 ignition[804]: Ignition 2.19.0
Apr 16 01:52:36.536575 ignition[804]: Stage: disks
Apr 16 01:52:36.536745 ignition[804]: no configs at "/usr/lib/ignition/base.d"
Apr 16 01:52:36.536755 ignition[804]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 16 01:52:36.537790 ignition[804]: disks: disks passed
Apr 16 01:52:36.537835 ignition[804]: Ignition finished successfully
Apr 16 01:52:36.546823 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 16 01:52:36.549277 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 16 01:52:36.550344 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 16 01:52:36.556412 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 16 01:52:36.563084 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 16 01:52:36.565957 systemd[1]: Reached target basic.target - Basic System.
Apr 16 01:52:36.581777 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 16 01:52:36.592908 systemd-fsck[814]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 16 01:52:36.597961 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 16 01:52:36.601417 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 16 01:52:36.695576 kernel: EXT4-fs (vda9): mounted filesystem 9ac74074-8829-477f-a4c4-5563740ec49b r/w with ordered data mode. Quota mode: none.
Apr 16 01:52:36.695738 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 16 01:52:36.697930 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 16 01:52:36.715638 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 16 01:52:36.718090 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 16 01:52:36.724562 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (822)
Apr 16 01:52:36.730579 kernel: BTRFS info (device vda6): first mount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c
Apr 16 01:52:36.730610 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 16 01:52:36.730622 kernel: BTRFS info (device vda6): using free space tree
Apr 16 01:52:36.733443 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 16 01:52:36.733537 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 16 01:52:36.733556 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 16 01:52:36.740559 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 16 01:52:36.741994 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 16 01:52:36.759538 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 16 01:52:36.761855 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 16 01:52:36.781081 initrd-setup-root[846]: cut: /sysroot/etc/passwd: No such file or directory
Apr 16 01:52:36.785962 initrd-setup-root[853]: cut: /sysroot/etc/group: No such file or directory
Apr 16 01:52:36.790643 initrd-setup-root[860]: cut: /sysroot/etc/shadow: No such file or directory
Apr 16 01:52:36.794902 initrd-setup-root[867]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 16 01:52:36.878639 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 16 01:52:36.890633 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 16 01:52:36.892790 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 16 01:52:36.908584 kernel: BTRFS info (device vda6): last unmount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c
Apr 16 01:52:36.922055 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 16 01:52:36.935653 ignition[936]: INFO : Ignition 2.19.0
Apr 16 01:52:36.935653 ignition[936]: INFO : Stage: mount
Apr 16 01:52:36.939319 ignition[936]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 01:52:36.939319 ignition[936]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 16 01:52:36.939319 ignition[936]: INFO : mount: mount passed
Apr 16 01:52:36.939319 ignition[936]: INFO : Ignition finished successfully
Apr 16 01:52:36.942367 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 16 01:52:36.957661 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 16 01:52:37.097703 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 16 01:52:37.107772 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 16 01:52:37.120758 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (948)
Apr 16 01:52:37.120786 kernel: BTRFS info (device vda6): first mount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c
Apr 16 01:52:37.120795 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 16 01:52:37.125641 kernel: BTRFS info (device vda6): using free space tree
Apr 16 01:52:37.130585 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 16 01:52:37.132172 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 16 01:52:37.173851 ignition[965]: INFO : Ignition 2.19.0
Apr 16 01:52:37.173851 ignition[965]: INFO : Stage: files
Apr 16 01:52:37.176992 ignition[965]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 01:52:37.176992 ignition[965]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 16 01:52:37.176992 ignition[965]: DEBUG : files: compiled without relabeling support, skipping
Apr 16 01:52:37.184054 ignition[965]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 16 01:52:37.184054 ignition[965]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 16 01:52:37.192163 ignition[965]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 16 01:52:37.195154 ignition[965]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 16 01:52:37.195154 ignition[965]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 16 01:52:37.195154 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 16 01:52:37.195154 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 16 01:52:37.192891 unknown[965]: wrote ssh authorized keys file for user: core
Apr 16 01:52:37.274199 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 16 01:52:37.384198 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 16 01:52:37.384198 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 16 01:52:37.393449 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 16 01:52:37.393449 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 16 01:52:37.393449 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 16 01:52:37.393449 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 16 01:52:37.393449 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 16 01:52:37.393449 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 16 01:52:37.393449 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 16 01:52:37.393449 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 16 01:52:37.393449 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 16 01:52:37.393449 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 16 01:52:37.393449 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 16 01:52:37.393449 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 16 01:52:37.393449 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 16 01:52:37.462769 systemd-networkd[792]: eth0: Gained IPv6LL
Apr 16 01:52:37.673170 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 16 01:52:38.164835 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 16 01:52:38.164835 ignition[965]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 16 01:52:38.173546 ignition[965]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 16 01:52:38.173546 ignition[965]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 16 01:52:38.173546 ignition[965]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 16 01:52:38.173546 ignition[965]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 16 01:52:38.173546 ignition[965]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 16 01:52:38.173546 ignition[965]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 16 01:52:38.173546 ignition[965]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 16 01:52:38.173546 ignition[965]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Apr 16 01:52:38.206663 ignition[965]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 16 01:52:38.216071 ignition[965]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 16 01:52:38.220318 ignition[965]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 16 01:52:38.220318 ignition[965]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Apr 16 01:52:38.220318 ignition[965]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Apr 16 01:52:38.220318 ignition[965]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 16 01:52:38.220318 ignition[965]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 16 01:52:38.220318 ignition[965]: INFO : files: files passed
Apr 16 01:52:38.220318 ignition[965]: INFO : Ignition finished successfully
Apr 16 01:52:38.217667 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 16 01:52:38.229680 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 16 01:52:38.238627 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 16 01:52:38.245386 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 16 01:52:38.245469 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 16 01:52:38.268071 initrd-setup-root-after-ignition[993]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 16 01:52:38.273703 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 16 01:52:38.273703 initrd-setup-root-after-ignition[995]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 16 01:52:38.281178 initrd-setup-root-after-ignition[999]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 16 01:52:38.281578 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 16 01:52:38.288198 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 16 01:52:38.310684 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 16 01:52:38.331393 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 16 01:52:38.331557 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 16 01:52:38.333921 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 16 01:52:38.343743 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 16 01:52:38.345014 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 16 01:52:38.357679 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 16 01:52:38.369824 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 16 01:52:38.375479 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 16 01:52:38.389916 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 16 01:52:38.391576 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 01:52:38.397566 systemd[1]: Stopped target timers.target - Timer Units.
Apr 16 01:52:38.407560 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 16 01:52:38.407675 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 16 01:52:38.415592 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 16 01:52:38.417182 systemd[1]: Stopped target basic.target - Basic System.
Apr 16 01:52:38.422254 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 16 01:52:38.427271 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 16 01:52:38.433367 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 16 01:52:38.439103 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 16 01:52:38.444191 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 16 01:52:38.450680 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 16 01:52:38.452113 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 16 01:52:38.459290 systemd[1]: Stopped target swap.target - Swaps.
Apr 16 01:52:38.464628 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 16 01:52:38.464749 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 16 01:52:38.473694 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 16 01:52:38.478973 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 01:52:38.484190 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 16 01:52:38.485206 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 01:52:38.487751 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 16 01:52:38.487883 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 16 01:52:38.496892 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 16 01:52:38.497063 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 16 01:52:38.501236 systemd[1]: Stopped target paths.target - Path Units.
Apr 16 01:52:38.506806 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 16 01:52:38.517581 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 01:52:38.519283 systemd[1]: Stopped target slices.target - Slice Units.
Apr 16 01:52:38.520157 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 16 01:52:38.527490 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 16 01:52:38.527631 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 16 01:52:38.533267 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 16 01:52:38.533322 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 16 01:52:38.537087 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 16 01:52:38.537177 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 16 01:52:38.541172 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 16 01:52:38.541290 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 16 01:52:38.555760 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 16 01:52:38.565161 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 16 01:52:38.566116 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 16 01:52:38.581649 ignition[1019]: INFO : Ignition 2.19.0
Apr 16 01:52:38.581649 ignition[1019]: INFO : Stage: umount
Apr 16 01:52:38.581649 ignition[1019]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 01:52:38.581649 ignition[1019]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 16 01:52:38.581649 ignition[1019]: INFO : umount: umount passed
Apr 16 01:52:38.581649 ignition[1019]: INFO : Ignition finished successfully
Apr 16 01:52:38.566250 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 01:52:38.570156 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 16 01:52:38.570263 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 16 01:52:38.574003 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 16 01:52:38.574248 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 16 01:52:38.594999 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 16 01:52:38.595781 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 16 01:52:38.595849 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 16 01:52:38.602807 systemd[1]: Stopped target network.target - Network.
Apr 16 01:52:38.609613 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 16 01:52:38.609684 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 16 01:52:38.615018 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 16 01:52:38.615057 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 16 01:52:38.619234 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 16 01:52:38.619269 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 16 01:52:38.621583 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 16 01:52:38.621613 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 16 01:52:38.632804 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 16 01:52:38.637304 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 16 01:52:38.641834 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 16 01:52:38.641901 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 16 01:52:38.645414 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 16 01:52:38.645483 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 16 01:52:38.670745 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 16 01:52:38.670848 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 16 01:52:38.671794 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 16 01:52:38.671831 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 01:52:38.695235 systemd-networkd[792]: eth0: DHCPv6 lease lost
Apr 16 01:52:38.698323 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 16 01:52:38.698478 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 16 01:52:38.703597 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 16 01:52:38.703621 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 01:52:38.723713 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 16 01:52:38.725043 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 16 01:52:38.725101 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 16 01:52:38.729256 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 16 01:52:38.729290 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 16 01:52:38.736110 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 16 01:52:38.736143 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 16 01:52:38.740152 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 01:52:38.761678 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 16 01:52:38.761892 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 01:52:38.763728 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 16 01:52:38.763756 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 16 01:52:38.769427 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 16 01:52:38.769462 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 01:52:38.779098 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 16 01:52:38.779142 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 16 01:52:38.787622 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 16 01:52:38.787658 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 16 01:52:38.795248 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 16 01:52:38.795285 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 01:52:38.805982 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 16 01:52:38.811014 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 16 01:52:38.811081 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 01:52:38.817980 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 16 01:52:38.818014 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 16 01:52:38.823749 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 16 01:52:38.823783 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 01:52:38.825562 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 16 01:52:38.825592 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 01:52:38.835002 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 16 01:52:38.835091 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 16 01:52:38.840455 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 16 01:52:38.840613 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 16 01:52:38.847811 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 16 01:52:38.849861 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 16 01:52:38.859416 systemd[1]: Switching root.
Apr 16 01:52:38.891908 systemd-journald[194]: Journal stopped
Apr 16 01:52:39.780416 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Apr 16 01:52:39.780478 kernel: SELinux: policy capability network_peer_controls=1
Apr 16 01:52:39.780491 kernel: SELinux: policy capability open_perms=1
Apr 16 01:52:39.780545 kernel: SELinux: policy capability extended_socket_class=1
Apr 16 01:52:39.780554 kernel: SELinux: policy capability always_check_network=0
Apr 16 01:52:39.780564 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 16 01:52:39.780573 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 16 01:52:39.780585 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 16 01:52:39.780595 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 16 01:52:39.780604 kernel: audit: type=1403 audit(1776304359.019:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 16 01:52:39.780614 systemd[1]: Successfully loaded SELinux policy in 38.660ms.
Apr 16 01:52:39.780635 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.383ms.
Apr 16 01:52:39.780649 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 16 01:52:39.780658 systemd[1]: Detected virtualization kvm.
Apr 16 01:52:39.780669 systemd[1]: Detected architecture x86-64.
Apr 16 01:52:39.780678 systemd[1]: Detected first boot.
Apr 16 01:52:39.780689 systemd[1]: Initializing machine ID from VM UUID.
Apr 16 01:52:39.780699 zram_generator::config[1061]: No configuration found.
Apr 16 01:52:39.780710 systemd[1]: Populated /etc with preset unit settings.
Apr 16 01:52:39.780719 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 16 01:52:39.780728 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 16 01:52:39.780737 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 16 01:52:39.780747 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 16 01:52:39.780756 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 16 01:52:39.780767 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 16 01:52:39.780777 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 16 01:52:39.780786 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 16 01:52:39.780796 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 16 01:52:39.780805 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 16 01:52:39.780814 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 16 01:52:39.780823 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 01:52:39.780832 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 01:52:39.780841 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 16 01:52:39.780852 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 16 01:52:39.780861 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 16 01:52:39.780871 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 16 01:52:39.780880 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 16 01:52:39.780890 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 01:52:39.780899 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 16 01:52:39.780908 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 16 01:52:39.780917 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 16 01:52:39.780928 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 16 01:52:39.780938 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 01:52:39.780946 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 16 01:52:39.780959 systemd[1]: Reached target slices.target - Slice Units.
Apr 16 01:52:39.780968 systemd[1]: Reached target swap.target - Swaps.
Apr 16 01:52:39.780977 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 16 01:52:39.780986 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 16 01:52:39.780995 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 01:52:39.781004 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 16 01:52:39.781015 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 01:52:39.781024 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 16 01:52:39.781033 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 16 01:52:39.781043 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 16 01:52:39.781052 systemd[1]: Mounting media.mount - External Media Directory...
Apr 16 01:52:39.781061 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 01:52:39.781071 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 16 01:52:39.781081 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 16 01:52:39.781090 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 16 01:52:39.781101 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 16 01:52:39.781110 systemd[1]: Reached target machines.target - Containers.
Apr 16 01:52:39.781119 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 16 01:52:39.781128 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 01:52:39.781137 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 16 01:52:39.781146 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 16 01:52:39.781155 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 16 01:52:39.781164 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 16 01:52:39.781175 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 16 01:52:39.781184 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 16 01:52:39.781193 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 16 01:52:39.781202 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 16 01:52:39.781241 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 16 01:52:39.781252 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 16 01:52:39.781262 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 16 01:52:39.781270 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 16 01:52:39.781282 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 16 01:52:39.781290 kernel: loop: module loaded
Apr 16 01:52:39.781301 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 16 01:52:39.781310 kernel: ACPI: bus type drm_connector registered
Apr 16 01:52:39.781318 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 16 01:52:39.781342 systemd-journald[1138]: Collecting audit messages is disabled.
Apr 16 01:52:39.781363 systemd-journald[1138]: Journal started
Apr 16 01:52:39.781384 systemd-journald[1138]: Runtime Journal (/run/log/journal/15bc0f64e9044ba8b73b5f86612fedef) is 6.0M, max 48.3M, 42.2M free.
Apr 16 01:52:39.791657 kernel: fuse: init (API version 7.39)
Apr 16 01:52:39.403186 systemd[1]: Queued start job for default target multi-user.target.
Apr 16 01:52:39.438146 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 16 01:52:39.438657 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 16 01:52:39.797142 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 16 01:52:39.803539 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 16 01:52:39.803580 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 16 01:52:39.806569 systemd[1]: Stopped verity-setup.service.
Apr 16 01:52:39.808564 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 01:52:39.815830 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 16 01:52:39.818364 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 16 01:52:39.820794 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 16 01:52:39.823287 systemd[1]: Mounted media.mount - External Media Directory.
Apr 16 01:52:39.825541 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 16 01:52:39.828040 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 16 01:52:39.830573 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 16 01:52:39.832888 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 16 01:52:39.835709 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 01:52:39.838635 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 16 01:52:39.838771 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 16 01:52:39.841614 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 16 01:52:39.841739 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 16 01:52:39.844398 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 16 01:52:39.844566 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 16 01:52:39.847071 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 16 01:52:39.847202 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 16 01:52:39.850093 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 16 01:52:39.850250 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 16 01:52:39.852793 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 16 01:52:39.852925 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 16 01:52:39.855430 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 16 01:52:39.858130 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 16 01:52:39.861023 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 16 01:52:39.864056 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 01:52:39.874670 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 16 01:52:39.887648 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 16 01:52:39.891287 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 16 01:52:39.893829 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 16 01:52:39.893876 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 16 01:52:39.897066 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 16 01:52:39.900631 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 16 01:52:39.903968 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 16 01:52:39.906329 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 16 01:52:39.910709 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 16 01:52:39.913983 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 16 01:52:39.916823 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 16 01:52:39.917579 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 16 01:52:39.919964 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 16 01:52:39.924041 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 16 01:52:39.930728 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 16 01:52:39.936385 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 16 01:52:39.942578 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 16 01:52:39.950316 kernel: loop0: detected capacity change from 0 to 140768
Apr 16 01:52:39.948856 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 16 01:52:39.950189 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 16 01:52:39.950704 systemd-journald[1138]: Time spent on flushing to /var/log/journal/15bc0f64e9044ba8b73b5f86612fedef is 32.605ms for 1001 entries.
Apr 16 01:52:39.950704 systemd-journald[1138]: System Journal (/var/log/journal/15bc0f64e9044ba8b73b5f86612fedef) is 8.0M, max 195.6M, 187.6M free.
Apr 16 01:52:39.989682 systemd-journald[1138]: Received client request to flush runtime journal.
Apr 16 01:52:39.989715 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 16 01:52:39.959606 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 16 01:52:39.962911 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 16 01:52:39.965889 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 16 01:52:39.973129 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 16 01:52:39.980888 systemd-tmpfiles[1178]: ACLs are not supported, ignoring.
Apr 16 01:52:39.980896 systemd-tmpfiles[1178]: ACLs are not supported, ignoring.
Apr 16 01:52:39.983804 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 16 01:52:39.987025 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 16 01:52:39.990865 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 16 01:52:39.995189 udevadm[1179]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 16 01:52:39.996261 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 16 01:52:40.004592 kernel: loop1: detected capacity change from 0 to 228704
Apr 16 01:52:40.009578 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 16 01:52:40.010109 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 16 01:52:40.025306 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 16 01:52:40.036726 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 16 01:52:40.046551 kernel: loop2: detected capacity change from 0 to 142488
Apr 16 01:52:40.058570 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Apr 16 01:52:40.058594 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Apr 16 01:52:40.062306 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 01:52:40.088562 kernel: loop3: detected capacity change from 0 to 140768
Apr 16 01:52:40.101585 kernel: loop4: detected capacity change from 0 to 228704
Apr 16 01:52:40.111650 kernel: loop5: detected capacity change from 0 to 142488
Apr 16 01:52:40.122540 (sd-merge)[1206]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 16 01:52:40.122831 (sd-merge)[1206]: Merged extensions into '/usr'.
Apr 16 01:52:40.126349 systemd[1]: Reloading requested from client PID 1176 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 16 01:52:40.126359 systemd[1]: Reloading...
Apr 16 01:52:40.172560 zram_generator::config[1228]: No configuration found.
Apr 16 01:52:40.212583 ldconfig[1171]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 16 01:52:40.260845 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 16 01:52:40.294821 systemd[1]: Reloading finished in 168 ms. Apr 16 01:52:40.333771 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 16 01:52:40.336735 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 16 01:52:40.339756 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 16 01:52:40.360750 systemd[1]: Starting ensure-sysext.service... Apr 16 01:52:40.363581 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 16 01:52:40.367052 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 16 01:52:40.371039 systemd[1]: Reloading requested from client PID 1270 ('systemctl') (unit ensure-sysext.service)... Apr 16 01:52:40.371066 systemd[1]: Reloading... Apr 16 01:52:40.383862 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 16 01:52:40.384089 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 16 01:52:40.384762 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 16 01:52:40.384924 systemd-tmpfiles[1271]: ACLs are not supported, ignoring. Apr 16 01:52:40.384960 systemd-tmpfiles[1271]: ACLs are not supported, ignoring. Apr 16 01:52:40.387459 systemd-tmpfiles[1271]: Detected autofs mount point /boot during canonicalization of boot. Apr 16 01:52:40.387478 systemd-tmpfiles[1271]: Skipping /boot Apr 16 01:52:40.388657 systemd-udevd[1272]: Using default interface naming scheme 'v255'. 
Apr 16 01:52:40.393057 systemd-tmpfiles[1271]: Detected autofs mount point /boot during canonicalization of boot. Apr 16 01:52:40.393132 systemd-tmpfiles[1271]: Skipping /boot Apr 16 01:52:40.409481 zram_generator::config[1295]: No configuration found. Apr 16 01:52:40.444277 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1304) Apr 16 01:52:40.476563 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Apr 16 01:52:40.500611 kernel: ACPI: button: Power Button [PWRF] Apr 16 01:52:40.501119 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 16 01:52:40.517539 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Apr 16 01:52:40.529728 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Apr 16 01:52:40.529938 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 16 01:52:40.530027 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Apr 16 01:52:40.530118 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 16 01:52:40.552638 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 16 01:52:40.556186 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 16 01:52:40.556829 systemd[1]: Reloading finished in 185 ms. Apr 16 01:52:40.559556 kernel: mousedev: PS/2 mouse device common for all mice Apr 16 01:52:40.589451 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 16 01:52:40.638950 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 16 01:52:40.709695 systemd[1]: Finished ensure-sysext.service. 
Apr 16 01:52:40.718057 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 16 01:52:40.729469 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 01:52:40.744827 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 16 01:52:40.749440 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 16 01:52:40.752481 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 16 01:52:40.753429 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 16 01:52:40.761124 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 16 01:52:40.764415 lvm[1371]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 16 01:52:40.765658 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 16 01:52:40.770774 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 16 01:52:40.774955 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 16 01:52:40.777550 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 16 01:52:40.778697 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 16 01:52:40.782983 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 16 01:52:40.787715 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 16 01:52:40.794712 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 16 01:52:40.799175 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Apr 16 01:52:40.804838 augenrules[1394]: No rules Apr 16 01:52:40.804398 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 16 01:52:40.806938 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 16 01:52:40.813032 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 01:52:40.814026 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 16 01:52:40.817053 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 16 01:52:40.821095 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 16 01:52:40.821280 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 16 01:52:40.824956 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 16 01:52:40.825100 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 16 01:52:40.826276 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 16 01:52:40.826398 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 16 01:52:40.826669 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 16 01:52:40.826805 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 16 01:52:40.827411 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 16 01:52:40.828193 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 16 01:52:40.836550 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 16 01:52:40.837061 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 16 01:52:40.849996 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
Apr 16 01:52:40.851311 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 16 01:52:40.851380 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 16 01:52:40.853017 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 16 01:52:40.855489 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 16 01:52:40.862562 lvm[1413]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 16 01:52:40.867018 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 01:52:40.870116 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 16 01:52:40.873108 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 16 01:52:40.877264 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 16 01:52:40.890182 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 16 01:52:40.900680 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 16 01:52:40.946804 systemd-networkd[1389]: lo: Link UP Apr 16 01:52:40.946810 systemd-networkd[1389]: lo: Gained carrier Apr 16 01:52:40.947953 systemd-networkd[1389]: Enumeration completed Apr 16 01:52:40.948679 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 16 01:52:40.948736 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 16 01:52:40.950154 systemd-networkd[1389]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Apr 16 01:52:40.951410 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 16 01:52:40.952413 systemd-networkd[1389]: eth0: Link UP Apr 16 01:52:40.952415 systemd-networkd[1389]: eth0: Gained carrier Apr 16 01:52:40.952426 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 16 01:52:40.954268 systemd[1]: Reached target time-set.target - System Time Set. Apr 16 01:52:40.962727 systemd-resolved[1391]: Positive Trust Anchors: Apr 16 01:52:40.962758 systemd-resolved[1391]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 16 01:52:40.962782 systemd-resolved[1391]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 16 01:52:40.966306 systemd-resolved[1391]: Defaulting to hostname 'linux'. Apr 16 01:52:40.967742 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 16 01:52:40.970783 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 16 01:52:40.973596 systemd[1]: Reached target network.target - Network. Apr 16 01:52:40.975808 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 16 01:52:40.978721 systemd[1]: Reached target sysinit.target - System Initialization. 
Apr 16 01:52:40.979755 systemd-networkd[1389]: eth0: DHCPv4 address 10.0.0.136/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 16 01:52:40.980611 systemd-timesyncd[1395]: Network configuration changed, trying to establish connection. Apr 16 01:52:42.112914 systemd-resolved[1391]: Clock change detected. Flushing caches. Apr 16 01:52:42.112950 systemd-timesyncd[1395]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 16 01:52:42.112983 systemd-timesyncd[1395]: Initial clock synchronization to Thu 2026-04-16 01:52:42.112790 UTC. Apr 16 01:52:42.113076 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 16 01:52:42.115898 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 16 01:52:42.118709 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 16 01:52:42.121212 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 16 01:52:42.123991 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 16 01:52:42.126759 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 16 01:52:42.126823 systemd[1]: Reached target paths.target - Path Units. Apr 16 01:52:42.128950 systemd[1]: Reached target timers.target - Timer Units. Apr 16 01:52:42.131800 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 16 01:52:42.135476 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 16 01:52:42.144462 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 16 01:52:42.147662 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 16 01:52:42.150355 systemd[1]: Reached target sockets.target - Socket Units. Apr 16 01:52:42.152632 systemd[1]: Reached target basic.target - Basic System. 
Apr 16 01:52:42.154816 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 16 01:52:42.154945 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 16 01:52:42.156044 systemd[1]: Starting containerd.service - containerd container runtime... Apr 16 01:52:42.159437 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 16 01:52:42.162455 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 16 01:52:42.168067 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 16 01:52:42.169355 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 16 01:52:42.170734 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 16 01:52:42.172537 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 16 01:52:42.174990 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 16 01:52:42.181686 jq[1436]: false Apr 16 01:52:42.186051 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 16 01:52:42.189442 dbus-daemon[1435]: [system] SELinux support is enabled Apr 16 01:52:42.192380 systemd[1]: Starting systemd-logind.service - User Login Management... 
Apr 16 01:52:42.193934 extend-filesystems[1437]: Found loop3 Apr 16 01:52:42.196352 extend-filesystems[1437]: Found loop4 Apr 16 01:52:42.196352 extend-filesystems[1437]: Found loop5 Apr 16 01:52:42.196352 extend-filesystems[1437]: Found sr0 Apr 16 01:52:42.196352 extend-filesystems[1437]: Found vda Apr 16 01:52:42.196352 extend-filesystems[1437]: Found vda1 Apr 16 01:52:42.196352 extend-filesystems[1437]: Found vda2 Apr 16 01:52:42.196352 extend-filesystems[1437]: Found vda3 Apr 16 01:52:42.196352 extend-filesystems[1437]: Found usr Apr 16 01:52:42.196352 extend-filesystems[1437]: Found vda4 Apr 16 01:52:42.196352 extend-filesystems[1437]: Found vda6 Apr 16 01:52:42.196352 extend-filesystems[1437]: Found vda7 Apr 16 01:52:42.196352 extend-filesystems[1437]: Found vda9 Apr 16 01:52:42.196352 extend-filesystems[1437]: Checking size of /dev/vda9 Apr 16 01:52:42.277055 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 16 01:52:42.277086 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1323) Apr 16 01:52:42.277096 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 16 01:52:42.195173 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 16 01:52:42.277153 extend-filesystems[1437]: Resized partition /dev/vda9 Apr 16 01:52:42.195561 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 16 01:52:42.281791 extend-filesystems[1458]: resize2fs 1.47.1 (20-May-2024) Apr 16 01:52:42.281791 extend-filesystems[1458]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 16 01:52:42.281791 extend-filesystems[1458]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 16 01:52:42.281791 extend-filesystems[1458]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
Apr 16 01:52:42.298146 update_engine[1451]: I20260416 01:52:42.221410 1451 main.cc:92] Flatcar Update Engine starting Apr 16 01:52:42.298146 update_engine[1451]: I20260416 01:52:42.225183 1451 update_check_scheduler.cc:74] Next update check in 3m2s Apr 16 01:52:42.196354 systemd[1]: Starting update-engine.service - Update Engine... Apr 16 01:52:42.300963 extend-filesystems[1437]: Resized filesystem in /dev/vda9 Apr 16 01:52:42.308433 jq[1453]: true Apr 16 01:52:42.201034 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 16 01:52:42.205097 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 16 01:52:42.212911 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 16 01:52:42.309275 tar[1460]: linux-amd64/LICENSE Apr 16 01:52:42.309275 tar[1460]: linux-amd64/helm Apr 16 01:52:42.213077 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 16 01:52:42.309468 jq[1461]: true Apr 16 01:52:42.213336 systemd[1]: motdgen.service: Deactivated successfully. Apr 16 01:52:42.213447 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 16 01:52:42.236275 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 16 01:52:42.236395 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 16 01:52:42.263070 (ntainerd)[1462]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 16 01:52:42.266422 systemd[1]: Started update-engine.service - Update Engine. Apr 16 01:52:42.270189 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Apr 16 01:52:42.270210 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 16 01:52:42.273368 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 16 01:52:42.273380 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 16 01:52:42.290015 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 16 01:52:42.295556 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 16 01:52:42.295725 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 16 01:52:42.301073 systemd-logind[1449]: Watching system buttons on /dev/input/event1 (Power Button) Apr 16 01:52:42.301084 systemd-logind[1449]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 16 01:52:42.307058 systemd-logind[1449]: New seat seat0. Apr 16 01:52:42.315271 systemd[1]: Started systemd-logind.service - User Login Management. Apr 16 01:52:42.325065 bash[1486]: Updated "/home/core/.ssh/authorized_keys" Apr 16 01:52:42.326180 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 16 01:52:42.328410 locksmithd[1482]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 16 01:52:42.330051 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 16 01:52:42.435525 containerd[1462]: time="2026-04-16T01:52:42.435382103Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 16 01:52:42.456259 containerd[1462]: time="2026-04-16T01:52:42.456206136Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Apr 16 01:52:42.458059 containerd[1462]: time="2026-04-16T01:52:42.458001441Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 16 01:52:42.458059 containerd[1462]: time="2026-04-16T01:52:42.458050253Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 16 01:52:42.458119 containerd[1462]: time="2026-04-16T01:52:42.458064058Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 16 01:52:42.458372 containerd[1462]: time="2026-04-16T01:52:42.458334884Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 16 01:52:42.458372 containerd[1462]: time="2026-04-16T01:52:42.458369307Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 16 01:52:42.458498 containerd[1462]: time="2026-04-16T01:52:42.458458107Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 16 01:52:42.458562 containerd[1462]: time="2026-04-16T01:52:42.458541272Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 16 01:52:42.458892 containerd[1462]: time="2026-04-16T01:52:42.458815816Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 16 01:52:42.458925 containerd[1462]: time="2026-04-16T01:52:42.458893994Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 16 01:52:42.458925 containerd[1462]: time="2026-04-16T01:52:42.458905905Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 16 01:52:42.458925 containerd[1462]: time="2026-04-16T01:52:42.458913137Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 16 01:52:42.459060 containerd[1462]: time="2026-04-16T01:52:42.459023513Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 16 01:52:42.459392 containerd[1462]: time="2026-04-16T01:52:42.459350623Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 16 01:52:42.459612 containerd[1462]: time="2026-04-16T01:52:42.459549551Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 16 01:52:42.459668 containerd[1462]: time="2026-04-16T01:52:42.459654874Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 16 01:52:42.459795 containerd[1462]: time="2026-04-16T01:52:42.459765864Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Apr 16 01:52:42.459929 containerd[1462]: time="2026-04-16T01:52:42.459902000Z" level=info msg="metadata content store policy set" policy=shared Apr 16 01:52:42.465186 containerd[1462]: time="2026-04-16T01:52:42.465106379Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 16 01:52:42.465186 containerd[1462]: time="2026-04-16T01:52:42.465162718Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 16 01:52:42.465186 containerd[1462]: time="2026-04-16T01:52:42.465176824Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 16 01:52:42.465186 containerd[1462]: time="2026-04-16T01:52:42.465188337Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 16 01:52:42.465338 containerd[1462]: time="2026-04-16T01:52:42.465199767Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 16 01:52:42.465338 containerd[1462]: time="2026-04-16T01:52:42.465293200Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 16 01:52:42.465670 containerd[1462]: time="2026-04-16T01:52:42.465608912Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 16 01:52:42.465766 containerd[1462]: time="2026-04-16T01:52:42.465707903Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 16 01:52:42.465766 containerd[1462]: time="2026-04-16T01:52:42.465719486Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 16 01:52:42.465766 containerd[1462]: time="2026-04-16T01:52:42.465729010Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Apr 16 01:52:42.465766 containerd[1462]: time="2026-04-16T01:52:42.465743916Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 16 01:52:42.465766 containerd[1462]: time="2026-04-16T01:52:42.465753403Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 16 01:52:42.465766 containerd[1462]: time="2026-04-16T01:52:42.465762355Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 16 01:52:42.465943 containerd[1462]: time="2026-04-16T01:52:42.465772192Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 16 01:52:42.465943 containerd[1462]: time="2026-04-16T01:52:42.465782458Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 16 01:52:42.465943 containerd[1462]: time="2026-04-16T01:52:42.465791745Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 16 01:52:42.465943 containerd[1462]: time="2026-04-16T01:52:42.465800657Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 16 01:52:42.465943 containerd[1462]: time="2026-04-16T01:52:42.465808296Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 16 01:52:42.465943 containerd[1462]: time="2026-04-16T01:52:42.465822206Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 16 01:52:42.465943 containerd[1462]: time="2026-04-16T01:52:42.465831016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Apr 16 01:52:42.465943 containerd[1462]: time="2026-04-16T01:52:42.465885571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 16 01:52:42.465943 containerd[1462]: time="2026-04-16T01:52:42.465894707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 16 01:52:42.465943 containerd[1462]: time="2026-04-16T01:52:42.465904249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 16 01:52:42.465943 containerd[1462]: time="2026-04-16T01:52:42.465913972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 16 01:52:42.465943 containerd[1462]: time="2026-04-16T01:52:42.465922549Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 16 01:52:42.465943 containerd[1462]: time="2026-04-16T01:52:42.465931166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 16 01:52:42.465943 containerd[1462]: time="2026-04-16T01:52:42.465941359Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 16 01:52:42.466135 containerd[1462]: time="2026-04-16T01:52:42.465951995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 16 01:52:42.466135 containerd[1462]: time="2026-04-16T01:52:42.465960065Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 16 01:52:42.466135 containerd[1462]: time="2026-04-16T01:52:42.465968376Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 16 01:52:42.466135 containerd[1462]: time="2026-04-16T01:52:42.465976780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1
Apr 16 01:52:42.466135 containerd[1462]: time="2026-04-16T01:52:42.465986834Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 16 01:52:42.466135 containerd[1462]: time="2026-04-16T01:52:42.466003114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 16 01:52:42.466135 containerd[1462]: time="2026-04-16T01:52:42.466011736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 16 01:52:42.466135 containerd[1462]: time="2026-04-16T01:52:42.466019883Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 16 01:52:42.466135 containerd[1462]: time="2026-04-16T01:52:42.466054853Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 16 01:52:42.466135 containerd[1462]: time="2026-04-16T01:52:42.466070002Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 16 01:52:42.466135 containerd[1462]: time="2026-04-16T01:52:42.466077553Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 16 01:52:42.466135 containerd[1462]: time="2026-04-16T01:52:42.466085576Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 16 01:52:42.466135 containerd[1462]: time="2026-04-16T01:52:42.466092627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 16 01:52:42.466319 containerd[1462]: time="2026-04-16T01:52:42.466100701Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 16 01:52:42.466319 containerd[1462]: time="2026-04-16T01:52:42.466107407Z" level=info msg="NRI interface is disabled by configuration."
Apr 16 01:52:42.466319 containerd[1462]: time="2026-04-16T01:52:42.466114126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 16 01:52:42.466359 containerd[1462]: time="2026-04-16T01:52:42.466307003Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 16 01:52:42.466359 containerd[1462]: time="2026-04-16T01:52:42.466344624Z" level=info msg="Connect containerd service"
Apr 16 01:52:42.466502 containerd[1462]: time="2026-04-16T01:52:42.466368765Z" level=info msg="using legacy CRI server"
Apr 16 01:52:42.466502 containerd[1462]: time="2026-04-16T01:52:42.466373894Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 16 01:52:42.466502 containerd[1462]: time="2026-04-16T01:52:42.466456058Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 16 01:52:42.467087 containerd[1462]: time="2026-04-16T01:52:42.467000851Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 16 01:52:42.467265 containerd[1462]: time="2026-04-16T01:52:42.467158643Z" level=info msg="Start subscribing containerd event"
Apr 16 01:52:42.467265 containerd[1462]: time="2026-04-16T01:52:42.467222235Z" level=info msg="Start recovering state"
Apr 16 01:52:42.467299 containerd[1462]: time="2026-04-16T01:52:42.467279179Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 16 01:52:42.467447 containerd[1462]: time="2026-04-16T01:52:42.467420024Z" level=info msg="Start event monitor"
Apr 16 01:52:42.467466 containerd[1462]: time="2026-04-16T01:52:42.467458595Z" level=info msg="Start snapshots syncer"
Apr 16 01:52:42.467480 containerd[1462]: time="2026-04-16T01:52:42.467467133Z" level=info msg="Start cni network conf syncer for default"
Apr 16 01:52:42.467480 containerd[1462]: time="2026-04-16T01:52:42.467472552Z" level=info msg="Start streaming server"
Apr 16 01:52:42.467679 containerd[1462]: time="2026-04-16T01:52:42.467635033Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 16 01:52:42.467803 containerd[1462]: time="2026-04-16T01:52:42.467774995Z" level=info msg="containerd successfully booted in 0.033576s"
Apr 16 01:52:42.468084 systemd[1]: Started containerd.service - containerd container runtime.
Apr 16 01:52:42.479829 sshd_keygen[1457]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 16 01:52:42.500049 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 16 01:52:42.511166 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 16 01:52:42.519512 systemd[1]: issuegen.service: Deactivated successfully.
Apr 16 01:52:42.519706 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 16 01:52:42.523500 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 16 01:52:42.536029 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 16 01:52:42.550161 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 16 01:52:42.554028 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 16 01:52:42.556647 systemd[1]: Reached target getty.target - Login Prompts.
Apr 16 01:52:42.718707 tar[1460]: linux-amd64/README.md
Apr 16 01:52:42.742701 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 16 01:52:43.202255 systemd-networkd[1389]: eth0: Gained IPv6LL
Apr 16 01:52:43.205182 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 16 01:52:43.208414 systemd[1]: Reached target network-online.target - Network is Online.
Apr 16 01:52:43.220198 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Apr 16 01:52:43.224277 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 01:52:43.228042 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 16 01:52:43.244879 systemd[1]: coreos-metadata.service: Deactivated successfully.
Apr 16 01:52:43.245046 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Apr 16 01:52:43.248950 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 16 01:52:43.252995 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 16 01:52:43.904411 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 01:52:43.909033 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 16 01:52:43.909390 (kubelet)[1545]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 01:52:43.912931 systemd[1]: Startup finished in 2.514s (kernel) + 5.265s (initrd) + 3.799s (userspace) = 11.580s.
Apr 16 01:52:44.359932 kubelet[1545]: E0416 01:52:44.359687 1545 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 01:52:44.362106 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 01:52:44.362234 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 01:52:48.297579 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 16 01:52:48.299026 systemd[1]: Started sshd@0-10.0.0.136:22-10.0.0.1:33940.service - OpenSSH per-connection server daemon (10.0.0.1:33940).
Apr 16 01:52:48.345753 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 33940 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4
Apr 16 01:52:48.347690 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 01:52:48.354176 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 16 01:52:48.361098 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 16 01:52:48.362721 systemd-logind[1449]: New session 1 of user core.
Apr 16 01:52:48.370254 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 16 01:52:48.382296 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 16 01:52:48.384478 (systemd)[1563]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 16 01:52:48.452182 systemd[1563]: Queued start job for default target default.target.
Apr 16 01:52:48.469007 systemd[1563]: Created slice app.slice - User Application Slice.
Apr 16 01:52:48.469070 systemd[1563]: Reached target paths.target - Paths.
Apr 16 01:52:48.469087 systemd[1563]: Reached target timers.target - Timers.
Apr 16 01:52:48.470452 systemd[1563]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 16 01:52:48.480250 systemd[1563]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 16 01:52:48.480341 systemd[1563]: Reached target sockets.target - Sockets.
Apr 16 01:52:48.480355 systemd[1563]: Reached target basic.target - Basic System.
Apr 16 01:52:48.480388 systemd[1563]: Reached target default.target - Main User Target.
Apr 16 01:52:48.480414 systemd[1563]: Startup finished in 90ms.
Apr 16 01:52:48.480506 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 16 01:52:48.481801 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 16 01:52:48.551208 systemd[1]: Started sshd@1-10.0.0.136:22-10.0.0.1:33942.service - OpenSSH per-connection server daemon (10.0.0.1:33942).
Apr 16 01:52:48.582733 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 33942 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4
Apr 16 01:52:48.584075 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 01:52:48.587797 systemd-logind[1449]: New session 2 of user core.
Apr 16 01:52:48.603046 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 16 01:52:48.655939 sshd[1574]: pam_unix(sshd:session): session closed for user core
Apr 16 01:52:48.681379 systemd[1]: sshd@1-10.0.0.136:22-10.0.0.1:33942.service: Deactivated successfully.
Apr 16 01:52:48.682787 systemd[1]: session-2.scope: Deactivated successfully.
Apr 16 01:52:48.684064 systemd-logind[1449]: Session 2 logged out. Waiting for processes to exit.
Apr 16 01:52:48.685140 systemd[1]: Started sshd@2-10.0.0.136:22-10.0.0.1:33944.service - OpenSSH per-connection server daemon (10.0.0.1:33944).
Apr 16 01:52:48.685973 systemd-logind[1449]: Removed session 2.
Apr 16 01:52:48.718351 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 33944 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4
Apr 16 01:52:48.719585 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 01:52:48.724326 systemd-logind[1449]: New session 3 of user core.
Apr 16 01:52:48.744388 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 16 01:52:48.794088 sshd[1581]: pam_unix(sshd:session): session closed for user core
Apr 16 01:52:48.804370 systemd[1]: sshd@2-10.0.0.136:22-10.0.0.1:33944.service: Deactivated successfully.
Apr 16 01:52:48.806335 systemd[1]: session-3.scope: Deactivated successfully.
Apr 16 01:52:48.807787 systemd-logind[1449]: Session 3 logged out. Waiting for processes to exit.
Apr 16 01:52:48.822214 systemd[1]: Started sshd@3-10.0.0.136:22-10.0.0.1:33960.service - OpenSSH per-connection server daemon (10.0.0.1:33960).
Apr 16 01:52:48.823181 systemd-logind[1449]: Removed session 3.
Apr 16 01:52:48.855031 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 33960 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4
Apr 16 01:52:48.856204 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 01:52:48.860130 systemd-logind[1449]: New session 4 of user core.
Apr 16 01:52:48.866166 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 16 01:52:48.921124 sshd[1588]: pam_unix(sshd:session): session closed for user core
Apr 16 01:52:48.930934 systemd[1]: sshd@3-10.0.0.136:22-10.0.0.1:33960.service: Deactivated successfully.
Apr 16 01:52:48.932124 systemd[1]: session-4.scope: Deactivated successfully.
Apr 16 01:52:48.933247 systemd-logind[1449]: Session 4 logged out. Waiting for processes to exit.
Apr 16 01:52:48.934348 systemd[1]: Started sshd@4-10.0.0.136:22-10.0.0.1:33968.service - OpenSSH per-connection server daemon (10.0.0.1:33968).
Apr 16 01:52:48.935122 systemd-logind[1449]: Removed session 4.
Apr 16 01:52:48.968690 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 33968 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4
Apr 16 01:52:48.969747 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 01:52:48.973421 systemd-logind[1449]: New session 5 of user core.
Apr 16 01:52:48.988062 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 16 01:52:49.047194 sudo[1598]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 16 01:52:49.047478 sudo[1598]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 16 01:52:49.074159 sudo[1598]: pam_unix(sudo:session): session closed for user root
Apr 16 01:52:49.076199 sshd[1595]: pam_unix(sshd:session): session closed for user core
Apr 16 01:52:49.089326 systemd[1]: sshd@4-10.0.0.136:22-10.0.0.1:33968.service: Deactivated successfully.
Apr 16 01:52:49.090812 systemd[1]: session-5.scope: Deactivated successfully.
Apr 16 01:52:49.092083 systemd-logind[1449]: Session 5 logged out. Waiting for processes to exit.
Apr 16 01:52:49.093276 systemd[1]: Started sshd@5-10.0.0.136:22-10.0.0.1:33976.service - OpenSSH per-connection server daemon (10.0.0.1:33976).
Apr 16 01:52:49.094123 systemd-logind[1449]: Removed session 5.
Apr 16 01:52:49.129380 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 33976 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4
Apr 16 01:52:49.130555 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 01:52:49.134690 systemd-logind[1449]: New session 6 of user core.
Apr 16 01:52:49.144115 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 16 01:52:49.195470 sudo[1607]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 16 01:52:49.195732 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 16 01:52:49.199334 sudo[1607]: pam_unix(sudo:session): session closed for user root
Apr 16 01:52:49.204093 sudo[1606]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 16 01:52:49.204366 sudo[1606]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 16 01:52:49.221377 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 16 01:52:49.223552 auditctl[1610]: No rules
Apr 16 01:52:49.224377 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 16 01:52:49.224576 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 16 01:52:49.226586 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 16 01:52:49.253659 augenrules[1628]: No rules
Apr 16 01:52:49.254787 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 16 01:52:49.255589 sudo[1606]: pam_unix(sudo:session): session closed for user root
Apr 16 01:52:49.257457 sshd[1603]: pam_unix(sshd:session): session closed for user core
Apr 16 01:52:49.276444 systemd[1]: sshd@5-10.0.0.136:22-10.0.0.1:33976.service: Deactivated successfully.
Apr 16 01:52:49.277819 systemd[1]: session-6.scope: Deactivated successfully.
Apr 16 01:52:49.279364 systemd-logind[1449]: Session 6 logged out. Waiting for processes to exit.
Apr 16 01:52:49.288253 systemd[1]: Started sshd@6-10.0.0.136:22-10.0.0.1:33990.service - OpenSSH per-connection server daemon (10.0.0.1:33990).
Apr 16 01:52:49.289192 systemd-logind[1449]: Removed session 6.
Apr 16 01:52:49.318772 sshd[1636]: Accepted publickey for core from 10.0.0.1 port 33990 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4
Apr 16 01:52:49.319538 sshd[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 01:52:49.323520 systemd-logind[1449]: New session 7 of user core.
Apr 16 01:52:49.337157 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 16 01:52:49.390490 sudo[1639]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 16 01:52:49.390767 sudo[1639]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 16 01:52:49.661137 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 16 01:52:49.661275 (dockerd)[1657]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 16 01:52:49.928717 dockerd[1657]: time="2026-04-16T01:52:49.928539718Z" level=info msg="Starting up"
Apr 16 01:52:50.076785 dockerd[1657]: time="2026-04-16T01:52:50.076689501Z" level=info msg="Loading containers: start."
Apr 16 01:52:50.190895 kernel: Initializing XFRM netlink socket
Apr 16 01:52:50.279979 systemd-networkd[1389]: docker0: Link UP
Apr 16 01:52:50.304768 dockerd[1657]: time="2026-04-16T01:52:50.304681969Z" level=info msg="Loading containers: done."
Apr 16 01:52:50.318025 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4133823836-merged.mount: Deactivated successfully.
Apr 16 01:52:50.320950 dockerd[1657]: time="2026-04-16T01:52:50.320831314Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 16 01:52:50.321060 dockerd[1657]: time="2026-04-16T01:52:50.321026293Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 16 01:52:50.321199 dockerd[1657]: time="2026-04-16T01:52:50.321149418Z" level=info msg="Daemon has completed initialization"
Apr 16 01:52:50.360822 dockerd[1657]: time="2026-04-16T01:52:50.360741542Z" level=info msg="API listen on /run/docker.sock"
Apr 16 01:52:50.361047 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 16 01:52:50.775988 containerd[1462]: time="2026-04-16T01:52:50.775767891Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\""
Apr 16 01:52:51.497106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2600728447.mount: Deactivated successfully.
Apr 16 01:52:52.315368 containerd[1462]: time="2026-04-16T01:52:52.315258028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 01:52:52.316217 containerd[1462]: time="2026-04-16T01:52:52.316147651Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30193427"
Apr 16 01:52:52.317336 containerd[1462]: time="2026-04-16T01:52:52.317300144Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 01:52:52.320255 containerd[1462]: time="2026-04-16T01:52:52.320199782Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 01:52:52.320939 containerd[1462]: time="2026-04-16T01:52:52.320899089Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 1.545096651s"
Apr 16 01:52:52.321001 containerd[1462]: time="2026-04-16T01:52:52.320942750Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\""
Apr 16 01:52:52.321555 containerd[1462]: time="2026-04-16T01:52:52.321517341Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\""
Apr 16 01:52:53.214459 containerd[1462]: time="2026-04-16T01:52:53.214363742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 01:52:53.215150 containerd[1462]: time="2026-04-16T01:52:53.215111977Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171379"
Apr 16 01:52:53.216407 containerd[1462]: time="2026-04-16T01:52:53.216332263Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 01:52:53.220142 containerd[1462]: time="2026-04-16T01:52:53.220060694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 01:52:53.221143 containerd[1462]: time="2026-04-16T01:52:53.221087476Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 899.52134ms"
Apr 16 01:52:53.221143 containerd[1462]: time="2026-04-16T01:52:53.221130702Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\""
Apr 16 01:52:53.222168 containerd[1462]: time="2026-04-16T01:52:53.221929434Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\""
Apr 16 01:52:54.183483 containerd[1462]: time="2026-04-16T01:52:54.183410553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 01:52:54.184427 containerd[1462]: time="2026-04-16T01:52:54.184348042Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289688"
Apr 16 01:52:54.185351 containerd[1462]: time="2026-04-16T01:52:54.185281327Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 01:52:54.187726 containerd[1462]: time="2026-04-16T01:52:54.187630864Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 01:52:54.188734 containerd[1462]: time="2026-04-16T01:52:54.188686388Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 966.73431ms"
Apr 16 01:52:54.188734 containerd[1462]: time="2026-04-16T01:52:54.188724137Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\""
Apr 16 01:52:54.189430 containerd[1462]: time="2026-04-16T01:52:54.189379791Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\""
Apr 16 01:52:54.612706 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 16 01:52:54.620182 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 01:52:54.781767 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 01:52:54.786546 (kubelet)[1879]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 01:52:54.836110 kubelet[1879]: E0416 01:52:54.836026 1879 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 01:52:54.839129 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 01:52:54.839301 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 01:52:55.096770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount552775128.mount: Deactivated successfully.
Apr 16 01:52:55.415563 containerd[1462]: time="2026-04-16T01:52:55.415402126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 01:52:55.416521 containerd[1462]: time="2026-04-16T01:52:55.416467092Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010605"
Apr 16 01:52:55.417745 containerd[1462]: time="2026-04-16T01:52:55.417704558Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 01:52:55.420159 containerd[1462]: time="2026-04-16T01:52:55.420118045Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 01:52:55.420767 containerd[1462]: time="2026-04-16T01:52:55.420727295Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 1.231306788s"
Apr 16 01:52:55.420767 containerd[1462]: time="2026-04-16T01:52:55.420762883Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\""
Apr 16 01:52:55.421923 containerd[1462]: time="2026-04-16T01:52:55.421382606Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Apr 16 01:52:55.802326 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2188474105.mount: Deactivated successfully.
Apr 16 01:52:56.498707 containerd[1462]: time="2026-04-16T01:52:56.498611910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 01:52:56.499910 containerd[1462]: time="2026-04-16T01:52:56.499813391Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20941714"
Apr 16 01:52:56.501050 containerd[1462]: time="2026-04-16T01:52:56.501014086Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 01:52:56.503737 containerd[1462]: time="2026-04-16T01:52:56.503685476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 01:52:56.504726 containerd[1462]: time="2026-04-16T01:52:56.504684637Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.0832792s"
Apr 16 01:52:56.504726 containerd[1462]: time="2026-04-16T01:52:56.504725284Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Apr 16 01:52:56.505514 containerd[1462]: time="2026-04-16T01:52:56.505468120Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 16 01:52:56.824381 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2743697916.mount: Deactivated successfully.
Apr 16 01:52:56.833021 containerd[1462]: time="2026-04-16T01:52:56.832945149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 01:52:56.833936 containerd[1462]: time="2026-04-16T01:52:56.833881398Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070"
Apr 16 01:52:56.834961 containerd[1462]: time="2026-04-16T01:52:56.834918402Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 01:52:56.837145 containerd[1462]: time="2026-04-16T01:52:56.837098088Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 01:52:56.837575 containerd[1462]: time="2026-04-16T01:52:56.837526136Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 332.016572ms"
Apr 16 01:52:56.837575 containerd[1462]: time="2026-04-16T01:52:56.837567489Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Apr 16 01:52:56.838259 containerd[1462]: time="2026-04-16T01:52:56.838245012Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Apr 16 01:52:57.235483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1431364411.mount: Deactivated successfully.
Apr 16 01:52:57.889025 containerd[1462]: time="2026-04-16T01:52:57.888935658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 01:52:57.889728 containerd[1462]: time="2026-04-16T01:52:57.889629565Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718826"
Apr 16 01:52:57.890570 containerd[1462]: time="2026-04-16T01:52:57.890525750Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 01:52:57.893578 containerd[1462]: time="2026-04-16T01:52:57.893500812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 01:52:57.894521 containerd[1462]: time="2026-04-16T01:52:57.894472259Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.056155557s"
Apr 16 01:52:57.894521 containerd[1462]: time="2026-04-16T01:52:57.894513816Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Apr 16 01:52:59.955584 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 01:52:59.966307 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 01:52:59.988431 systemd[1]: Reloading requested from client PID 2045 ('systemctl') (unit session-7.scope)...
Apr 16 01:52:59.988460 systemd[1]: Reloading...
Apr 16 01:53:00.053919 zram_generator::config[2084]: No configuration found.
Apr 16 01:53:00.136661 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 16 01:53:00.186456 systemd[1]: Reloading finished in 197 ms.
Apr 16 01:53:00.252827 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 01:53:00.255943 systemd[1]: kubelet.service: Deactivated successfully.
Apr 16 01:53:00.256125 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 01:53:00.257988 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 01:53:00.380228 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 01:53:00.384240 (kubelet)[2134]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 16 01:53:00.430708 kubelet[2134]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 01:53:00.430708 kubelet[2134]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 16 01:53:00.430708 kubelet[2134]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 01:53:00.431135 kubelet[2134]: I0416 01:53:00.430752 2134 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 16 01:53:01.072389 kubelet[2134]: I0416 01:53:01.072313 2134 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 16 01:53:01.072389 kubelet[2134]: I0416 01:53:01.072371 2134 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 16 01:53:01.072597 kubelet[2134]: I0416 01:53:01.072564 2134 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 16 01:53:01.100135 kubelet[2134]: E0416 01:53:01.100050 2134 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.136:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 16 01:53:01.100712 kubelet[2134]: I0416 01:53:01.100642 2134 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 16 01:53:01.106292 kubelet[2134]: E0416 01:53:01.106237 2134 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 16 01:53:01.106292 kubelet[2134]: I0416 01:53:01.106284 2134 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 16 01:53:01.109977 kubelet[2134]: I0416 01:53:01.109925 2134 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 16 01:53:01.110816 kubelet[2134]: I0416 01:53:01.110757 2134 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 16 01:53:01.111006 kubelet[2134]: I0416 01:53:01.110794 2134 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyMana
gerPolicyOptions":null,"CgroupVersion":2} Apr 16 01:53:01.111006 kubelet[2134]: I0416 01:53:01.110992 2134 topology_manager.go:138] "Creating topology manager with none policy" Apr 16 01:53:01.111006 kubelet[2134]: I0416 01:53:01.111001 2134 container_manager_linux.go:303] "Creating device plugin manager" Apr 16 01:53:01.111153 kubelet[2134]: I0416 01:53:01.111094 2134 state_mem.go:36] "Initialized new in-memory state store" Apr 16 01:53:01.115746 kubelet[2134]: I0416 01:53:01.115636 2134 kubelet.go:480] "Attempting to sync node with API server" Apr 16 01:53:01.115746 kubelet[2134]: I0416 01:53:01.115700 2134 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 16 01:53:01.115746 kubelet[2134]: I0416 01:53:01.115726 2134 kubelet.go:386] "Adding apiserver pod source" Apr 16 01:53:01.115746 kubelet[2134]: I0416 01:53:01.115743 2134 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 16 01:53:01.118571 kubelet[2134]: I0416 01:53:01.118509 2134 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 16 01:53:01.119230 kubelet[2134]: I0416 01:53:01.119217 2134 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 16 01:53:01.120542 kubelet[2134]: W0416 01:53:01.120396 2134 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 16 01:53:01.123383 kubelet[2134]: E0416 01:53:01.123202 2134 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 01:53:01.123622 kubelet[2134]: E0416 01:53:01.123496 2134 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.136:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 01:53:01.126136 kubelet[2134]: I0416 01:53:01.126091 2134 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 16 01:53:01.126200 kubelet[2134]: I0416 01:53:01.126158 2134 server.go:1289] "Started kubelet" Apr 16 01:53:01.128056 kubelet[2134]: I0416 01:53:01.126997 2134 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 16 01:53:01.128056 kubelet[2134]: I0416 01:53:01.127502 2134 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 16 01:53:01.128056 kubelet[2134]: I0416 01:53:01.127536 2134 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 16 01:53:01.133361 kubelet[2134]: I0416 01:53:01.132313 2134 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 16 01:53:01.134764 kubelet[2134]: I0416 01:53:01.134717 2134 server.go:317] "Adding debug handlers to kubelet server" Apr 16 01:53:01.137768 kubelet[2134]: E0416 01:53:01.133777 2134 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.136:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.136:6443: connect: connection refused" 
event="&Event{ObjectMeta:{localhost.18a6b36c61fe21d9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 01:53:01.126123993 +0000 UTC m=+0.737837966,LastTimestamp:2026-04-16 01:53:01.126123993 +0000 UTC m=+0.737837966,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 01:53:01.139416 kubelet[2134]: I0416 01:53:01.138561 2134 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 16 01:53:01.143767 kubelet[2134]: I0416 01:53:01.143731 2134 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 16 01:53:01.143965 kubelet[2134]: I0416 01:53:01.143939 2134 factory.go:223] Registration of the systemd container factory successfully Apr 16 01:53:01.144003 kubelet[2134]: E0416 01:53:01.143977 2134 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:53:01.144022 kubelet[2134]: I0416 01:53:01.144009 2134 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 16 01:53:01.144614 kubelet[2134]: I0416 01:53:01.144579 2134 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 16 01:53:01.144675 kubelet[2134]: I0416 01:53:01.144650 2134 reconciler.go:26] "Reconciler: start to sync state" Apr 16 01:53:01.144742 kubelet[2134]: E0416 01:53:01.144660 2134 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.136:6443: connect: connection refused" interval="200ms" Apr 16 01:53:01.146085 kubelet[2134]: E0416 01:53:01.145742 2134 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 16 01:53:01.146085 kubelet[2134]: E0416 01:53:01.145824 2134 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 01:53:01.146998 kubelet[2134]: I0416 01:53:01.146831 2134 factory.go:223] Registration of the containerd container factory successfully Apr 16 01:53:01.160194 kubelet[2134]: I0416 01:53:01.160183 2134 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 16 01:53:01.160282 kubelet[2134]: I0416 01:53:01.160276 2134 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 16 01:53:01.160315 kubelet[2134]: I0416 01:53:01.160312 2134 state_mem.go:36] "Initialized new in-memory state store" Apr 16 01:53:01.162638 kubelet[2134]: I0416 01:53:01.162496 2134 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 16 01:53:01.164671 kubelet[2134]: I0416 01:53:01.164623 2134 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 16 01:53:01.164671 kubelet[2134]: I0416 01:53:01.164664 2134 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 16 01:53:01.164794 kubelet[2134]: I0416 01:53:01.164724 2134 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 16 01:53:01.164794 kubelet[2134]: I0416 01:53:01.164734 2134 kubelet.go:2436] "Starting kubelet main sync loop" Apr 16 01:53:01.164794 kubelet[2134]: E0416 01:53:01.164765 2134 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 16 01:53:01.225411 kubelet[2134]: I0416 01:53:01.225314 2134 policy_none.go:49] "None policy: Start" Apr 16 01:53:01.225411 kubelet[2134]: I0416 01:53:01.225365 2134 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 16 01:53:01.225411 kubelet[2134]: I0416 01:53:01.225379 2134 state_mem.go:35] "Initializing new in-memory state store" Apr 16 01:53:01.225933 kubelet[2134]: E0416 01:53:01.225815 2134 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.136:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 01:53:01.233280 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 16 01:53:01.245264 kubelet[2134]: E0416 01:53:01.245165 2134 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:53:01.251964 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 16 01:53:01.255526 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 16 01:53:01.262519 kubelet[2134]: E0416 01:53:01.262471 2134 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 16 01:53:01.262722 kubelet[2134]: I0416 01:53:01.262654 2134 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 16 01:53:01.262752 kubelet[2134]: I0416 01:53:01.262710 2134 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 16 01:53:01.263170 kubelet[2134]: I0416 01:53:01.262991 2134 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 16 01:53:01.263930 kubelet[2134]: E0416 01:53:01.263890 2134 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 16 01:53:01.263965 kubelet[2134]: E0416 01:53:01.263933 2134 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 01:53:01.275044 systemd[1]: Created slice kubepods-burstable-pode9ca41790ae21be9f4cbd451ade0acec.slice - libcontainer container kubepods-burstable-pode9ca41790ae21be9f4cbd451ade0acec.slice. Apr 16 01:53:01.300407 kubelet[2134]: E0416 01:53:01.300338 2134 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 01:53:01.303975 systemd[1]: Created slice kubepods-burstable-pod33fee6ba1581201eda98a989140db110.slice - libcontainer container kubepods-burstable-pod33fee6ba1581201eda98a989140db110.slice. 
Apr 16 01:53:01.305259 kubelet[2134]: E0416 01:53:01.305232 2134 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 01:53:01.306406 systemd[1]: Created slice kubepods-burstable-podd829cbc3adb17191ff0c90ac35f04582.slice - libcontainer container kubepods-burstable-podd829cbc3adb17191ff0c90ac35f04582.slice. Apr 16 01:53:01.307773 kubelet[2134]: E0416 01:53:01.307731 2134 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 01:53:01.345936 kubelet[2134]: E0416 01:53:01.345584 2134 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.136:6443: connect: connection refused" interval="400ms" Apr 16 01:53:01.365502 kubelet[2134]: I0416 01:53:01.365354 2134 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 01:53:01.365744 kubelet[2134]: E0416 01:53:01.365673 2134 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.136:6443/api/v1/nodes\": dial tcp 10.0.0.136:6443: connect: connection refused" node="localhost" Apr 16 01:53:01.445447 kubelet[2134]: I0416 01:53:01.445320 2134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 01:53:01.445447 kubelet[2134]: I0416 01:53:01.445393 2134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/d829cbc3adb17191ff0c90ac35f04582-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d829cbc3adb17191ff0c90ac35f04582\") " pod="kube-system/kube-apiserver-localhost" Apr 16 01:53:01.445447 kubelet[2134]: I0416 01:53:01.445409 2134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d829cbc3adb17191ff0c90ac35f04582-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d829cbc3adb17191ff0c90ac35f04582\") " pod="kube-system/kube-apiserver-localhost" Apr 16 01:53:01.445447 kubelet[2134]: I0416 01:53:01.445423 2134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 01:53:01.445447 kubelet[2134]: I0416 01:53:01.445438 2134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 01:53:01.446107 kubelet[2134]: I0416 01:53:01.445486 2134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost" Apr 16 01:53:01.446107 kubelet[2134]: I0416 01:53:01.445499 2134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d829cbc3adb17191ff0c90ac35f04582-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d829cbc3adb17191ff0c90ac35f04582\") " pod="kube-system/kube-apiserver-localhost" Apr 16 01:53:01.446107 kubelet[2134]: I0416 01:53:01.445516 2134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 01:53:01.446107 kubelet[2134]: I0416 01:53:01.445719 2134 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 01:53:01.568666 kubelet[2134]: I0416 01:53:01.568525 2134 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 01:53:01.569043 kubelet[2134]: E0416 01:53:01.568996 2134 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.136:6443/api/v1/nodes\": dial tcp 10.0.0.136:6443: connect: connection refused" node="localhost" Apr 16 01:53:01.602392 kubelet[2134]: E0416 01:53:01.601960 2134 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:53:01.603069 containerd[1462]: time="2026-04-16T01:53:01.602826294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,}" Apr 16 01:53:01.606002 kubelet[2134]: E0416 01:53:01.605961 2134 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:53:01.606500 containerd[1462]: time="2026-04-16T01:53:01.606453072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,}" Apr 16 01:53:01.609037 kubelet[2134]: E0416 01:53:01.608974 2134 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:53:01.609535 containerd[1462]: time="2026-04-16T01:53:01.609429257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d829cbc3adb17191ff0c90ac35f04582,Namespace:kube-system,Attempt:0,}" Apr 16 01:53:01.746748 kubelet[2134]: E0416 01:53:01.746647 2134 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.136:6443: connect: connection refused" interval="800ms" Apr 16 01:53:01.972052 kubelet[2134]: I0416 01:53:01.971778 2134 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 01:53:01.972726 kubelet[2134]: E0416 01:53:01.972466 2134 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.136:6443/api/v1/nodes\": dial tcp 10.0.0.136:6443: connect: connection refused" node="localhost" Apr 16 01:53:02.012768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1944182157.mount: Deactivated successfully. 
Apr 16 01:53:02.020017 containerd[1462]: time="2026-04-16T01:53:02.019935994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 01:53:02.021567 containerd[1462]: time="2026-04-16T01:53:02.021454890Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 16 01:53:02.025233 containerd[1462]: time="2026-04-16T01:53:02.025174383Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 01:53:02.026502 containerd[1462]: time="2026-04-16T01:53:02.026410492Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 01:53:02.027556 containerd[1462]: time="2026-04-16T01:53:02.027419562Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 01:53:02.029297 containerd[1462]: time="2026-04-16T01:53:02.028516716Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 16 01:53:02.029903 containerd[1462]: time="2026-04-16T01:53:02.029790772Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 16 01:53:02.031199 containerd[1462]: time="2026-04-16T01:53:02.031089886Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 01:53:02.033137 
containerd[1462]: time="2026-04-16T01:53:02.033077494Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 426.535559ms" Apr 16 01:53:02.033787 containerd[1462]: time="2026-04-16T01:53:02.033741493Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 424.259837ms" Apr 16 01:53:02.035783 containerd[1462]: time="2026-04-16T01:53:02.035648534Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 432.67867ms" Apr 16 01:53:02.143611 containerd[1462]: time="2026-04-16T01:53:02.142771982Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:53:02.143611 containerd[1462]: time="2026-04-16T01:53:02.142901007Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:53:02.143611 containerd[1462]: time="2026-04-16T01:53:02.142911554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:53:02.143611 containerd[1462]: time="2026-04-16T01:53:02.143436755Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:53:02.143611 containerd[1462]: time="2026-04-16T01:53:02.143489454Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:53:02.143611 containerd[1462]: time="2026-04-16T01:53:02.143498026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:53:02.143979 containerd[1462]: time="2026-04-16T01:53:02.143820340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:53:02.146075 containerd[1462]: time="2026-04-16T01:53:02.145976901Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:53:02.146075 containerd[1462]: time="2026-04-16T01:53:02.146011828Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:53:02.146075 containerd[1462]: time="2026-04-16T01:53:02.146022507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:53:02.146567 containerd[1462]: time="2026-04-16T01:53:02.146276528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:53:02.146567 containerd[1462]: time="2026-04-16T01:53:02.146284275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:53:02.167054 systemd[1]: Started cri-containerd-c1d18162bb109891b0e152b8a7fe5b51d4b596734417cc1a6c559b781b1e9aae.scope - libcontainer container c1d18162bb109891b0e152b8a7fe5b51d4b596734417cc1a6c559b781b1e9aae. 
Apr 16 01:53:02.171328 systemd[1]: Started cri-containerd-ed171ff70675ff2a89cb5fdeb2b880de905981160ee46e9f9b9d039d32263df5.scope - libcontainer container ed171ff70675ff2a89cb5fdeb2b880de905981160ee46e9f9b9d039d32263df5. Apr 16 01:53:02.173742 systemd[1]: Started cri-containerd-76c03453ed85c513148d752aa1b62be51674b61d788b389496edbe5b8f4dcbec.scope - libcontainer container 76c03453ed85c513148d752aa1b62be51674b61d788b389496edbe5b8f4dcbec. Apr 16 01:53:02.215822 kubelet[2134]: E0416 01:53:02.215794 2134 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.136:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 01:53:02.216261 containerd[1462]: time="2026-04-16T01:53:02.216186006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed171ff70675ff2a89cb5fdeb2b880de905981160ee46e9f9b9d039d32263df5\"" Apr 16 01:53:02.218074 kubelet[2134]: E0416 01:53:02.218057 2134 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:53:02.219768 containerd[1462]: time="2026-04-16T01:53:02.219671285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d829cbc3adb17191ff0c90ac35f04582,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1d18162bb109891b0e152b8a7fe5b51d4b596734417cc1a6c559b781b1e9aae\"" Apr 16 01:53:02.221629 kubelet[2134]: E0416 01:53:02.221604 2134 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: 
connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 01:53:02.222219 kubelet[2134]: E0416 01:53:02.222165 2134 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:53:02.223671 containerd[1462]: time="2026-04-16T01:53:02.223617935Z" level=info msg="CreateContainer within sandbox \"ed171ff70675ff2a89cb5fdeb2b880de905981160ee46e9f9b9d039d32263df5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 16 01:53:02.226031 containerd[1462]: time="2026-04-16T01:53:02.226011894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,} returns sandbox id \"76c03453ed85c513148d752aa1b62be51674b61d788b389496edbe5b8f4dcbec\"" Apr 16 01:53:02.226965 kubelet[2134]: E0416 01:53:02.226952 2134 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:53:02.228536 containerd[1462]: time="2026-04-16T01:53:02.228445542Z" level=info msg="CreateContainer within sandbox \"c1d18162bb109891b0e152b8a7fe5b51d4b596734417cc1a6c559b781b1e9aae\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 16 01:53:02.231058 containerd[1462]: time="2026-04-16T01:53:02.231028439Z" level=info msg="CreateContainer within sandbox \"76c03453ed85c513148d752aa1b62be51674b61d788b389496edbe5b8f4dcbec\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 16 01:53:02.242095 kubelet[2134]: E0416 01:53:02.242068 2134 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: 
connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 01:53:02.246904 containerd[1462]: time="2026-04-16T01:53:02.246769602Z" level=info msg="CreateContainer within sandbox \"ed171ff70675ff2a89cb5fdeb2b880de905981160ee46e9f9b9d039d32263df5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"594b46e6cb63d43d1e0797698280c16fa2d3dc52dcfcf25010f95d16b693216d\"" Apr 16 01:53:02.248215 containerd[1462]: time="2026-04-16T01:53:02.248167446Z" level=info msg="StartContainer for \"594b46e6cb63d43d1e0797698280c16fa2d3dc52dcfcf25010f95d16b693216d\"" Apr 16 01:53:02.254633 containerd[1462]: time="2026-04-16T01:53:02.254525281Z" level=info msg="CreateContainer within sandbox \"c1d18162bb109891b0e152b8a7fe5b51d4b596734417cc1a6c559b781b1e9aae\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"09ee45958d331865a938e04560af61c56bc9d467c7674a5c4c257f441de2b688\"" Apr 16 01:53:02.256419 containerd[1462]: time="2026-04-16T01:53:02.255160410Z" level=info msg="StartContainer for \"09ee45958d331865a938e04560af61c56bc9d467c7674a5c4c257f441de2b688\"" Apr 16 01:53:02.259517 containerd[1462]: time="2026-04-16T01:53:02.259494411Z" level=info msg="CreateContainer within sandbox \"76c03453ed85c513148d752aa1b62be51674b61d788b389496edbe5b8f4dcbec\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"59f5111445ebc7f145828882189a9e10f10ec0fd048ca4a99d59c61bc4f46578\"" Apr 16 01:53:02.260213 containerd[1462]: time="2026-04-16T01:53:02.260182149Z" level=info msg="StartContainer for \"59f5111445ebc7f145828882189a9e10f10ec0fd048ca4a99d59c61bc4f46578\"" Apr 16 01:53:02.283264 systemd[1]: Started cri-containerd-594b46e6cb63d43d1e0797698280c16fa2d3dc52dcfcf25010f95d16b693216d.scope - libcontainer container 594b46e6cb63d43d1e0797698280c16fa2d3dc52dcfcf25010f95d16b693216d. 
Apr 16 01:53:02.286531 systemd[1]: Started cri-containerd-09ee45958d331865a938e04560af61c56bc9d467c7674a5c4c257f441de2b688.scope - libcontainer container 09ee45958d331865a938e04560af61c56bc9d467c7674a5c4c257f441de2b688. Apr 16 01:53:02.287236 systemd[1]: Started cri-containerd-59f5111445ebc7f145828882189a9e10f10ec0fd048ca4a99d59c61bc4f46578.scope - libcontainer container 59f5111445ebc7f145828882189a9e10f10ec0fd048ca4a99d59c61bc4f46578. Apr 16 01:53:02.322894 kubelet[2134]: E0416 01:53:02.322764 2134 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.136:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 01:53:02.331334 containerd[1462]: time="2026-04-16T01:53:02.330958951Z" level=info msg="StartContainer for \"09ee45958d331865a938e04560af61c56bc9d467c7674a5c4c257f441de2b688\" returns successfully" Apr 16 01:53:02.332080 containerd[1462]: time="2026-04-16T01:53:02.331977835Z" level=info msg="StartContainer for \"594b46e6cb63d43d1e0797698280c16fa2d3dc52dcfcf25010f95d16b693216d\" returns successfully" Apr 16 01:53:02.340365 containerd[1462]: time="2026-04-16T01:53:02.340148461Z" level=info msg="StartContainer for \"59f5111445ebc7f145828882189a9e10f10ec0fd048ca4a99d59c61bc4f46578\" returns successfully" Apr 16 01:53:02.775546 kubelet[2134]: I0416 01:53:02.775470 2134 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 01:53:03.141567 kubelet[2134]: E0416 01:53:03.141384 2134 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 16 01:53:03.177575 kubelet[2134]: E0416 01:53:03.177518 2134 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" 
node="localhost" Apr 16 01:53:03.177781 kubelet[2134]: E0416 01:53:03.177671 2134 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:53:03.178927 kubelet[2134]: E0416 01:53:03.178908 2134 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 01:53:03.179065 kubelet[2134]: E0416 01:53:03.178988 2134 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:53:03.180674 kubelet[2134]: E0416 01:53:03.180615 2134 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 01:53:03.180769 kubelet[2134]: E0416 01:53:03.180738 2134 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:53:03.225434 kubelet[2134]: I0416 01:53:03.225225 2134 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 16 01:53:03.225434 kubelet[2134]: E0416 01:53:03.225277 2134 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 16 01:53:03.237149 kubelet[2134]: E0416 01:53:03.237091 2134 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:53:03.337814 kubelet[2134]: E0416 01:53:03.337651 2134 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:53:03.438637 kubelet[2134]: E0416 01:53:03.438355 2134 kubelet_node_status.go:466] "Error getting the current node from lister" err="node 
\"localhost\" not found" Apr 16 01:53:03.539934 kubelet[2134]: E0416 01:53:03.539671 2134 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:53:03.640580 kubelet[2134]: E0416 01:53:03.640435 2134 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:53:03.741571 kubelet[2134]: E0416 01:53:03.741361 2134 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:53:03.842660 kubelet[2134]: E0416 01:53:03.842536 2134 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:53:03.943807 kubelet[2134]: E0416 01:53:03.943622 2134 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:53:04.044985 kubelet[2134]: E0416 01:53:04.044783 2134 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:53:04.145728 kubelet[2134]: E0416 01:53:04.145598 2134 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:53:04.184055 kubelet[2134]: E0416 01:53:04.183935 2134 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 01:53:04.184668 kubelet[2134]: E0416 01:53:04.184117 2134 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:53:04.185608 kubelet[2134]: E0416 01:53:04.185542 2134 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 01:53:04.186043 kubelet[2134]: E0416 01:53:04.185814 2134 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:53:04.246003 kubelet[2134]: E0416 01:53:04.245930 2134 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:53:04.347624 kubelet[2134]: E0416 01:53:04.346955 2134 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:53:04.447997 kubelet[2134]: E0416 01:53:04.447811 2134 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:53:04.548202 kubelet[2134]: E0416 01:53:04.548102 2134 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:53:04.649336 kubelet[2134]: E0416 01:53:04.649112 2134 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:53:04.749693 kubelet[2134]: E0416 01:53:04.749574 2134 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:53:04.850747 kubelet[2134]: E0416 01:53:04.850471 2134 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:53:05.045169 kubelet[2134]: I0416 01:53:05.044932 2134 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 16 01:53:05.057229 kubelet[2134]: I0416 01:53:05.057178 2134 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 16 01:53:05.061884 kubelet[2134]: I0416 01:53:05.061787 2134 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 16 01:53:05.126265 kubelet[2134]: I0416 01:53:05.126144 2134 apiserver.go:52] "Watching apiserver" Apr 16 01:53:05.128598 kubelet[2134]: E0416 
01:53:05.128515 2134 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:53:05.145732 kubelet[2134]: I0416 01:53:05.145619 2134 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 16 01:53:05.185521 kubelet[2134]: E0416 01:53:05.185406 2134 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:53:05.185941 kubelet[2134]: E0416 01:53:05.185760 2134 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:53:05.336909 systemd[1]: Reloading requested from client PID 2423 ('systemctl') (unit session-7.scope)... Apr 16 01:53:05.336942 systemd[1]: Reloading... Apr 16 01:53:05.416913 zram_generator::config[2462]: No configuration found. Apr 16 01:53:05.502940 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 16 01:53:05.559675 systemd[1]: Reloading finished in 222 ms. Apr 16 01:53:05.596419 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 01:53:05.621254 systemd[1]: kubelet.service: Deactivated successfully. Apr 16 01:53:05.621549 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 01:53:05.621614 systemd[1]: kubelet.service: Consumed 1.213s CPU time, 132.4M memory peak, 0B memory swap peak. Apr 16 01:53:05.636544 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 01:53:05.745163 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 16 01:53:05.749482 (kubelet)[2507]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 16 01:53:05.798190 kubelet[2507]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 16 01:53:05.798190 kubelet[2507]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 16 01:53:05.798190 kubelet[2507]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 16 01:53:05.798190 kubelet[2507]: I0416 01:53:05.798189 2507 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 16 01:53:05.805598 kubelet[2507]: I0416 01:53:05.805519 2507 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 16 01:53:05.805598 kubelet[2507]: I0416 01:53:05.805562 2507 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 16 01:53:05.805744 kubelet[2507]: I0416 01:53:05.805700 2507 server.go:956] "Client rotation is on, will bootstrap in background" Apr 16 01:53:05.806885 kubelet[2507]: I0416 01:53:05.806784 2507 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 16 01:53:05.808922 kubelet[2507]: I0416 01:53:05.808830 2507 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 16 01:53:05.812353 kubelet[2507]: E0416 01:53:05.812246 2507 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = 
Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 16 01:53:05.812353 kubelet[2507]: I0416 01:53:05.812285 2507 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 16 01:53:05.818207 kubelet[2507]: I0416 01:53:05.818183 2507 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 16 01:53:05.818648 kubelet[2507]: I0416 01:53:05.818548 2507 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 16 01:53:05.818758 kubelet[2507]: I0416 01:53:05.818592 2507 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolic
y":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 16 01:53:05.818758 kubelet[2507]: I0416 01:53:05.818738 2507 topology_manager.go:138] "Creating topology manager with none policy" Apr 16 01:53:05.818758 kubelet[2507]: I0416 01:53:05.818749 2507 container_manager_linux.go:303] "Creating device plugin manager" Apr 16 01:53:05.819000 kubelet[2507]: I0416 01:53:05.818788 2507 state_mem.go:36] "Initialized new in-memory state store" Apr 16 01:53:05.819000 kubelet[2507]: I0416 01:53:05.818978 2507 kubelet.go:480] "Attempting to sync node with API server" Apr 16 01:53:05.819000 kubelet[2507]: I0416 01:53:05.818987 2507 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 16 01:53:05.819070 kubelet[2507]: I0416 01:53:05.819004 2507 kubelet.go:386] "Adding apiserver pod source" Apr 16 01:53:05.819070 kubelet[2507]: I0416 01:53:05.819016 2507 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 16 01:53:05.822971 kubelet[2507]: I0416 01:53:05.819959 2507 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 16 01:53:05.822971 kubelet[2507]: I0416 01:53:05.820353 2507 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 16 01:53:05.823132 kubelet[2507]: I0416 01:53:05.823096 2507 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 16 01:53:05.823212 kubelet[2507]: I0416 01:53:05.823170 2507 server.go:1289] "Started kubelet" Apr 16 01:53:05.825776 kubelet[2507]: I0416 01:53:05.825739 2507 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 16 
01:53:05.835077 kubelet[2507]: I0416 01:53:05.834001 2507 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 16 01:53:05.835077 kubelet[2507]: I0416 01:53:05.834128 2507 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 16 01:53:05.835077 kubelet[2507]: I0416 01:53:05.834224 2507 reconciler.go:26] "Reconciler: start to sync state" Apr 16 01:53:05.835317 kubelet[2507]: I0416 01:53:05.835170 2507 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 16 01:53:05.839358 kubelet[2507]: I0416 01:53:05.839291 2507 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 16 01:53:05.841417 kubelet[2507]: I0416 01:53:05.841401 2507 factory.go:223] Registration of the systemd container factory successfully Apr 16 01:53:05.841520 kubelet[2507]: I0416 01:53:05.841478 2507 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 16 01:53:05.843407 kubelet[2507]: I0416 01:53:05.843102 2507 factory.go:223] Registration of the containerd container factory successfully Apr 16 01:53:05.848914 kubelet[2507]: E0416 01:53:05.848665 2507 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 16 01:53:05.850077 kubelet[2507]: I0416 01:53:05.838642 2507 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 16 01:53:05.852746 kubelet[2507]: I0416 01:53:05.851429 2507 server.go:317] "Adding debug handlers to kubelet server" Apr 16 01:53:05.852746 kubelet[2507]: I0416 01:53:05.852025 2507 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 16 01:53:05.855119 kubelet[2507]: I0416 01:53:05.855049 2507 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 16 01:53:05.859902 kubelet[2507]: I0416 01:53:05.859302 2507 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 16 01:53:05.859902 kubelet[2507]: I0416 01:53:05.859321 2507 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 16 01:53:05.859902 kubelet[2507]: I0416 01:53:05.859339 2507 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 16 01:53:05.859902 kubelet[2507]: I0416 01:53:05.859395 2507 kubelet.go:2436] "Starting kubelet main sync loop" Apr 16 01:53:05.859902 kubelet[2507]: E0416 01:53:05.859426 2507 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 16 01:53:05.878834 kubelet[2507]: I0416 01:53:05.878788 2507 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 16 01:53:05.878834 kubelet[2507]: I0416 01:53:05.878819 2507 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 16 01:53:05.878834 kubelet[2507]: I0416 01:53:05.878871 2507 state_mem.go:36] "Initialized new in-memory state store" Apr 16 01:53:05.878999 kubelet[2507]: I0416 01:53:05.878963 2507 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 16 01:53:05.878999 kubelet[2507]: I0416 01:53:05.878972 2507 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 16 01:53:05.878999 kubelet[2507]: I0416 01:53:05.878985 2507 policy_none.go:49] "None policy: Start" Apr 16 01:53:05.878999 kubelet[2507]: I0416 01:53:05.878992 2507 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 16 01:53:05.878999 kubelet[2507]: I0416 01:53:05.878998 2507 state_mem.go:35] "Initializing new in-memory state store" Apr 16 01:53:05.879099 kubelet[2507]: I0416 01:53:05.879066 2507 state_mem.go:75] "Updated machine memory state" Apr 16 01:53:05.883410 kubelet[2507]: E0416 01:53:05.883377 2507 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 16 01:53:05.883615 kubelet[2507]: I0416 01:53:05.883590 2507 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 16 01:53:05.883664 kubelet[2507]: I0416 01:53:05.883622 2507 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 16 01:53:05.883950 kubelet[2507]: I0416 01:53:05.883825 2507 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 16 01:53:05.886692 kubelet[2507]: E0416 01:53:05.886120 2507 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 16 01:53:05.961104 kubelet[2507]: I0416 01:53:05.961043 2507 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 16 01:53:05.961104 kubelet[2507]: I0416 01:53:05.961113 2507 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 16 01:53:05.961895 kubelet[2507]: I0416 01:53:05.961437 2507 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 16 01:53:05.968694 kubelet[2507]: E0416 01:53:05.968632 2507 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 16 01:53:05.969551 kubelet[2507]: E0416 01:53:05.969446 2507 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 16 01:53:05.970027 kubelet[2507]: E0416 01:53:05.969974 2507 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 16 01:53:05.994155 kubelet[2507]: I0416 01:53:05.994059 2507 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 01:53:06.001761 kubelet[2507]: I0416 01:53:06.001624 2507 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Apr 16 01:53:06.002007 kubelet[2507]: I0416 01:53:06.001788 2507 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 16 01:53:06.035444 kubelet[2507]: I0416 01:53:06.035284 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d829cbc3adb17191ff0c90ac35f04582-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d829cbc3adb17191ff0c90ac35f04582\") " pod="kube-system/kube-apiserver-localhost" Apr 16 01:53:06.035444 kubelet[2507]: I0416 01:53:06.035367 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d829cbc3adb17191ff0c90ac35f04582-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d829cbc3adb17191ff0c90ac35f04582\") " pod="kube-system/kube-apiserver-localhost" Apr 16 01:53:06.035444 kubelet[2507]: I0416 01:53:06.035396 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d829cbc3adb17191ff0c90ac35f04582-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d829cbc3adb17191ff0c90ac35f04582\") " pod="kube-system/kube-apiserver-localhost" Apr 16 01:53:06.035444 kubelet[2507]: I0416 01:53:06.035421 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 01:53:06.035444 kubelet[2507]: I0416 01:53:06.035444 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 01:53:06.035701 kubelet[2507]: I0416 01:53:06.035462 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost" Apr 16 01:53:06.035701 kubelet[2507]: I0416 01:53:06.035481 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 01:53:06.035701 kubelet[2507]: I0416 01:53:06.035499 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 01:53:06.035701 kubelet[2507]: I0416 01:53:06.035519 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 01:53:06.270491 kubelet[2507]: E0416 01:53:06.270061 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:53:06.270491 kubelet[2507]: E0416 01:53:06.270198 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:53:06.270491 kubelet[2507]: E0416 01:53:06.270212 2507 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:53:06.820278 kubelet[2507]: I0416 01:53:06.820171 2507 apiserver.go:52] "Watching apiserver" Apr 16 01:53:06.834558 kubelet[2507]: I0416 01:53:06.834415 2507 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 16 01:53:06.869475 kubelet[2507]: I0416 01:53:06.869388 2507 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 16 01:53:06.870397 kubelet[2507]: I0416 01:53:06.869808 2507 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 16 01:53:06.870397 kubelet[2507]: E0416 01:53:06.870207 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:53:06.879149 kubelet[2507]: E0416 01:53:06.879009 2507 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 16 01:53:06.879382 kubelet[2507]: E0416 01:53:06.879216 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:53:06.879601 kubelet[2507]: E0416 01:53:06.879493 2507 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 16 01:53:06.879601 kubelet[2507]: E0416 01:53:06.879599 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:53:06.886113 kubelet[2507]: I0416 01:53:06.885477 2507 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.885462548 podStartE2EDuration="1.885462548s" podCreationTimestamp="2026-04-16 01:53:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 01:53:06.885383275 +0000 UTC m=+1.131082916" watchObservedRunningTime="2026-04-16 01:53:06.885462548 +0000 UTC m=+1.131162192" Apr 16 01:53:06.900923 kubelet[2507]: I0416 01:53:06.900607 2507 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.900592644 podStartE2EDuration="1.900592644s" podCreationTimestamp="2026-04-16 01:53:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 01:53:06.894087888 +0000 UTC m=+1.139787533" watchObservedRunningTime="2026-04-16 01:53:06.900592644 +0000 UTC m=+1.146292283" Apr 16 01:53:07.871877 kubelet[2507]: E0416 01:53:07.871763 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:53:07.872294 kubelet[2507]: E0416 01:53:07.872016 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:53:08.873931 kubelet[2507]: E0416 01:53:08.873788 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:53:08.873931 kubelet[2507]: E0416 01:53:08.873805 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:53:09.875351 
kubelet[2507]: E0416 01:53:09.875229 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:53:11.874178 kubelet[2507]: I0416 01:53:11.874098 2507 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 16 01:53:11.874517 kubelet[2507]: I0416 01:53:11.874492 2507 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 16 01:53:11.874548 containerd[1462]: time="2026-04-16T01:53:11.874336776Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 16 01:53:12.512164 kubelet[2507]: E0416 01:53:12.512104 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:53:12.525190 kubelet[2507]: I0416 01:53:12.524793 2507 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=7.5246945610000004 podStartE2EDuration="7.524694561s" podCreationTimestamp="2026-04-16 01:53:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 01:53:06.900902146 +0000 UTC m=+1.146601780" watchObservedRunningTime="2026-04-16 01:53:12.524694561 +0000 UTC m=+6.770394196" Apr 16 01:53:12.792711 systemd[1]: Created slice kubepods-besteffort-pod1fdfd190_7db0_4e41_b37a_fa857da93805.slice - libcontainer container kubepods-besteffort-pod1fdfd190_7db0_4e41_b37a_fa857da93805.slice. 
Apr 16 01:53:12.880259 kubelet[2507]: E0416 01:53:12.880200 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:53:12.984445 kubelet[2507]: I0416 01:53:12.984341 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1fdfd190-7db0-4e41-b37a-fa857da93805-lib-modules\") pod \"kube-proxy-rkdrl\" (UID: \"1fdfd190-7db0-4e41-b37a-fa857da93805\") " pod="kube-system/kube-proxy-rkdrl" Apr 16 01:53:12.984445 kubelet[2507]: I0416 01:53:12.984393 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1fdfd190-7db0-4e41-b37a-fa857da93805-kube-proxy\") pod \"kube-proxy-rkdrl\" (UID: \"1fdfd190-7db0-4e41-b37a-fa857da93805\") " pod="kube-system/kube-proxy-rkdrl" Apr 16 01:53:12.984445 kubelet[2507]: I0416 01:53:12.984410 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1fdfd190-7db0-4e41-b37a-fa857da93805-xtables-lock\") pod \"kube-proxy-rkdrl\" (UID: \"1fdfd190-7db0-4e41-b37a-fa857da93805\") " pod="kube-system/kube-proxy-rkdrl" Apr 16 01:53:12.984445 kubelet[2507]: I0416 01:53:12.984425 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k648r\" (UniqueName: \"kubernetes.io/projected/1fdfd190-7db0-4e41-b37a-fa857da93805-kube-api-access-k648r\") pod \"kube-proxy-rkdrl\" (UID: \"1fdfd190-7db0-4e41-b37a-fa857da93805\") " pod="kube-system/kube-proxy-rkdrl" Apr 16 01:53:13.060149 systemd[1]: Created slice kubepods-besteffort-pod965384cd_8579_41df_9a56_5223baa59f4a.slice - libcontainer container kubepods-besteffort-pod965384cd_8579_41df_9a56_5223baa59f4a.slice. 
Apr 16 01:53:13.100707 kubelet[2507]: E0416 01:53:13.100641 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:53:13.101499 containerd[1462]: time="2026-04-16T01:53:13.101429544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rkdrl,Uid:1fdfd190-7db0-4e41-b37a-fa857da93805,Namespace:kube-system,Attempt:0,}" Apr 16 01:53:13.126730 containerd[1462]: time="2026-04-16T01:53:13.126600620Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:53:13.127325 containerd[1462]: time="2026-04-16T01:53:13.127222341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:53:13.127325 containerd[1462]: time="2026-04-16T01:53:13.127240723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:53:13.127404 containerd[1462]: time="2026-04-16T01:53:13.127296167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:53:13.140041 systemd[1]: run-containerd-runc-k8s.io-83a4d967a1c968b582932ecf79854b6d709a84a06fd626ba23aeeb4021b5d2b5-runc.YPE2i9.mount: Deactivated successfully. Apr 16 01:53:13.148104 systemd[1]: Started cri-containerd-83a4d967a1c968b582932ecf79854b6d709a84a06fd626ba23aeeb4021b5d2b5.scope - libcontainer container 83a4d967a1c968b582932ecf79854b6d709a84a06fd626ba23aeeb4021b5d2b5. 
Apr 16 01:53:13.167280 containerd[1462]: time="2026-04-16T01:53:13.167169668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rkdrl,Uid:1fdfd190-7db0-4e41-b37a-fa857da93805,Namespace:kube-system,Attempt:0,} returns sandbox id \"83a4d967a1c968b582932ecf79854b6d709a84a06fd626ba23aeeb4021b5d2b5\"" Apr 16 01:53:13.168165 kubelet[2507]: E0416 01:53:13.168119 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:53:13.172869 containerd[1462]: time="2026-04-16T01:53:13.172798711Z" level=info msg="CreateContainer within sandbox \"83a4d967a1c968b582932ecf79854b6d709a84a06fd626ba23aeeb4021b5d2b5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 16 01:53:13.186082 kubelet[2507]: I0416 01:53:13.186004 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/965384cd-8579-41df-9a56-5223baa59f4a-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-k2f5v\" (UID: \"965384cd-8579-41df-9a56-5223baa59f4a\") " pod="tigera-operator/tigera-operator-6bf85f8dd-k2f5v" Apr 16 01:53:13.186082 kubelet[2507]: I0416 01:53:13.186074 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv8ss\" (UniqueName: \"kubernetes.io/projected/965384cd-8579-41df-9a56-5223baa59f4a-kube-api-access-cv8ss\") pod \"tigera-operator-6bf85f8dd-k2f5v\" (UID: \"965384cd-8579-41df-9a56-5223baa59f4a\") " pod="tigera-operator/tigera-operator-6bf85f8dd-k2f5v" Apr 16 01:53:13.187244 containerd[1462]: time="2026-04-16T01:53:13.187174672Z" level=info msg="CreateContainer within sandbox \"83a4d967a1c968b582932ecf79854b6d709a84a06fd626ba23aeeb4021b5d2b5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9216715f06566ccc9d4d4c8e905d30cf5edb2af00bb3b55e25e87e0c080a525f\"" 
Apr 16 01:53:13.187678 containerd[1462]: time="2026-04-16T01:53:13.187658320Z" level=info msg="StartContainer for \"9216715f06566ccc9d4d4c8e905d30cf5edb2af00bb3b55e25e87e0c080a525f\"" Apr 16 01:53:13.213098 systemd[1]: Started cri-containerd-9216715f06566ccc9d4d4c8e905d30cf5edb2af00bb3b55e25e87e0c080a525f.scope - libcontainer container 9216715f06566ccc9d4d4c8e905d30cf5edb2af00bb3b55e25e87e0c080a525f. Apr 16 01:53:13.234976 containerd[1462]: time="2026-04-16T01:53:13.234931200Z" level=info msg="StartContainer for \"9216715f06566ccc9d4d4c8e905d30cf5edb2af00bb3b55e25e87e0c080a525f\" returns successfully" Apr 16 01:53:13.364052 containerd[1462]: time="2026-04-16T01:53:13.363038212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-k2f5v,Uid:965384cd-8579-41df-9a56-5223baa59f4a,Namespace:tigera-operator,Attempt:0,}" Apr 16 01:53:13.389132 containerd[1462]: time="2026-04-16T01:53:13.388989627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:53:13.389132 containerd[1462]: time="2026-04-16T01:53:13.389056214Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:53:13.389132 containerd[1462]: time="2026-04-16T01:53:13.389068464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:53:13.389322 containerd[1462]: time="2026-04-16T01:53:13.389134875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:53:13.416052 systemd[1]: Started cri-containerd-66eead181ea5a429595173d36f0a59609663b67eaeae05627d067152ca623f21.scope - libcontainer container 66eead181ea5a429595173d36f0a59609663b67eaeae05627d067152ca623f21. 
Apr 16 01:53:13.448907 containerd[1462]: time="2026-04-16T01:53:13.448863654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-k2f5v,Uid:965384cd-8579-41df-9a56-5223baa59f4a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"66eead181ea5a429595173d36f0a59609663b67eaeae05627d067152ca623f21\"" Apr 16 01:53:13.450482 containerd[1462]: time="2026-04-16T01:53:13.450451405Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 16 01:53:13.884206 kubelet[2507]: E0416 01:53:13.883447 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:53:14.848797 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount392657145.mount: Deactivated successfully. Apr 16 01:53:15.410917 containerd[1462]: time="2026-04-16T01:53:15.410820683Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:53:15.411612 containerd[1462]: time="2026-04-16T01:53:15.411563118Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 16 01:53:15.412633 containerd[1462]: time="2026-04-16T01:53:15.412604490Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:53:15.415537 containerd[1462]: time="2026-04-16T01:53:15.415495410Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:53:15.416068 containerd[1462]: time="2026-04-16T01:53:15.416043675Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id 
\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 1.965553375s" Apr 16 01:53:15.416099 containerd[1462]: time="2026-04-16T01:53:15.416076081Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 16 01:53:15.419880 containerd[1462]: time="2026-04-16T01:53:15.419829000Z" level=info msg="CreateContainer within sandbox \"66eead181ea5a429595173d36f0a59609663b67eaeae05627d067152ca623f21\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 16 01:53:15.432153 containerd[1462]: time="2026-04-16T01:53:15.432099129Z" level=info msg="CreateContainer within sandbox \"66eead181ea5a429595173d36f0a59609663b67eaeae05627d067152ca623f21\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9aaa5ceb03e251700d10535f403f0c0bd2cfff81976bb9be69533370d3bb26d5\"" Apr 16 01:53:15.432731 containerd[1462]: time="2026-04-16T01:53:15.432703037Z" level=info msg="StartContainer for \"9aaa5ceb03e251700d10535f403f0c0bd2cfff81976bb9be69533370d3bb26d5\"" Apr 16 01:53:15.466004 systemd[1]: Started cri-containerd-9aaa5ceb03e251700d10535f403f0c0bd2cfff81976bb9be69533370d3bb26d5.scope - libcontainer container 9aaa5ceb03e251700d10535f403f0c0bd2cfff81976bb9be69533370d3bb26d5. 
Apr 16 01:53:15.544542 containerd[1462]: time="2026-04-16T01:53:15.544505340Z" level=info msg="StartContainer for \"9aaa5ceb03e251700d10535f403f0c0bd2cfff81976bb9be69533370d3bb26d5\" returns successfully" Apr 16 01:53:15.897069 kubelet[2507]: I0416 01:53:15.896977 2507 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rkdrl" podStartSLOduration=3.8969567019999998 podStartE2EDuration="3.896956702s" podCreationTimestamp="2026-04-16 01:53:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 01:53:13.892037548 +0000 UTC m=+8.137737186" watchObservedRunningTime="2026-04-16 01:53:15.896956702 +0000 UTC m=+10.142656336" Apr 16 01:53:17.879544 kubelet[2507]: E0416 01:53:17.878353 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:53:17.887089 kubelet[2507]: I0416 01:53:17.887030 2507 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-k2f5v" podStartSLOduration=2.920194626 podStartE2EDuration="4.88701119s" podCreationTimestamp="2026-04-16 01:53:13 +0000 UTC" firstStartedPulling="2026-04-16 01:53:13.450069493 +0000 UTC m=+7.695769121" lastFinishedPulling="2026-04-16 01:53:15.416886058 +0000 UTC m=+9.662585685" observedRunningTime="2026-04-16 01:53:15.897158203 +0000 UTC m=+10.142857842" watchObservedRunningTime="2026-04-16 01:53:17.88701119 +0000 UTC m=+12.132710831" Apr 16 01:53:17.893119 kubelet[2507]: E0416 01:53:17.893074 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:53:19.777569 kubelet[2507]: E0416 01:53:19.777495 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:53:19.895912 kubelet[2507]: E0416 01:53:19.895876 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:53:20.609172 sudo[1639]: pam_unix(sudo:session): session closed for user root Apr 16 01:53:20.611892 sshd[1636]: pam_unix(sshd:session): session closed for user core Apr 16 01:53:20.614167 systemd-logind[1449]: Session 7 logged out. Waiting for processes to exit. Apr 16 01:53:20.616388 systemd[1]: sshd@6-10.0.0.136:22-10.0.0.1:33990.service: Deactivated successfully. Apr 16 01:53:20.618583 systemd[1]: session-7.scope: Deactivated successfully. Apr 16 01:53:20.619137 systemd[1]: session-7.scope: Consumed 4.215s CPU time, 160.7M memory peak, 0B memory swap peak. Apr 16 01:53:20.620252 systemd-logind[1449]: Removed session 7. Apr 16 01:53:22.248529 systemd[1]: Created slice kubepods-besteffort-pod18f29b86_ff5f_4066_9aa0_86f0ab10cfb9.slice - libcontainer container kubepods-besteffort-pod18f29b86_ff5f_4066_9aa0_86f0ab10cfb9.slice. Apr 16 01:53:22.291131 systemd[1]: Created slice kubepods-besteffort-pod19254348_cd83_4ac8_a704_df635c6abc25.slice - libcontainer container kubepods-besteffort-pod19254348_cd83_4ac8_a704_df635c6abc25.slice. 
Apr 16 01:53:22.347242 kubelet[2507]: I0416 01:53:22.347144 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f29b86-ff5f-4066-9aa0-86f0ab10cfb9-tigera-ca-bundle\") pod \"calico-typha-8944d7cb-lpd64\" (UID: \"18f29b86-ff5f-4066-9aa0-86f0ab10cfb9\") " pod="calico-system/calico-typha-8944d7cb-lpd64" Apr 16 01:53:22.347242 kubelet[2507]: I0416 01:53:22.347217 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/18f29b86-ff5f-4066-9aa0-86f0ab10cfb9-typha-certs\") pod \"calico-typha-8944d7cb-lpd64\" (UID: \"18f29b86-ff5f-4066-9aa0-86f0ab10cfb9\") " pod="calico-system/calico-typha-8944d7cb-lpd64" Apr 16 01:53:22.347769 kubelet[2507]: I0416 01:53:22.347286 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdzhq\" (UniqueName: \"kubernetes.io/projected/18f29b86-ff5f-4066-9aa0-86f0ab10cfb9-kube-api-access-tdzhq\") pod \"calico-typha-8944d7cb-lpd64\" (UID: \"18f29b86-ff5f-4066-9aa0-86f0ab10cfb9\") " pod="calico-system/calico-typha-8944d7cb-lpd64" Apr 16 01:53:22.387969 kubelet[2507]: E0416 01:53:22.387770 2507 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ns4nx" podUID="e69aa1a1-0d4e-40a8-84a3-c556bcb00606" Apr 16 01:53:22.447886 kubelet[2507]: I0416 01:53:22.447714 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/19254348-cd83-4ac8-a704-df635c6abc25-cni-bin-dir\") pod \"calico-node-8m9qq\" (UID: \"19254348-cd83-4ac8-a704-df635c6abc25\") " pod="calico-system/calico-node-8m9qq" Apr 16 
01:53:22.447886 kubelet[2507]: I0416 01:53:22.447804 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/19254348-cd83-4ac8-a704-df635c6abc25-var-run-calico\") pod \"calico-node-8m9qq\" (UID: \"19254348-cd83-4ac8-a704-df635c6abc25\") " pod="calico-system/calico-node-8m9qq" Apr 16 01:53:22.447886 kubelet[2507]: I0416 01:53:22.447877 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/19254348-cd83-4ac8-a704-df635c6abc25-bpffs\") pod \"calico-node-8m9qq\" (UID: \"19254348-cd83-4ac8-a704-df635c6abc25\") " pod="calico-system/calico-node-8m9qq" Apr 16 01:53:22.447886 kubelet[2507]: I0416 01:53:22.447895 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19254348-cd83-4ac8-a704-df635c6abc25-lib-modules\") pod \"calico-node-8m9qq\" (UID: \"19254348-cd83-4ac8-a704-df635c6abc25\") " pod="calico-system/calico-node-8m9qq" Apr 16 01:53:22.447886 kubelet[2507]: I0416 01:53:22.447912 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/19254348-cd83-4ac8-a704-df635c6abc25-nodeproc\") pod \"calico-node-8m9qq\" (UID: \"19254348-cd83-4ac8-a704-df635c6abc25\") " pod="calico-system/calico-node-8m9qq" Apr 16 01:53:22.448277 kubelet[2507]: I0416 01:53:22.447927 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/19254348-cd83-4ac8-a704-df635c6abc25-policysync\") pod \"calico-node-8m9qq\" (UID: \"19254348-cd83-4ac8-a704-df635c6abc25\") " pod="calico-system/calico-node-8m9qq" Apr 16 01:53:22.448277 kubelet[2507]: I0416 01:53:22.447962 2507 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/19254348-cd83-4ac8-a704-df635c6abc25-cni-log-dir\") pod \"calico-node-8m9qq\" (UID: \"19254348-cd83-4ac8-a704-df635c6abc25\") " pod="calico-system/calico-node-8m9qq" Apr 16 01:53:22.448277 kubelet[2507]: I0416 01:53:22.447978 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/19254348-cd83-4ac8-a704-df635c6abc25-cni-net-dir\") pod \"calico-node-8m9qq\" (UID: \"19254348-cd83-4ac8-a704-df635c6abc25\") " pod="calico-system/calico-node-8m9qq" Apr 16 01:53:22.448277 kubelet[2507]: I0416 01:53:22.447997 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/19254348-cd83-4ac8-a704-df635c6abc25-flexvol-driver-host\") pod \"calico-node-8m9qq\" (UID: \"19254348-cd83-4ac8-a704-df635c6abc25\") " pod="calico-system/calico-node-8m9qq" Apr 16 01:53:22.448277 kubelet[2507]: I0416 01:53:22.448016 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/19254348-cd83-4ac8-a704-df635c6abc25-var-lib-calico\") pod \"calico-node-8m9qq\" (UID: \"19254348-cd83-4ac8-a704-df635c6abc25\") " pod="calico-system/calico-node-8m9qq" Apr 16 01:53:22.448453 kubelet[2507]: I0416 01:53:22.448039 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/19254348-cd83-4ac8-a704-df635c6abc25-node-certs\") pod \"calico-node-8m9qq\" (UID: \"19254348-cd83-4ac8-a704-df635c6abc25\") " pod="calico-system/calico-node-8m9qq" Apr 16 01:53:22.448453 kubelet[2507]: I0416 01:53:22.448058 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19254348-cd83-4ac8-a704-df635c6abc25-tigera-ca-bundle\") pod \"calico-node-8m9qq\" (UID: \"19254348-cd83-4ac8-a704-df635c6abc25\") " pod="calico-system/calico-node-8m9qq" Apr 16 01:53:22.448453 kubelet[2507]: I0416 01:53:22.448077 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw7pt\" (UniqueName: \"kubernetes.io/projected/19254348-cd83-4ac8-a704-df635c6abc25-kube-api-access-gw7pt\") pod \"calico-node-8m9qq\" (UID: \"19254348-cd83-4ac8-a704-df635c6abc25\") " pod="calico-system/calico-node-8m9qq" Apr 16 01:53:22.448453 kubelet[2507]: I0416 01:53:22.448095 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/19254348-cd83-4ac8-a704-df635c6abc25-sys-fs\") pod \"calico-node-8m9qq\" (UID: \"19254348-cd83-4ac8-a704-df635c6abc25\") " pod="calico-system/calico-node-8m9qq" Apr 16 01:53:22.448453 kubelet[2507]: I0416 01:53:22.448114 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19254348-cd83-4ac8-a704-df635c6abc25-xtables-lock\") pod \"calico-node-8m9qq\" (UID: \"19254348-cd83-4ac8-a704-df635c6abc25\") " pod="calico-system/calico-node-8m9qq" Apr 16 01:53:22.549035 kubelet[2507]: I0416 01:53:22.548746 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e69aa1a1-0d4e-40a8-84a3-c556bcb00606-socket-dir\") pod \"csi-node-driver-ns4nx\" (UID: \"e69aa1a1-0d4e-40a8-84a3-c556bcb00606\") " pod="calico-system/csi-node-driver-ns4nx" Apr 16 01:53:22.549035 kubelet[2507]: I0416 01:53:22.548805 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: 
\"kubernetes.io/host-path/e69aa1a1-0d4e-40a8-84a3-c556bcb00606-varrun\") pod \"csi-node-driver-ns4nx\" (UID: \"e69aa1a1-0d4e-40a8-84a3-c556bcb00606\") " pod="calico-system/csi-node-driver-ns4nx" Apr 16 01:53:22.549210 kubelet[2507]: I0416 01:53:22.549091 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e69aa1a1-0d4e-40a8-84a3-c556bcb00606-kubelet-dir\") pod \"csi-node-driver-ns4nx\" (UID: \"e69aa1a1-0d4e-40a8-84a3-c556bcb00606\") " pod="calico-system/csi-node-driver-ns4nx" Apr 16 01:53:22.549210 kubelet[2507]: I0416 01:53:22.549124 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e69aa1a1-0d4e-40a8-84a3-c556bcb00606-registration-dir\") pod \"csi-node-driver-ns4nx\" (UID: \"e69aa1a1-0d4e-40a8-84a3-c556bcb00606\") " pod="calico-system/csi-node-driver-ns4nx" Apr 16 01:53:22.549210 kubelet[2507]: I0416 01:53:22.549193 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnqpk\" (UniqueName: \"kubernetes.io/projected/e69aa1a1-0d4e-40a8-84a3-c556bcb00606-kube-api-access-jnqpk\") pod \"csi-node-driver-ns4nx\" (UID: \"e69aa1a1-0d4e-40a8-84a3-c556bcb00606\") " pod="calico-system/csi-node-driver-ns4nx" Apr 16 01:53:22.551272 kubelet[2507]: E0416 01:53:22.551221 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:53:22.551493 kubelet[2507]: W0416 01:53:22.551452 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:53:22.551566 kubelet[2507]: E0416 01:53:22.551503 2507 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from 
directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:53:22.551775 kubelet[2507]: E0416 01:53:22.551689 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:53:22.552946 containerd[1462]: time="2026-04-16T01:53:22.552460467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8944d7cb-lpd64,Uid:18f29b86-ff5f-4066-9aa0-86f0ab10cfb9,Namespace:calico-system,Attempt:0,}" Apr 16 01:53:22.556546 kubelet[2507]: E0416 01:53:22.556492 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:53:22.556546 kubelet[2507]: W0416 01:53:22.556537 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:53:22.556653 kubelet[2507]: E0416 01:53:22.556562 2507 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:53:22.560059 kubelet[2507]: E0416 01:53:22.559883 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:53:22.560059 kubelet[2507]: W0416 01:53:22.559998 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:53:22.560059 kubelet[2507]: E0416 01:53:22.560034 2507 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:53:22.575627 containerd[1462]: time="2026-04-16T01:53:22.575478660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:53:22.575766 containerd[1462]: time="2026-04-16T01:53:22.575533523Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:53:22.575786 containerd[1462]: time="2026-04-16T01:53:22.575621976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:53:22.576531 containerd[1462]: time="2026-04-16T01:53:22.576472987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:53:22.594120 systemd[1]: Started cri-containerd-2be19db0de2eafffd200660bd1be8c6f1bdf2e29ae97e62f6141463738dea7aa.scope - libcontainer container 2be19db0de2eafffd200660bd1be8c6f1bdf2e29ae97e62f6141463738dea7aa. Apr 16 01:53:22.595476 containerd[1462]: time="2026-04-16T01:53:22.595315563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8m9qq,Uid:19254348-cd83-4ac8-a704-df635c6abc25,Namespace:calico-system,Attempt:0,}" Apr 16 01:53:22.617515 containerd[1462]: time="2026-04-16T01:53:22.617423435Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:53:22.617515 containerd[1462]: time="2026-04-16T01:53:22.617480664Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:53:22.617884 containerd[1462]: time="2026-04-16T01:53:22.617681430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:53:22.618087 containerd[1462]: time="2026-04-16T01:53:22.618038196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:53:22.628371 containerd[1462]: time="2026-04-16T01:53:22.628085330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8944d7cb-lpd64,Uid:18f29b86-ff5f-4066-9aa0-86f0ab10cfb9,Namespace:calico-system,Attempt:0,} returns sandbox id \"2be19db0de2eafffd200660bd1be8c6f1bdf2e29ae97e62f6141463738dea7aa\"" Apr 16 01:53:22.629073 kubelet[2507]: E0416 01:53:22.628924 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:53:22.630574 containerd[1462]: time="2026-04-16T01:53:22.630545675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 16 01:53:22.634027 systemd[1]: Started cri-containerd-1128caa52d53991402d79079162db7ee99a02ef2faf53dc0f971ec3d5bcb639e.scope - libcontainer container 1128caa52d53991402d79079162db7ee99a02ef2faf53dc0f971ec3d5bcb639e. Apr 16 01:53:22.650102 kubelet[2507]: E0416 01:53:22.650080 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:53:22.650245 kubelet[2507]: W0416 01:53:22.650235 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:53:22.650301 kubelet[2507]: E0416 01:53:22.650284 2507 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:53:22.650681 kubelet[2507]: E0416 01:53:22.650569 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:53:22.650681 kubelet[2507]: W0416 01:53:22.650577 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:53:22.650681 kubelet[2507]: E0416 01:53:22.650584 2507 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:53:22.650915 kubelet[2507]: E0416 01:53:22.650903 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:53:22.650973 kubelet[2507]: W0416 01:53:22.650956 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:53:22.650973 kubelet[2507]: E0416 01:53:22.650966 2507 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:53:22.651324 kubelet[2507]: E0416 01:53:22.651275 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:53:22.651324 kubelet[2507]: W0416 01:53:22.651282 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:53:22.651324 kubelet[2507]: E0416 01:53:22.651289 2507 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:53:22.651718 kubelet[2507]: E0416 01:53:22.651591 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:53:22.651718 kubelet[2507]: W0416 01:53:22.651597 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:53:22.651718 kubelet[2507]: E0416 01:53:22.651603 2507 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:53:22.652168 kubelet[2507]: E0416 01:53:22.652069 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:53:22.652168 kubelet[2507]: W0416 01:53:22.652077 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:53:22.652168 kubelet[2507]: E0416 01:53:22.652084 2507 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:53:22.653011 kubelet[2507]: E0416 01:53:22.652944 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:53:22.653011 kubelet[2507]: W0416 01:53:22.652952 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:53:22.653011 kubelet[2507]: E0416 01:53:22.652960 2507 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:53:22.653449 kubelet[2507]: E0416 01:53:22.653413 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:53:22.653449 kubelet[2507]: W0416 01:53:22.653429 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:53:22.653449 kubelet[2507]: E0416 01:53:22.653441 2507 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:53:22.654136 kubelet[2507]: E0416 01:53:22.654008 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:53:22.654136 kubelet[2507]: W0416 01:53:22.654020 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:53:22.654136 kubelet[2507]: E0416 01:53:22.654030 2507 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:53:22.654515 kubelet[2507]: E0416 01:53:22.654390 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:53:22.654515 kubelet[2507]: W0416 01:53:22.654401 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:53:22.654515 kubelet[2507]: E0416 01:53:22.654411 2507 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:53:22.654621 kubelet[2507]: E0416 01:53:22.654613 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:53:22.654642 kubelet[2507]: W0416 01:53:22.654621 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:53:22.654642 kubelet[2507]: E0416 01:53:22.654628 2507 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:53:22.654926 kubelet[2507]: E0416 01:53:22.654918 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:53:22.654996 kubelet[2507]: W0416 01:53:22.654958 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:53:22.654996 kubelet[2507]: E0416 01:53:22.654968 2507 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:53:22.655273 kubelet[2507]: E0416 01:53:22.655200 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:53:22.655273 kubelet[2507]: W0416 01:53:22.655208 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:53:22.655273 kubelet[2507]: E0416 01:53:22.655214 2507 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:53:22.655331 containerd[1462]: time="2026-04-16T01:53:22.655281443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8m9qq,Uid:19254348-cd83-4ac8-a704-df635c6abc25,Namespace:calico-system,Attempt:0,} returns sandbox id \"1128caa52d53991402d79079162db7ee99a02ef2faf53dc0f971ec3d5bcb639e\"" Apr 16 01:53:22.655646 kubelet[2507]: E0416 01:53:22.655587 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:53:22.655646 kubelet[2507]: W0416 01:53:22.655594 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:53:22.655646 kubelet[2507]: E0416 01:53:22.655601 2507 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:53:22.656444 kubelet[2507]: E0416 01:53:22.656419 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:53:22.656444 kubelet[2507]: W0416 01:53:22.656433 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:53:22.656444 kubelet[2507]: E0416 01:53:22.656442 2507 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:53:22.656752 kubelet[2507]: E0416 01:53:22.656737 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:53:22.656752 kubelet[2507]: W0416 01:53:22.656750 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:53:22.656794 kubelet[2507]: E0416 01:53:22.656757 2507 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:53:22.657080 kubelet[2507]: E0416 01:53:22.657054 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:53:22.657080 kubelet[2507]: W0416 01:53:22.657072 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:53:22.657080 kubelet[2507]: E0416 01:53:22.657081 2507 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:53:22.657279 kubelet[2507]: E0416 01:53:22.657259 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:53:22.657279 kubelet[2507]: W0416 01:53:22.657271 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:53:22.657279 kubelet[2507]: E0416 01:53:22.657277 2507 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:53:22.657431 kubelet[2507]: E0416 01:53:22.657416 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:53:22.657431 kubelet[2507]: W0416 01:53:22.657428 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:53:22.657463 kubelet[2507]: E0416 01:53:22.657433 2507 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:53:22.657629 kubelet[2507]: E0416 01:53:22.657609 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:53:22.657629 kubelet[2507]: W0416 01:53:22.657623 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:53:22.657629 kubelet[2507]: E0416 01:53:22.657629 2507 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:53:22.657936 kubelet[2507]: E0416 01:53:22.657918 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:53:22.657936 kubelet[2507]: W0416 01:53:22.657929 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:53:22.657982 kubelet[2507]: E0416 01:53:22.657939 2507 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:53:22.658173 kubelet[2507]: E0416 01:53:22.658155 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:53:22.658193 kubelet[2507]: W0416 01:53:22.658174 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:53:22.658193 kubelet[2507]: E0416 01:53:22.658183 2507 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:53:22.658424 kubelet[2507]: E0416 01:53:22.658404 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:53:22.658424 kubelet[2507]: W0416 01:53:22.658420 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:53:22.658461 kubelet[2507]: E0416 01:53:22.658427 2507 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:53:22.658806 kubelet[2507]: E0416 01:53:22.658757 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:53:22.658806 kubelet[2507]: W0416 01:53:22.658781 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:53:22.658806 kubelet[2507]: E0416 01:53:22.658794 2507 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:53:22.660072 kubelet[2507]: E0416 01:53:22.660007 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:53:22.660072 kubelet[2507]: W0416 01:53:22.660027 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:53:22.660072 kubelet[2507]: E0416 01:53:22.660038 2507 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:53:22.666097 kubelet[2507]: E0416 01:53:22.666074 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:53:22.666179 kubelet[2507]: W0416 01:53:22.666100 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:53:22.666179 kubelet[2507]: E0416 01:53:22.666117 2507 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:53:23.860214 kubelet[2507]: E0416 01:53:23.860109 2507 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ns4nx" podUID="e69aa1a1-0d4e-40a8-84a3-c556bcb00606" Apr 16 01:53:24.498916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3181431910.mount: Deactivated successfully. 
Apr 16 01:53:25.731261 containerd[1462]: time="2026-04-16T01:53:25.731082198Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:53:25.734510 containerd[1462]: time="2026-04-16T01:53:25.734430784Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 16 01:53:25.734510 containerd[1462]: time="2026-04-16T01:53:25.734503686Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:53:25.737115 containerd[1462]: time="2026-04-16T01:53:25.737066848Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:53:25.737484 containerd[1462]: time="2026-04-16T01:53:25.737457003Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 3.106655462s" Apr 16 01:53:25.737514 containerd[1462]: time="2026-04-16T01:53:25.737491778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 16 01:53:25.738298 containerd[1462]: time="2026-04-16T01:53:25.738281348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 16 01:53:25.747422 containerd[1462]: time="2026-04-16T01:53:25.747383653Z" level=info msg="CreateContainer within sandbox \"2be19db0de2eafffd200660bd1be8c6f1bdf2e29ae97e62f6141463738dea7aa\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 16 01:53:25.760293 containerd[1462]: time="2026-04-16T01:53:25.760221985Z" level=info msg="CreateContainer within sandbox \"2be19db0de2eafffd200660bd1be8c6f1bdf2e29ae97e62f6141463738dea7aa\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e4fc458ca8d344713f8e1279784ecc32a8ce8665ef0f8efe876591947eb01b56\"" Apr 16 01:53:25.760755 containerd[1462]: time="2026-04-16T01:53:25.760714090Z" level=info msg="StartContainer for \"e4fc458ca8d344713f8e1279784ecc32a8ce8665ef0f8efe876591947eb01b56\"" Apr 16 01:53:25.795113 systemd[1]: Started cri-containerd-e4fc458ca8d344713f8e1279784ecc32a8ce8665ef0f8efe876591947eb01b56.scope - libcontainer container e4fc458ca8d344713f8e1279784ecc32a8ce8665ef0f8efe876591947eb01b56. Apr 16 01:53:25.827535 containerd[1462]: time="2026-04-16T01:53:25.827499329Z" level=info msg="StartContainer for \"e4fc458ca8d344713f8e1279784ecc32a8ce8665ef0f8efe876591947eb01b56\" returns successfully" Apr 16 01:53:25.865289 kubelet[2507]: E0416 01:53:25.860954 2507 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ns4nx" podUID="e69aa1a1-0d4e-40a8-84a3-c556bcb00606" Apr 16 01:53:25.914884 kubelet[2507]: E0416 01:53:25.914794 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:53:25.933743 kubelet[2507]: I0416 01:53:25.933638 2507 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8944d7cb-lpd64" podStartSLOduration=0.825872991 podStartE2EDuration="3.933619722s" podCreationTimestamp="2026-04-16 01:53:22 +0000 UTC" firstStartedPulling="2026-04-16 01:53:22.630238206 +0000 UTC 
m=+16.875937834" lastFinishedPulling="2026-04-16 01:53:25.737984931 +0000 UTC m=+19.983684565" observedRunningTime="2026-04-16 01:53:25.933593197 +0000 UTC m=+20.179292835" watchObservedRunningTime="2026-04-16 01:53:25.933619722 +0000 UTC m=+20.179319360" Apr 16 01:53:25.973236 kubelet[2507]: E0416 01:53:25.973150 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:53:25.973236 kubelet[2507]: W0416 01:53:25.973198 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:53:25.973236 kubelet[2507]: E0416 01:53:25.973225 2507 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:53:25.973911 kubelet[2507]: E0416 01:53:25.973450 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:53:25.973911 kubelet[2507]: W0416 01:53:25.973456 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:53:25.973911 kubelet[2507]: E0416 01:53:25.973464 2507 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:53:25.973911 kubelet[2507]: E0416 01:53:25.973777 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:53:25.973911 kubelet[2507]: W0416 01:53:25.973799 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:53:25.973911 kubelet[2507]: E0416 01:53:25.973814 2507 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:53:25.975020 kubelet[2507]: E0416 01:53:25.974934 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:53:25.975020 kubelet[2507]: W0416 01:53:25.974960 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:53:25.975020 kubelet[2507]: E0416 01:53:25.974975 2507 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:53:25.975253 kubelet[2507]: E0416 01:53:25.975236 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:53:25.975253 kubelet[2507]: W0416 01:53:25.975243 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:53:25.975253 kubelet[2507]: E0416 01:53:25.975251 2507 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:53:25.975705 kubelet[2507]: E0416 01:53:25.975691 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:53:25.975751 kubelet[2507]: W0416 01:53:25.975706 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:53:25.975751 kubelet[2507]: E0416 01:53:25.975743 2507 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:53:25.977617 kubelet[2507]: E0416 01:53:25.977460 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:53:25.977617 kubelet[2507]: W0416 01:53:25.977627 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:53:25.977892 kubelet[2507]: E0416 01:53:25.977688 2507 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:53:25.978420 kubelet[2507]: E0416 01:53:25.978369 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:53:25.978420 kubelet[2507]: W0416 01:53:25.978394 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:53:25.978420 kubelet[2507]: E0416 01:53:25.978402 2507 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:53:25.979321 kubelet[2507]: E0416 01:53:25.979288 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:53:25.979321 kubelet[2507]: W0416 01:53:25.979302 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:53:25.979321 kubelet[2507]: E0416 01:53:25.979312 2507 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:53:25.980318 kubelet[2507]: E0416 01:53:25.979523 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:53:25.980318 kubelet[2507]: W0416 01:53:25.979529 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:53:25.980318 kubelet[2507]: E0416 01:53:25.979536 2507 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:53:25.980318 kubelet[2507]: E0416 01:53:25.979688 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:53:25.980318 kubelet[2507]: W0416 01:53:25.979698 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:53:25.980318 kubelet[2507]: E0416 01:53:25.979708 2507 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:53:25.980318 kubelet[2507]: E0416 01:53:25.980022 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:53:25.980318 kubelet[2507]: W0416 01:53:25.980034 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:53:25.980318 kubelet[2507]: E0416 01:53:25.980047 2507 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:53:25.980318 kubelet[2507]: E0416 01:53:25.980302 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:53:25.980551 kubelet[2507]: W0416 01:53:25.980308 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:53:25.980551 kubelet[2507]: E0416 01:53:25.980315 2507 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:53:25.980551 kubelet[2507]: E0416 01:53:25.980441 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:53:25.980551 kubelet[2507]: W0416 01:53:25.980446 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:53:25.980551 kubelet[2507]: E0416 01:53:25.980451 2507 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 16 01:53:25.980635 kubelet[2507]: E0416 01:53:25.980579 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 01:53:25.980635 kubelet[2507]: W0416 01:53:25.980586 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 01:53:25.980635 kubelet[2507]: E0416 01:53:25.980591 2507 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the same three kubelet[2507] FlexVolume init failure messages repeated through Apr 16 01:53:25.988410 ...]
Apr 16 01:53:26.916175 kubelet[2507]: I0416 01:53:26.916090 2507 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 16 01:53:26.916552 kubelet[2507]: E0416 01:53:26.916523 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
[... the same three kubelet[2507] FlexVolume init failure messages repeated from Apr 16 01:53:26.987955 through Apr 16 01:53:26.996178 ...]
Apr 16 01:53:27.335370 update_engine[1451]: I20260416 01:53:27.335261 1451 update_attempter.cc:509] Updating boot flags...
Apr 16 01:53:27.337158 containerd[1462]: time="2026-04-16T01:53:27.337104146Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:53:27.338001 containerd[1462]: time="2026-04-16T01:53:27.337930198Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 16 01:53:27.339391 containerd[1462]: time="2026-04-16T01:53:27.339321579Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:53:27.342265 containerd[1462]: time="2026-04-16T01:53:27.342099210Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:53:27.343432 containerd[1462]: time="2026-04-16T01:53:27.343367089Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.603940416s" Apr 16 01:53:27.343432 containerd[1462]: time="2026-04-16T01:53:27.343421843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 16 01:53:27.351902 containerd[1462]: time="2026-04-16T01:53:27.351562748Z" level=info msg="CreateContainer within sandbox \"1128caa52d53991402d79079162db7ee99a02ef2faf53dc0f971ec3d5bcb639e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 16 
01:53:27.366001 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (3172) Apr 16 01:53:27.377165 containerd[1462]: time="2026-04-16T01:53:27.377088201Z" level=info msg="CreateContainer within sandbox \"1128caa52d53991402d79079162db7ee99a02ef2faf53dc0f971ec3d5bcb639e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"58cc9a33395370ebbb439886eb690dbbca2130253c5926c9e1c9f5821badcee7\"" Apr 16 01:53:27.379239 containerd[1462]: time="2026-04-16T01:53:27.379155916Z" level=info msg="StartContainer for \"58cc9a33395370ebbb439886eb690dbbca2130253c5926c9e1c9f5821badcee7\"" Apr 16 01:53:27.413959 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (3174) Apr 16 01:53:27.423083 systemd[1]: Started cri-containerd-58cc9a33395370ebbb439886eb690dbbca2130253c5926c9e1c9f5821badcee7.scope - libcontainer container 58cc9a33395370ebbb439886eb690dbbca2130253c5926c9e1c9f5821badcee7. Apr 16 01:53:27.455949 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (3174) Apr 16 01:53:27.474534 containerd[1462]: time="2026-04-16T01:53:27.474396119Z" level=info msg="StartContainer for \"58cc9a33395370ebbb439886eb690dbbca2130253c5926c9e1c9f5821badcee7\" returns successfully" Apr 16 01:53:27.482393 systemd[1]: cri-containerd-58cc9a33395370ebbb439886eb690dbbca2130253c5926c9e1c9f5821badcee7.scope: Deactivated successfully. Apr 16 01:53:27.515501 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58cc9a33395370ebbb439886eb690dbbca2130253c5926c9e1c9f5821badcee7-rootfs.mount: Deactivated successfully. 
Apr 16 01:53:27.578084 containerd[1462]: time="2026-04-16T01:53:27.577932050Z" level=info msg="shim disconnected" id=58cc9a33395370ebbb439886eb690dbbca2130253c5926c9e1c9f5821badcee7 namespace=k8s.io Apr 16 01:53:27.578084 containerd[1462]: time="2026-04-16T01:53:27.578019440Z" level=warning msg="cleaning up after shim disconnected" id=58cc9a33395370ebbb439886eb690dbbca2130253c5926c9e1c9f5821badcee7 namespace=k8s.io Apr 16 01:53:27.578084 containerd[1462]: time="2026-04-16T01:53:27.578028507Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 01:53:27.861461 kubelet[2507]: E0416 01:53:27.861400 2507 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ns4nx" podUID="e69aa1a1-0d4e-40a8-84a3-c556bcb00606" Apr 16 01:53:27.925410 containerd[1462]: time="2026-04-16T01:53:27.925333220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 16 01:53:29.859934 kubelet[2507]: E0416 01:53:29.859884 2507 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ns4nx" podUID="e69aa1a1-0d4e-40a8-84a3-c556bcb00606" Apr 16 01:53:31.865621 kubelet[2507]: E0416 01:53:31.865523 2507 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ns4nx" podUID="e69aa1a1-0d4e-40a8-84a3-c556bcb00606" Apr 16 01:53:33.860518 kubelet[2507]: E0416 01:53:33.860450 2507 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ns4nx" podUID="e69aa1a1-0d4e-40a8-84a3-c556bcb00606" Apr 16 01:53:33.862588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount794401666.mount: Deactivated successfully. Apr 16 01:53:34.101629 containerd[1462]: time="2026-04-16T01:53:34.101556493Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 16 01:53:34.107577 containerd[1462]: time="2026-04-16T01:53:34.107400394Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:53:34.109625 containerd[1462]: time="2026-04-16T01:53:34.109569018Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:53:34.111644 containerd[1462]: time="2026-04-16T01:53:34.111548388Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:53:34.112428 containerd[1462]: time="2026-04-16T01:53:34.112348354Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 6.186957744s" Apr 16 01:53:34.112428 containerd[1462]: time="2026-04-16T01:53:34.112390770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 16 
01:53:34.116536 containerd[1462]: time="2026-04-16T01:53:34.116494943Z" level=info msg="CreateContainer within sandbox \"1128caa52d53991402d79079162db7ee99a02ef2faf53dc0f971ec3d5bcb639e\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 16 01:53:34.132104 containerd[1462]: time="2026-04-16T01:53:34.132053409Z" level=info msg="CreateContainer within sandbox \"1128caa52d53991402d79079162db7ee99a02ef2faf53dc0f971ec3d5bcb639e\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"dcdd999dff7a35799bed98ff0eef1c8f327185beb9e0f63e7a92e302995e1719\"" Apr 16 01:53:34.132606 containerd[1462]: time="2026-04-16T01:53:34.132583107Z" level=info msg="StartContainer for \"dcdd999dff7a35799bed98ff0eef1c8f327185beb9e0f63e7a92e302995e1719\"" Apr 16 01:53:34.178613 systemd[1]: Started cri-containerd-dcdd999dff7a35799bed98ff0eef1c8f327185beb9e0f63e7a92e302995e1719.scope - libcontainer container dcdd999dff7a35799bed98ff0eef1c8f327185beb9e0f63e7a92e302995e1719. Apr 16 01:53:34.201086 containerd[1462]: time="2026-04-16T01:53:34.201048040Z" level=info msg="StartContainer for \"dcdd999dff7a35799bed98ff0eef1c8f327185beb9e0f63e7a92e302995e1719\" returns successfully" Apr 16 01:53:34.236669 systemd[1]: cri-containerd-dcdd999dff7a35799bed98ff0eef1c8f327185beb9e0f63e7a92e302995e1719.scope: Deactivated successfully. 
Apr 16 01:53:34.318750 containerd[1462]: time="2026-04-16T01:53:34.318637187Z" level=info msg="shim disconnected" id=dcdd999dff7a35799bed98ff0eef1c8f327185beb9e0f63e7a92e302995e1719 namespace=k8s.io Apr 16 01:53:34.318750 containerd[1462]: time="2026-04-16T01:53:34.318715944Z" level=warning msg="cleaning up after shim disconnected" id=dcdd999dff7a35799bed98ff0eef1c8f327185beb9e0f63e7a92e302995e1719 namespace=k8s.io Apr 16 01:53:34.318750 containerd[1462]: time="2026-04-16T01:53:34.318763340Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 01:53:34.863050 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dcdd999dff7a35799bed98ff0eef1c8f327185beb9e0f63e7a92e302995e1719-rootfs.mount: Deactivated successfully. Apr 16 01:53:34.947902 containerd[1462]: time="2026-04-16T01:53:34.947821830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 16 01:53:35.860586 kubelet[2507]: E0416 01:53:35.860350 2507 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ns4nx" podUID="e69aa1a1-0d4e-40a8-84a3-c556bcb00606" Apr 16 01:53:35.949810 kubelet[2507]: I0416 01:53:35.949769 2507 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 01:53:35.950095 kubelet[2507]: E0416 01:53:35.950077 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:53:36.949460 kubelet[2507]: E0416 01:53:36.949398 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:53:37.861340 kubelet[2507]: E0416 01:53:37.861269 2507 pod_workers.go:1301] "Error syncing pod, 
skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ns4nx" podUID="e69aa1a1-0d4e-40a8-84a3-c556bcb00606" Apr 16 01:53:38.138229 containerd[1462]: time="2026-04-16T01:53:38.138094743Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:53:38.139093 containerd[1462]: time="2026-04-16T01:53:38.139044069Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 16 01:53:38.140117 containerd[1462]: time="2026-04-16T01:53:38.140077726Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:53:38.142057 containerd[1462]: time="2026-04-16T01:53:38.141791546Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:53:38.142445 containerd[1462]: time="2026-04-16T01:53:38.142413779Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 3.194520156s" Apr 16 01:53:38.142445 containerd[1462]: time="2026-04-16T01:53:38.142444046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 16 01:53:38.156674 containerd[1462]: time="2026-04-16T01:53:38.156613199Z" level=info 
msg="CreateContainer within sandbox \"1128caa52d53991402d79079162db7ee99a02ef2faf53dc0f971ec3d5bcb639e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 16 01:53:38.189417 containerd[1462]: time="2026-04-16T01:53:38.189334870Z" level=info msg="CreateContainer within sandbox \"1128caa52d53991402d79079162db7ee99a02ef2faf53dc0f971ec3d5bcb639e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"dce54bb381f361c222972c655958fb1a0d4e09cdd0632387e3d7edfceb434dfc\"" Apr 16 01:53:38.189980 containerd[1462]: time="2026-04-16T01:53:38.189932033Z" level=info msg="StartContainer for \"dce54bb381f361c222972c655958fb1a0d4e09cdd0632387e3d7edfceb434dfc\"" Apr 16 01:53:38.225615 systemd[1]: run-containerd-runc-k8s.io-dce54bb381f361c222972c655958fb1a0d4e09cdd0632387e3d7edfceb434dfc-runc.u57PZK.mount: Deactivated successfully. Apr 16 01:53:38.234115 systemd[1]: Started cri-containerd-dce54bb381f361c222972c655958fb1a0d4e09cdd0632387e3d7edfceb434dfc.scope - libcontainer container dce54bb381f361c222972c655958fb1a0d4e09cdd0632387e3d7edfceb434dfc. Apr 16 01:53:38.256594 containerd[1462]: time="2026-04-16T01:53:38.256547480Z" level=info msg="StartContainer for \"dce54bb381f361c222972c655958fb1a0d4e09cdd0632387e3d7edfceb434dfc\" returns successfully" Apr 16 01:53:38.717089 systemd[1]: cri-containerd-dce54bb381f361c222972c655958fb1a0d4e09cdd0632387e3d7edfceb434dfc.scope: Deactivated successfully. Apr 16 01:53:38.748436 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dce54bb381f361c222972c655958fb1a0d4e09cdd0632387e3d7edfceb434dfc-rootfs.mount: Deactivated successfully. 
Apr 16 01:53:38.757253 containerd[1462]: time="2026-04-16T01:53:38.757172321Z" level=info msg="shim disconnected" id=dce54bb381f361c222972c655958fb1a0d4e09cdd0632387e3d7edfceb434dfc namespace=k8s.io Apr 16 01:53:38.757253 containerd[1462]: time="2026-04-16T01:53:38.757238051Z" level=warning msg="cleaning up after shim disconnected" id=dce54bb381f361c222972c655958fb1a0d4e09cdd0632387e3d7edfceb434dfc namespace=k8s.io Apr 16 01:53:38.757253 containerd[1462]: time="2026-04-16T01:53:38.757246517Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 01:53:38.819128 kubelet[2507]: I0416 01:53:38.818971 2507 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 16 01:53:38.860448 systemd[1]: Created slice kubepods-burstable-pod24667828_0318_48a4_b65d_0a8b4f5d05b3.slice - libcontainer container kubepods-burstable-pod24667828_0318_48a4_b65d_0a8b4f5d05b3.slice. Apr 16 01:53:38.868490 systemd[1]: Created slice kubepods-besteffort-pod551d6657_610e_42f9_ba7a_74405391e1d0.slice - libcontainer container kubepods-besteffort-pod551d6657_610e_42f9_ba7a_74405391e1d0.slice. Apr 16 01:53:38.873944 systemd[1]: Created slice kubepods-burstable-pod2fdd7d91_407b_4b45_9406_2b1f2309b677.slice - libcontainer container kubepods-burstable-pod2fdd7d91_407b_4b45_9406_2b1f2309b677.slice. 
Apr 16 01:53:38.877708 kubelet[2507]: I0416 01:53:38.877667 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c63c8a30-fd46-4747-b897-d84c56ff39e0-calico-apiserver-certs\") pod \"calico-apiserver-6bbd685878-kfhqt\" (UID: \"c63c8a30-fd46-4747-b897-d84c56ff39e0\") " pod="calico-system/calico-apiserver-6bbd685878-kfhqt" Apr 16 01:53:38.877708 kubelet[2507]: I0416 01:53:38.877725 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rn5t\" (UniqueName: \"kubernetes.io/projected/551d6657-610e-42f9-ba7a-74405391e1d0-kube-api-access-7rn5t\") pod \"calico-kube-controllers-74b56fb579-thpht\" (UID: \"551d6657-610e-42f9-ba7a-74405391e1d0\") " pod="calico-system/calico-kube-controllers-74b56fb579-thpht" Apr 16 01:53:38.877898 kubelet[2507]: I0416 01:53:38.877779 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/682faac5-585b-4912-9c11-ca5063bb23e6-whisker-backend-key-pair\") pod \"whisker-7dfff6788d-dp76c\" (UID: \"682faac5-585b-4912-9c11-ca5063bb23e6\") " pod="calico-system/whisker-7dfff6788d-dp76c" Apr 16 01:53:38.877898 kubelet[2507]: I0416 01:53:38.877799 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61c5e9a8-153a-4abd-80be-462bc37c3e43-config\") pod \"goldmane-5b85766d88-mflhh\" (UID: \"61c5e9a8-153a-4abd-80be-462bc37c3e43\") " pod="calico-system/goldmane-5b85766d88-mflhh" Apr 16 01:53:38.877898 kubelet[2507]: I0416 01:53:38.877821 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24667828-0318-48a4-b65d-0a8b4f5d05b3-config-volume\") pod \"coredns-674b8bbfcf-xrsbx\" (UID: 
\"24667828-0318-48a4-b65d-0a8b4f5d05b3\") " pod="kube-system/coredns-674b8bbfcf-xrsbx" Apr 16 01:53:38.877956 kubelet[2507]: I0416 01:53:38.877897 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2fdd7d91-407b-4b45-9406-2b1f2309b677-config-volume\") pod \"coredns-674b8bbfcf-kx9h2\" (UID: \"2fdd7d91-407b-4b45-9406-2b1f2309b677\") " pod="kube-system/coredns-674b8bbfcf-kx9h2" Apr 16 01:53:38.877956 kubelet[2507]: I0416 01:53:38.877917 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsmgh\" (UniqueName: \"kubernetes.io/projected/682faac5-585b-4912-9c11-ca5063bb23e6-kube-api-access-jsmgh\") pod \"whisker-7dfff6788d-dp76c\" (UID: \"682faac5-585b-4912-9c11-ca5063bb23e6\") " pod="calico-system/whisker-7dfff6788d-dp76c" Apr 16 01:53:38.877956 kubelet[2507]: I0416 01:53:38.877935 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/61c5e9a8-153a-4abd-80be-462bc37c3e43-goldmane-key-pair\") pod \"goldmane-5b85766d88-mflhh\" (UID: \"61c5e9a8-153a-4abd-80be-462bc37c3e43\") " pod="calico-system/goldmane-5b85766d88-mflhh" Apr 16 01:53:38.878007 kubelet[2507]: I0416 01:53:38.877954 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc2gw\" (UniqueName: \"kubernetes.io/projected/c63c8a30-fd46-4747-b897-d84c56ff39e0-kube-api-access-kc2gw\") pod \"calico-apiserver-6bbd685878-kfhqt\" (UID: \"c63c8a30-fd46-4747-b897-d84c56ff39e0\") " pod="calico-system/calico-apiserver-6bbd685878-kfhqt" Apr 16 01:53:38.878007 kubelet[2507]: I0416 01:53:38.877975 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/551d6657-610e-42f9-ba7a-74405391e1d0-tigera-ca-bundle\") pod \"calico-kube-controllers-74b56fb579-thpht\" (UID: \"551d6657-610e-42f9-ba7a-74405391e1d0\") " pod="calico-system/calico-kube-controllers-74b56fb579-thpht" Apr 16 01:53:38.878007 kubelet[2507]: I0416 01:53:38.877994 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61c5e9a8-153a-4abd-80be-462bc37c3e43-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-mflhh\" (UID: \"61c5e9a8-153a-4abd-80be-462bc37c3e43\") " pod="calico-system/goldmane-5b85766d88-mflhh" Apr 16 01:53:38.878072 kubelet[2507]: I0416 01:53:38.878013 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flzx4\" (UniqueName: \"kubernetes.io/projected/481ff886-10a4-44ca-8500-2463e1271fbf-kube-api-access-flzx4\") pod \"calico-apiserver-6bbd685878-nsd9x\" (UID: \"481ff886-10a4-44ca-8500-2463e1271fbf\") " pod="calico-system/calico-apiserver-6bbd685878-nsd9x" Apr 16 01:53:38.878072 kubelet[2507]: I0416 01:53:38.878035 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkpcj\" (UniqueName: \"kubernetes.io/projected/61c5e9a8-153a-4abd-80be-462bc37c3e43-kube-api-access-jkpcj\") pod \"goldmane-5b85766d88-mflhh\" (UID: \"61c5e9a8-153a-4abd-80be-462bc37c3e43\") " pod="calico-system/goldmane-5b85766d88-mflhh" Apr 16 01:53:38.878072 kubelet[2507]: I0416 01:53:38.878057 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc56r\" (UniqueName: \"kubernetes.io/projected/2fdd7d91-407b-4b45-9406-2b1f2309b677-kube-api-access-rc56r\") pod \"coredns-674b8bbfcf-kx9h2\" (UID: \"2fdd7d91-407b-4b45-9406-2b1f2309b677\") " pod="kube-system/coredns-674b8bbfcf-kx9h2" Apr 16 01:53:38.878148 kubelet[2507]: I0416 01:53:38.878075 2507 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/682faac5-585b-4912-9c11-ca5063bb23e6-nginx-config\") pod \"whisker-7dfff6788d-dp76c\" (UID: \"682faac5-585b-4912-9c11-ca5063bb23e6\") " pod="calico-system/whisker-7dfff6788d-dp76c" Apr 16 01:53:38.878148 kubelet[2507]: I0416 01:53:38.878095 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/481ff886-10a4-44ca-8500-2463e1271fbf-calico-apiserver-certs\") pod \"calico-apiserver-6bbd685878-nsd9x\" (UID: \"481ff886-10a4-44ca-8500-2463e1271fbf\") " pod="calico-system/calico-apiserver-6bbd685878-nsd9x" Apr 16 01:53:38.878148 kubelet[2507]: I0416 01:53:38.878115 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86fk7\" (UniqueName: \"kubernetes.io/projected/24667828-0318-48a4-b65d-0a8b4f5d05b3-kube-api-access-86fk7\") pod \"coredns-674b8bbfcf-xrsbx\" (UID: \"24667828-0318-48a4-b65d-0a8b4f5d05b3\") " pod="kube-system/coredns-674b8bbfcf-xrsbx" Apr 16 01:53:38.878148 kubelet[2507]: I0416 01:53:38.878133 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/682faac5-585b-4912-9c11-ca5063bb23e6-whisker-ca-bundle\") pod \"whisker-7dfff6788d-dp76c\" (UID: \"682faac5-585b-4912-9c11-ca5063bb23e6\") " pod="calico-system/whisker-7dfff6788d-dp76c" Apr 16 01:53:38.879209 systemd[1]: Created slice kubepods-besteffort-podc63c8a30_fd46_4747_b897_d84c56ff39e0.slice - libcontainer container kubepods-besteffort-podc63c8a30_fd46_4747_b897_d84c56ff39e0.slice. Apr 16 01:53:38.882915 systemd[1]: Created slice kubepods-besteffort-pod682faac5_585b_4912_9c11_ca5063bb23e6.slice - libcontainer container kubepods-besteffort-pod682faac5_585b_4912_9c11_ca5063bb23e6.slice. 
Apr 16 01:53:38.887814 systemd[1]: Created slice kubepods-besteffort-pod61c5e9a8_153a_4abd_80be_462bc37c3e43.slice - libcontainer container kubepods-besteffort-pod61c5e9a8_153a_4abd_80be_462bc37c3e43.slice. Apr 16 01:53:38.891545 systemd[1]: Created slice kubepods-besteffort-pod481ff886_10a4_44ca_8500_2463e1271fbf.slice - libcontainer container kubepods-besteffort-pod481ff886_10a4_44ca_8500_2463e1271fbf.slice. Apr 16 01:53:38.964150 containerd[1462]: time="2026-04-16T01:53:38.964112869Z" level=info msg="CreateContainer within sandbox \"1128caa52d53991402d79079162db7ee99a02ef2faf53dc0f971ec3d5bcb639e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 16 01:53:38.981245 containerd[1462]: time="2026-04-16T01:53:38.981061791Z" level=info msg="CreateContainer within sandbox \"1128caa52d53991402d79079162db7ee99a02ef2faf53dc0f971ec3d5bcb639e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"82c5cfc46974c6ca273114bde7fc5aa3c4ec2ab87c7848161f4c1cd473a6774d\"" Apr 16 01:53:38.986450 containerd[1462]: time="2026-04-16T01:53:38.985355816Z" level=info msg="StartContainer for \"82c5cfc46974c6ca273114bde7fc5aa3c4ec2ab87c7848161f4c1cd473a6774d\"" Apr 16 01:53:39.015036 systemd[1]: Started cri-containerd-82c5cfc46974c6ca273114bde7fc5aa3c4ec2ab87c7848161f4c1cd473a6774d.scope - libcontainer container 82c5cfc46974c6ca273114bde7fc5aa3c4ec2ab87c7848161f4c1cd473a6774d. 
Apr 16 01:53:39.042172 containerd[1462]: time="2026-04-16T01:53:39.041989396Z" level=info msg="StartContainer for \"82c5cfc46974c6ca273114bde7fc5aa3c4ec2ab87c7848161f4c1cd473a6774d\" returns successfully" Apr 16 01:53:39.165770 kubelet[2507]: E0416 01:53:39.165212 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:53:39.172657 containerd[1462]: time="2026-04-16T01:53:39.172557823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74b56fb579-thpht,Uid:551d6657-610e-42f9-ba7a-74405391e1d0,Namespace:calico-system,Attempt:0,}" Apr 16 01:53:39.174383 containerd[1462]: time="2026-04-16T01:53:39.173959966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xrsbx,Uid:24667828-0318-48a4-b65d-0a8b4f5d05b3,Namespace:kube-system,Attempt:0,}" Apr 16 01:53:39.176760 kubelet[2507]: E0416 01:53:39.176470 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:53:39.183179 containerd[1462]: time="2026-04-16T01:53:39.183112286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bbd685878-kfhqt,Uid:c63c8a30-fd46-4747-b897-d84c56ff39e0,Namespace:calico-system,Attempt:0,}" Apr 16 01:53:39.189914 containerd[1462]: time="2026-04-16T01:53:39.187471288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kx9h2,Uid:2fdd7d91-407b-4b45-9406-2b1f2309b677,Namespace:kube-system,Attempt:0,}" Apr 16 01:53:39.190176 containerd[1462]: time="2026-04-16T01:53:39.189039362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7dfff6788d-dp76c,Uid:682faac5-585b-4912-9c11-ca5063bb23e6,Namespace:calico-system,Attempt:0,}" Apr 16 01:53:39.197573 containerd[1462]: time="2026-04-16T01:53:39.190945657Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-mflhh,Uid:61c5e9a8-153a-4abd-80be-462bc37c3e43,Namespace:calico-system,Attempt:0,}" Apr 16 01:53:39.197573 containerd[1462]: time="2026-04-16T01:53:39.196477762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bbd685878-nsd9x,Uid:481ff886-10a4-44ca-8500-2463e1271fbf,Namespace:calico-system,Attempt:0,}" Apr 16 01:53:39.500048 systemd-networkd[1389]: calie4dc35f109a: Link UP Apr 16 01:53:39.500727 systemd-networkd[1389]: calie4dc35f109a: Gained carrier Apr 16 01:53:39.510135 containerd[1462]: 2026-04-16 01:53:39.370 [ERROR][3479] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 16 01:53:39.510135 containerd[1462]: 2026-04-16 01:53:39.391 [INFO][3479] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--kx9h2-eth0 coredns-674b8bbfcf- kube-system 2fdd7d91-407b-4b45-9406-2b1f2309b677 903 0 2026-04-16 01:53:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-kx9h2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie4dc35f109a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3c5eacdc5d8051ff39a134c804d8229ee73a7c69058f510a4459d97d8a901c8b" Namespace="kube-system" Pod="coredns-674b8bbfcf-kx9h2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--kx9h2-" Apr 16 01:53:39.510135 containerd[1462]: 2026-04-16 01:53:39.391 [INFO][3479] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3c5eacdc5d8051ff39a134c804d8229ee73a7c69058f510a4459d97d8a901c8b" Namespace="kube-system" Pod="coredns-674b8bbfcf-kx9h2" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--kx9h2-eth0" Apr 16 01:53:39.510135 containerd[1462]: 2026-04-16 01:53:39.442 [INFO][3571] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3c5eacdc5d8051ff39a134c804d8229ee73a7c69058f510a4459d97d8a901c8b" HandleID="k8s-pod-network.3c5eacdc5d8051ff39a134c804d8229ee73a7c69058f510a4459d97d8a901c8b" Workload="localhost-k8s-coredns--674b8bbfcf--kx9h2-eth0" Apr 16 01:53:39.510135 containerd[1462]: 2026-04-16 01:53:39.452 [INFO][3571] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3c5eacdc5d8051ff39a134c804d8229ee73a7c69058f510a4459d97d8a901c8b" HandleID="k8s-pod-network.3c5eacdc5d8051ff39a134c804d8229ee73a7c69058f510a4459d97d8a901c8b" Workload="localhost-k8s-coredns--674b8bbfcf--kx9h2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000377910), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-kx9h2", "timestamp":"2026-04-16 01:53:39.442037872 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002134a0)} Apr 16 01:53:39.510135 containerd[1462]: 2026-04-16 01:53:39.452 [INFO][3571] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:53:39.510135 containerd[1462]: 2026-04-16 01:53:39.452 [INFO][3571] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 01:53:39.510135 containerd[1462]: 2026-04-16 01:53:39.452 [INFO][3571] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 01:53:39.510135 containerd[1462]: 2026-04-16 01:53:39.455 [INFO][3571] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3c5eacdc5d8051ff39a134c804d8229ee73a7c69058f510a4459d97d8a901c8b" host="localhost" Apr 16 01:53:39.510135 containerd[1462]: 2026-04-16 01:53:39.462 [INFO][3571] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 01:53:39.510135 containerd[1462]: 2026-04-16 01:53:39.470 [INFO][3571] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 01:53:39.510135 containerd[1462]: 2026-04-16 01:53:39.472 [INFO][3571] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 01:53:39.510135 containerd[1462]: 2026-04-16 01:53:39.475 [INFO][3571] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 01:53:39.510135 containerd[1462]: 2026-04-16 01:53:39.475 [INFO][3571] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3c5eacdc5d8051ff39a134c804d8229ee73a7c69058f510a4459d97d8a901c8b" host="localhost" Apr 16 01:53:39.510135 containerd[1462]: 2026-04-16 01:53:39.477 [INFO][3571] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3c5eacdc5d8051ff39a134c804d8229ee73a7c69058f510a4459d97d8a901c8b Apr 16 01:53:39.510135 containerd[1462]: 2026-04-16 01:53:39.481 [INFO][3571] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3c5eacdc5d8051ff39a134c804d8229ee73a7c69058f510a4459d97d8a901c8b" host="localhost" Apr 16 01:53:39.510135 containerd[1462]: 2026-04-16 01:53:39.487 [INFO][3571] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.3c5eacdc5d8051ff39a134c804d8229ee73a7c69058f510a4459d97d8a901c8b" host="localhost" Apr 16 01:53:39.510135 containerd[1462]: 2026-04-16 01:53:39.487 [INFO][3571] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.3c5eacdc5d8051ff39a134c804d8229ee73a7c69058f510a4459d97d8a901c8b" host="localhost" Apr 16 01:53:39.510135 containerd[1462]: 2026-04-16 01:53:39.487 [INFO][3571] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:53:39.510135 containerd[1462]: 2026-04-16 01:53:39.487 [INFO][3571] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="3c5eacdc5d8051ff39a134c804d8229ee73a7c69058f510a4459d97d8a901c8b" HandleID="k8s-pod-network.3c5eacdc5d8051ff39a134c804d8229ee73a7c69058f510a4459d97d8a901c8b" Workload="localhost-k8s-coredns--674b8bbfcf--kx9h2-eth0" Apr 16 01:53:39.510658 containerd[1462]: 2026-04-16 01:53:39.491 [INFO][3479] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3c5eacdc5d8051ff39a134c804d8229ee73a7c69058f510a4459d97d8a901c8b" Namespace="kube-system" Pod="coredns-674b8bbfcf-kx9h2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--kx9h2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--kx9h2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2fdd7d91-407b-4b45-9406-2b1f2309b677", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 53, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-kx9h2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie4dc35f109a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:53:39.510658 containerd[1462]: 2026-04-16 01:53:39.491 [INFO][3479] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="3c5eacdc5d8051ff39a134c804d8229ee73a7c69058f510a4459d97d8a901c8b" Namespace="kube-system" Pod="coredns-674b8bbfcf-kx9h2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--kx9h2-eth0" Apr 16 01:53:39.510658 containerd[1462]: 2026-04-16 01:53:39.491 [INFO][3479] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie4dc35f109a ContainerID="3c5eacdc5d8051ff39a134c804d8229ee73a7c69058f510a4459d97d8a901c8b" Namespace="kube-system" Pod="coredns-674b8bbfcf-kx9h2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--kx9h2-eth0" Apr 16 01:53:39.510658 containerd[1462]: 2026-04-16 01:53:39.501 [INFO][3479] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3c5eacdc5d8051ff39a134c804d8229ee73a7c69058f510a4459d97d8a901c8b" Namespace="kube-system" Pod="coredns-674b8bbfcf-kx9h2" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--kx9h2-eth0" Apr 16 01:53:39.510658 containerd[1462]: 2026-04-16 01:53:39.501 [INFO][3479] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3c5eacdc5d8051ff39a134c804d8229ee73a7c69058f510a4459d97d8a901c8b" Namespace="kube-system" Pod="coredns-674b8bbfcf-kx9h2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--kx9h2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--kx9h2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2fdd7d91-407b-4b45-9406-2b1f2309b677", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 53, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3c5eacdc5d8051ff39a134c804d8229ee73a7c69058f510a4459d97d8a901c8b", Pod:"coredns-674b8bbfcf-kx9h2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie4dc35f109a", MAC:"7e:1e:2c:02:ab:2a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:53:39.510658 containerd[1462]: 2026-04-16 01:53:39.508 [INFO][3479] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3c5eacdc5d8051ff39a134c804d8229ee73a7c69058f510a4459d97d8a901c8b" Namespace="kube-system" Pod="coredns-674b8bbfcf-kx9h2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--kx9h2-eth0" Apr 16 01:53:39.533770 containerd[1462]: time="2026-04-16T01:53:39.533686626Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:53:39.533970 containerd[1462]: time="2026-04-16T01:53:39.533771612Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:53:39.533970 containerd[1462]: time="2026-04-16T01:53:39.533814492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:53:39.534080 containerd[1462]: time="2026-04-16T01:53:39.534046943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:53:39.553033 systemd[1]: Started cri-containerd-3c5eacdc5d8051ff39a134c804d8229ee73a7c69058f510a4459d97d8a901c8b.scope - libcontainer container 3c5eacdc5d8051ff39a134c804d8229ee73a7c69058f510a4459d97d8a901c8b. 
Apr 16 01:53:39.563756 systemd-resolved[1391]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 01:53:39.589867 containerd[1462]: time="2026-04-16T01:53:39.589714357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kx9h2,Uid:2fdd7d91-407b-4b45-9406-2b1f2309b677,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c5eacdc5d8051ff39a134c804d8229ee73a7c69058f510a4459d97d8a901c8b\"" Apr 16 01:53:39.591068 kubelet[2507]: E0416 01:53:39.591029 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:53:39.592822 systemd-networkd[1389]: cali82bac25da03: Link UP Apr 16 01:53:39.593345 systemd-networkd[1389]: cali82bac25da03: Gained carrier Apr 16 01:53:39.600347 containerd[1462]: time="2026-04-16T01:53:39.600308276Z" level=info msg="CreateContainer within sandbox \"3c5eacdc5d8051ff39a134c804d8229ee73a7c69058f510a4459d97d8a901c8b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 16 01:53:39.606681 containerd[1462]: 2026-04-16 01:53:39.385 [ERROR][3457] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 16 01:53:39.606681 containerd[1462]: 2026-04-16 01:53:39.417 [INFO][3457] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--74b56fb579--thpht-eth0 calico-kube-controllers-74b56fb579- calico-system 551d6657-610e-42f9-ba7a-74405391e1d0 902 0 2026-04-16 01:53:22 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:74b56fb579 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] 
[] [] []} {k8s localhost calico-kube-controllers-74b56fb579-thpht eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali82bac25da03 [] [] }} ContainerID="e0948d22d4bb838e4c826eef78aa4ade491aa56b5eb3ffd5636abca8c3a19d15" Namespace="calico-system" Pod="calico-kube-controllers-74b56fb579-thpht" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74b56fb579--thpht-" Apr 16 01:53:39.606681 containerd[1462]: 2026-04-16 01:53:39.417 [INFO][3457] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e0948d22d4bb838e4c826eef78aa4ade491aa56b5eb3ffd5636abca8c3a19d15" Namespace="calico-system" Pod="calico-kube-controllers-74b56fb579-thpht" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74b56fb579--thpht-eth0" Apr 16 01:53:39.606681 containerd[1462]: 2026-04-16 01:53:39.456 [INFO][3608] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e0948d22d4bb838e4c826eef78aa4ade491aa56b5eb3ffd5636abca8c3a19d15" HandleID="k8s-pod-network.e0948d22d4bb838e4c826eef78aa4ade491aa56b5eb3ffd5636abca8c3a19d15" Workload="localhost-k8s-calico--kube--controllers--74b56fb579--thpht-eth0" Apr 16 01:53:39.606681 containerd[1462]: 2026-04-16 01:53:39.464 [INFO][3608] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e0948d22d4bb838e4c826eef78aa4ade491aa56b5eb3ffd5636abca8c3a19d15" HandleID="k8s-pod-network.e0948d22d4bb838e4c826eef78aa4ade491aa56b5eb3ffd5636abca8c3a19d15" Workload="localhost-k8s-calico--kube--controllers--74b56fb579--thpht-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ee090), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-74b56fb579-thpht", "timestamp":"2026-04-16 01:53:39.456663861 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00055adc0)} Apr 16 01:53:39.606681 containerd[1462]: 2026-04-16 01:53:39.464 [INFO][3608] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:53:39.606681 containerd[1462]: 2026-04-16 01:53:39.487 [INFO][3608] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:53:39.606681 containerd[1462]: 2026-04-16 01:53:39.487 [INFO][3608] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 01:53:39.606681 containerd[1462]: 2026-04-16 01:53:39.557 [INFO][3608] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e0948d22d4bb838e4c826eef78aa4ade491aa56b5eb3ffd5636abca8c3a19d15" host="localhost" Apr 16 01:53:39.606681 containerd[1462]: 2026-04-16 01:53:39.561 [INFO][3608] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 01:53:39.606681 containerd[1462]: 2026-04-16 01:53:39.569 [INFO][3608] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 01:53:39.606681 containerd[1462]: 2026-04-16 01:53:39.571 [INFO][3608] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 01:53:39.606681 containerd[1462]: 2026-04-16 01:53:39.572 [INFO][3608] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 01:53:39.606681 containerd[1462]: 2026-04-16 01:53:39.572 [INFO][3608] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e0948d22d4bb838e4c826eef78aa4ade491aa56b5eb3ffd5636abca8c3a19d15" host="localhost" Apr 16 01:53:39.606681 containerd[1462]: 2026-04-16 01:53:39.574 [INFO][3608] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e0948d22d4bb838e4c826eef78aa4ade491aa56b5eb3ffd5636abca8c3a19d15 Apr 16 01:53:39.606681 containerd[1462]: 2026-04-16 01:53:39.577 [INFO][3608] ipam/ipam.go 
1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e0948d22d4bb838e4c826eef78aa4ade491aa56b5eb3ffd5636abca8c3a19d15" host="localhost" Apr 16 01:53:39.606681 containerd[1462]: 2026-04-16 01:53:39.587 [INFO][3608] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.e0948d22d4bb838e4c826eef78aa4ade491aa56b5eb3ffd5636abca8c3a19d15" host="localhost" Apr 16 01:53:39.606681 containerd[1462]: 2026-04-16 01:53:39.587 [INFO][3608] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.e0948d22d4bb838e4c826eef78aa4ade491aa56b5eb3ffd5636abca8c3a19d15" host="localhost" Apr 16 01:53:39.606681 containerd[1462]: 2026-04-16 01:53:39.587 [INFO][3608] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:53:39.606681 containerd[1462]: 2026-04-16 01:53:39.587 [INFO][3608] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="e0948d22d4bb838e4c826eef78aa4ade491aa56b5eb3ffd5636abca8c3a19d15" HandleID="k8s-pod-network.e0948d22d4bb838e4c826eef78aa4ade491aa56b5eb3ffd5636abca8c3a19d15" Workload="localhost-k8s-calico--kube--controllers--74b56fb579--thpht-eth0" Apr 16 01:53:39.607167 containerd[1462]: 2026-04-16 01:53:39.590 [INFO][3457] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e0948d22d4bb838e4c826eef78aa4ade491aa56b5eb3ffd5636abca8c3a19d15" Namespace="calico-system" Pod="calico-kube-controllers-74b56fb579-thpht" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74b56fb579--thpht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--74b56fb579--thpht-eth0", GenerateName:"calico-kube-controllers-74b56fb579-", Namespace:"calico-system", SelfLink:"", UID:"551d6657-610e-42f9-ba7a-74405391e1d0", ResourceVersion:"902", Generation:0, 
CreationTimestamp:time.Date(2026, time.April, 16, 1, 53, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74b56fb579", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-74b56fb579-thpht", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali82bac25da03", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:53:39.607167 containerd[1462]: 2026-04-16 01:53:39.590 [INFO][3457] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="e0948d22d4bb838e4c826eef78aa4ade491aa56b5eb3ffd5636abca8c3a19d15" Namespace="calico-system" Pod="calico-kube-controllers-74b56fb579-thpht" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74b56fb579--thpht-eth0" Apr 16 01:53:39.607167 containerd[1462]: 2026-04-16 01:53:39.590 [INFO][3457] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali82bac25da03 ContainerID="e0948d22d4bb838e4c826eef78aa4ade491aa56b5eb3ffd5636abca8c3a19d15" Namespace="calico-system" Pod="calico-kube-controllers-74b56fb579-thpht" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74b56fb579--thpht-eth0" Apr 16 01:53:39.607167 containerd[1462]: 2026-04-16 01:53:39.593 [INFO][3457] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e0948d22d4bb838e4c826eef78aa4ade491aa56b5eb3ffd5636abca8c3a19d15" Namespace="calico-system" Pod="calico-kube-controllers-74b56fb579-thpht" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74b56fb579--thpht-eth0" Apr 16 01:53:39.607167 containerd[1462]: 2026-04-16 01:53:39.594 [INFO][3457] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e0948d22d4bb838e4c826eef78aa4ade491aa56b5eb3ffd5636abca8c3a19d15" Namespace="calico-system" Pod="calico-kube-controllers-74b56fb579-thpht" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74b56fb579--thpht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--74b56fb579--thpht-eth0", GenerateName:"calico-kube-controllers-74b56fb579-", Namespace:"calico-system", SelfLink:"", UID:"551d6657-610e-42f9-ba7a-74405391e1d0", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 53, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74b56fb579", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e0948d22d4bb838e4c826eef78aa4ade491aa56b5eb3ffd5636abca8c3a19d15", Pod:"calico-kube-controllers-74b56fb579-thpht", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali82bac25da03", MAC:"f6:8a:21:ce:1f:6a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:53:39.607167 containerd[1462]: 2026-04-16 01:53:39.605 [INFO][3457] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e0948d22d4bb838e4c826eef78aa4ade491aa56b5eb3ffd5636abca8c3a19d15" Namespace="calico-system" Pod="calico-kube-controllers-74b56fb579-thpht" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74b56fb579--thpht-eth0" Apr 16 01:53:39.611985 containerd[1462]: time="2026-04-16T01:53:39.611941638Z" level=info msg="CreateContainer within sandbox \"3c5eacdc5d8051ff39a134c804d8229ee73a7c69058f510a4459d97d8a901c8b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"39aca2e029fe9f373509e33d1fbca7d6d863b4800fc64f0936fa0c638b88cec2\"" Apr 16 01:53:39.612721 containerd[1462]: time="2026-04-16T01:53:39.612704729Z" level=info msg="StartContainer for \"39aca2e029fe9f373509e33d1fbca7d6d863b4800fc64f0936fa0c638b88cec2\"" Apr 16 01:53:39.625510 containerd[1462]: time="2026-04-16T01:53:39.625420461Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:53:39.625631 containerd[1462]: time="2026-04-16T01:53:39.625495217Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:53:39.625631 containerd[1462]: time="2026-04-16T01:53:39.625503946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:53:39.625691 containerd[1462]: time="2026-04-16T01:53:39.625596579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:53:39.634991 systemd[1]: Started cri-containerd-39aca2e029fe9f373509e33d1fbca7d6d863b4800fc64f0936fa0c638b88cec2.scope - libcontainer container 39aca2e029fe9f373509e33d1fbca7d6d863b4800fc64f0936fa0c638b88cec2. Apr 16 01:53:39.637532 systemd[1]: Started cri-containerd-e0948d22d4bb838e4c826eef78aa4ade491aa56b5eb3ffd5636abca8c3a19d15.scope - libcontainer container e0948d22d4bb838e4c826eef78aa4ade491aa56b5eb3ffd5636abca8c3a19d15. Apr 16 01:53:39.647584 systemd-resolved[1391]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 01:53:39.660958 containerd[1462]: time="2026-04-16T01:53:39.660913366Z" level=info msg="StartContainer for \"39aca2e029fe9f373509e33d1fbca7d6d863b4800fc64f0936fa0c638b88cec2\" returns successfully" Apr 16 01:53:39.681291 containerd[1462]: time="2026-04-16T01:53:39.681004641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74b56fb579-thpht,Uid:551d6657-610e-42f9-ba7a-74405391e1d0,Namespace:calico-system,Attempt:0,} returns sandbox id \"e0948d22d4bb838e4c826eef78aa4ade491aa56b5eb3ffd5636abca8c3a19d15\"" Apr 16 01:53:39.684451 containerd[1462]: time="2026-04-16T01:53:39.684341009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 16 01:53:39.706655 systemd-networkd[1389]: cali9911cb9b543: Link UP Apr 16 01:53:39.707280 systemd-networkd[1389]: cali9911cb9b543: Gained carrier Apr 16 01:53:39.722625 containerd[1462]: 2026-04-16 01:53:39.371 [ERROR][3481] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 16 01:53:39.722625 containerd[1462]: 2026-04-16 01:53:39.395 [INFO][3481] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--5b85766d88--mflhh-eth0 
goldmane-5b85766d88- calico-system 61c5e9a8-153a-4abd-80be-462bc37c3e43 906 0 2026-04-16 01:53:22 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-5b85766d88-mflhh eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali9911cb9b543 [] [] }} ContainerID="3e3a2b5b822d0d5c75a394d38f5f97282ce8a2fc87c4dbec183da84380e08688" Namespace="calico-system" Pod="goldmane-5b85766d88-mflhh" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--mflhh-" Apr 16 01:53:39.722625 containerd[1462]: 2026-04-16 01:53:39.395 [INFO][3481] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3e3a2b5b822d0d5c75a394d38f5f97282ce8a2fc87c4dbec183da84380e08688" Namespace="calico-system" Pod="goldmane-5b85766d88-mflhh" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--mflhh-eth0" Apr 16 01:53:39.722625 containerd[1462]: 2026-04-16 01:53:39.450 [INFO][3578] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3e3a2b5b822d0d5c75a394d38f5f97282ce8a2fc87c4dbec183da84380e08688" HandleID="k8s-pod-network.3e3a2b5b822d0d5c75a394d38f5f97282ce8a2fc87c4dbec183da84380e08688" Workload="localhost-k8s-goldmane--5b85766d88--mflhh-eth0" Apr 16 01:53:39.722625 containerd[1462]: 2026-04-16 01:53:39.470 [INFO][3578] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3e3a2b5b822d0d5c75a394d38f5f97282ce8a2fc87c4dbec183da84380e08688" HandleID="k8s-pod-network.3e3a2b5b822d0d5c75a394d38f5f97282ce8a2fc87c4dbec183da84380e08688" Workload="localhost-k8s-goldmane--5b85766d88--mflhh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000367d40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-5b85766d88-mflhh", "timestamp":"2026-04-16 01:53:39.450768604 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0005731e0)} Apr 16 01:53:39.722625 containerd[1462]: 2026-04-16 01:53:39.470 [INFO][3578] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:53:39.722625 containerd[1462]: 2026-04-16 01:53:39.587 [INFO][3578] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:53:39.722625 containerd[1462]: 2026-04-16 01:53:39.587 [INFO][3578] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 01:53:39.722625 containerd[1462]: 2026-04-16 01:53:39.658 [INFO][3578] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3e3a2b5b822d0d5c75a394d38f5f97282ce8a2fc87c4dbec183da84380e08688" host="localhost" Apr 16 01:53:39.722625 containerd[1462]: 2026-04-16 01:53:39.675 [INFO][3578] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 01:53:39.722625 containerd[1462]: 2026-04-16 01:53:39.682 [INFO][3578] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 01:53:39.722625 containerd[1462]: 2026-04-16 01:53:39.689 [INFO][3578] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 01:53:39.722625 containerd[1462]: 2026-04-16 01:53:39.692 [INFO][3578] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 01:53:39.722625 containerd[1462]: 2026-04-16 01:53:39.692 [INFO][3578] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3e3a2b5b822d0d5c75a394d38f5f97282ce8a2fc87c4dbec183da84380e08688" host="localhost" Apr 16 01:53:39.722625 containerd[1462]: 2026-04-16 01:53:39.693 [INFO][3578] ipam/ipam.go 1806: Creating new handle: 
k8s-pod-network.3e3a2b5b822d0d5c75a394d38f5f97282ce8a2fc87c4dbec183da84380e08688 Apr 16 01:53:39.722625 containerd[1462]: 2026-04-16 01:53:39.696 [INFO][3578] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3e3a2b5b822d0d5c75a394d38f5f97282ce8a2fc87c4dbec183da84380e08688" host="localhost" Apr 16 01:53:39.722625 containerd[1462]: 2026-04-16 01:53:39.702 [INFO][3578] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.3e3a2b5b822d0d5c75a394d38f5f97282ce8a2fc87c4dbec183da84380e08688" host="localhost" Apr 16 01:53:39.722625 containerd[1462]: 2026-04-16 01:53:39.702 [INFO][3578] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.3e3a2b5b822d0d5c75a394d38f5f97282ce8a2fc87c4dbec183da84380e08688" host="localhost" Apr 16 01:53:39.722625 containerd[1462]: 2026-04-16 01:53:39.702 [INFO][3578] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 16 01:53:39.722625 containerd[1462]: 2026-04-16 01:53:39.702 [INFO][3578] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="3e3a2b5b822d0d5c75a394d38f5f97282ce8a2fc87c4dbec183da84380e08688" HandleID="k8s-pod-network.3e3a2b5b822d0d5c75a394d38f5f97282ce8a2fc87c4dbec183da84380e08688" Workload="localhost-k8s-goldmane--5b85766d88--mflhh-eth0" Apr 16 01:53:39.724190 containerd[1462]: 2026-04-16 01:53:39.704 [INFO][3481] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3e3a2b5b822d0d5c75a394d38f5f97282ce8a2fc87c4dbec183da84380e08688" Namespace="calico-system" Pod="goldmane-5b85766d88-mflhh" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--mflhh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--mflhh-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"61c5e9a8-153a-4abd-80be-462bc37c3e43", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 53, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-5b85766d88-mflhh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9911cb9b543", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:53:39.724190 containerd[1462]: 2026-04-16 01:53:39.705 [INFO][3481] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="3e3a2b5b822d0d5c75a394d38f5f97282ce8a2fc87c4dbec183da84380e08688" Namespace="calico-system" Pod="goldmane-5b85766d88-mflhh" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--mflhh-eth0" Apr 16 01:53:39.724190 containerd[1462]: 2026-04-16 01:53:39.705 [INFO][3481] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9911cb9b543 ContainerID="3e3a2b5b822d0d5c75a394d38f5f97282ce8a2fc87c4dbec183da84380e08688" Namespace="calico-system" Pod="goldmane-5b85766d88-mflhh" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--mflhh-eth0" Apr 16 01:53:39.724190 containerd[1462]: 2026-04-16 01:53:39.707 [INFO][3481] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3e3a2b5b822d0d5c75a394d38f5f97282ce8a2fc87c4dbec183da84380e08688" Namespace="calico-system" Pod="goldmane-5b85766d88-mflhh" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--mflhh-eth0" Apr 16 01:53:39.724190 containerd[1462]: 2026-04-16 01:53:39.707 [INFO][3481] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3e3a2b5b822d0d5c75a394d38f5f97282ce8a2fc87c4dbec183da84380e08688" Namespace="calico-system" Pod="goldmane-5b85766d88-mflhh" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--mflhh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--mflhh-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"61c5e9a8-153a-4abd-80be-462bc37c3e43", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 53, 22, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3e3a2b5b822d0d5c75a394d38f5f97282ce8a2fc87c4dbec183da84380e08688", Pod:"goldmane-5b85766d88-mflhh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9911cb9b543", MAC:"a6:ac:c4:50:27:11", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:53:39.724190 containerd[1462]: 2026-04-16 01:53:39.717 [INFO][3481] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3e3a2b5b822d0d5c75a394d38f5f97282ce8a2fc87c4dbec183da84380e08688" Namespace="calico-system" Pod="goldmane-5b85766d88-mflhh" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--mflhh-eth0" Apr 16 01:53:39.744872 containerd[1462]: time="2026-04-16T01:53:39.744779655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:53:39.745057 containerd[1462]: time="2026-04-16T01:53:39.744864615Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:53:39.745057 containerd[1462]: time="2026-04-16T01:53:39.744882623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:53:39.745057 containerd[1462]: time="2026-04-16T01:53:39.744952974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:53:39.761011 systemd[1]: Started cri-containerd-3e3a2b5b822d0d5c75a394d38f5f97282ce8a2fc87c4dbec183da84380e08688.scope - libcontainer container 3e3a2b5b822d0d5c75a394d38f5f97282ce8a2fc87c4dbec183da84380e08688. Apr 16 01:53:39.775395 systemd-resolved[1391]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 01:53:39.804134 systemd-networkd[1389]: calia096f1f7424: Link UP Apr 16 01:53:39.804311 systemd-networkd[1389]: calia096f1f7424: Gained carrier Apr 16 01:53:39.806803 containerd[1462]: time="2026-04-16T01:53:39.806684971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-mflhh,Uid:61c5e9a8-153a-4abd-80be-462bc37c3e43,Namespace:calico-system,Attempt:0,} returns sandbox id \"3e3a2b5b822d0d5c75a394d38f5f97282ce8a2fc87c4dbec183da84380e08688\"" Apr 16 01:53:39.816628 containerd[1462]: 2026-04-16 01:53:39.369 [ERROR][3475] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 16 01:53:39.816628 containerd[1462]: 2026-04-16 01:53:39.397 [INFO][3475] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7dfff6788d--dp76c-eth0 whisker-7dfff6788d- calico-system 682faac5-585b-4912-9c11-ca5063bb23e6 919 0 2026-04-16 01:53:25 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7dfff6788d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7dfff6788d-dp76c eth0 whisker [] [] [kns.calico-system 
ksa.calico-system.whisker] calia096f1f7424 [] [] }} ContainerID="15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" Namespace="calico-system" Pod="whisker-7dfff6788d-dp76c" WorkloadEndpoint="localhost-k8s-whisker--7dfff6788d--dp76c-" Apr 16 01:53:39.816628 containerd[1462]: 2026-04-16 01:53:39.397 [INFO][3475] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" Namespace="calico-system" Pod="whisker-7dfff6788d-dp76c" WorkloadEndpoint="localhost-k8s-whisker--7dfff6788d--dp76c-eth0" Apr 16 01:53:39.816628 containerd[1462]: 2026-04-16 01:53:39.471 [INFO][3590] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" HandleID="k8s-pod-network.15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" Workload="localhost-k8s-whisker--7dfff6788d--dp76c-eth0" Apr 16 01:53:39.816628 containerd[1462]: 2026-04-16 01:53:39.477 [INFO][3590] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" HandleID="k8s-pod-network.15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" Workload="localhost-k8s-whisker--7dfff6788d--dp76c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f7f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7dfff6788d-dp76c", "timestamp":"2026-04-16 01:53:39.47149134 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000376f20)} Apr 16 01:53:39.816628 containerd[1462]: 2026-04-16 01:53:39.477 [INFO][3590] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 16 01:53:39.816628 containerd[1462]: 2026-04-16 01:53:39.702 [INFO][3590] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:53:39.816628 containerd[1462]: 2026-04-16 01:53:39.703 [INFO][3590] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 01:53:39.816628 containerd[1462]: 2026-04-16 01:53:39.758 [INFO][3590] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" host="localhost" Apr 16 01:53:39.816628 containerd[1462]: 2026-04-16 01:53:39.771 [INFO][3590] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 01:53:39.816628 containerd[1462]: 2026-04-16 01:53:39.782 [INFO][3590] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 01:53:39.816628 containerd[1462]: 2026-04-16 01:53:39.783 [INFO][3590] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 01:53:39.816628 containerd[1462]: 2026-04-16 01:53:39.787 [INFO][3590] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 01:53:39.816628 containerd[1462]: 2026-04-16 01:53:39.787 [INFO][3590] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" host="localhost" Apr 16 01:53:39.816628 containerd[1462]: 2026-04-16 01:53:39.789 [INFO][3590] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307 Apr 16 01:53:39.816628 containerd[1462]: 2026-04-16 01:53:39.793 [INFO][3590] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" host="localhost" Apr 16 01:53:39.816628 containerd[1462]: 2026-04-16 01:53:39.799 [INFO][3590] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" host="localhost" Apr 16 01:53:39.816628 containerd[1462]: 2026-04-16 01:53:39.799 [INFO][3590] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" host="localhost" Apr 16 01:53:39.816628 containerd[1462]: 2026-04-16 01:53:39.799 [INFO][3590] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:53:39.816628 containerd[1462]: 2026-04-16 01:53:39.799 [INFO][3590] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" HandleID="k8s-pod-network.15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" Workload="localhost-k8s-whisker--7dfff6788d--dp76c-eth0" Apr 16 01:53:39.817123 containerd[1462]: 2026-04-16 01:53:39.801 [INFO][3475] cni-plugin/k8s.go 418: Populated endpoint ContainerID="15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" Namespace="calico-system" Pod="whisker-7dfff6788d-dp76c" WorkloadEndpoint="localhost-k8s-whisker--7dfff6788d--dp76c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7dfff6788d--dp76c-eth0", GenerateName:"whisker-7dfff6788d-", Namespace:"calico-system", SelfLink:"", UID:"682faac5-585b-4912-9c11-ca5063bb23e6", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 53, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7dfff6788d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7dfff6788d-dp76c", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia096f1f7424", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:53:39.817123 containerd[1462]: 2026-04-16 01:53:39.802 [INFO][3475] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" Namespace="calico-system" Pod="whisker-7dfff6788d-dp76c" WorkloadEndpoint="localhost-k8s-whisker--7dfff6788d--dp76c-eth0" Apr 16 01:53:39.817123 containerd[1462]: 2026-04-16 01:53:39.802 [INFO][3475] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia096f1f7424 ContainerID="15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" Namespace="calico-system" Pod="whisker-7dfff6788d-dp76c" WorkloadEndpoint="localhost-k8s-whisker--7dfff6788d--dp76c-eth0" Apr 16 01:53:39.817123 containerd[1462]: 2026-04-16 01:53:39.804 [INFO][3475] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" Namespace="calico-system" Pod="whisker-7dfff6788d-dp76c" WorkloadEndpoint="localhost-k8s-whisker--7dfff6788d--dp76c-eth0" Apr 16 01:53:39.817123 containerd[1462]: 2026-04-16 01:53:39.804 [INFO][3475] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" 
Namespace="calico-system" Pod="whisker-7dfff6788d-dp76c" WorkloadEndpoint="localhost-k8s-whisker--7dfff6788d--dp76c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7dfff6788d--dp76c-eth0", GenerateName:"whisker-7dfff6788d-", Namespace:"calico-system", SelfLink:"", UID:"682faac5-585b-4912-9c11-ca5063bb23e6", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 53, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7dfff6788d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307", Pod:"whisker-7dfff6788d-dp76c", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia096f1f7424", MAC:"62:15:40:67:a4:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:53:39.817123 containerd[1462]: 2026-04-16 01:53:39.815 [INFO][3475] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" Namespace="calico-system" Pod="whisker-7dfff6788d-dp76c" WorkloadEndpoint="localhost-k8s-whisker--7dfff6788d--dp76c-eth0" Apr 16 01:53:39.834459 containerd[1462]: 
time="2026-04-16T01:53:39.834256105Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:53:39.834459 containerd[1462]: time="2026-04-16T01:53:39.834311993Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:53:39.834459 containerd[1462]: time="2026-04-16T01:53:39.834320397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:53:39.834459 containerd[1462]: time="2026-04-16T01:53:39.834399256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:53:39.859161 systemd[1]: Started cri-containerd-15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307.scope - libcontainer container 15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307. Apr 16 01:53:39.869118 systemd[1]: Created slice kubepods-besteffort-pode69aa1a1_0d4e_40a8_84a3_c556bcb00606.slice - libcontainer container kubepods-besteffort-pode69aa1a1_0d4e_40a8_84a3_c556bcb00606.slice. 
Apr 16 01:53:39.873760 systemd-resolved[1391]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 01:53:39.874295 containerd[1462]: time="2026-04-16T01:53:39.874227714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ns4nx,Uid:e69aa1a1-0d4e-40a8-84a3-c556bcb00606,Namespace:calico-system,Attempt:0,}" Apr 16 01:53:39.911008 containerd[1462]: time="2026-04-16T01:53:39.910949110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7dfff6788d-dp76c,Uid:682faac5-585b-4912-9c11-ca5063bb23e6,Namespace:calico-system,Attempt:0,} returns sandbox id \"15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307\"" Apr 16 01:53:39.913476 systemd-networkd[1389]: cali01a9cce86ef: Link UP Apr 16 01:53:39.914362 systemd-networkd[1389]: cali01a9cce86ef: Gained carrier Apr 16 01:53:39.927372 containerd[1462]: 2026-04-16 01:53:39.381 [ERROR][3537] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 16 01:53:39.927372 containerd[1462]: 2026-04-16 01:53:39.405 [INFO][3537] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6bbd685878--kfhqt-eth0 calico-apiserver-6bbd685878- calico-system c63c8a30-fd46-4747-b897-d84c56ff39e0 904 0 2026-04-16 01:53:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6bbd685878 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6bbd685878-kfhqt eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali01a9cce86ef [] [] }} ContainerID="fb691154a8d1884607089283edb83ab23419d53f75ae9098f17e20cc516237c8" Namespace="calico-system" 
Pod="calico-apiserver-6bbd685878-kfhqt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bbd685878--kfhqt-" Apr 16 01:53:39.927372 containerd[1462]: 2026-04-16 01:53:39.405 [INFO][3537] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fb691154a8d1884607089283edb83ab23419d53f75ae9098f17e20cc516237c8" Namespace="calico-system" Pod="calico-apiserver-6bbd685878-kfhqt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bbd685878--kfhqt-eth0" Apr 16 01:53:39.927372 containerd[1462]: 2026-04-16 01:53:39.467 [INFO][3584] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fb691154a8d1884607089283edb83ab23419d53f75ae9098f17e20cc516237c8" HandleID="k8s-pod-network.fb691154a8d1884607089283edb83ab23419d53f75ae9098f17e20cc516237c8" Workload="localhost-k8s-calico--apiserver--6bbd685878--kfhqt-eth0" Apr 16 01:53:39.927372 containerd[1462]: 2026-04-16 01:53:39.480 [INFO][3584] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="fb691154a8d1884607089283edb83ab23419d53f75ae9098f17e20cc516237c8" HandleID="k8s-pod-network.fb691154a8d1884607089283edb83ab23419d53f75ae9098f17e20cc516237c8" Workload="localhost-k8s-calico--apiserver--6bbd685878--kfhqt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139b00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-6bbd685878-kfhqt", "timestamp":"2026-04-16 01:53:39.467206752 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000434dc0)} Apr 16 01:53:39.927372 containerd[1462]: 2026-04-16 01:53:39.480 [INFO][3584] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 16 01:53:39.927372 containerd[1462]: 2026-04-16 01:53:39.799 [INFO][3584] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:53:39.927372 containerd[1462]: 2026-04-16 01:53:39.799 [INFO][3584] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 01:53:39.927372 containerd[1462]: 2026-04-16 01:53:39.858 [INFO][3584] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.fb691154a8d1884607089283edb83ab23419d53f75ae9098f17e20cc516237c8" host="localhost" Apr 16 01:53:39.927372 containerd[1462]: 2026-04-16 01:53:39.875 [INFO][3584] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 01:53:39.927372 containerd[1462]: 2026-04-16 01:53:39.884 [INFO][3584] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 01:53:39.927372 containerd[1462]: 2026-04-16 01:53:39.887 [INFO][3584] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 01:53:39.927372 containerd[1462]: 2026-04-16 01:53:39.890 [INFO][3584] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 01:53:39.927372 containerd[1462]: 2026-04-16 01:53:39.890 [INFO][3584] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fb691154a8d1884607089283edb83ab23419d53f75ae9098f17e20cc516237c8" host="localhost" Apr 16 01:53:39.927372 containerd[1462]: 2026-04-16 01:53:39.893 [INFO][3584] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.fb691154a8d1884607089283edb83ab23419d53f75ae9098f17e20cc516237c8 Apr 16 01:53:39.927372 containerd[1462]: 2026-04-16 01:53:39.899 [INFO][3584] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fb691154a8d1884607089283edb83ab23419d53f75ae9098f17e20cc516237c8" host="localhost" Apr 16 01:53:39.927372 containerd[1462]: 2026-04-16 01:53:39.908 [INFO][3584] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.fb691154a8d1884607089283edb83ab23419d53f75ae9098f17e20cc516237c8" host="localhost" Apr 16 01:53:39.927372 containerd[1462]: 2026-04-16 01:53:39.908 [INFO][3584] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.fb691154a8d1884607089283edb83ab23419d53f75ae9098f17e20cc516237c8" host="localhost" Apr 16 01:53:39.927372 containerd[1462]: 2026-04-16 01:53:39.908 [INFO][3584] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:53:39.927372 containerd[1462]: 2026-04-16 01:53:39.908 [INFO][3584] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="fb691154a8d1884607089283edb83ab23419d53f75ae9098f17e20cc516237c8" HandleID="k8s-pod-network.fb691154a8d1884607089283edb83ab23419d53f75ae9098f17e20cc516237c8" Workload="localhost-k8s-calico--apiserver--6bbd685878--kfhqt-eth0" Apr 16 01:53:39.927990 containerd[1462]: 2026-04-16 01:53:39.910 [INFO][3537] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fb691154a8d1884607089283edb83ab23419d53f75ae9098f17e20cc516237c8" Namespace="calico-system" Pod="calico-apiserver-6bbd685878-kfhqt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bbd685878--kfhqt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bbd685878--kfhqt-eth0", GenerateName:"calico-apiserver-6bbd685878-", Namespace:"calico-system", SelfLink:"", UID:"c63c8a30-fd46-4747-b897-d84c56ff39e0", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 53, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bbd685878", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6bbd685878-kfhqt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali01a9cce86ef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:53:39.927990 containerd[1462]: 2026-04-16 01:53:39.910 [INFO][3537] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="fb691154a8d1884607089283edb83ab23419d53f75ae9098f17e20cc516237c8" Namespace="calico-system" Pod="calico-apiserver-6bbd685878-kfhqt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bbd685878--kfhqt-eth0" Apr 16 01:53:39.927990 containerd[1462]: 2026-04-16 01:53:39.910 [INFO][3537] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali01a9cce86ef ContainerID="fb691154a8d1884607089283edb83ab23419d53f75ae9098f17e20cc516237c8" Namespace="calico-system" Pod="calico-apiserver-6bbd685878-kfhqt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bbd685878--kfhqt-eth0" Apr 16 01:53:39.927990 containerd[1462]: 2026-04-16 01:53:39.915 [INFO][3537] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fb691154a8d1884607089283edb83ab23419d53f75ae9098f17e20cc516237c8" Namespace="calico-system" Pod="calico-apiserver-6bbd685878-kfhqt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bbd685878--kfhqt-eth0" Apr 16 01:53:39.927990 containerd[1462]: 2026-04-16 01:53:39.915 
[INFO][3537] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fb691154a8d1884607089283edb83ab23419d53f75ae9098f17e20cc516237c8" Namespace="calico-system" Pod="calico-apiserver-6bbd685878-kfhqt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bbd685878--kfhqt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bbd685878--kfhqt-eth0", GenerateName:"calico-apiserver-6bbd685878-", Namespace:"calico-system", SelfLink:"", UID:"c63c8a30-fd46-4747-b897-d84c56ff39e0", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 53, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bbd685878", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fb691154a8d1884607089283edb83ab23419d53f75ae9098f17e20cc516237c8", Pod:"calico-apiserver-6bbd685878-kfhqt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali01a9cce86ef", MAC:"82:26:a4:a8:8e:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:53:39.927990 containerd[1462]: 2026-04-16 01:53:39.924 [INFO][3537] cni-plugin/k8s.go 532: Wrote updated 
endpoint to datastore ContainerID="fb691154a8d1884607089283edb83ab23419d53f75ae9098f17e20cc516237c8" Namespace="calico-system" Pod="calico-apiserver-6bbd685878-kfhqt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bbd685878--kfhqt-eth0" Apr 16 01:53:39.948711 containerd[1462]: time="2026-04-16T01:53:39.948484268Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:53:39.948974 containerd[1462]: time="2026-04-16T01:53:39.948892165Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:53:39.948974 containerd[1462]: time="2026-04-16T01:53:39.948913262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:53:39.949299 containerd[1462]: time="2026-04-16T01:53:39.949200400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:53:39.966883 kubelet[2507]: E0416 01:53:39.966825 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:53:39.966992 systemd[1]: Started cri-containerd-fb691154a8d1884607089283edb83ab23419d53f75ae9098f17e20cc516237c8.scope - libcontainer container fb691154a8d1884607089283edb83ab23419d53f75ae9098f17e20cc516237c8. 
Apr 16 01:53:39.997359 systemd-resolved[1391]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 01:53:40.001992 kubelet[2507]: I0416 01:53:40.000501 2507 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8m9qq" podStartSLOduration=2.5037954449999997 podStartE2EDuration="18.000485158s" podCreationTimestamp="2026-04-16 01:53:22 +0000 UTC" firstStartedPulling="2026-04-16 01:53:22.656503314 +0000 UTC m=+16.902202946" lastFinishedPulling="2026-04-16 01:53:38.153193032 +0000 UTC m=+32.398892659" observedRunningTime="2026-04-16 01:53:39.98507246 +0000 UTC m=+34.230772088" watchObservedRunningTime="2026-04-16 01:53:40.000485158 +0000 UTC m=+34.246184797" Apr 16 01:53:40.001992 kubelet[2507]: I0416 01:53:40.001408 2507 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-kx9h2" podStartSLOduration=27.001191092 podStartE2EDuration="27.001191092s" podCreationTimestamp="2026-04-16 01:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 01:53:39.999709327 +0000 UTC m=+34.245408966" watchObservedRunningTime="2026-04-16 01:53:40.001191092 +0000 UTC m=+34.246890727" Apr 16 01:53:40.023253 systemd-networkd[1389]: cali9d286029260: Link UP Apr 16 01:53:40.023355 systemd-networkd[1389]: cali9d286029260: Gained carrier Apr 16 01:53:40.036710 containerd[1462]: time="2026-04-16T01:53:40.036281483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bbd685878-kfhqt,Uid:c63c8a30-fd46-4747-b897-d84c56ff39e0,Namespace:calico-system,Attempt:0,} returns sandbox id \"fb691154a8d1884607089283edb83ab23419d53f75ae9098f17e20cc516237c8\"" Apr 16 01:53:40.037170 containerd[1462]: 2026-04-16 01:53:39.375 [ERROR][3461] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open 
/var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 16 01:53:40.037170 containerd[1462]: 2026-04-16 01:53:39.396 [INFO][3461] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--xrsbx-eth0 coredns-674b8bbfcf- kube-system 24667828-0318-48a4-b65d-0a8b4f5d05b3 898 0 2026-04-16 01:53:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-xrsbx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9d286029260 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fe312a4d42030b5aad40564511aee3d1a15885808578fb3b958bb13dd638e959" Namespace="kube-system" Pod="coredns-674b8bbfcf-xrsbx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xrsbx-" Apr 16 01:53:40.037170 containerd[1462]: 2026-04-16 01:53:39.396 [INFO][3461] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fe312a4d42030b5aad40564511aee3d1a15885808578fb3b958bb13dd638e959" Namespace="kube-system" Pod="coredns-674b8bbfcf-xrsbx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xrsbx-eth0" Apr 16 01:53:40.037170 containerd[1462]: 2026-04-16 01:53:39.472 [INFO][3589] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fe312a4d42030b5aad40564511aee3d1a15885808578fb3b958bb13dd638e959" HandleID="k8s-pod-network.fe312a4d42030b5aad40564511aee3d1a15885808578fb3b958bb13dd638e959" Workload="localhost-k8s-coredns--674b8bbfcf--xrsbx-eth0" Apr 16 01:53:40.037170 containerd[1462]: 2026-04-16 01:53:39.480 [INFO][3589] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="fe312a4d42030b5aad40564511aee3d1a15885808578fb3b958bb13dd638e959" HandleID="k8s-pod-network.fe312a4d42030b5aad40564511aee3d1a15885808578fb3b958bb13dd638e959" 
Workload="localhost-k8s-coredns--674b8bbfcf--xrsbx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e140), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-xrsbx", "timestamp":"2026-04-16 01:53:39.472185853 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000536000)} Apr 16 01:53:40.037170 containerd[1462]: 2026-04-16 01:53:39.480 [INFO][3589] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:53:40.037170 containerd[1462]: 2026-04-16 01:53:39.908 [INFO][3589] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:53:40.037170 containerd[1462]: 2026-04-16 01:53:39.908 [INFO][3589] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 01:53:40.037170 containerd[1462]: 2026-04-16 01:53:39.958 [INFO][3589] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.fe312a4d42030b5aad40564511aee3d1a15885808578fb3b958bb13dd638e959" host="localhost" Apr 16 01:53:40.037170 containerd[1462]: 2026-04-16 01:53:39.973 [INFO][3589] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 01:53:40.037170 containerd[1462]: 2026-04-16 01:53:39.992 [INFO][3589] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 01:53:40.037170 containerd[1462]: 2026-04-16 01:53:39.994 [INFO][3589] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 01:53:40.037170 containerd[1462]: 2026-04-16 01:53:39.996 [INFO][3589] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 01:53:40.037170 containerd[1462]: 2026-04-16 01:53:39.996 [INFO][3589] ipam/ipam.go 1245: Attempting to assign 1 
addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fe312a4d42030b5aad40564511aee3d1a15885808578fb3b958bb13dd638e959" host="localhost" Apr 16 01:53:40.037170 containerd[1462]: 2026-04-16 01:53:40.002 [INFO][3589] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.fe312a4d42030b5aad40564511aee3d1a15885808578fb3b958bb13dd638e959 Apr 16 01:53:40.037170 containerd[1462]: 2026-04-16 01:53:40.007 [INFO][3589] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fe312a4d42030b5aad40564511aee3d1a15885808578fb3b958bb13dd638e959" host="localhost" Apr 16 01:53:40.037170 containerd[1462]: 2026-04-16 01:53:40.013 [INFO][3589] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.fe312a4d42030b5aad40564511aee3d1a15885808578fb3b958bb13dd638e959" host="localhost" Apr 16 01:53:40.037170 containerd[1462]: 2026-04-16 01:53:40.013 [INFO][3589] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.fe312a4d42030b5aad40564511aee3d1a15885808578fb3b958bb13dd638e959" host="localhost" Apr 16 01:53:40.037170 containerd[1462]: 2026-04-16 01:53:40.015 [INFO][3589] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 16 01:53:40.037170 containerd[1462]: 2026-04-16 01:53:40.015 [INFO][3589] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="fe312a4d42030b5aad40564511aee3d1a15885808578fb3b958bb13dd638e959" HandleID="k8s-pod-network.fe312a4d42030b5aad40564511aee3d1a15885808578fb3b958bb13dd638e959" Workload="localhost-k8s-coredns--674b8bbfcf--xrsbx-eth0" Apr 16 01:53:40.037580 containerd[1462]: 2026-04-16 01:53:40.017 [INFO][3461] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fe312a4d42030b5aad40564511aee3d1a15885808578fb3b958bb13dd638e959" Namespace="kube-system" Pod="coredns-674b8bbfcf-xrsbx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xrsbx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--xrsbx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"24667828-0318-48a4-b65d-0a8b4f5d05b3", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 53, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-xrsbx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9d286029260", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:53:40.037580 containerd[1462]: 2026-04-16 01:53:40.020 [INFO][3461] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="fe312a4d42030b5aad40564511aee3d1a15885808578fb3b958bb13dd638e959" Namespace="kube-system" Pod="coredns-674b8bbfcf-xrsbx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xrsbx-eth0" Apr 16 01:53:40.037580 containerd[1462]: 2026-04-16 01:53:40.020 [INFO][3461] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9d286029260 ContainerID="fe312a4d42030b5aad40564511aee3d1a15885808578fb3b958bb13dd638e959" Namespace="kube-system" Pod="coredns-674b8bbfcf-xrsbx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xrsbx-eth0" Apr 16 01:53:40.037580 containerd[1462]: 2026-04-16 01:53:40.021 [INFO][3461] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fe312a4d42030b5aad40564511aee3d1a15885808578fb3b958bb13dd638e959" Namespace="kube-system" Pod="coredns-674b8bbfcf-xrsbx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xrsbx-eth0" Apr 16 01:53:40.037580 containerd[1462]: 2026-04-16 01:53:40.021 [INFO][3461] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fe312a4d42030b5aad40564511aee3d1a15885808578fb3b958bb13dd638e959" Namespace="kube-system" Pod="coredns-674b8bbfcf-xrsbx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xrsbx-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--xrsbx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"24667828-0318-48a4-b65d-0a8b4f5d05b3", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 53, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fe312a4d42030b5aad40564511aee3d1a15885808578fb3b958bb13dd638e959", Pod:"coredns-674b8bbfcf-xrsbx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9d286029260", MAC:"56:0d:c2:76:db:ca", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:53:40.037580 containerd[1462]: 2026-04-16 01:53:40.034 [INFO][3461] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="fe312a4d42030b5aad40564511aee3d1a15885808578fb3b958bb13dd638e959" Namespace="kube-system" Pod="coredns-674b8bbfcf-xrsbx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xrsbx-eth0" Apr 16 01:53:40.052999 containerd[1462]: time="2026-04-16T01:53:40.052792926Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:53:40.053600 containerd[1462]: time="2026-04-16T01:53:40.053521641Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:53:40.053600 containerd[1462]: time="2026-04-16T01:53:40.053579337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:53:40.053755 containerd[1462]: time="2026-04-16T01:53:40.053696724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:53:40.071045 systemd[1]: Started cri-containerd-fe312a4d42030b5aad40564511aee3d1a15885808578fb3b958bb13dd638e959.scope - libcontainer container fe312a4d42030b5aad40564511aee3d1a15885808578fb3b958bb13dd638e959. 
Apr 16 01:53:40.080487 systemd-resolved[1391]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 01:53:40.107709 systemd-networkd[1389]: calib99f3092cda: Link UP Apr 16 01:53:40.109026 systemd-networkd[1389]: calib99f3092cda: Gained carrier Apr 16 01:53:40.112425 containerd[1462]: time="2026-04-16T01:53:40.112363458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xrsbx,Uid:24667828-0318-48a4-b65d-0a8b4f5d05b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe312a4d42030b5aad40564511aee3d1a15885808578fb3b958bb13dd638e959\"" Apr 16 01:53:40.113103 kubelet[2507]: E0416 01:53:40.113065 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:53:40.119386 containerd[1462]: time="2026-04-16T01:53:40.119317130Z" level=info msg="CreateContainer within sandbox \"fe312a4d42030b5aad40564511aee3d1a15885808578fb3b958bb13dd638e959\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 16 01:53:40.122083 containerd[1462]: 2026-04-16 01:53:39.378 [ERROR][3456] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 16 01:53:40.122083 containerd[1462]: 2026-04-16 01:53:39.407 [INFO][3456] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6bbd685878--nsd9x-eth0 calico-apiserver-6bbd685878- calico-system 481ff886-10a4-44ca-8500-2463e1271fbf 907 0 2026-04-16 01:53:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6bbd685878 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 
localhost calico-apiserver-6bbd685878-nsd9x eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calib99f3092cda [] [] }} ContainerID="0bf17e5f17bafdc42a3b09042129702a518877d9717d0b1e3426af9ba5c10339" Namespace="calico-system" Pod="calico-apiserver-6bbd685878-nsd9x" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bbd685878--nsd9x-" Apr 16 01:53:40.122083 containerd[1462]: 2026-04-16 01:53:39.407 [INFO][3456] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0bf17e5f17bafdc42a3b09042129702a518877d9717d0b1e3426af9ba5c10339" Namespace="calico-system" Pod="calico-apiserver-6bbd685878-nsd9x" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bbd685878--nsd9x-eth0" Apr 16 01:53:40.122083 containerd[1462]: 2026-04-16 01:53:39.482 [INFO][3600] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0bf17e5f17bafdc42a3b09042129702a518877d9717d0b1e3426af9ba5c10339" HandleID="k8s-pod-network.0bf17e5f17bafdc42a3b09042129702a518877d9717d0b1e3426af9ba5c10339" Workload="localhost-k8s-calico--apiserver--6bbd685878--nsd9x-eth0" Apr 16 01:53:40.122083 containerd[1462]: 2026-04-16 01:53:39.487 [INFO][3600] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0bf17e5f17bafdc42a3b09042129702a518877d9717d0b1e3426af9ba5c10339" HandleID="k8s-pod-network.0bf17e5f17bafdc42a3b09042129702a518877d9717d0b1e3426af9ba5c10339" Workload="localhost-k8s-calico--apiserver--6bbd685878--nsd9x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e4a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-6bbd685878-nsd9x", "timestamp":"2026-04-16 01:53:39.482289437 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00039fb80)} Apr 16 
01:53:40.122083 containerd[1462]: 2026-04-16 01:53:39.487 [INFO][3600] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:53:40.122083 containerd[1462]: 2026-04-16 01:53:40.014 [INFO][3600] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:53:40.122083 containerd[1462]: 2026-04-16 01:53:40.014 [INFO][3600] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 01:53:40.122083 containerd[1462]: 2026-04-16 01:53:40.060 [INFO][3600] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0bf17e5f17bafdc42a3b09042129702a518877d9717d0b1e3426af9ba5c10339" host="localhost" Apr 16 01:53:40.122083 containerd[1462]: 2026-04-16 01:53:40.074 [INFO][3600] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 01:53:40.122083 containerd[1462]: 2026-04-16 01:53:40.083 [INFO][3600] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 01:53:40.122083 containerd[1462]: 2026-04-16 01:53:40.085 [INFO][3600] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 01:53:40.122083 containerd[1462]: 2026-04-16 01:53:40.088 [INFO][3600] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 01:53:40.122083 containerd[1462]: 2026-04-16 01:53:40.088 [INFO][3600] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0bf17e5f17bafdc42a3b09042129702a518877d9717d0b1e3426af9ba5c10339" host="localhost" Apr 16 01:53:40.122083 containerd[1462]: 2026-04-16 01:53:40.090 [INFO][3600] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0bf17e5f17bafdc42a3b09042129702a518877d9717d0b1e3426af9ba5c10339 Apr 16 01:53:40.122083 containerd[1462]: 2026-04-16 01:53:40.095 [INFO][3600] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.0bf17e5f17bafdc42a3b09042129702a518877d9717d0b1e3426af9ba5c10339" host="localhost" Apr 16 01:53:40.122083 containerd[1462]: 2026-04-16 01:53:40.100 [INFO][3600] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.0bf17e5f17bafdc42a3b09042129702a518877d9717d0b1e3426af9ba5c10339" host="localhost" Apr 16 01:53:40.122083 containerd[1462]: 2026-04-16 01:53:40.100 [INFO][3600] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.0bf17e5f17bafdc42a3b09042129702a518877d9717d0b1e3426af9ba5c10339" host="localhost" Apr 16 01:53:40.122083 containerd[1462]: 2026-04-16 01:53:40.100 [INFO][3600] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:53:40.122083 containerd[1462]: 2026-04-16 01:53:40.100 [INFO][3600] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="0bf17e5f17bafdc42a3b09042129702a518877d9717d0b1e3426af9ba5c10339" HandleID="k8s-pod-network.0bf17e5f17bafdc42a3b09042129702a518877d9717d0b1e3426af9ba5c10339" Workload="localhost-k8s-calico--apiserver--6bbd685878--nsd9x-eth0" Apr 16 01:53:40.122534 containerd[1462]: 2026-04-16 01:53:40.105 [INFO][3456] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0bf17e5f17bafdc42a3b09042129702a518877d9717d0b1e3426af9ba5c10339" Namespace="calico-system" Pod="calico-apiserver-6bbd685878-nsd9x" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bbd685878--nsd9x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bbd685878--nsd9x-eth0", GenerateName:"calico-apiserver-6bbd685878-", Namespace:"calico-system", SelfLink:"", UID:"481ff886-10a4-44ca-8500-2463e1271fbf", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 53, 21, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bbd685878", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6bbd685878-nsd9x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib99f3092cda", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:53:40.122534 containerd[1462]: 2026-04-16 01:53:40.106 [INFO][3456] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="0bf17e5f17bafdc42a3b09042129702a518877d9717d0b1e3426af9ba5c10339" Namespace="calico-system" Pod="calico-apiserver-6bbd685878-nsd9x" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bbd685878--nsd9x-eth0" Apr 16 01:53:40.122534 containerd[1462]: 2026-04-16 01:53:40.106 [INFO][3456] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib99f3092cda ContainerID="0bf17e5f17bafdc42a3b09042129702a518877d9717d0b1e3426af9ba5c10339" Namespace="calico-system" Pod="calico-apiserver-6bbd685878-nsd9x" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bbd685878--nsd9x-eth0" Apr 16 01:53:40.122534 containerd[1462]: 2026-04-16 01:53:40.107 [INFO][3456] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0bf17e5f17bafdc42a3b09042129702a518877d9717d0b1e3426af9ba5c10339" 
Namespace="calico-system" Pod="calico-apiserver-6bbd685878-nsd9x" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bbd685878--nsd9x-eth0" Apr 16 01:53:40.122534 containerd[1462]: 2026-04-16 01:53:40.108 [INFO][3456] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0bf17e5f17bafdc42a3b09042129702a518877d9717d0b1e3426af9ba5c10339" Namespace="calico-system" Pod="calico-apiserver-6bbd685878-nsd9x" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bbd685878--nsd9x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bbd685878--nsd9x-eth0", GenerateName:"calico-apiserver-6bbd685878-", Namespace:"calico-system", SelfLink:"", UID:"481ff886-10a4-44ca-8500-2463e1271fbf", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 53, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bbd685878", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0bf17e5f17bafdc42a3b09042129702a518877d9717d0b1e3426af9ba5c10339", Pod:"calico-apiserver-6bbd685878-nsd9x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib99f3092cda", MAC:"ca:83:d0:7f:fe:a7", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:53:40.122534 containerd[1462]: 2026-04-16 01:53:40.120 [INFO][3456] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0bf17e5f17bafdc42a3b09042129702a518877d9717d0b1e3426af9ba5c10339" Namespace="calico-system" Pod="calico-apiserver-6bbd685878-nsd9x" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bbd685878--nsd9x-eth0" Apr 16 01:53:40.133029 containerd[1462]: time="2026-04-16T01:53:40.132956449Z" level=info msg="CreateContainer within sandbox \"fe312a4d42030b5aad40564511aee3d1a15885808578fb3b958bb13dd638e959\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"12fed9109bab7af9c9e2a47c08faede332fac20c1e6a068884ff02918efb6d48\"" Apr 16 01:53:40.138501 containerd[1462]: time="2026-04-16T01:53:40.138453407Z" level=info msg="StartContainer for \"12fed9109bab7af9c9e2a47c08faede332fac20c1e6a068884ff02918efb6d48\"" Apr 16 01:53:40.155112 containerd[1462]: time="2026-04-16T01:53:40.154997769Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:53:40.155112 containerd[1462]: time="2026-04-16T01:53:40.155071911Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:53:40.155112 containerd[1462]: time="2026-04-16T01:53:40.155083852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:53:40.155314 containerd[1462]: time="2026-04-16T01:53:40.155145041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:53:40.162032 systemd[1]: Started cri-containerd-12fed9109bab7af9c9e2a47c08faede332fac20c1e6a068884ff02918efb6d48.scope - libcontainer container 12fed9109bab7af9c9e2a47c08faede332fac20c1e6a068884ff02918efb6d48. Apr 16 01:53:40.186044 systemd[1]: Started cri-containerd-0bf17e5f17bafdc42a3b09042129702a518877d9717d0b1e3426af9ba5c10339.scope - libcontainer container 0bf17e5f17bafdc42a3b09042129702a518877d9717d0b1e3426af9ba5c10339. Apr 16 01:53:40.197142 systemd-resolved[1391]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 01:53:40.206531 containerd[1462]: time="2026-04-16T01:53:40.205167868Z" level=info msg="StartContainer for \"12fed9109bab7af9c9e2a47c08faede332fac20c1e6a068884ff02918efb6d48\" returns successfully" Apr 16 01:53:40.220144 systemd-networkd[1389]: califce131dfcac: Link UP Apr 16 01:53:40.220977 systemd-networkd[1389]: califce131dfcac: Gained carrier Apr 16 01:53:40.226175 containerd[1462]: time="2026-04-16T01:53:40.226129199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bbd685878-nsd9x,Uid:481ff886-10a4-44ca-8500-2463e1271fbf,Namespace:calico-system,Attempt:0,} returns sandbox id \"0bf17e5f17bafdc42a3b09042129702a518877d9717d0b1e3426af9ba5c10339\"" Apr 16 01:53:40.235710 containerd[1462]: 2026-04-16 01:53:39.928 [ERROR][3863] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 16 01:53:40.235710 containerd[1462]: 2026-04-16 01:53:39.938 [INFO][3863] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--ns4nx-eth0 csi-node-driver- calico-system e69aa1a1-0d4e-40a8-84a3-c556bcb00606 757 0 2026-04-16 01:53:22 +0000 UTC map[app.kubernetes.io/name:csi-node-driver 
controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-ns4nx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] califce131dfcac [] [] }} ContainerID="9594c70462efeb29726b5a2486166195861da5c87c1914e973e826c078ff9edc" Namespace="calico-system" Pod="csi-node-driver-ns4nx" WorkloadEndpoint="localhost-k8s-csi--node--driver--ns4nx-" Apr 16 01:53:40.235710 containerd[1462]: 2026-04-16 01:53:39.938 [INFO][3863] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9594c70462efeb29726b5a2486166195861da5c87c1914e973e826c078ff9edc" Namespace="calico-system" Pod="csi-node-driver-ns4nx" WorkloadEndpoint="localhost-k8s-csi--node--driver--ns4nx-eth0" Apr 16 01:53:40.235710 containerd[1462]: 2026-04-16 01:53:39.968 [INFO][3899] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9594c70462efeb29726b5a2486166195861da5c87c1914e973e826c078ff9edc" HandleID="k8s-pod-network.9594c70462efeb29726b5a2486166195861da5c87c1914e973e826c078ff9edc" Workload="localhost-k8s-csi--node--driver--ns4nx-eth0" Apr 16 01:53:40.235710 containerd[1462]: 2026-04-16 01:53:39.974 [INFO][3899] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9594c70462efeb29726b5a2486166195861da5c87c1914e973e826c078ff9edc" HandleID="k8s-pod-network.9594c70462efeb29726b5a2486166195861da5c87c1914e973e826c078ff9edc" Workload="localhost-k8s-csi--node--driver--ns4nx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003792f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-ns4nx", "timestamp":"2026-04-16 01:53:39.968164807 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004e4f20)} Apr 16 01:53:40.235710 containerd[1462]: 2026-04-16 01:53:39.974 [INFO][3899] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:53:40.235710 containerd[1462]: 2026-04-16 01:53:40.100 [INFO][3899] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:53:40.235710 containerd[1462]: 2026-04-16 01:53:40.100 [INFO][3899] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 01:53:40.235710 containerd[1462]: 2026-04-16 01:53:40.160 [INFO][3899] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9594c70462efeb29726b5a2486166195861da5c87c1914e973e826c078ff9edc" host="localhost" Apr 16 01:53:40.235710 containerd[1462]: 2026-04-16 01:53:40.174 [INFO][3899] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 01:53:40.235710 containerd[1462]: 2026-04-16 01:53:40.184 [INFO][3899] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 01:53:40.235710 containerd[1462]: 2026-04-16 01:53:40.188 [INFO][3899] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 01:53:40.235710 containerd[1462]: 2026-04-16 01:53:40.191 [INFO][3899] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 01:53:40.235710 containerd[1462]: 2026-04-16 01:53:40.192 [INFO][3899] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9594c70462efeb29726b5a2486166195861da5c87c1914e973e826c078ff9edc" host="localhost" Apr 16 01:53:40.235710 containerd[1462]: 2026-04-16 01:53:40.193 [INFO][3899] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9594c70462efeb29726b5a2486166195861da5c87c1914e973e826c078ff9edc Apr 16 01:53:40.235710 containerd[1462]: 
2026-04-16 01:53:40.198 [INFO][3899] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9594c70462efeb29726b5a2486166195861da5c87c1914e973e826c078ff9edc" host="localhost" Apr 16 01:53:40.235710 containerd[1462]: 2026-04-16 01:53:40.203 [INFO][3899] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.9594c70462efeb29726b5a2486166195861da5c87c1914e973e826c078ff9edc" host="localhost" Apr 16 01:53:40.235710 containerd[1462]: 2026-04-16 01:53:40.204 [INFO][3899] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.9594c70462efeb29726b5a2486166195861da5c87c1914e973e826c078ff9edc" host="localhost" Apr 16 01:53:40.235710 containerd[1462]: 2026-04-16 01:53:40.204 [INFO][3899] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:53:40.235710 containerd[1462]: 2026-04-16 01:53:40.204 [INFO][3899] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="9594c70462efeb29726b5a2486166195861da5c87c1914e973e826c078ff9edc" HandleID="k8s-pod-network.9594c70462efeb29726b5a2486166195861da5c87c1914e973e826c078ff9edc" Workload="localhost-k8s-csi--node--driver--ns4nx-eth0" Apr 16 01:53:40.236222 containerd[1462]: 2026-04-16 01:53:40.207 [INFO][3863] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9594c70462efeb29726b5a2486166195861da5c87c1914e973e826c078ff9edc" Namespace="calico-system" Pod="csi-node-driver-ns4nx" WorkloadEndpoint="localhost-k8s-csi--node--driver--ns4nx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ns4nx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e69aa1a1-0d4e-40a8-84a3-c556bcb00606", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 53, 
22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-ns4nx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califce131dfcac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:53:40.236222 containerd[1462]: 2026-04-16 01:53:40.207 [INFO][3863] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="9594c70462efeb29726b5a2486166195861da5c87c1914e973e826c078ff9edc" Namespace="calico-system" Pod="csi-node-driver-ns4nx" WorkloadEndpoint="localhost-k8s-csi--node--driver--ns4nx-eth0" Apr 16 01:53:40.236222 containerd[1462]: 2026-04-16 01:53:40.207 [INFO][3863] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califce131dfcac ContainerID="9594c70462efeb29726b5a2486166195861da5c87c1914e973e826c078ff9edc" Namespace="calico-system" Pod="csi-node-driver-ns4nx" WorkloadEndpoint="localhost-k8s-csi--node--driver--ns4nx-eth0" Apr 16 01:53:40.236222 containerd[1462]: 2026-04-16 01:53:40.221 [INFO][3863] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9594c70462efeb29726b5a2486166195861da5c87c1914e973e826c078ff9edc" 
Namespace="calico-system" Pod="csi-node-driver-ns4nx" WorkloadEndpoint="localhost-k8s-csi--node--driver--ns4nx-eth0" Apr 16 01:53:40.236222 containerd[1462]: 2026-04-16 01:53:40.222 [INFO][3863] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9594c70462efeb29726b5a2486166195861da5c87c1914e973e826c078ff9edc" Namespace="calico-system" Pod="csi-node-driver-ns4nx" WorkloadEndpoint="localhost-k8s-csi--node--driver--ns4nx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ns4nx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e69aa1a1-0d4e-40a8-84a3-c556bcb00606", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 53, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9594c70462efeb29726b5a2486166195861da5c87c1914e973e826c078ff9edc", Pod:"csi-node-driver-ns4nx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califce131dfcac", MAC:"b2:0e:11:38:23:66", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:53:40.236222 containerd[1462]: 2026-04-16 01:53:40.233 [INFO][3863] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9594c70462efeb29726b5a2486166195861da5c87c1914e973e826c078ff9edc" Namespace="calico-system" Pod="csi-node-driver-ns4nx" WorkloadEndpoint="localhost-k8s-csi--node--driver--ns4nx-eth0" Apr 16 01:53:40.254760 containerd[1462]: time="2026-04-16T01:53:40.254651048Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:53:40.254760 containerd[1462]: time="2026-04-16T01:53:40.254728560Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:53:40.254897 containerd[1462]: time="2026-04-16T01:53:40.254759014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:53:40.254951 containerd[1462]: time="2026-04-16T01:53:40.254873102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:53:40.275032 systemd[1]: Started cri-containerd-9594c70462efeb29726b5a2486166195861da5c87c1914e973e826c078ff9edc.scope - libcontainer container 9594c70462efeb29726b5a2486166195861da5c87c1914e973e826c078ff9edc. 
Apr 16 01:53:40.285490 systemd-resolved[1391]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 01:53:40.297509 containerd[1462]: time="2026-04-16T01:53:40.297474859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ns4nx,Uid:e69aa1a1-0d4e-40a8-84a3-c556bcb00606,Namespace:calico-system,Attempt:0,} returns sandbox id \"9594c70462efeb29726b5a2486166195861da5c87c1914e973e826c078ff9edc\"" Apr 16 01:53:40.592891 kernel: calico-node[4239]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 16 01:53:40.866984 systemd-networkd[1389]: calie4dc35f109a: Gained IPv6LL Apr 16 01:53:40.935816 systemd-networkd[1389]: vxlan.calico: Link UP Apr 16 01:53:40.935823 systemd-networkd[1389]: vxlan.calico: Gained carrier Apr 16 01:53:40.973816 kubelet[2507]: E0416 01:53:40.973787 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:53:40.974660 kubelet[2507]: I0416 01:53:40.974262 2507 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 01:53:40.974660 kubelet[2507]: E0416 01:53:40.974440 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:53:40.987127 kubelet[2507]: I0416 01:53:40.987077 2507 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-xrsbx" podStartSLOduration=27.987060513 podStartE2EDuration="27.987060513s" podCreationTimestamp="2026-04-16 01:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 01:53:40.98624984 +0000 UTC m=+35.231949471" watchObservedRunningTime="2026-04-16 01:53:40.987060513 +0000 UTC m=+35.232760151" Apr 16 
01:53:40.994697 systemd-networkd[1389]: cali01a9cce86ef: Gained IPv6LL Apr 16 01:53:41.058231 systemd-networkd[1389]: cali82bac25da03: Gained IPv6LL Apr 16 01:53:41.250465 systemd-networkd[1389]: cali9911cb9b543: Gained IPv6LL Apr 16 01:53:41.506115 systemd-networkd[1389]: cali9d286029260: Gained IPv6LL Apr 16 01:53:41.698109 systemd-networkd[1389]: calia096f1f7424: Gained IPv6LL Apr 16 01:53:41.763124 systemd-networkd[1389]: calib99f3092cda: Gained IPv6LL Apr 16 01:53:41.976618 kubelet[2507]: E0416 01:53:41.976573 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:53:42.082085 systemd-networkd[1389]: califce131dfcac: Gained IPv6LL Apr 16 01:53:42.595128 systemd-networkd[1389]: vxlan.calico: Gained IPv6LL Apr 16 01:53:43.065638 kubelet[2507]: I0416 01:53:43.065547 2507 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 01:53:43.104973 containerd[1462]: time="2026-04-16T01:53:43.104906930Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:53:43.105798 containerd[1462]: time="2026-04-16T01:53:43.105769586Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 16 01:53:43.107973 containerd[1462]: time="2026-04-16T01:53:43.106948510Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:53:43.109511 containerd[1462]: time="2026-04-16T01:53:43.109444655Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:53:43.109940 
containerd[1462]: time="2026-04-16T01:53:43.109906904Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 3.425537167s" Apr 16 01:53:43.109940 containerd[1462]: time="2026-04-16T01:53:43.109940403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 16 01:53:43.110909 containerd[1462]: time="2026-04-16T01:53:43.110693285Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 16 01:53:43.120968 containerd[1462]: time="2026-04-16T01:53:43.120931465Z" level=info msg="CreateContainer within sandbox \"e0948d22d4bb838e4c826eef78aa4ade491aa56b5eb3ffd5636abca8c3a19d15\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 16 01:53:43.143280 containerd[1462]: time="2026-04-16T01:53:43.143159613Z" level=info msg="CreateContainer within sandbox \"e0948d22d4bb838e4c826eef78aa4ade491aa56b5eb3ffd5636abca8c3a19d15\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"92f631a17a40350020cd5db3465cbc248037135733c29b52feb3c7c82b48b663\"" Apr 16 01:53:43.145176 containerd[1462]: time="2026-04-16T01:53:43.145072717Z" level=info msg="StartContainer for \"92f631a17a40350020cd5db3465cbc248037135733c29b52feb3c7c82b48b663\"" Apr 16 01:53:43.204107 systemd[1]: Started cri-containerd-92f631a17a40350020cd5db3465cbc248037135733c29b52feb3c7c82b48b663.scope - libcontainer container 92f631a17a40350020cd5db3465cbc248037135733c29b52feb3c7c82b48b663. 
Apr 16 01:53:43.256120 containerd[1462]: time="2026-04-16T01:53:43.256054774Z" level=info msg="StartContainer for \"92f631a17a40350020cd5db3465cbc248037135733c29b52feb3c7c82b48b663\" returns successfully" Apr 16 01:53:43.998566 kubelet[2507]: I0416 01:53:43.998422 2507 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-74b56fb579-thpht" podStartSLOduration=18.571351951 podStartE2EDuration="21.998404432s" podCreationTimestamp="2026-04-16 01:53:22 +0000 UTC" firstStartedPulling="2026-04-16 01:53:39.68353948 +0000 UTC m=+33.929239107" lastFinishedPulling="2026-04-16 01:53:43.110591958 +0000 UTC m=+37.356291588" observedRunningTime="2026-04-16 01:53:43.997747399 +0000 UTC m=+38.243447044" watchObservedRunningTime="2026-04-16 01:53:43.998404432 +0000 UTC m=+38.244104073" Apr 16 01:53:45.329031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount663402895.mount: Deactivated successfully. Apr 16 01:53:45.697291 containerd[1462]: time="2026-04-16T01:53:45.697033202Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:53:45.698325 containerd[1462]: time="2026-04-16T01:53:45.698230421Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 16 01:53:45.699550 containerd[1462]: time="2026-04-16T01:53:45.699505978Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:53:45.702713 containerd[1462]: time="2026-04-16T01:53:45.702633034Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:53:45.704607 containerd[1462]: time="2026-04-16T01:53:45.704541999Z" 
level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 2.593826516s" Apr 16 01:53:45.704607 containerd[1462]: time="2026-04-16T01:53:45.704584257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 16 01:53:45.706107 containerd[1462]: time="2026-04-16T01:53:45.706067640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 16 01:53:45.711059 containerd[1462]: time="2026-04-16T01:53:45.711000384Z" level=info msg="CreateContainer within sandbox \"3e3a2b5b822d0d5c75a394d38f5f97282ce8a2fc87c4dbec183da84380e08688\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 16 01:53:45.732287 containerd[1462]: time="2026-04-16T01:53:45.732233155Z" level=info msg="CreateContainer within sandbox \"3e3a2b5b822d0d5c75a394d38f5f97282ce8a2fc87c4dbec183da84380e08688\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"415d0fb6c30b4151749070289083e9fa908ee1294cd2bdbbe9ba09e6395181db\"" Apr 16 01:53:45.733058 containerd[1462]: time="2026-04-16T01:53:45.732998757Z" level=info msg="StartContainer for \"415d0fb6c30b4151749070289083e9fa908ee1294cd2bdbbe9ba09e6395181db\"" Apr 16 01:53:45.768119 systemd[1]: Started cri-containerd-415d0fb6c30b4151749070289083e9fa908ee1294cd2bdbbe9ba09e6395181db.scope - libcontainer container 415d0fb6c30b4151749070289083e9fa908ee1294cd2bdbbe9ba09e6395181db. 
Apr 16 01:53:45.807260 containerd[1462]: time="2026-04-16T01:53:45.807181206Z" level=info msg="StartContainer for \"415d0fb6c30b4151749070289083e9fa908ee1294cd2bdbbe9ba09e6395181db\" returns successfully" Apr 16 01:53:47.068465 kubelet[2507]: I0416 01:53:47.068173 2507 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-mflhh" podStartSLOduration=19.170388117 podStartE2EDuration="25.068153754s" podCreationTimestamp="2026-04-16 01:53:22 +0000 UTC" firstStartedPulling="2026-04-16 01:53:39.808054795 +0000 UTC m=+34.053754425" lastFinishedPulling="2026-04-16 01:53:45.705820425 +0000 UTC m=+39.951520062" observedRunningTime="2026-04-16 01:53:46.001154934 +0000 UTC m=+40.246854582" watchObservedRunningTime="2026-04-16 01:53:47.068153754 +0000 UTC m=+41.313853389" Apr 16 01:53:47.431246 containerd[1462]: time="2026-04-16T01:53:47.431046768Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:53:47.432736 containerd[1462]: time="2026-04-16T01:53:47.432149047Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 16 01:53:47.433714 containerd[1462]: time="2026-04-16T01:53:47.433650358Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:53:47.437062 containerd[1462]: time="2026-04-16T01:53:47.437008857Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:53:47.437910 containerd[1462]: time="2026-04-16T01:53:47.437823308Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id 
\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.731710065s" Apr 16 01:53:47.437910 containerd[1462]: time="2026-04-16T01:53:47.437908029Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 16 01:53:47.440146 containerd[1462]: time="2026-04-16T01:53:47.440109957Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 16 01:53:47.448895 containerd[1462]: time="2026-04-16T01:53:47.447668907Z" level=info msg="CreateContainer within sandbox \"15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 16 01:53:47.470445 containerd[1462]: time="2026-04-16T01:53:47.470359834Z" level=info msg="CreateContainer within sandbox \"15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"1b93b10912ed92face034a1cddc4835fbda5ce29edc03a81c1c7de841e8535f8\"" Apr 16 01:53:47.471197 containerd[1462]: time="2026-04-16T01:53:47.471165531Z" level=info msg="StartContainer for \"1b93b10912ed92face034a1cddc4835fbda5ce29edc03a81c1c7de841e8535f8\"" Apr 16 01:53:47.520227 systemd[1]: Started cri-containerd-1b93b10912ed92face034a1cddc4835fbda5ce29edc03a81c1c7de841e8535f8.scope - libcontainer container 1b93b10912ed92face034a1cddc4835fbda5ce29edc03a81c1c7de841e8535f8. 
Apr 16 01:53:47.579973 containerd[1462]: time="2026-04-16T01:53:47.579755528Z" level=info msg="StartContainer for \"1b93b10912ed92face034a1cddc4835fbda5ce29edc03a81c1c7de841e8535f8\" returns successfully" Apr 16 01:53:51.012622 containerd[1462]: time="2026-04-16T01:53:51.012525682Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:53:51.013564 containerd[1462]: time="2026-04-16T01:53:51.013391288Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 16 01:53:51.015002 containerd[1462]: time="2026-04-16T01:53:51.014930921Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:53:51.017235 containerd[1462]: time="2026-04-16T01:53:51.017158319Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:53:51.017702 containerd[1462]: time="2026-04-16T01:53:51.017629542Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.577470732s" Apr 16 01:53:51.017702 containerd[1462]: time="2026-04-16T01:53:51.017677545Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 16 01:53:51.019210 containerd[1462]: time="2026-04-16T01:53:51.019099375Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 16 01:53:51.023121 containerd[1462]: time="2026-04-16T01:53:51.023021601Z" level=info msg="CreateContainer within sandbox \"fb691154a8d1884607089283edb83ab23419d53f75ae9098f17e20cc516237c8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 16 01:53:51.048176 containerd[1462]: time="2026-04-16T01:53:51.048107547Z" level=info msg="CreateContainer within sandbox \"fb691154a8d1884607089283edb83ab23419d53f75ae9098f17e20cc516237c8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c6fec03e6d7efc3a4b53357f6a5b6ff43fc67ad0e6aee992f40e9d17adbaa9b9\"" Apr 16 01:53:51.049072 containerd[1462]: time="2026-04-16T01:53:51.049030784Z" level=info msg="StartContainer for \"c6fec03e6d7efc3a4b53357f6a5b6ff43fc67ad0e6aee992f40e9d17adbaa9b9\"" Apr 16 01:53:51.122541 systemd[1]: Started cri-containerd-c6fec03e6d7efc3a4b53357f6a5b6ff43fc67ad0e6aee992f40e9d17adbaa9b9.scope - libcontainer container c6fec03e6d7efc3a4b53357f6a5b6ff43fc67ad0e6aee992f40e9d17adbaa9b9. 
Apr 16 01:53:51.191512 containerd[1462]: time="2026-04-16T01:53:51.191378001Z" level=info msg="StartContainer for \"c6fec03e6d7efc3a4b53357f6a5b6ff43fc67ad0e6aee992f40e9d17adbaa9b9\" returns successfully" Apr 16 01:53:51.430327 containerd[1462]: time="2026-04-16T01:53:51.430118349Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:53:51.431397 containerd[1462]: time="2026-04-16T01:53:51.431323472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 16 01:53:51.433912 containerd[1462]: time="2026-04-16T01:53:51.433746550Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 414.620043ms" Apr 16 01:53:51.433912 containerd[1462]: time="2026-04-16T01:53:51.433833062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 16 01:53:51.435739 containerd[1462]: time="2026-04-16T01:53:51.435695178Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 16 01:53:51.441503 containerd[1462]: time="2026-04-16T01:53:51.441442854Z" level=info msg="CreateContainer within sandbox \"0bf17e5f17bafdc42a3b09042129702a518877d9717d0b1e3426af9ba5c10339\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 16 01:53:51.460942 containerd[1462]: time="2026-04-16T01:53:51.460876010Z" level=info msg="CreateContainer within sandbox \"0bf17e5f17bafdc42a3b09042129702a518877d9717d0b1e3426af9ba5c10339\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns 
container id \"c1edfcfe2f5458daf8670a3ac052e06702c60dde61b2c32b28599ee38f4311a0\"" Apr 16 01:53:51.461896 containerd[1462]: time="2026-04-16T01:53:51.461645438Z" level=info msg="StartContainer for \"c1edfcfe2f5458daf8670a3ac052e06702c60dde61b2c32b28599ee38f4311a0\"" Apr 16 01:53:51.497424 systemd[1]: Started cri-containerd-c1edfcfe2f5458daf8670a3ac052e06702c60dde61b2c32b28599ee38f4311a0.scope - libcontainer container c1edfcfe2f5458daf8670a3ac052e06702c60dde61b2c32b28599ee38f4311a0. Apr 16 01:53:51.562890 containerd[1462]: time="2026-04-16T01:53:51.560665571Z" level=info msg="StartContainer for \"c1edfcfe2f5458daf8670a3ac052e06702c60dde61b2c32b28599ee38f4311a0\" returns successfully" Apr 16 01:53:52.020806 kubelet[2507]: I0416 01:53:52.020654 2507 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6bbd685878-kfhqt" podStartSLOduration=20.039659795 podStartE2EDuration="31.020633066s" podCreationTimestamp="2026-04-16 01:53:21 +0000 UTC" firstStartedPulling="2026-04-16 01:53:40.037899724 +0000 UTC m=+34.283599351" lastFinishedPulling="2026-04-16 01:53:51.018872995 +0000 UTC m=+45.264572622" observedRunningTime="2026-04-16 01:53:52.019660602 +0000 UTC m=+46.265360240" watchObservedRunningTime="2026-04-16 01:53:52.020633066 +0000 UTC m=+46.266332710" Apr 16 01:53:52.040680 kubelet[2507]: I0416 01:53:52.040419 2507 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6bbd685878-nsd9x" podStartSLOduration=19.834717076 podStartE2EDuration="31.040401163s" podCreationTimestamp="2026-04-16 01:53:21 +0000 UTC" firstStartedPulling="2026-04-16 01:53:40.229268057 +0000 UTC m=+34.474967684" lastFinishedPulling="2026-04-16 01:53:51.434952138 +0000 UTC m=+45.680651771" observedRunningTime="2026-04-16 01:53:52.033669776 +0000 UTC m=+46.279369404" watchObservedRunningTime="2026-04-16 01:53:52.040401163 +0000 UTC m=+46.286100801" Apr 16 01:53:53.010961 kubelet[2507]: I0416 
01:53:53.010898 2507 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 01:53:53.010961 kubelet[2507]: I0416 01:53:53.010899 2507 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 01:53:53.930660 containerd[1462]: time="2026-04-16T01:53:53.930593034Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:53:53.931508 containerd[1462]: time="2026-04-16T01:53:53.931448590Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 16 01:53:53.932831 containerd[1462]: time="2026-04-16T01:53:53.932770457Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:53:53.935718 containerd[1462]: time="2026-04-16T01:53:53.935674067Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:53:53.936520 containerd[1462]: time="2026-04-16T01:53:53.936463629Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 2.500716647s" Apr 16 01:53:53.936586 containerd[1462]: time="2026-04-16T01:53:53.936521725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 16 01:53:53.937650 containerd[1462]: time="2026-04-16T01:53:53.937626058Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 16 01:53:53.941561 containerd[1462]: time="2026-04-16T01:53:53.941507290Z" level=info msg="CreateContainer within sandbox \"9594c70462efeb29726b5a2486166195861da5c87c1914e973e826c078ff9edc\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 16 01:53:53.957318 containerd[1462]: time="2026-04-16T01:53:53.957136911Z" level=info msg="CreateContainer within sandbox \"9594c70462efeb29726b5a2486166195861da5c87c1914e973e826c078ff9edc\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"89f54d7b95d0568f90f707b1da4f8c2f2a1bc044a1e175769c27b771399b03aa\"" Apr 16 01:53:53.958124 containerd[1462]: time="2026-04-16T01:53:53.958063515Z" level=info msg="StartContainer for \"89f54d7b95d0568f90f707b1da4f8c2f2a1bc044a1e175769c27b771399b03aa\"" Apr 16 01:53:53.996159 systemd[1]: Started cri-containerd-89f54d7b95d0568f90f707b1da4f8c2f2a1bc044a1e175769c27b771399b03aa.scope - libcontainer container 89f54d7b95d0568f90f707b1da4f8c2f2a1bc044a1e175769c27b771399b03aa. Apr 16 01:53:54.027762 containerd[1462]: time="2026-04-16T01:53:54.027654887Z" level=info msg="StartContainer for \"89f54d7b95d0568f90f707b1da4f8c2f2a1bc044a1e175769c27b771399b03aa\" returns successfully" Apr 16 01:53:55.834245 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3725486972.mount: Deactivated successfully. 
Apr 16 01:53:55.862127 containerd[1462]: time="2026-04-16T01:53:55.862062601Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:53:55.863726 containerd[1462]: time="2026-04-16T01:53:55.863662136Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 16 01:53:55.864873 containerd[1462]: time="2026-04-16T01:53:55.864754341Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:53:55.867440 containerd[1462]: time="2026-04-16T01:53:55.867373427Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:53:55.868257 containerd[1462]: time="2026-04-16T01:53:55.868213457Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.930549281s" Apr 16 01:53:55.868308 containerd[1462]: time="2026-04-16T01:53:55.868260748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 16 01:53:55.869517 containerd[1462]: time="2026-04-16T01:53:55.869317906Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 16 01:53:55.872935 containerd[1462]: time="2026-04-16T01:53:55.872783413Z" level=info msg="CreateContainer within sandbox 
\"15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 16 01:53:55.887021 containerd[1462]: time="2026-04-16T01:53:55.886927938Z" level=info msg="CreateContainer within sandbox \"15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"799508586b672f8aea8116e787d1ae7474d86d8e9e139857e5124618f0be7a0a\"" Apr 16 01:53:55.887767 containerd[1462]: time="2026-04-16T01:53:55.887689341Z" level=info msg="StartContainer for \"799508586b672f8aea8116e787d1ae7474d86d8e9e139857e5124618f0be7a0a\"" Apr 16 01:53:55.932289 systemd[1]: Started cri-containerd-799508586b672f8aea8116e787d1ae7474d86d8e9e139857e5124618f0be7a0a.scope - libcontainer container 799508586b672f8aea8116e787d1ae7474d86d8e9e139857e5124618f0be7a0a. Apr 16 01:53:55.974060 containerd[1462]: time="2026-04-16T01:53:55.974018954Z" level=info msg="StartContainer for \"799508586b672f8aea8116e787d1ae7474d86d8e9e139857e5124618f0be7a0a\" returns successfully" Apr 16 01:53:56.062243 containerd[1462]: time="2026-04-16T01:53:56.062088375Z" level=info msg="StopContainer for \"799508586b672f8aea8116e787d1ae7474d86d8e9e139857e5124618f0be7a0a\" with timeout 30 (s)" Apr 16 01:53:56.063343 containerd[1462]: time="2026-04-16T01:53:56.062581159Z" level=info msg="StopContainer for \"1b93b10912ed92face034a1cddc4835fbda5ce29edc03a81c1c7de841e8535f8\" with timeout 30 (s)" Apr 16 01:53:56.064155 containerd[1462]: time="2026-04-16T01:53:56.064134458Z" level=info msg="Stop container \"799508586b672f8aea8116e787d1ae7474d86d8e9e139857e5124618f0be7a0a\" with signal terminated" Apr 16 01:53:56.064279 kubelet[2507]: I0416 01:53:56.064241 2507 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7dfff6788d-dp76c" podStartSLOduration=15.107287246 podStartE2EDuration="31.064228196s" podCreationTimestamp="2026-04-16 01:53:25 +0000 UTC" 
firstStartedPulling="2026-04-16 01:53:39.912228775 +0000 UTC m=+34.157928403" lastFinishedPulling="2026-04-16 01:53:55.869169723 +0000 UTC m=+50.114869353" observedRunningTime="2026-04-16 01:53:56.063866562 +0000 UTC m=+50.309566188" watchObservedRunningTime="2026-04-16 01:53:56.064228196 +0000 UTC m=+50.309927834" Apr 16 01:53:56.065421 containerd[1462]: time="2026-04-16T01:53:56.064319070Z" level=info msg="Stop container \"1b93b10912ed92face034a1cddc4835fbda5ce29edc03a81c1c7de841e8535f8\" with signal terminated" Apr 16 01:53:56.076903 systemd[1]: cri-containerd-799508586b672f8aea8116e787d1ae7474d86d8e9e139857e5124618f0be7a0a.scope: Deactivated successfully. Apr 16 01:53:56.090249 systemd[1]: cri-containerd-1b93b10912ed92face034a1cddc4835fbda5ce29edc03a81c1c7de841e8535f8.scope: Deactivated successfully. Apr 16 01:53:56.126359 containerd[1462]: time="2026-04-16T01:53:56.114617651Z" level=info msg="shim disconnected" id=1b93b10912ed92face034a1cddc4835fbda5ce29edc03a81c1c7de841e8535f8 namespace=k8s.io Apr 16 01:53:56.126359 containerd[1462]: time="2026-04-16T01:53:56.126357325Z" level=warning msg="cleaning up after shim disconnected" id=1b93b10912ed92face034a1cddc4835fbda5ce29edc03a81c1c7de841e8535f8 namespace=k8s.io Apr 16 01:53:56.126359 containerd[1462]: time="2026-04-16T01:53:56.126377544Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 01:53:56.157311 containerd[1462]: time="2026-04-16T01:53:56.157189630Z" level=info msg="shim disconnected" id=799508586b672f8aea8116e787d1ae7474d86d8e9e139857e5124618f0be7a0a namespace=k8s.io Apr 16 01:53:56.157311 containerd[1462]: time="2026-04-16T01:53:56.157259038Z" level=warning msg="cleaning up after shim disconnected" id=799508586b672f8aea8116e787d1ae7474d86d8e9e139857e5124618f0be7a0a namespace=k8s.io Apr 16 01:53:56.157311 containerd[1462]: time="2026-04-16T01:53:56.157266517Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 01:53:56.159988 containerd[1462]: 
time="2026-04-16T01:53:56.159955664Z" level=info msg="StopContainer for \"1b93b10912ed92face034a1cddc4835fbda5ce29edc03a81c1c7de841e8535f8\" returns successfully" Apr 16 01:53:56.174639 containerd[1462]: time="2026-04-16T01:53:56.174557194Z" level=info msg="StopContainer for \"799508586b672f8aea8116e787d1ae7474d86d8e9e139857e5124618f0be7a0a\" returns successfully" Apr 16 01:53:56.179210 containerd[1462]: time="2026-04-16T01:53:56.179160090Z" level=info msg="StopPodSandbox for \"15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307\"" Apr 16 01:53:56.179284 containerd[1462]: time="2026-04-16T01:53:56.179227623Z" level=info msg="Container to stop \"1b93b10912ed92face034a1cddc4835fbda5ce29edc03a81c1c7de841e8535f8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 16 01:53:56.179284 containerd[1462]: time="2026-04-16T01:53:56.179238655Z" level=info msg="Container to stop \"799508586b672f8aea8116e787d1ae7474d86d8e9e139857e5124618f0be7a0a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 16 01:53:56.185700 systemd[1]: cri-containerd-15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307.scope: Deactivated successfully. 
Apr 16 01:53:56.208920 containerd[1462]: time="2026-04-16T01:53:56.208829693Z" level=info msg="shim disconnected" id=15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307 namespace=k8s.io Apr 16 01:53:56.208920 containerd[1462]: time="2026-04-16T01:53:56.208914125Z" level=warning msg="cleaning up after shim disconnected" id=15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307 namespace=k8s.io Apr 16 01:53:56.208920 containerd[1462]: time="2026-04-16T01:53:56.208921874Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 01:53:56.299960 systemd-networkd[1389]: calia096f1f7424: Link DOWN Apr 16 01:53:56.299968 systemd-networkd[1389]: calia096f1f7424: Lost carrier Apr 16 01:53:56.403446 containerd[1462]: 2026-04-16 01:53:56.297 [INFO][4938] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" Apr 16 01:53:56.403446 containerd[1462]: 2026-04-16 01:53:56.298 [INFO][4938] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" iface="eth0" netns="/var/run/netns/cni-46e5ac5a-8f16-299f-a007-90551c056ea9" Apr 16 01:53:56.403446 containerd[1462]: 2026-04-16 01:53:56.298 [INFO][4938] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" iface="eth0" netns="/var/run/netns/cni-46e5ac5a-8f16-299f-a007-90551c056ea9" Apr 16 01:53:56.403446 containerd[1462]: 2026-04-16 01:53:56.310 [INFO][4938] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" after=11.814034ms iface="eth0" netns="/var/run/netns/cni-46e5ac5a-8f16-299f-a007-90551c056ea9" Apr 16 01:53:56.403446 containerd[1462]: 2026-04-16 01:53:56.310 [INFO][4938] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" Apr 16 01:53:56.403446 containerd[1462]: 2026-04-16 01:53:56.310 [INFO][4938] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" Apr 16 01:53:56.403446 containerd[1462]: 2026-04-16 01:53:56.356 [INFO][4953] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" HandleID="k8s-pod-network.15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" Workload="localhost-k8s-whisker--7dfff6788d--dp76c-eth0" Apr 16 01:53:56.403446 containerd[1462]: 2026-04-16 01:53:56.357 [INFO][4953] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:53:56.403446 containerd[1462]: 2026-04-16 01:53:56.357 [INFO][4953] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 01:53:56.403446 containerd[1462]: 2026-04-16 01:53:56.396 [INFO][4953] ipam/ipam_plugin.go 516: Released address using handleID ContainerID="15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" HandleID="k8s-pod-network.15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" Workload="localhost-k8s-whisker--7dfff6788d--dp76c-eth0" Apr 16 01:53:56.403446 containerd[1462]: 2026-04-16 01:53:56.397 [INFO][4953] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" HandleID="k8s-pod-network.15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" Workload="localhost-k8s-whisker--7dfff6788d--dp76c-eth0" Apr 16 01:53:56.403446 containerd[1462]: 2026-04-16 01:53:56.398 [INFO][4953] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:53:56.403446 containerd[1462]: 2026-04-16 01:53:56.400 [INFO][4938] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" Apr 16 01:53:56.403918 containerd[1462]: time="2026-04-16T01:53:56.403686897Z" level=info msg="TearDown network for sandbox \"15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307\" successfully" Apr 16 01:53:56.403918 containerd[1462]: time="2026-04-16T01:53:56.403709562Z" level=info msg="StopPodSandbox for \"15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307\" returns successfully" Apr 16 01:53:56.441266 kubelet[2507]: I0416 01:53:56.441230 2507 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/682faac5-585b-4912-9c11-ca5063bb23e6-nginx-config\") pod \"682faac5-585b-4912-9c11-ca5063bb23e6\" (UID: \"682faac5-585b-4912-9c11-ca5063bb23e6\") " Apr 16 01:53:56.441266 kubelet[2507]: I0416 01:53:56.441289 2507 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsmgh\" 
(UniqueName: \"kubernetes.io/projected/682faac5-585b-4912-9c11-ca5063bb23e6-kube-api-access-jsmgh\") pod \"682faac5-585b-4912-9c11-ca5063bb23e6\" (UID: \"682faac5-585b-4912-9c11-ca5063bb23e6\") " Apr 16 01:53:56.441444 kubelet[2507]: I0416 01:53:56.441342 2507 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/682faac5-585b-4912-9c11-ca5063bb23e6-whisker-ca-bundle\") pod \"682faac5-585b-4912-9c11-ca5063bb23e6\" (UID: \"682faac5-585b-4912-9c11-ca5063bb23e6\") " Apr 16 01:53:56.441444 kubelet[2507]: I0416 01:53:56.441361 2507 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/682faac5-585b-4912-9c11-ca5063bb23e6-whisker-backend-key-pair\") pod \"682faac5-585b-4912-9c11-ca5063bb23e6\" (UID: \"682faac5-585b-4912-9c11-ca5063bb23e6\") " Apr 16 01:53:56.441665 kubelet[2507]: I0416 01:53:56.441615 2507 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/682faac5-585b-4912-9c11-ca5063bb23e6-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "682faac5-585b-4912-9c11-ca5063bb23e6" (UID: "682faac5-585b-4912-9c11-ca5063bb23e6"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 01:53:56.441964 kubelet[2507]: I0416 01:53:56.441932 2507 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/682faac5-585b-4912-9c11-ca5063bb23e6-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "682faac5-585b-4912-9c11-ca5063bb23e6" (UID: "682faac5-585b-4912-9c11-ca5063bb23e6"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 01:53:56.444638 kubelet[2507]: I0416 01:53:56.444591 2507 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/682faac5-585b-4912-9c11-ca5063bb23e6-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "682faac5-585b-4912-9c11-ca5063bb23e6" (UID: "682faac5-585b-4912-9c11-ca5063bb23e6"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 01:53:56.444759 kubelet[2507]: I0416 01:53:56.444598 2507 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/682faac5-585b-4912-9c11-ca5063bb23e6-kube-api-access-jsmgh" (OuterVolumeSpecName: "kube-api-access-jsmgh") pod "682faac5-585b-4912-9c11-ca5063bb23e6" (UID: "682faac5-585b-4912-9c11-ca5063bb23e6"). InnerVolumeSpecName "kube-api-access-jsmgh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 01:53:56.542437 kubelet[2507]: I0416 01:53:56.542314 2507 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/682faac5-585b-4912-9c11-ca5063bb23e6-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Apr 16 01:53:56.542437 kubelet[2507]: I0416 01:53:56.542375 2507 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/682faac5-585b-4912-9c11-ca5063bb23e6-nginx-config\") on node \"localhost\" DevicePath \"\"" Apr 16 01:53:56.542437 kubelet[2507]: I0416 01:53:56.542382 2507 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jsmgh\" (UniqueName: \"kubernetes.io/projected/682faac5-585b-4912-9c11-ca5063bb23e6-kube-api-access-jsmgh\") on node \"localhost\" DevicePath \"\"" Apr 16 01:53:56.542437 kubelet[2507]: I0416 01:53:56.542389 2507 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/682faac5-585b-4912-9c11-ca5063bb23e6-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Apr 16 01:53:56.678573 systemd[1]: run-containerd-runc-k8s.io-799508586b672f8aea8116e787d1ae7474d86d8e9e139857e5124618f0be7a0a-runc.JxkRaz.mount: Deactivated successfully. Apr 16 01:53:56.678740 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-799508586b672f8aea8116e787d1ae7474d86d8e9e139857e5124618f0be7a0a-rootfs.mount: Deactivated successfully. Apr 16 01:53:56.678805 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b93b10912ed92face034a1cddc4835fbda5ce29edc03a81c1c7de841e8535f8-rootfs.mount: Deactivated successfully. Apr 16 01:53:56.678914 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307-rootfs.mount: Deactivated successfully. Apr 16 01:53:56.678960 systemd[1]: run-netns-cni\x2d46e5ac5a\x2d8f16\x2d299f\x2da007\x2d90551c056ea9.mount: Deactivated successfully. Apr 16 01:53:56.678999 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307-shm.mount: Deactivated successfully. Apr 16 01:53:56.679046 systemd[1]: var-lib-kubelet-pods-682faac5\x2d585b\x2d4912\x2d9c11\x2dca5063bb23e6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djsmgh.mount: Deactivated successfully. Apr 16 01:53:56.679083 systemd[1]: var-lib-kubelet-pods-682faac5\x2d585b\x2d4912\x2d9c11\x2dca5063bb23e6-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Apr 16 01:53:56.786720 kernel: hrtimer: interrupt took 9198827 ns Apr 16 01:53:57.059658 kubelet[2507]: I0416 01:53:57.059566 2507 scope.go:117] "RemoveContainer" containerID="799508586b672f8aea8116e787d1ae7474d86d8e9e139857e5124618f0be7a0a" Apr 16 01:53:57.067967 systemd[1]: Removed slice kubepods-besteffort-pod682faac5_585b_4912_9c11_ca5063bb23e6.slice - libcontainer container kubepods-besteffort-pod682faac5_585b_4912_9c11_ca5063bb23e6.slice. Apr 16 01:53:57.070347 containerd[1462]: time="2026-04-16T01:53:57.070288035Z" level=info msg="RemoveContainer for \"799508586b672f8aea8116e787d1ae7474d86d8e9e139857e5124618f0be7a0a\"" Apr 16 01:53:57.080023 containerd[1462]: time="2026-04-16T01:53:57.079952479Z" level=info msg="RemoveContainer for \"799508586b672f8aea8116e787d1ae7474d86d8e9e139857e5124618f0be7a0a\" returns successfully" Apr 16 01:53:57.091464 kubelet[2507]: I0416 01:53:57.090976 2507 scope.go:117] "RemoveContainer" containerID="1b93b10912ed92face034a1cddc4835fbda5ce29edc03a81c1c7de841e8535f8" Apr 16 01:53:57.093146 containerd[1462]: time="2026-04-16T01:53:57.093093047Z" level=info msg="RemoveContainer for \"1b93b10912ed92face034a1cddc4835fbda5ce29edc03a81c1c7de841e8535f8\"" Apr 16 01:53:57.099368 containerd[1462]: time="2026-04-16T01:53:57.099268994Z" level=info msg="RemoveContainer for \"1b93b10912ed92face034a1cddc4835fbda5ce29edc03a81c1c7de841e8535f8\" returns successfully" Apr 16 01:53:57.101188 kubelet[2507]: I0416 01:53:57.100093 2507 scope.go:117] "RemoveContainer" containerID="799508586b672f8aea8116e787d1ae7474d86d8e9e139857e5124618f0be7a0a" Apr 16 01:53:57.124231 containerd[1462]: time="2026-04-16T01:53:57.115408221Z" level=error msg="ContainerStatus for \"799508586b672f8aea8116e787d1ae7474d86d8e9e139857e5124618f0be7a0a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"799508586b672f8aea8116e787d1ae7474d86d8e9e139857e5124618f0be7a0a\": not found" Apr 16 01:53:57.138832 kubelet[2507]: E0416 
01:53:57.138475 2507 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"799508586b672f8aea8116e787d1ae7474d86d8e9e139857e5124618f0be7a0a\": not found" containerID="799508586b672f8aea8116e787d1ae7474d86d8e9e139857e5124618f0be7a0a" Apr 16 01:53:57.143683 kubelet[2507]: I0416 01:53:57.139331 2507 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"799508586b672f8aea8116e787d1ae7474d86d8e9e139857e5124618f0be7a0a"} err="failed to get container status \"799508586b672f8aea8116e787d1ae7474d86d8e9e139857e5124618f0be7a0a\": rpc error: code = NotFound desc = an error occurred when try to find container \"799508586b672f8aea8116e787d1ae7474d86d8e9e139857e5124618f0be7a0a\": not found" Apr 16 01:53:57.143975 kubelet[2507]: I0416 01:53:57.143760 2507 scope.go:117] "RemoveContainer" containerID="1b93b10912ed92face034a1cddc4835fbda5ce29edc03a81c1c7de841e8535f8" Apr 16 01:53:57.144327 containerd[1462]: time="2026-04-16T01:53:57.144239699Z" level=error msg="ContainerStatus for \"1b93b10912ed92face034a1cddc4835fbda5ce29edc03a81c1c7de841e8535f8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1b93b10912ed92face034a1cddc4835fbda5ce29edc03a81c1c7de841e8535f8\": not found" Apr 16 01:53:57.145010 kubelet[2507]: E0416 01:53:57.144513 2507 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1b93b10912ed92face034a1cddc4835fbda5ce29edc03a81c1c7de841e8535f8\": not found" containerID="1b93b10912ed92face034a1cddc4835fbda5ce29edc03a81c1c7de841e8535f8" Apr 16 01:53:57.145010 kubelet[2507]: I0416 01:53:57.144597 2507 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1b93b10912ed92face034a1cddc4835fbda5ce29edc03a81c1c7de841e8535f8"} err="failed to get container status 
\"1b93b10912ed92face034a1cddc4835fbda5ce29edc03a81c1c7de841e8535f8\": rpc error: code = NotFound desc = an error occurred when try to find container \"1b93b10912ed92face034a1cddc4835fbda5ce29edc03a81c1c7de841e8535f8\": not found" Apr 16 01:53:57.145010 kubelet[2507]: I0416 01:53:57.144627 2507 scope.go:117] "RemoveContainer" containerID="799508586b672f8aea8116e787d1ae7474d86d8e9e139857e5124618f0be7a0a" Apr 16 01:53:57.145530 containerd[1462]: time="2026-04-16T01:53:57.145441010Z" level=error msg="ContainerStatus for \"799508586b672f8aea8116e787d1ae7474d86d8e9e139857e5124618f0be7a0a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"799508586b672f8aea8116e787d1ae7474d86d8e9e139857e5124618f0be7a0a\": not found" Apr 16 01:53:57.148884 kubelet[2507]: I0416 01:53:57.145732 2507 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"799508586b672f8aea8116e787d1ae7474d86d8e9e139857e5124618f0be7a0a"} err="failed to get container status \"799508586b672f8aea8116e787d1ae7474d86d8e9e139857e5124618f0be7a0a\": rpc error: code = NotFound desc = an error occurred when try to find container \"799508586b672f8aea8116e787d1ae7474d86d8e9e139857e5124618f0be7a0a\": not found" Apr 16 01:53:57.148884 kubelet[2507]: I0416 01:53:57.145775 2507 scope.go:117] "RemoveContainer" containerID="1b93b10912ed92face034a1cddc4835fbda5ce29edc03a81c1c7de841e8535f8" Apr 16 01:53:57.148884 kubelet[2507]: I0416 01:53:57.146486 2507 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1b93b10912ed92face034a1cddc4835fbda5ce29edc03a81c1c7de841e8535f8"} err="failed to get container status \"1b93b10912ed92face034a1cddc4835fbda5ce29edc03a81c1c7de841e8535f8\": rpc error: code = NotFound desc = an error occurred when try to find container \"1b93b10912ed92face034a1cddc4835fbda5ce29edc03a81c1c7de841e8535f8\": not found" Apr 16 01:53:57.149067 containerd[1462]: 
time="2026-04-16T01:53:57.146256815Z" level=error msg="ContainerStatus for \"1b93b10912ed92face034a1cddc4835fbda5ce29edc03a81c1c7de841e8535f8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1b93b10912ed92face034a1cddc4835fbda5ce29edc03a81c1c7de841e8535f8\": not found" Apr 16 01:53:57.196218 systemd[1]: Created slice kubepods-besteffort-pod25444337_e38d_472c_ae9d_ff23611f4855.slice - libcontainer container kubepods-besteffort-pod25444337_e38d_472c_ae9d_ff23611f4855.slice. Apr 16 01:53:57.258394 kubelet[2507]: I0416 01:53:57.258211 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/25444337-e38d-472c-ae9d-ff23611f4855-whisker-backend-key-pair\") pod \"whisker-8597c9fcf7-gcz2r\" (UID: \"25444337-e38d-472c-ae9d-ff23611f4855\") " pod="calico-system/whisker-8597c9fcf7-gcz2r" Apr 16 01:53:57.258394 kubelet[2507]: I0416 01:53:57.258261 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25444337-e38d-472c-ae9d-ff23611f4855-whisker-ca-bundle\") pod \"whisker-8597c9fcf7-gcz2r\" (UID: \"25444337-e38d-472c-ae9d-ff23611f4855\") " pod="calico-system/whisker-8597c9fcf7-gcz2r" Apr 16 01:53:57.258394 kubelet[2507]: I0416 01:53:57.258280 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvj6r\" (UniqueName: \"kubernetes.io/projected/25444337-e38d-472c-ae9d-ff23611f4855-kube-api-access-kvj6r\") pod \"whisker-8597c9fcf7-gcz2r\" (UID: \"25444337-e38d-472c-ae9d-ff23611f4855\") " pod="calico-system/whisker-8597c9fcf7-gcz2r" Apr 16 01:53:57.258394 kubelet[2507]: I0416 01:53:57.258305 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: 
\"kubernetes.io/configmap/25444337-e38d-472c-ae9d-ff23611f4855-nginx-config\") pod \"whisker-8597c9fcf7-gcz2r\" (UID: \"25444337-e38d-472c-ae9d-ff23611f4855\") " pod="calico-system/whisker-8597c9fcf7-gcz2r" Apr 16 01:53:57.501994 containerd[1462]: time="2026-04-16T01:53:57.501900792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8597c9fcf7-gcz2r,Uid:25444337-e38d-472c-ae9d-ff23611f4855,Namespace:calico-system,Attempt:0,}" Apr 16 01:53:57.663786 systemd-networkd[1389]: calie9cffae1627: Link UP Apr 16 01:53:57.664489 systemd-networkd[1389]: calie9cffae1627: Gained carrier Apr 16 01:53:57.681410 containerd[1462]: 2026-04-16 01:53:57.596 [INFO][4982] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--8597c9fcf7--gcz2r-eth0 whisker-8597c9fcf7- calico-system 25444337-e38d-472c-ae9d-ff23611f4855 1108 0 2026-04-16 01:53:57 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:8597c9fcf7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-8597c9fcf7-gcz2r eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie9cffae1627 [] [] }} ContainerID="ffad2d547ea5d4521deade27a6bf488d7d367527dbd96a162da92416b1a0c9c2" Namespace="calico-system" Pod="whisker-8597c9fcf7-gcz2r" WorkloadEndpoint="localhost-k8s-whisker--8597c9fcf7--gcz2r-" Apr 16 01:53:57.681410 containerd[1462]: 2026-04-16 01:53:57.596 [INFO][4982] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ffad2d547ea5d4521deade27a6bf488d7d367527dbd96a162da92416b1a0c9c2" Namespace="calico-system" Pod="whisker-8597c9fcf7-gcz2r" WorkloadEndpoint="localhost-k8s-whisker--8597c9fcf7--gcz2r-eth0" Apr 16 01:53:57.681410 containerd[1462]: 2026-04-16 01:53:57.622 [INFO][4997] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="ffad2d547ea5d4521deade27a6bf488d7d367527dbd96a162da92416b1a0c9c2" HandleID="k8s-pod-network.ffad2d547ea5d4521deade27a6bf488d7d367527dbd96a162da92416b1a0c9c2" Workload="localhost-k8s-whisker--8597c9fcf7--gcz2r-eth0" Apr 16 01:53:57.681410 containerd[1462]: 2026-04-16 01:53:57.632 [INFO][4997] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ffad2d547ea5d4521deade27a6bf488d7d367527dbd96a162da92416b1a0c9c2" HandleID="k8s-pod-network.ffad2d547ea5d4521deade27a6bf488d7d367527dbd96a162da92416b1a0c9c2" Workload="localhost-k8s-whisker--8597c9fcf7--gcz2r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fbb0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-8597c9fcf7-gcz2r", "timestamp":"2026-04-16 01:53:57.622462019 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001926e0)} Apr 16 01:53:57.681410 containerd[1462]: 2026-04-16 01:53:57.632 [INFO][4997] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:53:57.681410 containerd[1462]: 2026-04-16 01:53:57.632 [INFO][4997] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 01:53:57.681410 containerd[1462]: 2026-04-16 01:53:57.632 [INFO][4997] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 01:53:57.681410 containerd[1462]: 2026-04-16 01:53:57.635 [INFO][4997] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ffad2d547ea5d4521deade27a6bf488d7d367527dbd96a162da92416b1a0c9c2" host="localhost" Apr 16 01:53:57.681410 containerd[1462]: 2026-04-16 01:53:57.638 [INFO][4997] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 01:53:57.681410 containerd[1462]: 2026-04-16 01:53:57.642 [INFO][4997] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 01:53:57.681410 containerd[1462]: 2026-04-16 01:53:57.644 [INFO][4997] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 01:53:57.681410 containerd[1462]: 2026-04-16 01:53:57.646 [INFO][4997] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 01:53:57.681410 containerd[1462]: 2026-04-16 01:53:57.646 [INFO][4997] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ffad2d547ea5d4521deade27a6bf488d7d367527dbd96a162da92416b1a0c9c2" host="localhost" Apr 16 01:53:57.681410 containerd[1462]: 2026-04-16 01:53:57.647 [INFO][4997] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ffad2d547ea5d4521deade27a6bf488d7d367527dbd96a162da92416b1a0c9c2 Apr 16 01:53:57.681410 containerd[1462]: 2026-04-16 01:53:57.653 [INFO][4997] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ffad2d547ea5d4521deade27a6bf488d7d367527dbd96a162da92416b1a0c9c2" host="localhost" Apr 16 01:53:57.681410 containerd[1462]: 2026-04-16 01:53:57.659 [INFO][4997] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 
handle="k8s-pod-network.ffad2d547ea5d4521deade27a6bf488d7d367527dbd96a162da92416b1a0c9c2" host="localhost" Apr 16 01:53:57.681410 containerd[1462]: 2026-04-16 01:53:57.659 [INFO][4997] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.ffad2d547ea5d4521deade27a6bf488d7d367527dbd96a162da92416b1a0c9c2" host="localhost" Apr 16 01:53:57.681410 containerd[1462]: 2026-04-16 01:53:57.659 [INFO][4997] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:53:57.681410 containerd[1462]: 2026-04-16 01:53:57.659 [INFO][4997] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="ffad2d547ea5d4521deade27a6bf488d7d367527dbd96a162da92416b1a0c9c2" HandleID="k8s-pod-network.ffad2d547ea5d4521deade27a6bf488d7d367527dbd96a162da92416b1a0c9c2" Workload="localhost-k8s-whisker--8597c9fcf7--gcz2r-eth0" Apr 16 01:53:57.681982 containerd[1462]: 2026-04-16 01:53:57.661 [INFO][4982] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ffad2d547ea5d4521deade27a6bf488d7d367527dbd96a162da92416b1a0c9c2" Namespace="calico-system" Pod="whisker-8597c9fcf7-gcz2r" WorkloadEndpoint="localhost-k8s-whisker--8597c9fcf7--gcz2r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--8597c9fcf7--gcz2r-eth0", GenerateName:"whisker-8597c9fcf7-", Namespace:"calico-system", SelfLink:"", UID:"25444337-e38d-472c-ae9d-ff23611f4855", ResourceVersion:"1108", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 53, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8597c9fcf7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-8597c9fcf7-gcz2r", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie9cffae1627", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:53:57.681982 containerd[1462]: 2026-04-16 01:53:57.661 [INFO][4982] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="ffad2d547ea5d4521deade27a6bf488d7d367527dbd96a162da92416b1a0c9c2" Namespace="calico-system" Pod="whisker-8597c9fcf7-gcz2r" WorkloadEndpoint="localhost-k8s-whisker--8597c9fcf7--gcz2r-eth0" Apr 16 01:53:57.681982 containerd[1462]: 2026-04-16 01:53:57.662 [INFO][4982] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie9cffae1627 ContainerID="ffad2d547ea5d4521deade27a6bf488d7d367527dbd96a162da92416b1a0c9c2" Namespace="calico-system" Pod="whisker-8597c9fcf7-gcz2r" WorkloadEndpoint="localhost-k8s-whisker--8597c9fcf7--gcz2r-eth0" Apr 16 01:53:57.681982 containerd[1462]: 2026-04-16 01:53:57.667 [INFO][4982] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ffad2d547ea5d4521deade27a6bf488d7d367527dbd96a162da92416b1a0c9c2" Namespace="calico-system" Pod="whisker-8597c9fcf7-gcz2r" WorkloadEndpoint="localhost-k8s-whisker--8597c9fcf7--gcz2r-eth0" Apr 16 01:53:57.681982 containerd[1462]: 2026-04-16 01:53:57.667 [INFO][4982] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ffad2d547ea5d4521deade27a6bf488d7d367527dbd96a162da92416b1a0c9c2" Namespace="calico-system" Pod="whisker-8597c9fcf7-gcz2r" 
WorkloadEndpoint="localhost-k8s-whisker--8597c9fcf7--gcz2r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--8597c9fcf7--gcz2r-eth0", GenerateName:"whisker-8597c9fcf7-", Namespace:"calico-system", SelfLink:"", UID:"25444337-e38d-472c-ae9d-ff23611f4855", ResourceVersion:"1108", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 53, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8597c9fcf7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ffad2d547ea5d4521deade27a6bf488d7d367527dbd96a162da92416b1a0c9c2", Pod:"whisker-8597c9fcf7-gcz2r", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie9cffae1627", MAC:"d6:84:12:1c:d4:36", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:53:57.681982 containerd[1462]: 2026-04-16 01:53:57.678 [INFO][4982] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ffad2d547ea5d4521deade27a6bf488d7d367527dbd96a162da92416b1a0c9c2" Namespace="calico-system" Pod="whisker-8597c9fcf7-gcz2r" WorkloadEndpoint="localhost-k8s-whisker--8597c9fcf7--gcz2r-eth0" Apr 16 01:53:57.711728 containerd[1462]: time="2026-04-16T01:53:57.711482161Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:53:57.711728 containerd[1462]: time="2026-04-16T01:53:57.711517109Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:53:57.711728 containerd[1462]: time="2026-04-16T01:53:57.711528401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:53:57.711728 containerd[1462]: time="2026-04-16T01:53:57.711594136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:53:57.737058 systemd[1]: Started cri-containerd-ffad2d547ea5d4521deade27a6bf488d7d367527dbd96a162da92416b1a0c9c2.scope - libcontainer container ffad2d547ea5d4521deade27a6bf488d7d367527dbd96a162da92416b1a0c9c2. Apr 16 01:53:57.748227 systemd-resolved[1391]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 01:53:57.779831 containerd[1462]: time="2026-04-16T01:53:57.779668067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8597c9fcf7-gcz2r,Uid:25444337-e38d-472c-ae9d-ff23611f4855,Namespace:calico-system,Attempt:0,} returns sandbox id \"ffad2d547ea5d4521deade27a6bf488d7d367527dbd96a162da92416b1a0c9c2\"" Apr 16 01:53:57.789142 containerd[1462]: time="2026-04-16T01:53:57.789077752Z" level=info msg="CreateContainer within sandbox \"ffad2d547ea5d4521deade27a6bf488d7d367527dbd96a162da92416b1a0c9c2\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 16 01:53:57.805240 containerd[1462]: time="2026-04-16T01:53:57.805173489Z" level=info msg="CreateContainer within sandbox \"ffad2d547ea5d4521deade27a6bf488d7d367527dbd96a162da92416b1a0c9c2\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id 
\"89ff5e0b855b6e54caf3e4a29c2e5b0ba9a3bb3618de1c672fd6c55837a66f5b\"" Apr 16 01:53:57.807124 containerd[1462]: time="2026-04-16T01:53:57.807038297Z" level=info msg="StartContainer for \"89ff5e0b855b6e54caf3e4a29c2e5b0ba9a3bb3618de1c672fd6c55837a66f5b\"" Apr 16 01:53:57.840022 systemd[1]: Started cri-containerd-89ff5e0b855b6e54caf3e4a29c2e5b0ba9a3bb3618de1c672fd6c55837a66f5b.scope - libcontainer container 89ff5e0b855b6e54caf3e4a29c2e5b0ba9a3bb3618de1c672fd6c55837a66f5b. Apr 16 01:53:57.862747 kubelet[2507]: I0416 01:53:57.862509 2507 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="682faac5-585b-4912-9c11-ca5063bb23e6" path="/var/lib/kubelet/pods/682faac5-585b-4912-9c11-ca5063bb23e6/volumes" Apr 16 01:53:57.881045 containerd[1462]: time="2026-04-16T01:53:57.880993931Z" level=info msg="StartContainer for \"89ff5e0b855b6e54caf3e4a29c2e5b0ba9a3bb3618de1c672fd6c55837a66f5b\" returns successfully" Apr 16 01:53:57.886345 containerd[1462]: time="2026-04-16T01:53:57.886113294Z" level=info msg="CreateContainer within sandbox \"ffad2d547ea5d4521deade27a6bf488d7d367527dbd96a162da92416b1a0c9c2\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 16 01:53:57.901193 containerd[1462]: time="2026-04-16T01:53:57.901164214Z" level=info msg="CreateContainer within sandbox \"ffad2d547ea5d4521deade27a6bf488d7d367527dbd96a162da92416b1a0c9c2\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"1f27a4bb26deb5b44a66f3b686706e0c274701c593f7ffa4d776a8d8ceaf9425\"" Apr 16 01:53:57.901914 containerd[1462]: time="2026-04-16T01:53:57.901799954Z" level=info msg="StartContainer for \"1f27a4bb26deb5b44a66f3b686706e0c274701c593f7ffa4d776a8d8ceaf9425\"" Apr 16 01:53:57.933211 systemd[1]: Started cri-containerd-1f27a4bb26deb5b44a66f3b686706e0c274701c593f7ffa4d776a8d8ceaf9425.scope - libcontainer container 1f27a4bb26deb5b44a66f3b686706e0c274701c593f7ffa4d776a8d8ceaf9425. 
Apr 16 01:53:57.942472 containerd[1462]: time="2026-04-16T01:53:57.942431164Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:53:57.943303 containerd[1462]: time="2026-04-16T01:53:57.943250984Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 16 01:53:57.944750 containerd[1462]: time="2026-04-16T01:53:57.944707363Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:53:57.947142 containerd[1462]: time="2026-04-16T01:53:57.947098480Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:53:57.948198 containerd[1462]: time="2026-04-16T01:53:57.948155681Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 2.078812462s" Apr 16 01:53:57.948253 containerd[1462]: time="2026-04-16T01:53:57.948226438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 16 01:53:57.952937 containerd[1462]: time="2026-04-16T01:53:57.952798369Z" level=info msg="CreateContainer within sandbox \"9594c70462efeb29726b5a2486166195861da5c87c1914e973e826c078ff9edc\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 16 01:53:57.973541 containerd[1462]: time="2026-04-16T01:53:57.973508826Z" level=info msg="CreateContainer within sandbox \"9594c70462efeb29726b5a2486166195861da5c87c1914e973e826c078ff9edc\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"749a69bdbc305bbb89306c0c138095dcb2a86925cdbf6365f67845ce3121a84e\"" Apr 16 01:53:57.975189 containerd[1462]: time="2026-04-16T01:53:57.975144817Z" level=info msg="StartContainer for \"749a69bdbc305bbb89306c0c138095dcb2a86925cdbf6365f67845ce3121a84e\"" Apr 16 01:53:57.976556 containerd[1462]: time="2026-04-16T01:53:57.976487792Z" level=info msg="StartContainer for \"1f27a4bb26deb5b44a66f3b686706e0c274701c593f7ffa4d776a8d8ceaf9425\" returns successfully" Apr 16 01:53:58.006028 systemd[1]: Started cri-containerd-749a69bdbc305bbb89306c0c138095dcb2a86925cdbf6365f67845ce3121a84e.scope - libcontainer container 749a69bdbc305bbb89306c0c138095dcb2a86925cdbf6365f67845ce3121a84e. 
Apr 16 01:53:58.050615 containerd[1462]: time="2026-04-16T01:53:58.050017936Z" level=info msg="StartContainer for \"749a69bdbc305bbb89306c0c138095dcb2a86925cdbf6365f67845ce3121a84e\" returns successfully" Apr 16 01:53:58.076940 kubelet[2507]: I0416 01:53:58.076766 2507 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-8597c9fcf7-gcz2r" podStartSLOduration=1.076750387 podStartE2EDuration="1.076750387s" podCreationTimestamp="2026-04-16 01:53:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 01:53:58.075608064 +0000 UTC m=+52.321307703" watchObservedRunningTime="2026-04-16 01:53:58.076750387 +0000 UTC m=+52.322450025" Apr 16 01:53:58.986085 kubelet[2507]: I0416 01:53:58.986013 2507 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 16 01:53:58.986987 kubelet[2507]: I0416 01:53:58.986953 2507 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 16 01:53:59.362245 systemd-networkd[1389]: calie9cffae1627: Gained IPv6LL Apr 16 01:54:05.843949 containerd[1462]: time="2026-04-16T01:54:05.843585271Z" level=info msg="StopPodSandbox for \"15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307\"" Apr 16 01:54:05.957490 containerd[1462]: 2026-04-16 01:54:05.902 [WARNING][5215] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" WorkloadEndpoint="localhost-k8s-whisker--7dfff6788d--dp76c-eth0" Apr 16 01:54:05.957490 containerd[1462]: 2026-04-16 01:54:05.902 [INFO][5215] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" Apr 16 01:54:05.957490 containerd[1462]: 2026-04-16 01:54:05.902 [INFO][5215] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" iface="eth0" netns="" Apr 16 01:54:05.957490 containerd[1462]: 2026-04-16 01:54:05.902 [INFO][5215] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" Apr 16 01:54:05.957490 containerd[1462]: 2026-04-16 01:54:05.902 [INFO][5215] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" Apr 16 01:54:05.957490 containerd[1462]: 2026-04-16 01:54:05.940 [INFO][5225] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" HandleID="k8s-pod-network.15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" Workload="localhost-k8s-whisker--7dfff6788d--dp76c-eth0" Apr 16 01:54:05.957490 containerd[1462]: 2026-04-16 01:54:05.940 [INFO][5225] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:54:05.957490 containerd[1462]: 2026-04-16 01:54:05.940 [INFO][5225] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:54:05.957490 containerd[1462]: 2026-04-16 01:54:05.949 [WARNING][5225] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" HandleID="k8s-pod-network.15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" Workload="localhost-k8s-whisker--7dfff6788d--dp76c-eth0" Apr 16 01:54:05.957490 containerd[1462]: 2026-04-16 01:54:05.949 [INFO][5225] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" HandleID="k8s-pod-network.15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" Workload="localhost-k8s-whisker--7dfff6788d--dp76c-eth0" Apr 16 01:54:05.957490 containerd[1462]: 2026-04-16 01:54:05.953 [INFO][5225] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:54:05.957490 containerd[1462]: 2026-04-16 01:54:05.955 [INFO][5215] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" Apr 16 01:54:05.958289 containerd[1462]: time="2026-04-16T01:54:05.957552694Z" level=info msg="TearDown network for sandbox \"15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307\" successfully" Apr 16 01:54:05.958289 containerd[1462]: time="2026-04-16T01:54:05.957586560Z" level=info msg="StopPodSandbox for \"15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307\" returns successfully" Apr 16 01:54:05.958601 containerd[1462]: time="2026-04-16T01:54:05.958517125Z" level=info msg="RemovePodSandbox for \"15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307\"" Apr 16 01:54:05.958601 containerd[1462]: time="2026-04-16T01:54:05.958555063Z" level=info msg="Forcibly stopping sandbox \"15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307\"" Apr 16 01:54:06.067643 containerd[1462]: 2026-04-16 01:54:06.012 [WARNING][5242] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" WorkloadEndpoint="localhost-k8s-whisker--7dfff6788d--dp76c-eth0" Apr 16 01:54:06.067643 containerd[1462]: 2026-04-16 01:54:06.013 [INFO][5242] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" Apr 16 01:54:06.067643 containerd[1462]: 2026-04-16 01:54:06.013 [INFO][5242] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" iface="eth0" netns="" Apr 16 01:54:06.067643 containerd[1462]: 2026-04-16 01:54:06.013 [INFO][5242] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" Apr 16 01:54:06.067643 containerd[1462]: 2026-04-16 01:54:06.013 [INFO][5242] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" Apr 16 01:54:06.067643 containerd[1462]: 2026-04-16 01:54:06.049 [INFO][5250] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" HandleID="k8s-pod-network.15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" Workload="localhost-k8s-whisker--7dfff6788d--dp76c-eth0" Apr 16 01:54:06.067643 containerd[1462]: 2026-04-16 01:54:06.049 [INFO][5250] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:54:06.067643 containerd[1462]: 2026-04-16 01:54:06.050 [INFO][5250] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:54:06.067643 containerd[1462]: 2026-04-16 01:54:06.060 [WARNING][5250] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" HandleID="k8s-pod-network.15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" Workload="localhost-k8s-whisker--7dfff6788d--dp76c-eth0" Apr 16 01:54:06.067643 containerd[1462]: 2026-04-16 01:54:06.060 [INFO][5250] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" HandleID="k8s-pod-network.15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" Workload="localhost-k8s-whisker--7dfff6788d--dp76c-eth0" Apr 16 01:54:06.067643 containerd[1462]: 2026-04-16 01:54:06.062 [INFO][5250] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:54:06.067643 containerd[1462]: 2026-04-16 01:54:06.065 [INFO][5242] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307" Apr 16 01:54:06.068276 containerd[1462]: time="2026-04-16T01:54:06.067640110Z" level=info msg="TearDown network for sandbox \"15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307\" successfully" Apr 16 01:54:06.076328 containerd[1462]: time="2026-04-16T01:54:06.076227427Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 16 01:54:06.076511 containerd[1462]: time="2026-04-16T01:54:06.076377788Z" level=info msg="RemovePodSandbox \"15c1c0d638112cfbe7472e00f92eb6c84c935d9b722b582a6835b90b6c433307\" returns successfully" Apr 16 01:54:10.853556 kubelet[2507]: I0416 01:54:10.853378 2507 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 01:54:10.965753 kubelet[2507]: I0416 01:54:10.965611 2507 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-ns4nx" podStartSLOduration=31.315177346 podStartE2EDuration="48.965598152s" podCreationTimestamp="2026-04-16 01:53:22 +0000 UTC" firstStartedPulling="2026-04-16 01:53:40.298594906 +0000 UTC m=+34.544294533" lastFinishedPulling="2026-04-16 01:53:57.949015708 +0000 UTC m=+52.194715339" observedRunningTime="2026-04-16 01:53:58.087727117 +0000 UTC m=+52.333426755" watchObservedRunningTime="2026-04-16 01:54:10.965598152 +0000 UTC m=+65.211297790" Apr 16 01:54:11.391662 systemd[1]: Started sshd@7-10.0.0.136:22-10.0.0.1:45384.service - OpenSSH per-connection server daemon (10.0.0.1:45384). Apr 16 01:54:11.446080 sshd[5302]: Accepted publickey for core from 10.0.0.1 port 45384 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:54:11.448029 sshd[5302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:54:11.452914 systemd-logind[1449]: New session 8 of user core. Apr 16 01:54:11.462308 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 16 01:54:11.973327 sshd[5302]: pam_unix(sshd:session): session closed for user core Apr 16 01:54:11.977459 systemd[1]: sshd@7-10.0.0.136:22-10.0.0.1:45384.service: Deactivated successfully. Apr 16 01:54:11.979551 systemd[1]: session-8.scope: Deactivated successfully. Apr 16 01:54:11.980337 systemd-logind[1449]: Session 8 logged out. Waiting for processes to exit. Apr 16 01:54:11.981366 systemd-logind[1449]: Removed session 8. 
Apr 16 01:54:16.998580 systemd[1]: Started sshd@8-10.0.0.136:22-10.0.0.1:45390.service - OpenSSH per-connection server daemon (10.0.0.1:45390). Apr 16 01:54:17.076532 sshd[5365]: Accepted publickey for core from 10.0.0.1 port 45390 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:54:17.078697 sshd[5365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:54:17.084586 systemd-logind[1449]: New session 9 of user core. Apr 16 01:54:17.092196 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 16 01:54:17.352491 sshd[5365]: pam_unix(sshd:session): session closed for user core Apr 16 01:54:17.357809 systemd[1]: sshd@8-10.0.0.136:22-10.0.0.1:45390.service: Deactivated successfully. Apr 16 01:54:17.359537 systemd[1]: session-9.scope: Deactivated successfully. Apr 16 01:54:17.360330 systemd-logind[1449]: Session 9 logged out. Waiting for processes to exit. Apr 16 01:54:17.362214 systemd-logind[1449]: Removed session 9. Apr 16 01:54:22.365668 systemd[1]: Started sshd@9-10.0.0.136:22-10.0.0.1:40408.service - OpenSSH per-connection server daemon (10.0.0.1:40408). Apr 16 01:54:22.467726 sshd[5416]: Accepted publickey for core from 10.0.0.1 port 40408 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:54:22.470227 sshd[5416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:54:22.474984 systemd-logind[1449]: New session 10 of user core. Apr 16 01:54:22.479054 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 16 01:54:22.619128 sshd[5416]: pam_unix(sshd:session): session closed for user core Apr 16 01:54:22.623875 systemd[1]: sshd@9-10.0.0.136:22-10.0.0.1:40408.service: Deactivated successfully. Apr 16 01:54:22.626478 systemd[1]: session-10.scope: Deactivated successfully. Apr 16 01:54:22.627729 systemd-logind[1449]: Session 10 logged out. Waiting for processes to exit. Apr 16 01:54:22.629419 systemd-logind[1449]: Removed session 10. 
Apr 16 01:54:22.696221 kubelet[2507]: I0416 01:54:22.696116 2507 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 01:54:25.871529 kubelet[2507]: E0416 01:54:25.871356 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:54:27.638550 systemd[1]: Started sshd@10-10.0.0.136:22-10.0.0.1:40422.service - OpenSSH per-connection server daemon (10.0.0.1:40422). Apr 16 01:54:27.686762 sshd[5454]: Accepted publickey for core from 10.0.0.1 port 40422 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:54:27.689581 sshd[5454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:54:27.697468 systemd-logind[1449]: New session 11 of user core. Apr 16 01:54:27.710565 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 16 01:54:27.871384 sshd[5454]: pam_unix(sshd:session): session closed for user core Apr 16 01:54:27.875208 systemd[1]: sshd@10-10.0.0.136:22-10.0.0.1:40422.service: Deactivated successfully. Apr 16 01:54:27.877190 systemd[1]: session-11.scope: Deactivated successfully. Apr 16 01:54:27.878009 systemd-logind[1449]: Session 11 logged out. Waiting for processes to exit. Apr 16 01:54:27.879927 systemd-logind[1449]: Removed session 11. Apr 16 01:54:32.861030 kubelet[2507]: E0416 01:54:32.860941 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:54:32.886583 systemd[1]: Started sshd@11-10.0.0.136:22-10.0.0.1:49584.service - OpenSSH per-connection server daemon (10.0.0.1:49584). 
Apr 16 01:54:32.926785 sshd[5469]: Accepted publickey for core from 10.0.0.1 port 49584 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:54:32.929437 sshd[5469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:54:32.935745 systemd-logind[1449]: New session 12 of user core. Apr 16 01:54:32.946279 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 16 01:54:33.064635 sshd[5469]: pam_unix(sshd:session): session closed for user core Apr 16 01:54:33.068647 systemd[1]: sshd@11-10.0.0.136:22-10.0.0.1:49584.service: Deactivated successfully. Apr 16 01:54:33.070346 systemd[1]: session-12.scope: Deactivated successfully. Apr 16 01:54:33.071263 systemd-logind[1449]: Session 12 logged out. Waiting for processes to exit. Apr 16 01:54:33.073146 systemd-logind[1449]: Removed session 12. Apr 16 01:54:38.080535 systemd[1]: Started sshd@12-10.0.0.136:22-10.0.0.1:49590.service - OpenSSH per-connection server daemon (10.0.0.1:49590). Apr 16 01:54:38.118543 sshd[5490]: Accepted publickey for core from 10.0.0.1 port 49590 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:54:38.119985 sshd[5490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:54:38.125213 systemd-logind[1449]: New session 13 of user core. Apr 16 01:54:38.132204 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 16 01:54:38.251033 sshd[5490]: pam_unix(sshd:session): session closed for user core Apr 16 01:54:38.254827 systemd[1]: sshd@12-10.0.0.136:22-10.0.0.1:49590.service: Deactivated successfully. Apr 16 01:54:38.257279 systemd[1]: session-13.scope: Deactivated successfully. Apr 16 01:54:38.258291 systemd-logind[1449]: Session 13 logged out. Waiting for processes to exit. Apr 16 01:54:38.259738 systemd-logind[1449]: Removed session 13. 
Apr 16 01:54:40.861021 kubelet[2507]: E0416 01:54:40.860953 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:54:43.295935 systemd[1]: Started sshd@13-10.0.0.136:22-10.0.0.1:41970.service - OpenSSH per-connection server daemon (10.0.0.1:41970). Apr 16 01:54:43.364440 sshd[5524]: Accepted publickey for core from 10.0.0.1 port 41970 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:54:43.366190 sshd[5524]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:54:43.373203 systemd-logind[1449]: New session 14 of user core. Apr 16 01:54:43.380430 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 16 01:54:43.580309 sshd[5524]: pam_unix(sshd:session): session closed for user core Apr 16 01:54:43.585570 systemd[1]: sshd@13-10.0.0.136:22-10.0.0.1:41970.service: Deactivated successfully. Apr 16 01:54:43.588521 systemd[1]: session-14.scope: Deactivated successfully. Apr 16 01:54:43.591607 systemd-logind[1449]: Session 14 logged out. Waiting for processes to exit. Apr 16 01:54:43.593636 systemd-logind[1449]: Removed session 14. Apr 16 01:54:44.861597 kubelet[2507]: E0416 01:54:44.861507 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:54:47.865333 kubelet[2507]: E0416 01:54:47.860500 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:54:48.591934 systemd[1]: Started sshd@14-10.0.0.136:22-10.0.0.1:41974.service - OpenSSH per-connection server daemon (10.0.0.1:41974). 
Apr 16 01:54:48.649445 sshd[5591]: Accepted publickey for core from 10.0.0.1 port 41974 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:54:48.651754 sshd[5591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:54:48.659240 systemd-logind[1449]: New session 15 of user core. Apr 16 01:54:48.670146 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 16 01:54:48.832755 sshd[5591]: pam_unix(sshd:session): session closed for user core Apr 16 01:54:48.837576 systemd[1]: sshd@14-10.0.0.136:22-10.0.0.1:41974.service: Deactivated successfully. Apr 16 01:54:48.839343 systemd[1]: session-15.scope: Deactivated successfully. Apr 16 01:54:48.841013 systemd-logind[1449]: Session 15 logged out. Waiting for processes to exit. Apr 16 01:54:48.841981 systemd-logind[1449]: Removed session 15. Apr 16 01:54:53.860807 systemd[1]: Started sshd@15-10.0.0.136:22-10.0.0.1:38902.service - OpenSSH per-connection server daemon (10.0.0.1:38902). Apr 16 01:54:53.934669 sshd[5606]: Accepted publickey for core from 10.0.0.1 port 38902 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:54:53.937057 sshd[5606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:54:53.944085 systemd-logind[1449]: New session 16 of user core. Apr 16 01:54:53.949084 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 16 01:54:54.136361 sshd[5606]: pam_unix(sshd:session): session closed for user core Apr 16 01:54:54.141830 systemd[1]: sshd@15-10.0.0.136:22-10.0.0.1:38902.service: Deactivated successfully. Apr 16 01:54:54.144194 systemd[1]: session-16.scope: Deactivated successfully. Apr 16 01:54:54.145088 systemd-logind[1449]: Session 16 logged out. Waiting for processes to exit. Apr 16 01:54:54.146353 systemd-logind[1449]: Removed session 16. 
Apr 16 01:54:54.861207 kubelet[2507]: E0416 01:54:54.861056 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:54:59.150281 systemd[1]: Started sshd@16-10.0.0.136:22-10.0.0.1:38912.service - OpenSSH per-connection server daemon (10.0.0.1:38912). Apr 16 01:54:59.194457 sshd[5621]: Accepted publickey for core from 10.0.0.1 port 38912 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:54:59.195984 sshd[5621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:54:59.201195 systemd-logind[1449]: New session 17 of user core. Apr 16 01:54:59.214081 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 16 01:54:59.349270 sshd[5621]: pam_unix(sshd:session): session closed for user core Apr 16 01:54:59.351587 systemd[1]: sshd@16-10.0.0.136:22-10.0.0.1:38912.service: Deactivated successfully. Apr 16 01:54:59.353461 systemd[1]: session-17.scope: Deactivated successfully. Apr 16 01:54:59.354878 systemd-logind[1449]: Session 17 logged out. Waiting for processes to exit. Apr 16 01:54:59.356121 systemd-logind[1449]: Removed session 17. Apr 16 01:55:02.876018 kubelet[2507]: E0416 01:55:02.875928 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:55:04.361516 systemd[1]: Started sshd@17-10.0.0.136:22-10.0.0.1:57568.service - OpenSSH per-connection server daemon (10.0.0.1:57568). Apr 16 01:55:04.398642 sshd[5646]: Accepted publickey for core from 10.0.0.1 port 57568 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:55:04.399925 sshd[5646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:55:04.404600 systemd-logind[1449]: New session 18 of user core. 
Apr 16 01:55:04.414141 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 16 01:55:04.562525 sshd[5646]: pam_unix(sshd:session): session closed for user core Apr 16 01:55:04.567113 systemd[1]: sshd@17-10.0.0.136:22-10.0.0.1:57568.service: Deactivated successfully. Apr 16 01:55:04.569071 systemd[1]: session-18.scope: Deactivated successfully. Apr 16 01:55:04.569955 systemd-logind[1449]: Session 18 logged out. Waiting for processes to exit. Apr 16 01:55:04.572214 systemd-logind[1449]: Removed session 18. Apr 16 01:55:09.573744 systemd[1]: Started sshd@18-10.0.0.136:22-10.0.0.1:39826.service - OpenSSH per-connection server daemon (10.0.0.1:39826). Apr 16 01:55:09.636828 sshd[5664]: Accepted publickey for core from 10.0.0.1 port 39826 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:55:09.639632 sshd[5664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:55:09.649423 systemd-logind[1449]: New session 19 of user core. Apr 16 01:55:09.662534 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 16 01:55:09.813534 sshd[5664]: pam_unix(sshd:session): session closed for user core Apr 16 01:55:09.818014 systemd[1]: sshd@18-10.0.0.136:22-10.0.0.1:39826.service: Deactivated successfully. Apr 16 01:55:09.819637 systemd[1]: session-19.scope: Deactivated successfully. Apr 16 01:55:09.820909 systemd-logind[1449]: Session 19 logged out. Waiting for processes to exit. Apr 16 01:55:09.822053 systemd-logind[1449]: Removed session 19. Apr 16 01:55:10.340152 systemd[1]: run-containerd-runc-k8s.io-415d0fb6c30b4151749070289083e9fa908ee1294cd2bdbbe9ba09e6395181db-runc.VGJbw2.mount: Deactivated successfully. Apr 16 01:55:14.825113 systemd[1]: Started sshd@19-10.0.0.136:22-10.0.0.1:39840.service - OpenSSH per-connection server daemon (10.0.0.1:39840). 
Apr 16 01:55:14.865480 sshd[5793]: Accepted publickey for core from 10.0.0.1 port 39840 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:55:14.866743 sshd[5793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:55:14.871546 systemd-logind[1449]: New session 20 of user core. Apr 16 01:55:14.879078 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 16 01:55:15.030879 sshd[5793]: pam_unix(sshd:session): session closed for user core Apr 16 01:55:15.043219 systemd[1]: sshd@19-10.0.0.136:22-10.0.0.1:39840.service: Deactivated successfully. Apr 16 01:55:15.045809 systemd[1]: session-20.scope: Deactivated successfully. Apr 16 01:55:15.048738 systemd-logind[1449]: Session 20 logged out. Waiting for processes to exit. Apr 16 01:55:15.064719 systemd[1]: Started sshd@20-10.0.0.136:22-10.0.0.1:39852.service - OpenSSH per-connection server daemon (10.0.0.1:39852). Apr 16 01:55:15.067951 systemd-logind[1449]: Removed session 20. Apr 16 01:55:15.109752 sshd[5808]: Accepted publickey for core from 10.0.0.1 port 39852 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:55:15.111648 sshd[5808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:55:15.119064 systemd-logind[1449]: New session 21 of user core. Apr 16 01:55:15.132617 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 16 01:55:15.331075 sshd[5808]: pam_unix(sshd:session): session closed for user core Apr 16 01:55:15.348725 systemd[1]: sshd@20-10.0.0.136:22-10.0.0.1:39852.service: Deactivated successfully. Apr 16 01:55:15.356627 systemd[1]: session-21.scope: Deactivated successfully. Apr 16 01:55:15.360225 systemd-logind[1449]: Session 21 logged out. Waiting for processes to exit. Apr 16 01:55:15.368313 systemd[1]: Started sshd@21-10.0.0.136:22-10.0.0.1:39868.service - OpenSSH per-connection server daemon (10.0.0.1:39868). 
Apr 16 01:55:15.370276 systemd-logind[1449]: Removed session 21. Apr 16 01:55:15.409271 sshd[5821]: Accepted publickey for core from 10.0.0.1 port 39868 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:55:15.410936 sshd[5821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:55:15.415989 systemd-logind[1449]: New session 22 of user core. Apr 16 01:55:15.424179 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 16 01:55:15.545353 sshd[5821]: pam_unix(sshd:session): session closed for user core Apr 16 01:55:15.547653 systemd[1]: sshd@21-10.0.0.136:22-10.0.0.1:39868.service: Deactivated successfully. Apr 16 01:55:15.549503 systemd[1]: session-22.scope: Deactivated successfully. Apr 16 01:55:15.550900 systemd-logind[1449]: Session 22 logged out. Waiting for processes to exit. Apr 16 01:55:15.551724 systemd-logind[1449]: Removed session 22. Apr 16 01:55:20.557401 systemd[1]: Started sshd@22-10.0.0.136:22-10.0.0.1:58704.service - OpenSSH per-connection server daemon (10.0.0.1:58704). Apr 16 01:55:20.597278 sshd[5866]: Accepted publickey for core from 10.0.0.1 port 58704 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:55:20.599049 sshd[5866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:55:20.603541 systemd-logind[1449]: New session 23 of user core. Apr 16 01:55:20.617527 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 16 01:55:20.798109 sshd[5866]: pam_unix(sshd:session): session closed for user core Apr 16 01:55:20.802655 systemd[1]: sshd@22-10.0.0.136:22-10.0.0.1:58704.service: Deactivated successfully. Apr 16 01:55:20.804952 systemd[1]: session-23.scope: Deactivated successfully. Apr 16 01:55:20.806387 systemd-logind[1449]: Session 23 logged out. Waiting for processes to exit. Apr 16 01:55:20.808153 systemd-logind[1449]: Removed session 23. 
Apr 16 01:55:25.809781 systemd[1]: Started sshd@23-10.0.0.136:22-10.0.0.1:58706.service - OpenSSH per-connection server daemon (10.0.0.1:58706). Apr 16 01:55:25.850220 sshd[5881]: Accepted publickey for core from 10.0.0.1 port 58706 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:55:25.851662 sshd[5881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:55:25.856205 systemd-logind[1449]: New session 24 of user core. Apr 16 01:55:25.865260 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 16 01:55:26.008720 sshd[5881]: pam_unix(sshd:session): session closed for user core Apr 16 01:55:26.014034 systemd[1]: sshd@23-10.0.0.136:22-10.0.0.1:58706.service: Deactivated successfully. Apr 16 01:55:26.016215 systemd[1]: session-24.scope: Deactivated successfully. Apr 16 01:55:26.017704 systemd-logind[1449]: Session 24 logged out. Waiting for processes to exit. Apr 16 01:55:26.018727 systemd-logind[1449]: Removed session 24. Apr 16 01:55:31.026488 systemd[1]: Started sshd@24-10.0.0.136:22-10.0.0.1:50772.service - OpenSSH per-connection server daemon (10.0.0.1:50772). Apr 16 01:55:31.096579 sshd[5917]: Accepted publickey for core from 10.0.0.1 port 50772 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:55:31.100116 sshd[5917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:55:31.106207 systemd-logind[1449]: New session 25 of user core. Apr 16 01:55:31.125225 systemd[1]: Started session-25.scope - Session 25 of User core. Apr 16 01:55:31.265671 sshd[5917]: pam_unix(sshd:session): session closed for user core Apr 16 01:55:31.275611 systemd[1]: sshd@24-10.0.0.136:22-10.0.0.1:50772.service: Deactivated successfully. Apr 16 01:55:31.277341 systemd[1]: session-25.scope: Deactivated successfully. Apr 16 01:55:31.278945 systemd-logind[1449]: Session 25 logged out. Waiting for processes to exit. 
Apr 16 01:55:31.286167 systemd[1]: Started sshd@25-10.0.0.136:22-10.0.0.1:50778.service - OpenSSH per-connection server daemon (10.0.0.1:50778). Apr 16 01:55:31.287086 systemd-logind[1449]: Removed session 25. Apr 16 01:55:31.321424 sshd[5931]: Accepted publickey for core from 10.0.0.1 port 50778 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:55:31.323567 sshd[5931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:55:31.330098 systemd-logind[1449]: New session 26 of user core. Apr 16 01:55:31.342567 systemd[1]: Started session-26.scope - Session 26 of User core. Apr 16 01:55:31.556813 sshd[5931]: pam_unix(sshd:session): session closed for user core Apr 16 01:55:31.564228 systemd[1]: sshd@25-10.0.0.136:22-10.0.0.1:50778.service: Deactivated successfully. Apr 16 01:55:31.565480 systemd[1]: session-26.scope: Deactivated successfully. Apr 16 01:55:31.566188 systemd-logind[1449]: Session 26 logged out. Waiting for processes to exit. Apr 16 01:55:31.567586 systemd[1]: Started sshd@26-10.0.0.136:22-10.0.0.1:50794.service - OpenSSH per-connection server daemon (10.0.0.1:50794). Apr 16 01:55:31.568391 systemd-logind[1449]: Removed session 26. Apr 16 01:55:31.636722 sshd[5943]: Accepted publickey for core from 10.0.0.1 port 50794 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:55:31.638307 sshd[5943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:55:31.642103 systemd-logind[1449]: New session 27 of user core. Apr 16 01:55:31.657169 systemd[1]: Started session-27.scope - Session 27 of User core. Apr 16 01:55:32.576814 sshd[5943]: pam_unix(sshd:session): session closed for user core Apr 16 01:55:32.584210 systemd[1]: sshd@26-10.0.0.136:22-10.0.0.1:50794.service: Deactivated successfully. Apr 16 01:55:32.586240 systemd[1]: session-27.scope: Deactivated successfully. Apr 16 01:55:32.588575 systemd-logind[1449]: Session 27 logged out. 
Waiting for processes to exit. Apr 16 01:55:32.597242 systemd[1]: Started sshd@27-10.0.0.136:22-10.0.0.1:50800.service - OpenSSH per-connection server daemon (10.0.0.1:50800). Apr 16 01:55:32.600119 systemd-logind[1449]: Removed session 27. Apr 16 01:55:32.663074 sshd[5971]: Accepted publickey for core from 10.0.0.1 port 50800 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:55:32.664383 sshd[5971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:55:32.668925 systemd-logind[1449]: New session 28 of user core. Apr 16 01:55:32.676172 systemd[1]: Started session-28.scope - Session 28 of User core. Apr 16 01:55:33.139958 sshd[5971]: pam_unix(sshd:session): session closed for user core Apr 16 01:55:33.151243 systemd[1]: sshd@27-10.0.0.136:22-10.0.0.1:50800.service: Deactivated successfully. Apr 16 01:55:33.155582 systemd[1]: session-28.scope: Deactivated successfully. Apr 16 01:55:33.157810 systemd-logind[1449]: Session 28 logged out. Waiting for processes to exit. Apr 16 01:55:33.167317 systemd[1]: Started sshd@28-10.0.0.136:22-10.0.0.1:50804.service - OpenSSH per-connection server daemon (10.0.0.1:50804). Apr 16 01:55:33.168135 systemd-logind[1449]: Removed session 28. Apr 16 01:55:33.205260 sshd[5984]: Accepted publickey for core from 10.0.0.1 port 50804 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:55:33.208524 sshd[5984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:55:33.213460 systemd-logind[1449]: New session 29 of user core. Apr 16 01:55:33.221682 systemd[1]: Started session-29.scope - Session 29 of User core. Apr 16 01:55:33.381816 sshd[5984]: pam_unix(sshd:session): session closed for user core Apr 16 01:55:33.385364 systemd[1]: sshd@28-10.0.0.136:22-10.0.0.1:50804.service: Deactivated successfully. Apr 16 01:55:33.387733 systemd[1]: session-29.scope: Deactivated successfully. 
Apr 16 01:55:33.389294 systemd-logind[1449]: Session 29 logged out. Waiting for processes to exit. Apr 16 01:55:33.391665 systemd-logind[1449]: Removed session 29. Apr 16 01:55:38.397756 systemd[1]: Started sshd@29-10.0.0.136:22-10.0.0.1:50816.service - OpenSSH per-connection server daemon (10.0.0.1:50816). Apr 16 01:55:38.437606 sshd[5998]: Accepted publickey for core from 10.0.0.1 port 50816 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:55:38.439092 sshd[5998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:55:38.444649 systemd-logind[1449]: New session 30 of user core. Apr 16 01:55:38.450294 systemd[1]: Started session-30.scope - Session 30 of User core. Apr 16 01:55:38.585569 sshd[5998]: pam_unix(sshd:session): session closed for user core Apr 16 01:55:38.591044 systemd[1]: sshd@29-10.0.0.136:22-10.0.0.1:50816.service: Deactivated successfully. Apr 16 01:55:38.593643 systemd[1]: session-30.scope: Deactivated successfully. Apr 16 01:55:38.595130 systemd-logind[1449]: Session 30 logged out. Waiting for processes to exit. Apr 16 01:55:38.596390 systemd-logind[1449]: Removed session 30. Apr 16 01:55:43.599686 systemd[1]: Started sshd@30-10.0.0.136:22-10.0.0.1:55764.service - OpenSSH per-connection server daemon (10.0.0.1:55764). Apr 16 01:55:43.747858 sshd[6036]: Accepted publickey for core from 10.0.0.1 port 55764 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:55:43.749899 sshd[6036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:55:43.755833 systemd-logind[1449]: New session 31 of user core. Apr 16 01:55:43.766311 systemd[1]: Started session-31.scope - Session 31 of User core. 
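The sshd/systemd-logind lines above repeat a fixed lifecycle for every connection: "Accepted publickey" → pam session opened → "New session N of user core" → session scope started → session closed → "Removed session N". A minimal sketch of how such a journal excerpt can be mined for per-session durations (the helper names and the assumed year are mine, not part of any tool appearing in this log):

```python
import re
from datetime import datetime

# Pair "New session N" / "Removed session N" lines from systemd-logind
# to compute how long each SSH session lasted.
OPEN_RE = re.compile(r"^(\w+ \d+ [\d:.]+) systemd-logind\[\d+\]: New session (\d+) of user \w+\.")
CLOSE_RE = re.compile(r"^(\w+ \d+ [\d:.]+) systemd-logind\[\d+\]: Removed session (\d+)\.")

def parse_ts(stamp: str, year: int = 2026) -> datetime:
    # Journal short timestamps omit the year; assume one for the arithmetic.
    return datetime.strptime(f"{year} {stamp}", "%Y %b %d %H:%M:%S.%f")

def session_durations(lines):
    opened, durations = {}, {}
    for line in lines:
        if m := OPEN_RE.match(line):
            opened[m.group(2)] = parse_ts(m.group(1))
        elif m := CLOSE_RE.match(line):
            start = opened.pop(m.group(2), None)
            if start is not None:
                durations[m.group(2)] = (parse_ts(m.group(1)) - start).total_seconds()
    return durations

log = [
    "Apr 16 01:55:14.871546 systemd-logind[1449]: New session 20 of user core.",
    "Apr 16 01:55:15.067951 systemd-logind[1449]: Removed session 20.",
]
print(session_durations(log))  # e.g. {'20': 0.196405}
```

Sessions 20 through 33 in this excerpt all last well under a second, which is the pattern a health check or automation loop typically leaves behind rather than an interactive user.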
Apr 16 01:55:43.864132 kubelet[2507]: E0416 01:55:43.861496 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:55:43.909980 sshd[6036]: pam_unix(sshd:session): session closed for user core Apr 16 01:55:43.913274 systemd[1]: sshd@30-10.0.0.136:22-10.0.0.1:55764.service: Deactivated successfully. Apr 16 01:55:43.914654 systemd[1]: session-31.scope: Deactivated successfully. Apr 16 01:55:43.915242 systemd-logind[1449]: Session 31 logged out. Waiting for processes to exit. Apr 16 01:55:43.916258 systemd-logind[1449]: Removed session 31. Apr 16 01:55:44.352619 update_engine[1451]: I20260416 01:55:44.352517 1451 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 16 01:55:44.352619 update_engine[1451]: I20260416 01:55:44.352589 1451 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 16 01:55:44.354080 update_engine[1451]: I20260416 01:55:44.354027 1451 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 16 01:55:44.354460 update_engine[1451]: I20260416 01:55:44.354403 1451 omaha_request_params.cc:62] Current group set to lts Apr 16 01:55:44.354947 update_engine[1451]: I20260416 01:55:44.354926 1451 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 16 01:55:44.355009 update_engine[1451]: I20260416 01:55:44.355000 1451 update_attempter.cc:643] Scheduling an action processor start. 
Apr 16 01:55:44.355074 update_engine[1451]: I20260416 01:55:44.355044 1451 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 16 01:55:44.355126 update_engine[1451]: I20260416 01:55:44.355098 1451 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 16 01:55:44.355159 update_engine[1451]: I20260416 01:55:44.355150 1451 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 16 01:55:44.355196 update_engine[1451]: I20260416 01:55:44.355156 1451 omaha_request_action.cc:272] Request: Apr 16 01:55:44.355196 update_engine[1451]: [Omaha request XML body elided; the angle-bracketed lines were stripped when this log was captured] Apr 16 01:55:44.355196 update_engine[1451]: I20260416 01:55:44.355161 1451 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 16 01:55:44.365966 locksmithd[1482]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 16 01:55:44.366682 update_engine[1451]: I20260416 01:55:44.366006 1451 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 16 01:55:44.366682 update_engine[1451]: I20260416 01:55:44.366343 1451 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 16 01:55:44.373924 update_engine[1451]: E20260416 01:55:44.373786 1451 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 16 01:55:44.374224 update_engine[1451]: I20260416 01:55:44.373978 1451 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 16 01:55:48.860367 kubelet[2507]: E0416 01:55:48.860308 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:55:48.924596 systemd[1]: Started sshd@31-10.0.0.136:22-10.0.0.1:55778.service - OpenSSH per-connection server daemon (10.0.0.1:55778). Apr 16 01:55:49.014501 sshd[6095]: Accepted publickey for core from 10.0.0.1 port 55778 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:55:49.016770 sshd[6095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:55:49.027045 systemd-logind[1449]: New session 32 of user core. Apr 16 01:55:49.033297 systemd[1]: Started session-32.scope - Session 32 of User core. Apr 16 01:55:49.325878 sshd[6095]: pam_unix(sshd:session): session closed for user core Apr 16 01:55:49.332152 systemd[1]: sshd@31-10.0.0.136:22-10.0.0.1:55778.service: Deactivated successfully. Apr 16 01:55:49.335391 systemd[1]: session-32.scope: Deactivated successfully. Apr 16 01:55:49.337055 systemd-logind[1449]: Session 32 logged out. Waiting for processes to exit. Apr 16 01:55:49.337974 systemd-logind[1449]: Removed session 32. 
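The recurring kubelet "Nameserver limits exceeded" error fires because the glibc resolver only honors the first three `nameserver` entries in resolv.conf (MAXNS = 3), so kubelet trims the list and logs which servers it kept. A sketch of the same trimming logic; the 3-server cap is glibc's real limit, but the sample file contents (the dropped 8.8.4.4 entry in particular) are invented for illustration, since the log only shows the three servers that were applied:

```python
# glibc's resolver honors at most the first three nameserver lines.
MAXNS = 3

def effective_nameservers(resolv_conf: str):
    servers = [
        parts[1]
        for line in resolv_conf.splitlines()
        if (parts := line.split()) and parts[0] == "nameserver" and len(parts) >= 2
    ]
    # Everything past the first MAXNS entries is silently ignored.
    return servers[:MAXNS], servers[MAXNS:]

conf = """\
nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 8.8.4.4
"""
kept, dropped = effective_nameservers(conf)
print("applied:", " ".join(kept))    # applied: 1.1.1.1 1.0.0.1 8.8.8.8
print("omitted:", " ".join(dropped)) # omitted: 8.8.4.4
```

The "applied nameserver line" in the kubelet message matches the first three entries, consistent with a fourth server having been dropped.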
Apr 16 01:55:50.860822 kubelet[2507]: E0416 01:55:50.860572 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:55:54.338053 update_engine[1451]: I20260416 01:55:54.337965 1451 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 16 01:55:54.338447 update_engine[1451]: I20260416 01:55:54.338351 1451 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 16 01:55:54.339975 systemd[1]: Started sshd@32-10.0.0.136:22-10.0.0.1:34236.service - OpenSSH per-connection server daemon (10.0.0.1:34236). Apr 16 01:55:54.340441 update_engine[1451]: I20260416 01:55:54.340158 1451 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 16 01:55:54.346173 update_engine[1451]: E20260416 01:55:54.346055 1451 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 16 01:55:54.346173 update_engine[1451]: I20260416 01:55:54.346135 1451 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 16 01:55:54.394276 sshd[6109]: Accepted publickey for core from 10.0.0.1 port 34236 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:55:54.395763 sshd[6109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:55:54.400353 systemd-logind[1449]: New session 33 of user core. Apr 16 01:55:54.407316 systemd[1]: Started session-33.scope - Session 33 of User core. Apr 16 01:55:54.543917 sshd[6109]: pam_unix(sshd:session): session closed for user core Apr 16 01:55:54.547403 systemd[1]: sshd@32-10.0.0.136:22-10.0.0.1:34236.service: Deactivated successfully. Apr 16 01:55:54.549540 systemd[1]: session-33.scope: Deactivated successfully. Apr 16 01:55:54.550793 systemd-logind[1449]: Session 33 logged out. Waiting for processes to exit. Apr 16 01:55:54.551653 systemd-logind[1449]: Removed session 33.
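The update_engine failures above are expected when updates are turned off: Flatcar points the Omaha URL at the literal host "disabled", so every libcurl fetch fails with "Could not resolve host: disabled" and the fetcher schedules another attempt (the roughly 10-second gap between "retry 1" at 01:55:44 and "retry 2" at 01:55:54 is visible in the timestamps). A hedged summary pass over such lines; the helper is mine, not part of update_engine:

```python
import re

# Collect libcurl fetch errors and the highest retry count seen,
# to distinguish a deliberately disabled update server from a
# transient network problem.
ERROR_RE = re.compile(r"Unable to get http response code: (.+)$")
RETRY_RE = re.compile(r"No HTTP response, retry (\d+)")

def summarize_fetch_attempts(lines):
    errors, retries = [], []
    for line in lines:
        if m := ERROR_RE.search(line):
            errors.append(m.group(1))
        if m := RETRY_RE.search(line):
            retries.append(int(m.group(1)))
    return {"errors": errors, "last_retry": max(retries, default=0)}

log = [
    "E20260416 01:55:44.373786 1451 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled",
    "I20260416 01:55:44.373978 1451 libcurl_http_fetcher.cc:283] No HTTP response, retry 1",
    "E20260416 01:55:54.346055 1451 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled",
    "I20260416 01:55:54.346135 1451 libcurl_http_fetcher.cc:283] No HTTP response, retry 2",
]
print(summarize_fetch_attempts(log))
```

When every error is "Could not resolve host: disabled", the retries are noise by design rather than a connectivity fault worth paging on.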