Apr 14 13:16:36.956992 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 13 18:40:27 -00 2026
Apr 14 13:16:36.957013 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 14 13:16:36.957022 kernel: BIOS-provided physical RAM map:
Apr 14 13:16:36.957027 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 14 13:16:36.957031 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 14 13:16:36.957035 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 14 13:16:36.957040 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Apr 14 13:16:36.957045 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Apr 14 13:16:36.957049 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 14 13:16:36.957054 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 14 13:16:36.957059 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 14 13:16:36.957063 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 14 13:16:36.957067 kernel: NX (Execute Disable) protection: active
Apr 14 13:16:36.957072 kernel: APIC: Static calls initialized
Apr 14 13:16:36.957077 kernel: SMBIOS 2.8 present.
Apr 14 13:16:36.957084 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Apr 14 13:16:36.957088 kernel: Hypervisor detected: KVM
Apr 14 13:16:36.957093 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 14 13:16:36.957097 kernel: kvm-clock: using sched offset of 4088127683 cycles
Apr 14 13:16:36.957102 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 14 13:16:36.957107 kernel: tsc: Detected 2793.438 MHz processor
Apr 14 13:16:36.957112 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 14 13:16:36.957117 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 14 13:16:36.957122 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 14 13:16:36.957128 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 14 13:16:36.957133 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 14 13:16:36.957138 kernel: Using GB pages for direct mapping
Apr 14 13:16:36.957143 kernel: ACPI: Early table checksum verification disabled
Apr 14 13:16:36.957147 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Apr 14 13:16:36.957152 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 13:16:36.957157 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 13:16:36.957162 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 13:16:36.957166 kernel: ACPI: FACS 0x000000009CFE0000 000040
Apr 14 13:16:36.957172 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 13:16:36.957177 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 13:16:36.957181 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 13:16:36.957186 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 13:16:36.957191 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Apr 14 13:16:36.957195 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Apr 14 13:16:36.957200 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Apr 14 13:16:36.957208 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Apr 14 13:16:36.957214 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Apr 14 13:16:36.957219 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Apr 14 13:16:36.957224 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Apr 14 13:16:36.957228 kernel: No NUMA configuration found
Apr 14 13:16:36.957233 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Apr 14 13:16:36.957238 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Apr 14 13:16:36.957245 kernel: Zone ranges:
Apr 14 13:16:36.957249 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 14 13:16:36.957254 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Apr 14 13:16:36.957259 kernel: Normal empty
Apr 14 13:16:36.957264 kernel: Movable zone start for each node
Apr 14 13:16:36.957269 kernel: Early memory node ranges
Apr 14 13:16:36.957274 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 14 13:16:36.957279 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Apr 14 13:16:36.957284 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Apr 14 13:16:36.957289 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 14 13:16:36.957295 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 14 13:16:36.957300 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Apr 14 13:16:36.957305 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 14 13:16:36.957310 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 14 13:16:36.957315 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 14 13:16:36.957320 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 14 13:16:36.957324 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 14 13:16:36.957329 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 14 13:16:36.957334 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 14 13:16:36.957341 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 14 13:16:36.957346 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 14 13:16:36.957350 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 14 13:16:36.957355 kernel: TSC deadline timer available
Apr 14 13:16:36.957360 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 14 13:16:36.957365 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 14 13:16:36.957370 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 14 13:16:36.957375 kernel: kvm-guest: setup PV sched yield
Apr 14 13:16:36.957380 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 14 13:16:36.957386 kernel: Booting paravirtualized kernel on KVM
Apr 14 13:16:36.957392 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 14 13:16:36.957397 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 14 13:16:36.957402 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Apr 14 13:16:36.957407 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Apr 14 13:16:36.957412 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 14 13:16:36.957416 kernel: kvm-guest: PV spinlocks enabled
Apr 14 13:16:36.957421 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 14 13:16:36.957427 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 14 13:16:36.957434 kernel: random: crng init done
Apr 14 13:16:36.957439 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 14 13:16:36.957444 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 14 13:16:36.957449 kernel: Fallback order for Node 0: 0
Apr 14 13:16:36.957454 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Apr 14 13:16:36.957459 kernel: Policy zone: DMA32
Apr 14 13:16:36.957464 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 14 13:16:36.957469 kernel: Memory: 2433652K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 137896K reserved, 0K cma-reserved)
Apr 14 13:16:36.957476 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 14 13:16:36.957481 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 14 13:16:36.957485 kernel: ftrace: allocated 149 pages with 4 groups
Apr 14 13:16:36.957490 kernel: Dynamic Preempt: voluntary
Apr 14 13:16:36.957495 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 14 13:16:36.957501 kernel: rcu: RCU event tracing is enabled.
Apr 14 13:16:36.957553 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 14 13:16:36.957559 kernel: Trampoline variant of Tasks RCU enabled.
Apr 14 13:16:36.957564 kernel: Rude variant of Tasks RCU enabled.
Apr 14 13:16:36.957569 kernel: Tracing variant of Tasks RCU enabled.
Apr 14 13:16:36.957576 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 14 13:16:36.957581 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 14 13:16:36.957586 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 14 13:16:36.957591 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 14 13:16:36.957596 kernel: Console: colour VGA+ 80x25
Apr 14 13:16:36.957601 kernel: printk: console [ttyS0] enabled
Apr 14 13:16:36.957606 kernel: ACPI: Core revision 20230628
Apr 14 13:16:36.957611 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 14 13:16:36.957616 kernel: APIC: Switch to symmetric I/O mode setup
Apr 14 13:16:36.957622 kernel: x2apic enabled
Apr 14 13:16:36.957627 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 14 13:16:36.957632 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 14 13:16:36.957637 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 14 13:16:36.957642 kernel: kvm-guest: setup PV IPIs
Apr 14 13:16:36.957647 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 14 13:16:36.957652 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 14 13:16:36.957664 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 14 13:16:36.957669 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 14 13:16:36.957719 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 14 13:16:36.957725 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 14 13:16:36.957733 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 14 13:16:36.957739 kernel: Spectre V2 : Mitigation: Retpolines
Apr 14 13:16:36.957744 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 14 13:16:36.957754 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 14 13:16:36.957762 kernel: RETBleed: Vulnerable
Apr 14 13:16:36.957774 kernel: Speculative Store Bypass: Vulnerable
Apr 14 13:16:36.957783 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 14 13:16:36.957821 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 14 13:16:36.957832 kernel: active return thunk: its_return_thunk
Apr 14 13:16:36.957843 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 14 13:16:36.957850 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 14 13:16:36.957855 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 14 13:16:36.957861 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 14 13:16:36.957867 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 14 13:16:36.957874 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 14 13:16:36.957880 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 14 13:16:36.957886 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 14 13:16:36.957892 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 14 13:16:36.957897 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 14 13:16:36.957903 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 14 13:16:36.957908 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 14 13:16:36.957914 kernel: Freeing SMP alternatives memory: 32K
Apr 14 13:16:36.957919 kernel: pid_max: default: 32768 minimum: 301
Apr 14 13:16:36.957927 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 14 13:16:36.957932 kernel: landlock: Up and running.
Apr 14 13:16:36.957938 kernel: SELinux: Initializing.
Apr 14 13:16:36.957943 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 14 13:16:36.957949 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 14 13:16:36.957954 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 14 13:16:36.957960 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 14 13:16:36.957965 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 14 13:16:36.957971 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 14 13:16:36.957978 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 14 13:16:36.957984 kernel: signal: max sigframe size: 3632
Apr 14 13:16:36.957989 kernel: rcu: Hierarchical SRCU implementation.
Apr 14 13:16:36.957995 kernel: rcu: Max phase no-delay instances is 400.
Apr 14 13:16:36.958001 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 14 13:16:36.958006 kernel: smp: Bringing up secondary CPUs ...
Apr 14 13:16:36.958012 kernel: smpboot: x86: Booting SMP configuration:
Apr 14 13:16:36.958017 kernel: .... node #0, CPUs: #1 #2 #3
Apr 14 13:16:36.958023 kernel: smp: Brought up 1 node, 4 CPUs
Apr 14 13:16:36.958030 kernel: smpboot: Max logical packages: 1
Apr 14 13:16:36.958036 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 14 13:16:36.958041 kernel: devtmpfs: initialized
Apr 14 13:16:36.958046 kernel: x86/mm: Memory block size: 128MB
Apr 14 13:16:36.958052 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 14 13:16:36.958058 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 14 13:16:36.958063 kernel: pinctrl core: initialized pinctrl subsystem
Apr 14 13:16:36.958069 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 14 13:16:36.958074 kernel: audit: initializing netlink subsys (disabled)
Apr 14 13:16:36.958081 kernel: audit: type=2000 audit(1776172595.764:1): state=initialized audit_enabled=0 res=1
Apr 14 13:16:36.958086 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 14 13:16:36.958091 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 14 13:16:36.958097 kernel: cpuidle: using governor menu
Apr 14 13:16:36.958102 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 14 13:16:36.958108 kernel: dca service started, version 1.12.1
Apr 14 13:16:36.958113 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 14 13:16:36.958119 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 14 13:16:36.958124 kernel: PCI: Using configuration type 1 for base access
Apr 14 13:16:36.958131 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 14 13:16:36.958137 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 14 13:16:36.958142 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 14 13:16:36.958148 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 14 13:16:36.958154 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 14 13:16:36.958159 kernel: ACPI: Added _OSI(Module Device)
Apr 14 13:16:36.958165 kernel: ACPI: Added _OSI(Processor Device)
Apr 14 13:16:36.958170 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 14 13:16:36.958175 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 14 13:16:36.958182 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 14 13:16:36.958188 kernel: ACPI: Interpreter enabled
Apr 14 13:16:36.958193 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 14 13:16:36.958199 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 14 13:16:36.958204 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 14 13:16:36.958210 kernel: PCI: Using E820 reservations for host bridge windows
Apr 14 13:16:36.958215 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 14 13:16:36.958221 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 14 13:16:36.958339 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 14 13:16:36.958402 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 14 13:16:36.958457 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 14 13:16:36.958464 kernel: PCI host bridge to bus 0000:00
Apr 14 13:16:36.958573 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 14 13:16:36.958626 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 14 13:16:36.958675 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 14 13:16:36.958727 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 14 13:16:36.958777 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 14 13:16:36.958866 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Apr 14 13:16:36.958917 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 14 13:16:36.958984 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 14 13:16:36.959047 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 14 13:16:36.959107 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Apr 14 13:16:36.959165 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Apr 14 13:16:36.959261 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Apr 14 13:16:36.959318 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 14 13:16:36.959380 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 14 13:16:36.959437 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Apr 14 13:16:36.959492 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Apr 14 13:16:36.959687 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 14 13:16:36.959750 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 14 13:16:36.959840 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Apr 14 13:16:36.959905 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Apr 14 13:16:36.959961 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 14 13:16:36.960025 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 14 13:16:36.960081 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Apr 14 13:16:36.960140 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Apr 14 13:16:36.960195 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Apr 14 13:16:36.960250 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Apr 14 13:16:36.960312 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 14 13:16:36.960368 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 14 13:16:36.960427 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 14 13:16:36.960484 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Apr 14 13:16:36.960581 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Apr 14 13:16:36.960642 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 14 13:16:36.960697 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 14 13:16:36.960705 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 14 13:16:36.960710 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 14 13:16:36.960716 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 14 13:16:36.960722 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 14 13:16:36.960729 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 14 13:16:36.960735 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 14 13:16:36.960741 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 14 13:16:36.960746 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 14 13:16:36.960752 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 14 13:16:36.960758 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 14 13:16:36.960763 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 14 13:16:36.960769 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 14 13:16:36.960774 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 14 13:16:36.960782 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 14 13:16:36.960788 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 14 13:16:36.960793 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 14 13:16:36.960824 kernel: iommu: Default domain type: Translated
Apr 14 13:16:36.960834 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 14 13:16:36.960845 kernel: PCI: Using ACPI for IRQ routing
Apr 14 13:16:36.960851 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 14 13:16:36.960857 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 14 13:16:36.960862 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Apr 14 13:16:36.960928 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 14 13:16:36.960984 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 14 13:16:36.961039 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 14 13:16:36.961046 kernel: vgaarb: loaded
Apr 14 13:16:36.961052 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 14 13:16:36.961057 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 14 13:16:36.961063 kernel: clocksource: Switched to clocksource kvm-clock
Apr 14 13:16:36.961068 kernel: VFS: Disk quotas dquot_6.6.0
Apr 14 13:16:36.961074 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 14 13:16:36.961082 kernel: pnp: PnP ACPI init
Apr 14 13:16:36.961145 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 14 13:16:36.961153 kernel: pnp: PnP ACPI: found 6 devices
Apr 14 13:16:36.961159 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 14 13:16:36.961165 kernel: NET: Registered PF_INET protocol family
Apr 14 13:16:36.961170 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 14 13:16:36.961176 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 14 13:16:36.961181 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 14 13:16:36.961189 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 14 13:16:36.961195 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 14 13:16:36.961200 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 14 13:16:36.961206 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 14 13:16:36.961211 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 14 13:16:36.961217 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 14 13:16:36.961223 kernel: NET: Registered PF_XDP protocol family
Apr 14 13:16:36.961275 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 14 13:16:36.961324 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 14 13:16:36.961376 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 14 13:16:36.961427 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 14 13:16:36.961476 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 14 13:16:36.961563 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Apr 14 13:16:36.961571 kernel: PCI: CLS 0 bytes, default 64
Apr 14 13:16:36.961577 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 14 13:16:36.961583 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 14 13:16:36.961588 kernel: Initialise system trusted keyrings
Apr 14 13:16:36.961596 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 14 13:16:36.961602 kernel: Key type asymmetric registered
Apr 14 13:16:36.961607 kernel: Asymmetric key parser 'x509' registered
Apr 14 13:16:36.961613 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 14 13:16:36.961618 kernel: io scheduler mq-deadline registered
Apr 14 13:16:36.961624 kernel: io scheduler kyber registered
Apr 14 13:16:36.961630 kernel: io scheduler bfq registered
Apr 14 13:16:36.961639 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 14 13:16:36.961648 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 14 13:16:36.961659 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 14 13:16:36.961669 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 14 13:16:36.961677 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 14 13:16:36.961683 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 14 13:16:36.961688 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 14 13:16:36.961694 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 14 13:16:36.961699 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 14 13:16:36.961763 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 14 13:16:36.961773 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 14 13:16:36.961872 kernel: rtc_cmos 00:04: registered as rtc0
Apr 14 13:16:36.961925 kernel: rtc_cmos 00:04: setting system clock to 2026-04-14T13:16:36 UTC (1776172596)
Apr 14 13:16:36.961976 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 14 13:16:36.961983 kernel: intel_pstate: CPU model not supported
Apr 14 13:16:36.961988 kernel: NET: Registered PF_INET6 protocol family
Apr 14 13:16:36.961994 kernel: Segment Routing with IPv6
Apr 14 13:16:36.962000 kernel: In-situ OAM (IOAM) with IPv6
Apr 14 13:16:36.962005 kernel: NET: Registered PF_PACKET protocol family
Apr 14 13:16:36.962013 kernel: Key type dns_resolver registered
Apr 14 13:16:36.962019 kernel: IPI shorthand broadcast: enabled
Apr 14 13:16:36.962024 kernel: sched_clock: Marking stable (1143018923, 247184285)->(1445940619, -55737411)
Apr 14 13:16:36.962030 kernel: registered taskstats version 1
Apr 14 13:16:36.962035 kernel: Loading compiled-in X.509 certificates
Apr 14 13:16:36.962041 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51221ce98a81ccf90ef3d16403b42695603c5d00'
Apr 14 13:16:36.962047 kernel: Key type .fscrypt registered
Apr 14 13:16:36.962052 kernel: Key type fscrypt-provisioning registered
Apr 14 13:16:36.962077 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 14 13:16:36.962086 kernel: ima: Allocated hash algorithm: sha1
Apr 14 13:16:36.962092 kernel: ima: No architecture policies found
Apr 14 13:16:36.962097 kernel: clk: Disabling unused clocks
Apr 14 13:16:36.962117 kernel: Freeing unused kernel image (initmem) memory: 42896K
Apr 14 13:16:36.962123 kernel: Write protecting the kernel read-only data: 36864k
Apr 14 13:16:36.962142 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 14 13:16:36.962176 kernel: Run /init as init process
Apr 14 13:16:36.962208 kernel: with arguments:
Apr 14 13:16:36.962228 kernel: /init
Apr 14 13:16:36.962248 kernel: with environment:
Apr 14 13:16:36.962254 kernel: HOME=/
Apr 14 13:16:36.962260 kernel: TERM=linux
Apr 14 13:16:36.962268 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 14 13:16:36.962276 systemd[1]: Detected virtualization kvm.
Apr 14 13:16:36.962282 systemd[1]: Detected architecture x86-64.
Apr 14 13:16:36.962288 systemd[1]: Running in initrd.
Apr 14 13:16:36.962294 systemd[1]: No hostname configured, using default hostname.
Apr 14 13:16:36.962301 systemd[1]: Hostname set to .
Apr 14 13:16:36.962308 systemd[1]: Initializing machine ID from VM UUID.
Apr 14 13:16:36.962313 systemd[1]: Queued start job for default target initrd.target.
Apr 14 13:16:36.962319 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 14 13:16:36.962325 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 14 13:16:36.962332 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 14 13:16:36.962338 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 14 13:16:36.962344 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 14 13:16:36.962352 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 14 13:16:36.962368 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 14 13:16:36.962374 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 14 13:16:36.962380 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 14 13:16:36.962387 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 14 13:16:36.962393 systemd[1]: Reached target paths.target - Path Units.
Apr 14 13:16:36.962400 systemd[1]: Reached target slices.target - Slice Units.
Apr 14 13:16:36.962406 systemd[1]: Reached target swap.target - Swaps.
Apr 14 13:16:36.962412 systemd[1]: Reached target timers.target - Timer Units.
Apr 14 13:16:36.962418 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 14 13:16:36.962424 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 14 13:16:36.962430 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 14 13:16:36.962436 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 14 13:16:36.962444 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 14 13:16:36.962450 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 14 13:16:36.962456 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 14 13:16:36.962462 systemd[1]: Reached target sockets.target - Socket Units.
Apr 14 13:16:36.962470 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 14 13:16:36.962476 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 14 13:16:36.962482 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 14 13:16:36.962488 systemd[1]: Starting systemd-fsck-usr.service...
Apr 14 13:16:36.962494 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 14 13:16:36.962502 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 14 13:16:36.962537 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 14 13:16:36.962543 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 14 13:16:36.962566 systemd-journald[193]: Collecting audit messages is disabled.
Apr 14 13:16:36.962584 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 14 13:16:36.962591 systemd-journald[193]: Journal started
Apr 14 13:16:36.962610 systemd-journald[193]: Runtime Journal (/run/log/journal/4df645bf04b14f77ba1511da8543fb22) is 6.0M, max 48.4M, 42.3M free.
Apr 14 13:16:36.964376 systemd-modules-load[194]: Inserted module 'overlay'
Apr 14 13:16:36.994574 systemd[1]: Finished systemd-fsck-usr.service.
Apr 14 13:16:36.994782 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 14 13:16:36.980078 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 14 13:16:36.998789 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 14 13:16:37.166970 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 14 13:16:37.166997 kernel: Bridge firewalling registered
Apr 14 13:16:37.000973 systemd-modules-load[194]: Inserted module 'br_netfilter'
Apr 14 13:16:37.185490 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 14 13:16:37.188931 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 14 13:16:37.194349 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 14 13:16:37.208759 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 14 13:16:37.216750 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 14 13:16:37.217426 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 14 13:16:37.218704 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 14 13:16:37.237345 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 14 13:16:37.244487 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 14 13:16:37.252232 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 14 13:16:37.256679 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 14 13:16:37.275037 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 14 13:16:37.288750 dracut-cmdline[228]: dracut-dracut-053
Apr 14 13:16:37.292880 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 14 13:16:37.311857 systemd-resolved[230]: Positive Trust Anchors:
Apr 14 13:16:37.311884 systemd-resolved[230]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 14 13:16:37.311909 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 14 13:16:37.314369 systemd-resolved[230]: Defaulting to hostname 'linux'.
Apr 14 13:16:37.315905 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 14 13:16:37.322203 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 14 13:16:37.411703 kernel: SCSI subsystem initialized
Apr 14 13:16:37.424714 kernel: Loading iSCSI transport class v2.0-870.
Apr 14 13:16:37.440675 kernel: iscsi: registered transport (tcp)
Apr 14 13:16:37.464725 kernel: iscsi: registered transport (qla4xxx)
Apr 14 13:16:37.464861 kernel: QLogic iSCSI HBA Driver
Apr 14 13:16:37.509935 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 14 13:16:37.532072 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 14 13:16:37.563498 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 14 13:16:37.563661 kernel: device-mapper: uevent: version 1.0.3
Apr 14 13:16:37.563670 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 14 13:16:37.611646 kernel: raid6: avx512x4 gen() 44438 MB/s
Apr 14 13:16:37.629707 kernel: raid6: avx512x2 gen() 39084 MB/s
Apr 14 13:16:37.646771 kernel: raid6: avx512x1 gen() 42049 MB/s
Apr 14 13:16:37.664878 kernel: raid6: avx2x4 gen() 35425 MB/s
Apr 14 13:16:37.682658 kernel: raid6: avx2x2 gen() 34855 MB/s
Apr 14 13:16:37.700794 kernel: raid6: avx2x1 gen() 26519 MB/s
Apr 14 13:16:37.700909 kernel: raid6: using algorithm avx512x4 gen() 44438 MB/s
Apr 14 13:16:37.720198 kernel: raid6: .... xor() 9536 MB/s, rmw enabled
Apr 14 13:16:37.720285 kernel: raid6: using avx512x2 recovery algorithm
Apr 14 13:16:37.742620 kernel: xor: automatically using best checksumming function avx
Apr 14 13:16:37.922671 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 14 13:16:37.941027 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 14 13:16:37.954929 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 14 13:16:37.995433 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Apr 14 13:16:37.998250 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 14 13:16:38.018054 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 14 13:16:38.035954 dracut-pre-trigger[430]: rd.md=0: removing MD RAID activation
Apr 14 13:16:38.073326 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 14 13:16:38.094214 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 14 13:16:38.133442 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 14 13:16:38.142044 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 14 13:16:38.154988 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 14 13:16:38.161285 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 14 13:16:38.164675 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 14 13:16:38.173224 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 14 13:16:38.185543 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 14 13:16:38.185775 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 14 13:16:38.195064 kernel: cryptd: max_cpu_qlen set to 1000
Apr 14 13:16:38.195086 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 14 13:16:38.199425 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 14 13:16:38.199573 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 14 13:16:38.204890 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 14 13:16:38.217178 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 14 13:16:38.217196 kernel: GPT:9289727 != 19775487
Apr 14 13:16:38.217204 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 14 13:16:38.217212 kernel: GPT:9289727 != 19775487
Apr 14 13:16:38.217228 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 14 13:16:38.217236 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 14 13:16:38.216281 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 14 13:16:38.216393 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 14 13:16:38.227169 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 14 13:16:38.244579 kernel: libata version 3.00 loaded.
Apr 14 13:16:38.249618 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 14 13:16:38.256905 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 14 13:16:38.267668 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 14 13:16:38.267696 kernel: AES CTR mode by8 optimization enabled
Apr 14 13:16:38.277264 kernel: ahci 0000:00:1f.2: version 3.0
Apr 14 13:16:38.277474 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 14 13:16:38.277492 kernel: BTRFS: device fsid de1edd48-4571-4695-92f0-7af6e33c4e3d devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (466)
Apr 14 13:16:38.277505 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 14 13:16:38.279608 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 14 13:16:38.281119 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 14 13:16:38.290323 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (480)
Apr 14 13:16:38.292980 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 14 13:16:38.295855 kernel: scsi host0: ahci
Apr 14 13:16:38.296035 kernel: scsi host1: ahci
Apr 14 13:16:38.299635 kernel: scsi host2: ahci
Apr 14 13:16:38.302080 kernel: scsi host3: ahci
Apr 14 13:16:38.302237 kernel: scsi host4: ahci
Apr 14 13:16:38.304492 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 14 13:16:38.315154 kernel: scsi host5: ahci
Apr 14 13:16:38.315364 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Apr 14 13:16:38.315380 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Apr 14 13:16:38.315391 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Apr 14 13:16:38.315403 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Apr 14 13:16:38.315414 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Apr 14 13:16:38.315425 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Apr 14 13:16:38.319239 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 14 13:16:38.444898 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 14 13:16:38.452314 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 14 13:16:38.469267 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 14 13:16:38.470135 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 14 13:16:38.485380 disk-uuid[562]: Primary Header is updated.
Apr 14 13:16:38.485380 disk-uuid[562]: Secondary Entries is updated.
Apr 14 13:16:38.485380 disk-uuid[562]: Secondary Header is updated.
Apr 14 13:16:38.493653 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 14 13:16:38.499032 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 14 13:16:38.504488 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 14 13:16:38.511597 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 14 13:16:38.511652 kernel: block device autoloading is deprecated and will be removed.
Apr 14 13:16:38.622642 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 14 13:16:38.627573 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 14 13:16:38.632634 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 14 13:16:38.635129 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 14 13:16:38.635167 kernel: ata3.00: applying bridge limits
Apr 14 13:16:38.637570 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 14 13:16:38.640842 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 14 13:16:38.640909 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 14 13:16:38.642663 kernel: ata3.00: configured for UDMA/100
Apr 14 13:16:38.647602 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 14 13:16:38.699296 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 14 13:16:38.699847 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 14 13:16:38.714585 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 14 13:16:39.516726 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 14 13:16:39.519783 disk-uuid[563]: The operation has completed successfully.
Apr 14 13:16:39.555228 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 14 13:16:39.555468 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 14 13:16:39.570804 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 14 13:16:39.576089 sh[603]: Success
Apr 14 13:16:39.595726 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 14 13:16:39.653474 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 14 13:16:39.706979 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 14 13:16:39.714390 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 14 13:16:39.724872 kernel: BTRFS info (device dm-0): first mount of filesystem de1edd48-4571-4695-92f0-7af6e33c4e3d
Apr 14 13:16:39.724943 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 14 13:16:39.728798 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 14 13:16:39.728874 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 14 13:16:39.735654 kernel: BTRFS info (device dm-0): using free space tree
Apr 14 13:16:39.742752 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 14 13:16:39.743356 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 14 13:16:39.759294 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 14 13:16:39.764931 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 14 13:16:39.777759 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 14 13:16:39.777810 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 14 13:16:39.777840 kernel: BTRFS info (device vda6): using free space tree
Apr 14 13:16:39.783554 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 14 13:16:39.793011 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 14 13:16:39.797791 kernel: BTRFS info (device vda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 14 13:16:39.813378 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 14 13:16:39.826921 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 14 13:16:40.166313 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 14 13:16:40.173609 ignition[710]: Ignition 2.19.0
Apr 14 13:16:40.173663 ignition[710]: Stage: fetch-offline
Apr 14 13:16:40.173798 ignition[710]: no configs at "/usr/lib/ignition/base.d"
Apr 14 13:16:40.173809 ignition[710]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 13:16:40.174303 ignition[710]: parsed url from cmdline: ""
Apr 14 13:16:40.174308 ignition[710]: no config URL provided
Apr 14 13:16:40.174313 ignition[710]: reading system config file "/usr/lib/ignition/user.ign"
Apr 14 13:16:40.174321 ignition[710]: no config at "/usr/lib/ignition/user.ign"
Apr 14 13:16:40.174386 ignition[710]: op(1): [started] loading QEMU firmware config module
Apr 14 13:16:40.174391 ignition[710]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 14 13:16:40.186861 ignition[710]: op(1): [finished] loading QEMU firmware config module
Apr 14 13:16:40.196988 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 14 13:16:40.246114 systemd-networkd[791]: lo: Link UP
Apr 14 13:16:40.246137 systemd-networkd[791]: lo: Gained carrier
Apr 14 13:16:40.247356 systemd-networkd[791]: Enumeration completed
Apr 14 13:16:40.248227 systemd-networkd[791]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 14 13:16:40.248229 systemd-networkd[791]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 14 13:16:40.250168 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 14 13:16:40.250646 systemd-networkd[791]: eth0: Link UP
Apr 14 13:16:40.250649 systemd-networkd[791]: eth0: Gained carrier
Apr 14 13:16:40.250654 systemd-networkd[791]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 14 13:16:40.254641 systemd[1]: Reached target network.target - Network.
Apr 14 13:16:40.297920 systemd-networkd[791]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 14 13:16:40.355739 ignition[710]: parsing config with SHA512: 4a40a6903780ca03bf1d63daef7fcafb189275d5a8f9045e28664061a47e8fbc2006c828b7103a1d5c413052c4d031bca3a7ca66f301180bf4d6ae4ed5453338
Apr 14 13:16:40.374553 unknown[710]: fetched base config from "system"
Apr 14 13:16:40.377589 systemd-resolved[230]: Detected conflict on linux IN A 10.0.0.15
Apr 14 13:16:40.377599 systemd-resolved[230]: Hostname conflict, changing published hostname from 'linux' to 'linux6'.
Apr 14 13:16:40.379017 ignition[710]: fetch-offline: fetch-offline passed
Apr 14 13:16:40.377712 unknown[710]: fetched user config from "qemu"
Apr 14 13:16:40.379189 ignition[710]: Ignition finished successfully
Apr 14 13:16:40.387751 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 14 13:16:40.391874 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 14 13:16:40.398870 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 14 13:16:40.460997 ignition[796]: Ignition 2.19.0
Apr 14 13:16:40.461019 ignition[796]: Stage: kargs
Apr 14 13:16:40.461176 ignition[796]: no configs at "/usr/lib/ignition/base.d"
Apr 14 13:16:40.461183 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 13:16:40.462704 ignition[796]: kargs: kargs passed
Apr 14 13:16:40.468859 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 14 13:16:40.462757 ignition[796]: Ignition finished successfully
Apr 14 13:16:40.485339 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 14 13:16:40.582179 kernel: hrtimer: interrupt took 15419296 ns
Apr 14 13:16:40.613372 ignition[804]: Ignition 2.19.0
Apr 14 13:16:40.613384 ignition[804]: Stage: disks
Apr 14 13:16:40.614274 ignition[804]: no configs at "/usr/lib/ignition/base.d"
Apr 14 13:16:40.614289 ignition[804]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 13:16:40.618144 ignition[804]: disks: disks passed
Apr 14 13:16:40.619020 ignition[804]: Ignition finished successfully
Apr 14 13:16:40.638185 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 14 13:16:40.640245 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 14 13:16:40.650476 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 14 13:16:40.655331 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 14 13:16:40.664358 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 14 13:16:40.665798 systemd[1]: Reached target basic.target - Basic System.
Apr 14 13:16:40.685299 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 14 13:16:40.768631 systemd-fsck[814]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 14 13:16:40.773866 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 14 13:16:40.795752 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 14 13:16:41.042637 kernel: EXT4-fs (vda9): mounted filesystem e02793bf-3e0d-4c7e-b11a-92c664da7ce3 r/w with ordered data mode. Quota mode: none.
Apr 14 13:16:41.046114 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 14 13:16:41.051466 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 14 13:16:41.078701 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 14 13:16:41.088132 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 14 13:16:41.093313 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 14 13:16:41.104758 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (822)
Apr 14 13:16:41.104790 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 14 13:16:41.104801 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 14 13:16:41.104812 kernel: BTRFS info (device vda6): using free space tree
Apr 14 13:16:41.093377 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 14 13:16:41.093398 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 14 13:16:41.121568 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 14 13:16:41.127990 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 14 13:16:41.144770 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 14 13:16:41.146801 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 14 13:16:41.239107 initrd-setup-root[846]: cut: /sysroot/etc/passwd: No such file or directory
Apr 14 13:16:41.244937 initrd-setup-root[853]: cut: /sysroot/etc/group: No such file or directory
Apr 14 13:16:41.249784 initrd-setup-root[860]: cut: /sysroot/etc/shadow: No such file or directory
Apr 14 13:16:41.257703 initrd-setup-root[867]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 14 13:16:41.343186 systemd-networkd[791]: eth0: Gained IPv6LL
Apr 14 13:16:41.398366 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 14 13:16:41.416201 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 14 13:16:41.421055 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 14 13:16:41.432121 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 14 13:16:41.435632 kernel: BTRFS info (device vda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 14 13:16:41.485268 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 14 13:16:41.506333 ignition[936]: INFO : Ignition 2.19.0
Apr 14 13:16:41.506333 ignition[936]: INFO : Stage: mount
Apr 14 13:16:41.510484 ignition[936]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 14 13:16:41.510484 ignition[936]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 13:16:41.529598 ignition[936]: INFO : mount: mount passed
Apr 14 13:16:41.529598 ignition[936]: INFO : Ignition finished successfully
Apr 14 13:16:41.532301 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 14 13:16:41.545428 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 14 13:16:42.068333 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 14 13:16:42.079794 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (949)
Apr 14 13:16:42.079902 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 14 13:16:42.079913 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 14 13:16:42.083342 kernel: BTRFS info (device vda6): using free space tree
Apr 14 13:16:42.089616 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 14 13:16:42.091743 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 14 13:16:42.376059 ignition[966]: INFO : Ignition 2.19.0
Apr 14 13:16:42.376059 ignition[966]: INFO : Stage: files
Apr 14 13:16:42.380672 ignition[966]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 14 13:16:42.380672 ignition[966]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 13:16:42.389644 ignition[966]: DEBUG : files: compiled without relabeling support, skipping
Apr 14 13:16:42.394440 ignition[966]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 14 13:16:42.397935 ignition[966]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 14 13:16:42.414584 ignition[966]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 14 13:16:42.418396 ignition[966]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 14 13:16:42.429344 unknown[966]: wrote ssh authorized keys file for user: core
Apr 14 13:16:42.432195 ignition[966]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 14 13:16:42.435813 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 14 13:16:42.435813 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 14 13:16:42.435813 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 14 13:16:42.435813 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 14 13:16:42.534472 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 14 13:16:42.795904 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 14 13:16:42.795904 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 14 13:16:42.806161 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 14 13:16:42.810723 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 14 13:16:42.810723 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 14 13:16:42.810723 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 14 13:16:42.810723 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 14 13:16:42.810723 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 14 13:16:42.810723 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 14 13:16:42.810723 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 14 13:16:42.810723 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 14 13:16:42.810723 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 14 13:16:42.810723 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 14 13:16:42.810723 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 14 13:16:42.810723 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 14 13:16:43.022016 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 14 13:16:45.849269 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 14 13:16:45.849269 ignition[966]: INFO : files: op(c): [started] processing unit "containerd.service"
Apr 14 13:16:45.856870 ignition[966]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 14 13:16:45.856870 ignition[966]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 14 13:16:45.856870 ignition[966]: INFO : files: op(c): [finished] processing unit "containerd.service"
Apr 14 13:16:45.856870 ignition[966]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Apr 14 13:16:45.856870 ignition[966]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 14 13:16:45.856870 ignition[966]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 14 13:16:45.856870 ignition[966]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Apr 14 13:16:45.856870 ignition[966]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Apr 14 13:16:45.856870 ignition[966]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 14 13:16:45.856870 ignition[966]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 14 13:16:45.856870 ignition[966]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Apr 14 13:16:45.856870 ignition[966]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Apr 14 13:16:45.901747 ignition[966]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 14 13:16:45.901747 ignition[966]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 14 13:16:45.901747 ignition[966]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 14 13:16:45.901747 ignition[966]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Apr 14 13:16:45.901747 ignition[966]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Apr 14 13:16:45.901747 ignition[966]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 14 13:16:45.901747 ignition[966]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 14 13:16:45.901747 ignition[966]: INFO : files: files passed
Apr 14 13:16:45.901747 ignition[966]: INFO : Ignition finished successfully
Apr 14 13:16:45.900441 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 14 13:16:45.947208 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 14 13:16:45.950698 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 14 13:16:45.956590 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 14 13:16:45.956691 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 14 13:16:45.972730 initrd-setup-root-after-ignition[994]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 14 13:16:45.978826 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 14 13:16:45.978826 initrd-setup-root-after-ignition[996]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 14 13:16:45.982588 initrd-setup-root-after-ignition[1000]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 14 13:16:45.981743 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 14 13:16:45.997341 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 14 13:16:46.022269 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 14 13:16:46.178400 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 14 13:16:46.180468 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 14 13:16:46.188898 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 14 13:16:46.192011 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 14 13:16:46.196252 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 14 13:16:46.203041 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 14 13:16:46.254025 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 14 13:16:46.268413 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 14 13:16:46.289623 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 14 13:16:46.289842 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 14 13:16:46.295636 systemd[1]: Stopped target timers.target - Timer Units.
Apr 14 13:16:46.300824 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 14 13:16:46.301417 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 14 13:16:46.309235 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 14 13:16:46.316809 systemd[1]: Stopped target basic.target - Basic System.
Apr 14 13:16:46.318816 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 14 13:16:46.323677 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 14 13:16:46.333841 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 14 13:16:46.334051 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 14 13:16:46.341052 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 14 13:16:46.345664 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 14 13:16:46.350301 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 14 13:16:46.354341 systemd[1]: Stopped target swap.target - Swaps.
Apr 14 13:16:46.358948 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 14 13:16:46.359107 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 14 13:16:46.368361 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 14 13:16:46.372192 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 14 13:16:46.377234 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 14 13:16:46.377493 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 14 13:16:46.381634 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 14 13:16:46.381754 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 14 13:16:46.389331 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 14 13:16:46.389592 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 14 13:16:46.396931 systemd[1]: Stopped target paths.target - Path Units.
Apr 14 13:16:46.400899 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 14 13:16:46.401193 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 14 13:16:46.405160 systemd[1]: Stopped target slices.target - Slice Units.
Apr 14 13:16:46.406976 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 14 13:16:46.416252 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 14 13:16:46.416618 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 14 13:16:46.420546 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 14 13:16:46.420627 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 14 13:16:46.423665 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 14 13:16:46.423769 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 14 13:16:46.431159 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 14 13:16:46.431423 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 14 13:16:46.452887 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 14 13:16:46.455639 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 14 13:16:46.455764 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 14 13:16:46.463135 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 14 13:16:46.466025 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 14 13:16:46.466144 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 14 13:16:46.471995 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 14 13:16:46.472122 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 14 13:16:46.492404 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 14 13:16:46.492595 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 14 13:16:46.501263 ignition[1020]: INFO : Ignition 2.19.0
Apr 14 13:16:46.501263 ignition[1020]: INFO : Stage: umount
Apr 14 13:16:46.501263 ignition[1020]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 14 13:16:46.501263 ignition[1020]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 13:16:46.501263 ignition[1020]: INFO : umount: umount passed
Apr 14 13:16:46.501263 ignition[1020]: INFO : Ignition finished successfully
Apr 14 13:16:46.500104 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 14 13:16:46.500200 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 14 13:16:46.522896 systemd[1]: Stopped target network.target - Network.
Apr 14 13:16:46.529148 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 14 13:16:46.529385 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 14 13:16:46.532391 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 14 13:16:46.533025 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 14 13:16:46.537835 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 14 13:16:46.537936 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 14 13:16:46.542027 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 14 13:16:46.542090 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 14 13:16:46.544627 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 14 13:16:46.548814 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 14 13:16:46.554591 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 14 13:16:46.555174 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 14 13:16:46.555287 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 14 13:16:46.558426 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 14 13:16:46.559224 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 14 13:16:46.563616 systemd-networkd[791]: eth0: DHCPv6 lease lost
Apr 14 13:16:46.570050 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 14 13:16:46.572398 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 14 13:16:46.585481 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 14 13:16:46.585698 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 14 13:16:46.591772 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 14 13:16:46.592022 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 14 13:16:46.621903 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 14 13:16:46.625351 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 14 13:16:46.625459 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 14 13:16:46.671011 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 14 13:16:46.671404 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 14 13:16:46.672842 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 14 13:16:46.672925 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 14 13:16:46.681792 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 14 13:16:46.681902 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 14 13:16:46.685437 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 14 13:16:46.706960 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 14 13:16:46.707221 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 14 13:16:46.714381 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 14 13:16:46.715180 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 14 13:16:46.721030 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 14 13:16:46.721574 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 14 13:16:46.728298 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 14 13:16:46.728728 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 14 13:16:46.731056 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 14 13:16:46.731103 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 14 13:16:46.739770 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 14 13:16:46.739849 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 14 13:16:46.748839 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 14 13:16:46.748949 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 14 13:16:46.771179 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 14 13:16:46.772363 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 14 13:16:46.772560 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 14 13:16:46.778811 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 14 13:16:46.778938 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 14 13:16:46.784041 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 14 13:16:46.784130 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 14 13:16:46.785633 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 14 13:16:46.794018 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 14 13:16:46.810001 systemd[1]: Switching root.
Apr 14 13:16:46.835581 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Apr 14 13:16:46.835665 systemd-journald[193]: Journal stopped
Apr 14 13:16:48.838170 kernel: SELinux: policy capability network_peer_controls=1
Apr 14 13:16:48.838278 kernel: SELinux: policy capability open_perms=1
Apr 14 13:16:48.838296 kernel: SELinux: policy capability extended_socket_class=1
Apr 14 13:16:48.838310 kernel: SELinux: policy capability always_check_network=0
Apr 14 13:16:48.838323 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 14 13:16:48.838335 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 14 13:16:48.838349 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 14 13:16:48.838366 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 14 13:16:48.838379 kernel: audit: type=1403 audit(1776172607.150:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 14 13:16:48.838395 systemd[1]: Successfully loaded SELinux policy in 104.368ms.
Apr 14 13:16:48.838422 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.932ms.
Apr 14 13:16:48.838438 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 14 13:16:48.838452 systemd[1]: Detected virtualization kvm.
Apr 14 13:16:48.838466 systemd[1]: Detected architecture x86-64.
Apr 14 13:16:48.838479 systemd[1]: Detected first boot.
Apr 14 13:16:48.838495 systemd[1]: Initializing machine ID from VM UUID.
Apr 14 13:16:48.838556 zram_generator::config[1080]: No configuration found.
Apr 14 13:16:48.838574 systemd[1]: Populated /etc with preset unit settings.
Apr 14 13:16:48.838589 systemd[1]: Queued start job for default target multi-user.target.
Apr 14 13:16:48.838604 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 14 13:16:48.838624 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 14 13:16:48.838638 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 14 13:16:48.838653 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 14 13:16:48.838667 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 14 13:16:48.838685 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 14 13:16:48.838698 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 14 13:16:48.838712 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 14 13:16:48.838755 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 14 13:16:48.838769 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 14 13:16:48.838782 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 14 13:16:48.838796 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 14 13:16:48.838809 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 14 13:16:48.838823 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 14 13:16:48.838838 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 14 13:16:48.838856 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 14 13:16:48.838895 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 14 13:16:48.838911 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 14 13:16:48.838924 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 14 13:16:48.838938 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 14 13:16:48.838951 systemd[1]: Reached target slices.target - Slice Units.
Apr 14 13:16:48.838966 systemd[1]: Reached target swap.target - Swaps.
Apr 14 13:16:48.838984 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 14 13:16:48.838998 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 14 13:16:48.839012 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 14 13:16:48.839025 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 14 13:16:48.839040 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 14 13:16:48.839054 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 14 13:16:48.839069 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
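Unit names like dev-disk-by\x2dlabel-OEM.device above come from systemd's path escaping (cf. systemd-escape --path): slashes at the ends are stripped, interior "/" becomes "-", and other characters outside a small safe set are hex-escaped as \xNN. A rough sketch of that mapping, as an assumption rather than a byte-for-byte reimplementation of systemd's C code:

```python
def systemd_escape_path(path: str) -> str:
    """Sketch of systemd's path escaping: strip outer slashes, map '/'
    to '-', keep alphanumerics plus ':' and '_' (and '.' after the first
    position), hex-escape everything else as \\xNN."""
    trimmed = path.strip("/")
    if not trimmed:          # the root path "/" itself escapes to "-"
        return "-"
    out = []
    for i, ch in enumerate(trimmed):
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch in ":_" or (ch == "." and i > 0):
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))
    return "".join(out)

# "/dev/disk/by-label/OEM" -> "dev-disk-by\x2dlabel-OEM", matching the
# dev-disk-by\x2dlabel-OEM.device unit the log is waiting for.
print(systemd_escape_path("/dev/disk/by-label/OEM") + ".device")
```

The same rule explains dev-ttyS0.device and the \x2d sequences in slice names such as system-serial\x2dgetty.slice, where the escaped character is the literal "-" in "serial-getty".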
Apr 14 13:16:48.839103 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 14 13:16:48.839119 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 14 13:16:48.839139 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 14 13:16:48.839154 systemd[1]: Mounting media.mount - External Media Directory...
Apr 14 13:16:48.839169 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 14 13:16:48.839184 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 14 13:16:48.839198 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 14 13:16:48.839213 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 14 13:16:48.839227 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 14 13:16:48.839243 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 14 13:16:48.839256 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 14 13:16:48.839274 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 14 13:16:48.839288 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 14 13:16:48.839308 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 14 13:16:48.839321 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 14 13:16:48.839334 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 14 13:16:48.839347 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 14 13:16:48.839361 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 14 13:16:48.839375 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Apr 14 13:16:48.839391 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Apr 14 13:16:48.839404 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 14 13:16:48.840647 kernel: ACPI: bus type drm_connector registered
Apr 14 13:16:48.840699 kernel: fuse: init (API version 7.39)
Apr 14 13:16:48.840714 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 14 13:16:48.840727 kernel: loop: module loaded
Apr 14 13:16:48.840740 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 14 13:16:48.840753 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 14 13:16:48.840820 systemd-journald[1180]: Collecting audit messages is disabled.
Apr 14 13:16:48.840858 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 14 13:16:48.841278 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 14 13:16:48.841353 systemd-journald[1180]: Journal started
Apr 14 13:16:48.841384 systemd-journald[1180]: Runtime Journal (/run/log/journal/4df645bf04b14f77ba1511da8543fb22) is 6.0M, max 48.4M, 42.3M free.
Apr 14 13:16:48.858668 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 14 13:16:48.863326 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 14 13:16:48.866900 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 14 13:16:48.869986 systemd[1]: Mounted media.mount - External Media Directory.
Apr 14 13:16:48.872759 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 14 13:16:48.877324 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 14 13:16:48.881160 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 14 13:16:48.883809 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 14 13:16:48.886454 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 14 13:16:48.893302 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 14 13:16:48.895005 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 14 13:16:48.900283 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 14 13:16:48.900605 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 14 13:16:48.906493 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 14 13:16:48.907456 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 14 13:16:48.911412 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 14 13:16:48.911695 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 14 13:16:48.917390 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 14 13:16:48.918232 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 14 13:16:48.922158 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 14 13:16:48.922988 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 14 13:16:48.929255 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 14 13:16:48.933436 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 14 13:16:48.937693 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 14 13:16:48.956635 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 14 13:16:48.970286 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 14 13:16:48.979057 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 14 13:16:48.982222 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 14 13:16:48.985279 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 14 13:16:48.994350 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 14 13:16:48.998125 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 14 13:16:49.003162 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 14 13:16:49.010060 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 14 13:16:49.022671 systemd-journald[1180]: Time spent on flushing to /var/log/journal/4df645bf04b14f77ba1511da8543fb22 is 26.325ms for 942 entries.
Apr 14 13:16:49.022671 systemd-journald[1180]: System Journal (/var/log/journal/4df645bf04b14f77ba1511da8543fb22) is 8.0M, max 195.6M, 187.6M free.
Apr 14 13:16:49.200039 systemd-journald[1180]: Received client request to flush runtime journal.
Apr 14 13:16:49.027958 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 14 13:16:49.061060 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 14 13:16:49.072273 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 14 13:16:49.081608 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 14 13:16:49.085188 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 14 13:16:49.192832 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 14 13:16:49.204007 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 14 13:16:49.217294 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 14 13:16:49.289150 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 14 13:16:49.293640 udevadm[1222]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 14 13:16:49.344728 systemd-tmpfiles[1218]: ACLs are not supported, ignoring.
Apr 14 13:16:49.345219 systemd-tmpfiles[1218]: ACLs are not supported, ignoring.
Apr 14 13:16:49.351341 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 14 13:16:49.355746 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 14 13:16:49.389171 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 14 13:16:49.639082 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 14 13:16:49.653454 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 14 13:16:49.684637 systemd-tmpfiles[1239]: ACLs are not supported, ignoring.
Apr 14 13:16:49.685070 systemd-tmpfiles[1239]: ACLs are not supported, ignoring.
Apr 14 13:16:49.695505 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 14 13:16:51.387169 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 14 13:16:51.398927 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 14 13:16:51.466621 systemd-udevd[1245]: Using default interface naming scheme 'v255'.
Apr 14 13:16:51.600590 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 14 13:16:51.628459 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 14 13:16:51.692802 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 14 13:16:51.708139 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1247)
Apr 14 13:16:51.751869 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 14 13:16:51.771743 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Apr 14 13:16:52.220552 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Apr 14 13:16:52.232560 kernel: ACPI: button: Power Button [PWRF]
Apr 14 13:16:52.239192 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 14 13:16:52.254338 systemd-networkd[1253]: lo: Link UP
Apr 14 13:16:52.254367 systemd-networkd[1253]: lo: Gained carrier
Apr 14 13:16:52.256383 systemd-networkd[1253]: Enumeration completed
Apr 14 13:16:52.272064 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 14 13:16:52.275203 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 14 13:16:52.275405 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 14 13:16:52.393813 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Apr 14 13:16:52.256788 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 14 13:16:52.258186 systemd-networkd[1253]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 14 13:16:52.258189 systemd-networkd[1253]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 14 13:16:52.276005 systemd-networkd[1253]: eth0: Link UP
Apr 14 13:16:52.276009 systemd-networkd[1253]: eth0: Gained carrier
Apr 14 13:16:52.276047 systemd-networkd[1253]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 14 13:16:52.282325 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 14 13:16:52.322991 systemd-networkd[1253]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 14 13:16:52.500967 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 14 13:16:52.569744 kernel: mousedev: PS/2 mouse device common for all mice
Apr 14 13:16:52.681916 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 14 13:16:52.793047 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 14 13:16:52.810708 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 14 13:16:52.834043 lvm[1290]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 14 13:16:52.869064 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 14 13:16:52.877616 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 14 13:16:52.895409 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 14 13:16:52.951668 lvm[1293]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 14 13:16:53.128130 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 14 13:16:53.137749 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
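The DHCPv4 lease on eth0 comes from matching the catch-all /usr/lib/systemd/network/zz-default.network, whose "zz-" prefix sorts it last so any more specific .network file wins. A sketch of what such a catch-all unit typically contains (the exact contents of Flatcar's shipped file are an assumption here, not taken from the log):

```ini
; Illustrative catch-all .network unit in the spirit of zz-default.network.
[Match]
Name=*

[Network]
DHCP=yes
```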
Apr 14 13:16:53.144402 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 14 13:16:53.144625 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 14 13:16:53.148959 systemd[1]: Reached target machines.target - Containers.
Apr 14 13:16:53.156458 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 14 13:16:53.182387 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 14 13:16:53.194948 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 14 13:16:53.199236 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 14 13:16:53.208148 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 14 13:16:53.302234 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 14 13:16:53.316926 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 14 13:16:53.324329 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 14 13:16:53.354997 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 14 13:16:53.370789 kernel: loop0: detected capacity change from 0 to 140768
Apr 14 13:16:53.371676 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 14 13:16:53.376213 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 14 13:16:53.407740 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 14 13:16:53.446733 kernel: loop1: detected capacity change from 0 to 142488
Apr 14 13:16:53.639352 kernel: loop2: detected capacity change from 0 to 228704
Apr 14 13:16:53.699757 systemd-networkd[1253]: eth0: Gained IPv6LL
Apr 14 13:16:53.730484 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 14 13:16:53.800975 kernel: loop3: detected capacity change from 0 to 140768
Apr 14 13:16:53.839294 kernel: loop4: detected capacity change from 0 to 142488
Apr 14 13:16:53.882722 kernel: loop5: detected capacity change from 0 to 228704
Apr 14 13:16:53.992098 (sd-merge)[1314]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 14 13:16:54.006993 (sd-merge)[1314]: Merged extensions into '/usr'.
Apr 14 13:16:54.095276 systemd[1]: Reloading requested from client PID 1302 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 14 13:16:54.095449 systemd[1]: Reloading...
Apr 14 13:16:54.209661 zram_generator::config[1346]: No configuration found.
Apr 14 13:16:54.574003 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 14 13:16:54.867042 systemd[1]: Reloading finished in 768 ms.
Apr 14 13:16:54.915210 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 14 13:16:54.934439 ldconfig[1298]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 14 13:16:54.978640 systemd[1]: Starting ensure-sysext.service...
Apr 14 13:16:54.981782 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 14 13:16:54.984303 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 14 13:16:54.993128 systemd[1]: Reloading requested from client PID 1385 ('systemctl') (unit ensure-sysext.service)...
Apr 14 13:16:54.993272 systemd[1]: Reloading...
Apr 14 13:16:55.273840 zram_generator::config[1416]: No configuration found.
Apr 14 13:16:55.284232 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 14 13:16:55.285338 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 14 13:16:55.286050 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 14 13:16:55.286229 systemd-tmpfiles[1387]: ACLs are not supported, ignoring.
Apr 14 13:16:55.286287 systemd-tmpfiles[1387]: ACLs are not supported, ignoring.
Apr 14 13:16:55.288209 systemd-tmpfiles[1387]: Detected autofs mount point /boot during canonicalization of boot.
Apr 14 13:16:55.288234 systemd-tmpfiles[1387]: Skipping /boot
Apr 14 13:16:55.304220 systemd-tmpfiles[1387]: Detected autofs mount point /boot during canonicalization of boot.
Apr 14 13:16:55.304260 systemd-tmpfiles[1387]: Skipping /boot
Apr 14 13:16:55.677767 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 14 13:16:55.743049 systemd[1]: Reloading finished in 749 ms.
Apr 14 13:16:55.766816 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 14 13:16:55.805833 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 14 13:16:55.810575 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 14 13:16:55.822856 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 14 13:16:55.899576 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 14 13:16:55.904690 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 14 13:16:55.934485 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 14 13:16:55.939951 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 14 13:16:55.940148 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 14 13:16:55.953358 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 14 13:16:55.977381 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 14 13:16:56.003891 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 14 13:16:56.006199 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 14 13:16:56.013468 augenrules[1489]: No rules
Apr 14 13:16:56.026349 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 14 13:16:56.029709 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 14 13:16:56.031936 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 14 13:16:56.034984 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 14 13:16:56.044877 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 14 13:16:56.048483 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 14 13:16:56.048662 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 14 13:16:56.052842 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 14 13:16:56.053013 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 14 13:16:56.056366 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 14 13:16:56.056690 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 14 13:16:56.071678 systemd-resolved[1467]: Positive Trust Anchors:
Apr 14 13:16:56.071690 systemd-resolved[1467]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 14 13:16:56.071723 systemd-resolved[1467]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 14 13:16:56.087057 systemd-resolved[1467]: Defaulting to hostname 'linux'.
Apr 14 13:16:56.095323 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 14 13:16:56.095785 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 14 13:16:56.252252 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 14 13:16:56.257675 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 14 13:16:56.262972 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 14 13:16:56.267585 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 14 13:16:56.270312 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 14 13:16:56.270572 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 14 13:16:56.270701 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 14 13:16:56.271756 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 14 13:16:56.276161 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 14 13:16:56.292371 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 14 13:16:56.292589 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 14 13:16:56.296156 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 14 13:16:56.296294 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 14 13:16:56.300121 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 14 13:16:56.301225 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 14 13:16:56.305023 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 14 13:16:56.305188 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 14 13:16:56.311490 systemd[1]: Finished ensure-sysext.service.
Apr 14 13:16:56.323267 systemd[1]: Reached target network.target - Network.
Apr 14 13:16:56.328631 systemd[1]: Reached target network-online.target - Network is Online.
Apr 14 13:16:56.332776 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 14 13:16:56.351156 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 14 13:16:56.351433 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 14 13:16:56.363702 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 14 13:16:56.445863 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 14 13:16:57.011799 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 14 13:16:57.011834 systemd-timesyncd[1522]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 14 13:16:57.013068 systemd-resolved[1467]: Clock change detected. Flushing caches.
Apr 14 13:16:57.014367 systemd-timesyncd[1522]: Initial clock synchronization to Tue 2026-04-14 13:16:57.011763 UTC.
Apr 14 13:16:57.014432 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 14 13:16:57.018700 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 14 13:16:57.024001 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 14 13:16:57.029578 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 14 13:16:57.030008 systemd[1]: Reached target paths.target - Path Units.
Apr 14 13:16:57.032221 systemd[1]: Reached target time-set.target - System Time Set.
Apr 14 13:16:57.036277 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 14 13:16:57.039991 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 14 13:16:57.042937 systemd[1]: Reached target timers.target - Timer Units.
Apr 14 13:16:57.046128 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 14 13:16:57.050704 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 14 13:16:57.059701 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 14 13:16:57.064631 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 14 13:16:57.068419 systemd[1]: Reached target sockets.target - Socket Units.
Apr 14 13:16:57.071632 systemd[1]: Reached target basic.target - Basic System.
Apr 14 13:16:57.074671 systemd[1]: System is tainted: cgroupsv1
Apr 14 13:16:57.074738 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 14 13:16:57.074760 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 14 13:16:57.076222 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 14 13:16:57.082239 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Apr 14 13:16:57.091160 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 14 13:16:57.094600 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 14 13:16:57.098256 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 14 13:16:57.101207 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 14 13:16:57.102712 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 14 13:16:57.104847 jq[1530]: false
Apr 14 13:16:57.110383 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 14 13:16:57.115231 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 14 13:16:57.117897 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 14 13:16:57.126633 extend-filesystems[1532]: Found loop3
Apr 14 13:16:57.126633 extend-filesystems[1532]: Found loop4
Apr 14 13:16:57.139912 extend-filesystems[1532]: Found loop5
Apr 14 13:16:57.139912 extend-filesystems[1532]: Found sr0
Apr 14 13:16:57.139912 extend-filesystems[1532]: Found vda
Apr 14 13:16:57.139912 extend-filesystems[1532]: Found vda1
Apr 14 13:16:57.139912 extend-filesystems[1532]: Found vda2
Apr 14 13:16:57.139912 extend-filesystems[1532]: Found vda3
Apr 14 13:16:57.139912 extend-filesystems[1532]: Found usr
Apr 14 13:16:57.139912 extend-filesystems[1532]: Found vda4
Apr 14 13:16:57.139912 extend-filesystems[1532]: Found vda6
Apr 14 13:16:57.139912 extend-filesystems[1532]: Found vda7
Apr 14 13:16:57.139912 extend-filesystems[1532]: Found vda9
Apr 14 13:16:57.139912 extend-filesystems[1532]: Checking size of /dev/vda9
Apr 14 13:16:57.130426 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 14 13:16:57.164052 dbus-daemon[1528]: [system] SELinux support is enabled
Apr 14 13:16:57.136890 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 14 13:16:57.141520 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 14 13:16:57.144457 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 14 13:16:57.148069 systemd[1]: Starting update-engine.service - Update Engine...
Apr 14 13:16:57.171659 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 14 13:16:57.175906 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 14 13:16:57.181043 jq[1554]: true
Apr 14 13:16:57.196173 extend-filesystems[1532]: Resized partition /dev/vda9
Apr 14 13:16:57.199270 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 14 13:16:57.209345 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 14 13:16:57.210071 extend-filesystems[1568]: resize2fs 1.47.1 (20-May-2024)
Apr 14 13:16:57.242162 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 14 13:16:57.250585 systemd[1]: motdgen.service: Deactivated successfully.
Apr 14 13:16:57.252677 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 14 13:16:57.272130 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1571)
Apr 14 13:16:57.281357 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 14 13:16:57.295522 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 14 13:16:57.295733 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 14 13:16:57.397445 systemd-logind[1549]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 14 13:16:57.397457 systemd-logind[1549]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 14 13:16:57.399345 systemd-logind[1549]: New seat seat0.
Apr 14 13:16:57.402876 update_engine[1551]: I20260414 13:16:57.401880 1551 main.cc:92] Flatcar Update Engine starting
Apr 14 13:16:57.435241 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 14 13:16:57.435273 extend-filesystems[1568]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 14 13:16:57.435273 extend-filesystems[1568]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 14 13:16:57.435273 extend-filesystems[1568]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 14 13:16:57.449733 update_engine[1551]: I20260414 13:16:57.407132 1551 update_check_scheduler.cc:74] Next update check in 10m3s
Apr 14 13:16:57.410406 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 14 13:16:57.449799 jq[1580]: true
Apr 14 13:16:57.450008 tar[1579]: linux-amd64/LICENSE
Apr 14 13:16:57.450008 tar[1579]: linux-amd64/helm
Apr 14 13:16:57.450225 extend-filesystems[1532]: Resized filesystem in /dev/vda9
Apr 14 13:16:57.412387 (ntainerd)[1593]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 14 13:16:57.454804 dbus-daemon[1528]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 14 13:16:57.425979 systemd[1]: coreos-metadata.service: Deactivated successfully.
Apr 14 13:16:57.426796 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Apr 14 13:16:57.430576 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 14 13:16:57.430758 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 14 13:16:57.459053 systemd[1]: Started update-engine.service - Update Engine.
Apr 14 13:16:57.467274 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 14 13:16:57.468283 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 14 13:16:57.468573 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 14 13:16:57.473291 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 14 13:16:57.473904 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 14 13:16:57.477531 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 14 13:16:57.485317 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 14 13:16:57.510451 bash[1624]: Updated "/home/core/.ssh/authorized_keys"
Apr 14 13:16:57.510955 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 14 13:16:57.517533 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 14 13:16:57.645224 sshd_keygen[1559]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 14 13:16:57.662921 locksmithd[1625]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 14 13:16:57.863236 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 14 13:16:57.928463 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 14 13:16:57.946015 systemd[1]: issuegen.service: Deactivated successfully.
Apr 14 13:16:57.946284 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 14 13:16:58.180509 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 14 13:16:58.306001 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 14 13:16:58.459896 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 14 13:16:58.468792 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 14 13:16:58.471123 systemd[1]: Reached target getty.target - Login Prompts.
Apr 14 13:16:59.078315 containerd[1593]: time="2026-04-14T13:16:59.075321649Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 14 13:16:59.121900 containerd[1593]: time="2026-04-14T13:16:59.121159665Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 14 13:16:59.201416 containerd[1593]: time="2026-04-14T13:16:59.201034823Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 14 13:16:59.203310 containerd[1593]: time="2026-04-14T13:16:59.202633604Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 14 13:16:59.205118 containerd[1593]: time="2026-04-14T13:16:59.203951013Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 14 13:16:59.205118 containerd[1593]: time="2026-04-14T13:16:59.204812596Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 14 13:16:59.205118 containerd[1593]: time="2026-04-14T13:16:59.204844583Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 14 13:16:59.205118 containerd[1593]: time="2026-04-14T13:16:59.204957597Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 14 13:16:59.205118 containerd[1593]: time="2026-04-14T13:16:59.204969354Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 14 13:16:59.205557 containerd[1593]: time="2026-04-14T13:16:59.205512034Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 14 13:16:59.205557 containerd[1593]: time="2026-04-14T13:16:59.205543061Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 14 13:16:59.205624 containerd[1593]: time="2026-04-14T13:16:59.205556783Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 14 13:16:59.205624 containerd[1593]: time="2026-04-14T13:16:59.205564651Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 14 13:16:59.205826 containerd[1593]: time="2026-04-14T13:16:59.205798324Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 14 13:16:59.206384 containerd[1593]: time="2026-04-14T13:16:59.206348817Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 14 13:16:59.206564 containerd[1593]: time="2026-04-14T13:16:59.206526971Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 14 13:16:59.206564 containerd[1593]: time="2026-04-14T13:16:59.206554420Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 14 13:16:59.206696 containerd[1593]: time="2026-04-14T13:16:59.206664223Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 14 13:16:59.206789 containerd[1593]: time="2026-04-14T13:16:59.206758272Z" level=info msg="metadata content store policy set" policy=shared
Apr 14 13:16:59.219893 containerd[1593]: time="2026-04-14T13:16:59.219682743Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 14 13:16:59.220416 containerd[1593]: time="2026-04-14T13:16:59.220044536Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 14 13:16:59.220416 containerd[1593]: time="2026-04-14T13:16:59.220160431Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 14 13:16:59.220416 containerd[1593]: time="2026-04-14T13:16:59.220175529Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 14 13:16:59.220416 containerd[1593]: time="2026-04-14T13:16:59.220219293Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 14 13:16:59.221538 containerd[1593]: time="2026-04-14T13:16:59.221377743Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 14 13:16:59.222402 containerd[1593]: time="2026-04-14T13:16:59.222341038Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 14 13:16:59.222654 containerd[1593]: time="2026-04-14T13:16:59.222586694Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 14 13:16:59.222654 containerd[1593]: time="2026-04-14T13:16:59.222635533Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 14 13:16:59.222707 containerd[1593]: time="2026-04-14T13:16:59.222667921Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 14 13:16:59.222707 containerd[1593]: time="2026-04-14T13:16:59.222698001Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 14 13:16:59.222783 containerd[1593]: time="2026-04-14T13:16:59.222714188Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 14 13:16:59.222783 containerd[1593]: time="2026-04-14T13:16:59.222727659Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 14 13:16:59.222783 containerd[1593]: time="2026-04-14T13:16:59.222772733Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 14 13:16:59.222827 containerd[1593]: time="2026-04-14T13:16:59.222804057Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 14 13:16:59.222827 containerd[1593]: time="2026-04-14T13:16:59.222820266Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 14 13:16:59.222852 containerd[1593]: time="2026-04-14T13:16:59.222833473Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 14 13:16:59.222852 containerd[1593]: time="2026-04-14T13:16:59.222846112Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 14 13:16:59.222916 containerd[1593]: time="2026-04-14T13:16:59.222885340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 14 13:16:59.222966 containerd[1593]: time="2026-04-14T13:16:59.222938926Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 14 13:16:59.222966 containerd[1593]: time="2026-04-14T13:16:59.222954454Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 14 13:16:59.223001 containerd[1593]: time="2026-04-14T13:16:59.222969722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 14 13:16:59.223001 containerd[1593]: time="2026-04-14T13:16:59.222983666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 14 13:16:59.223025 containerd[1593]: time="2026-04-14T13:16:59.222998490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 14 13:16:59.223025 containerd[1593]: time="2026-04-14T13:16:59.223012517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 14 13:16:59.223054 containerd[1593]: time="2026-04-14T13:16:59.223044415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 14 13:16:59.223198 containerd[1593]: time="2026-04-14T13:16:59.223112270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 14 13:16:59.223216 containerd[1593]: time="2026-04-14T13:16:59.223202904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 14 13:16:59.223229 containerd[1593]: time="2026-04-14T13:16:59.223218395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 14 13:16:59.223243 containerd[1593]: time="2026-04-14T13:16:59.223231511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 14 13:16:59.223262 containerd[1593]: time="2026-04-14T13:16:59.223244356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 14 13:16:59.223276 containerd[1593]: time="2026-04-14T13:16:59.223261490Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 14 13:16:59.223457 containerd[1593]: time="2026-04-14T13:16:59.223373348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 14 13:16:59.223457 containerd[1593]: time="2026-04-14T13:16:59.223403756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 14 13:16:59.223457 containerd[1593]: time="2026-04-14T13:16:59.223412686Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 14 13:16:59.223520 containerd[1593]: time="2026-04-14T13:16:59.223486009Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 14 13:16:59.223571 containerd[1593]: time="2026-04-14T13:16:59.223531358Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 14 13:16:59.223571 containerd[1593]: time="2026-04-14T13:16:59.223553169Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 14 13:16:59.223640 containerd[1593]: time="2026-04-14T13:16:59.223601270Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 14 13:16:59.223640 containerd[1593]: time="2026-04-14T13:16:59.223611601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 14 13:16:59.223761 containerd[1593]: time="2026-04-14T13:16:59.223705406Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 14 13:16:59.224050 containerd[1593]: time="2026-04-14T13:16:59.224012894Z" level=info msg="NRI interface is disabled by configuration."
Apr 14 13:16:59.224050 containerd[1593]: time="2026-04-14T13:16:59.224037105Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 14 13:16:59.225206 containerd[1593]: time="2026-04-14T13:16:59.225124637Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 14 13:16:59.225427 containerd[1593]: time="2026-04-14T13:16:59.225243450Z" level=info msg="Connect containerd service"
Apr 14 13:16:59.225427 containerd[1593]: time="2026-04-14T13:16:59.225320737Z" level=info msg="using legacy CRI server"
Apr 14 13:16:59.225427 containerd[1593]: time="2026-04-14T13:16:59.225347199Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 14 13:16:59.225654 containerd[1593]: time="2026-04-14T13:16:59.225615613Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 14 13:16:59.227017 containerd[1593]: time="2026-04-14T13:16:59.226988966Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 14 13:16:59.227431 containerd[1593]: time="2026-04-14T13:16:59.227285138Z" level=info msg="Start subscribing containerd event"
Apr 14 13:16:59.227621 containerd[1593]: time="2026-04-14T13:16:59.227470428Z" level=info msg="Start recovering state"
Apr 14 13:16:59.227644 containerd[1593]: time="2026-04-14T13:16:59.227620105Z" level=info msg="Start event monitor"
Apr 14 13:16:59.228381 containerd[1593]: time="2026-04-14T13:16:59.227676460Z"
level=info msg="Start snapshots syncer" Apr 14 13:16:59.228381 containerd[1593]: time="2026-04-14T13:16:59.227695308Z" level=info msg="Start cni network conf syncer for default" Apr 14 13:16:59.228381 containerd[1593]: time="2026-04-14T13:16:59.227712467Z" level=info msg="Start streaming server" Apr 14 13:16:59.229061 containerd[1593]: time="2026-04-14T13:16:59.229007093Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 14 13:16:59.229119 containerd[1593]: time="2026-04-14T13:16:59.229068469Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 14 13:16:59.229334 systemd[1]: Started containerd.service - containerd container runtime. Apr 14 13:16:59.229529 containerd[1593]: time="2026-04-14T13:16:59.229482817Z" level=info msg="containerd successfully booted in 0.157353s" Apr 14 13:16:59.968460 tar[1579]: linux-amd64/README.md Apr 14 13:17:00.009055 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 14 13:17:01.760808 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 13:17:01.764895 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 14 13:17:01.767157 systemd[1]: Startup finished in 11.655s (kernel) + 14.155s (userspace) = 25.811s. 
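The `failed to load cni during init ... no network config found in /etc/cni/net.d` error logged by containerd above is expected on first boot: the CRI plugin starts before any CNI add-on has installed a network config. A minimal bridge-style CNI config is sketched below; the file name, bridge name, and subnet are assumptions (real clusters get this file from their CNI add-on), and the script defaults to a local directory instead of the real `/etc/cni/net.d` so it can be run harmlessly.

```shell
#!/bin/sh
# Sketch of the CNI network config containerd's CRI plugin looks for.
# Real location is /etc/cni/net.d; we default to a local dir for safety.
# Bridge name and subnet are illustrative assumptions.
conf_dir="${CNI_CONF_DIR:-./cni-net.d}"
mkdir -p "$conf_dir"
cat > "$conf_dir/10-bridge.conf" <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "bridge-net",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.88.0.0/16"
  }
}
EOF
echo "wrote $conf_dir/10-bridge.conf"
```

Once a file like this exists in the configured `NetworkPluginConfDir` (shown as `/etc/cni/net.d` in the CRI plugin config above), the "cni plugin not initialized" condition clears on the next config sync.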
Apr 14 13:17:01.782868 (kubelet)[1674]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 13:17:03.358366 kubelet[1674]: E0414 13:17:03.358066 1674 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 13:17:03.360878 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 13:17:03.361043 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 14 13:17:03.865327 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 14 13:17:03.880834 systemd[1]: Started sshd@0-10.0.0.15:22-10.0.0.1:60640.service - OpenSSH per-connection server daemon (10.0.0.1:60640). Apr 14 13:17:03.983951 sshd[1688]: Accepted publickey for core from 10.0.0.1 port 60640 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s Apr 14 13:17:03.986827 sshd[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:17:04.017133 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 14 13:17:04.027393 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 14 13:17:04.029849 systemd-logind[1549]: New session 1 of user core. Apr 14 13:17:04.047987 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 14 13:17:04.073031 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 14 13:17:04.080160 (systemd)[1694]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 14 13:17:04.204460 systemd[1694]: Queued start job for default target default.target. 
Apr 14 13:17:04.204818 systemd[1694]: Created slice app.slice - User Application Slice. Apr 14 13:17:04.204832 systemd[1694]: Reached target paths.target - Paths. Apr 14 13:17:04.204841 systemd[1694]: Reached target timers.target - Timers. Apr 14 13:17:04.217968 systemd[1694]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 14 13:17:04.229249 systemd[1694]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 14 13:17:04.229303 systemd[1694]: Reached target sockets.target - Sockets. Apr 14 13:17:04.229313 systemd[1694]: Reached target basic.target - Basic System. Apr 14 13:17:04.229374 systemd[1694]: Reached target default.target - Main User Target. Apr 14 13:17:04.229401 systemd[1694]: Startup finished in 130ms. Apr 14 13:17:04.229712 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 14 13:17:04.231003 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 14 13:17:04.304053 systemd[1]: Started sshd@1-10.0.0.15:22-10.0.0.1:60644.service - OpenSSH per-connection server daemon (10.0.0.1:60644). Apr 14 13:17:04.359188 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 60644 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s Apr 14 13:17:04.360917 sshd[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:17:04.371412 systemd-logind[1549]: New session 2 of user core. Apr 14 13:17:04.392760 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 14 13:17:04.470641 sshd[1706]: pam_unix(sshd:session): session closed for user core Apr 14 13:17:04.488851 systemd[1]: Started sshd@2-10.0.0.15:22-10.0.0.1:60658.service - OpenSSH per-connection server daemon (10.0.0.1:60658). Apr 14 13:17:04.490973 systemd[1]: sshd@1-10.0.0.15:22-10.0.0.1:60644.service: Deactivated successfully. Apr 14 13:17:04.496368 systemd[1]: session-2.scope: Deactivated successfully. Apr 14 13:17:04.504358 systemd-logind[1549]: Session 2 logged out. Waiting for processes to exit. 
Apr 14 13:17:04.508911 systemd-logind[1549]: Removed session 2. Apr 14 13:17:04.530580 sshd[1711]: Accepted publickey for core from 10.0.0.1 port 60658 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s Apr 14 13:17:04.534182 sshd[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:17:04.542626 systemd-logind[1549]: New session 3 of user core. Apr 14 13:17:04.555168 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 14 13:17:04.613580 sshd[1711]: pam_unix(sshd:session): session closed for user core Apr 14 13:17:04.676390 systemd[1]: Started sshd@3-10.0.0.15:22-10.0.0.1:60672.service - OpenSSH per-connection server daemon (10.0.0.1:60672). Apr 14 13:17:04.676995 systemd[1]: sshd@2-10.0.0.15:22-10.0.0.1:60658.service: Deactivated successfully. Apr 14 13:17:04.680343 systemd[1]: session-3.scope: Deactivated successfully. Apr 14 13:17:04.683727 systemd-logind[1549]: Session 3 logged out. Waiting for processes to exit. Apr 14 13:17:04.685459 systemd-logind[1549]: Removed session 3. Apr 14 13:17:04.726860 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 60672 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s Apr 14 13:17:04.728499 sshd[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:17:04.736719 systemd-logind[1549]: New session 4 of user core. Apr 14 13:17:04.752340 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 14 13:17:04.814464 sshd[1719]: pam_unix(sshd:session): session closed for user core Apr 14 13:17:04.825126 systemd[1]: Started sshd@4-10.0.0.15:22-10.0.0.1:60686.service - OpenSSH per-connection server daemon (10.0.0.1:60686). Apr 14 13:17:04.825577 systemd[1]: sshd@3-10.0.0.15:22-10.0.0.1:60672.service: Deactivated successfully. Apr 14 13:17:04.827954 systemd[1]: session-4.scope: Deactivated successfully. Apr 14 13:17:04.828856 systemd-logind[1549]: Session 4 logged out. Waiting for processes to exit. 
Apr 14 13:17:04.830476 systemd-logind[1549]: Removed session 4. Apr 14 13:17:04.869370 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 60686 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s Apr 14 13:17:04.873315 sshd[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:17:04.878692 systemd-logind[1549]: New session 5 of user core. Apr 14 13:17:04.890757 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 14 13:17:04.975742 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 14 13:17:04.976140 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 14 13:17:05.000239 sudo[1734]: pam_unix(sudo:session): session closed for user root Apr 14 13:17:05.002680 sshd[1727]: pam_unix(sshd:session): session closed for user core Apr 14 13:17:05.027667 systemd[1]: Started sshd@5-10.0.0.15:22-10.0.0.1:60694.service - OpenSSH per-connection server daemon (10.0.0.1:60694). Apr 14 13:17:05.031311 systemd[1]: sshd@4-10.0.0.15:22-10.0.0.1:60686.service: Deactivated successfully. Apr 14 13:17:05.033025 systemd[1]: session-5.scope: Deactivated successfully. Apr 14 13:17:05.033849 systemd-logind[1549]: Session 5 logged out. Waiting for processes to exit. Apr 14 13:17:05.035670 systemd-logind[1549]: Removed session 5. Apr 14 13:17:05.066625 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 60694 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s Apr 14 13:17:05.067829 sshd[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:17:05.075807 systemd-logind[1549]: New session 6 of user core. Apr 14 13:17:05.092815 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 14 13:17:05.153433 sudo[1744]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 14 13:17:05.153681 sudo[1744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 14 13:17:05.158222 sudo[1744]: pam_unix(sudo:session): session closed for user root Apr 14 13:17:05.167774 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 14 13:17:05.167992 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 14 13:17:05.196188 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 14 13:17:05.196786 auditctl[1747]: No rules Apr 14 13:17:05.197313 systemd[1]: audit-rules.service: Deactivated successfully. Apr 14 13:17:05.197612 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 14 13:17:05.203208 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 14 13:17:05.243779 augenrules[1766]: No rules Apr 14 13:17:05.244844 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 14 13:17:05.246347 sudo[1743]: pam_unix(sudo:session): session closed for user root Apr 14 13:17:05.248204 sshd[1736]: pam_unix(sshd:session): session closed for user core Apr 14 13:17:05.263190 systemd[1]: Started sshd@6-10.0.0.15:22-10.0.0.1:60704.service - OpenSSH per-connection server daemon (10.0.0.1:60704). Apr 14 13:17:05.266642 systemd[1]: sshd@5-10.0.0.15:22-10.0.0.1:60694.service: Deactivated successfully. Apr 14 13:17:05.268280 systemd[1]: session-6.scope: Deactivated successfully. Apr 14 13:17:05.268895 systemd-logind[1549]: Session 6 logged out. Waiting for processes to exit. Apr 14 13:17:05.270459 systemd-logind[1549]: Removed session 6. 
Apr 14 13:17:05.313224 sshd[1772]: Accepted publickey for core from 10.0.0.1 port 60704 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s Apr 14 13:17:05.314730 sshd[1772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:17:05.320366 systemd-logind[1549]: New session 7 of user core. Apr 14 13:17:05.328012 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 14 13:17:05.391272 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 14 13:17:05.391595 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 14 13:17:05.781519 (dockerd)[1797]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 14 13:17:05.784953 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 14 13:17:06.151785 dockerd[1797]: time="2026-04-14T13:17:06.151176458Z" level=info msg="Starting up" Apr 14 13:17:06.396468 dockerd[1797]: time="2026-04-14T13:17:06.396206013Z" level=info msg="Loading containers: start." Apr 14 13:17:06.547425 kernel: Initializing XFRM netlink socket Apr 14 13:17:06.658609 systemd-networkd[1253]: docker0: Link UP Apr 14 13:17:06.692999 dockerd[1797]: time="2026-04-14T13:17:06.692700656Z" level=info msg="Loading containers: done." 
Apr 14 13:17:06.769994 dockerd[1797]: time="2026-04-14T13:17:06.767487635Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 14 13:17:06.769994 dockerd[1797]: time="2026-04-14T13:17:06.767863538Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 14 13:17:06.769994 dockerd[1797]: time="2026-04-14T13:17:06.768360188Z" level=info msg="Daemon has completed initialization" Apr 14 13:17:06.820008 dockerd[1797]: time="2026-04-14T13:17:06.819309189Z" level=info msg="API listen on /run/docker.sock" Apr 14 13:17:06.820376 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 14 13:17:07.373574 containerd[1593]: time="2026-04-14T13:17:07.373295282Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\"" Apr 14 13:17:08.227603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3809232048.mount: Deactivated successfully. 
Apr 14 13:17:09.565448 containerd[1593]: time="2026-04-14T13:17:09.565063269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:17:09.565448 containerd[1593]: time="2026-04-14T13:17:09.565334014Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.10: active requests=0, bytes read=29988857" Apr 14 13:17:09.570264 containerd[1593]: time="2026-04-14T13:17:09.570206677Z" level=info msg="ImageCreate event name:\"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:17:09.574750 containerd[1593]: time="2026-04-14T13:17:09.574517293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:17:09.576254 containerd[1593]: time="2026-04-14T13:17:09.576161160Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.10\" with image id \"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\", size \"29986018\" in 2.202690166s" Apr 14 13:17:09.576254 containerd[1593]: time="2026-04-14T13:17:09.576237048Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\" returns image reference \"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\"" Apr 14 13:17:09.581011 containerd[1593]: time="2026-04-14T13:17:09.579379198Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\"" Apr 14 13:17:13.149033 containerd[1593]: time="2026-04-14T13:17:13.148712053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.10\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:17:13.150419 containerd[1593]: time="2026-04-14T13:17:13.149631708Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.10: active requests=0, bytes read=26021841" Apr 14 13:17:13.152503 containerd[1593]: time="2026-04-14T13:17:13.152195567Z" level=info msg="ImageCreate event name:\"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:17:13.165032 containerd[1593]: time="2026-04-14T13:17:13.164535321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:17:13.196764 containerd[1593]: time="2026-04-14T13:17:13.196419691Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.10\" with image id \"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\", size \"27552094\" in 3.613453066s" Apr 14 13:17:13.196764 containerd[1593]: time="2026-04-14T13:17:13.196483345Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\" returns image reference \"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\"" Apr 14 13:17:13.197921 containerd[1593]: time="2026-04-14T13:17:13.197720420Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\"" Apr 14 13:17:13.423703 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 14 13:17:13.435218 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 13:17:14.638466 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 14 13:17:14.671154 (kubelet)[2023]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 13:17:15.604965 kubelet[2023]: E0414 13:17:15.604790 2023 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 13:17:15.609449 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 13:17:15.609851 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 14 13:17:16.548034 containerd[1593]: time="2026-04-14T13:17:16.547756377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:17:16.551041 containerd[1593]: time="2026-04-14T13:17:16.548493783Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.10: active requests=0, bytes read=20162685" Apr 14 13:17:16.551041 containerd[1593]: time="2026-04-14T13:17:16.550720629Z" level=info msg="ImageCreate event name:\"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:17:16.555304 containerd[1593]: time="2026-04-14T13:17:16.554776499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:17:16.558309 containerd[1593]: time="2026-04-14T13:17:16.558248898Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.10\" with image id \"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.10\", 
repo digest \"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\", size \"21692956\" in 3.360500517s" Apr 14 13:17:16.558309 containerd[1593]: time="2026-04-14T13:17:16.558306960Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\" returns image reference \"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\"" Apr 14 13:17:16.559379 containerd[1593]: time="2026-04-14T13:17:16.559230094Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\"" Apr 14 13:17:18.339571 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount679754112.mount: Deactivated successfully. Apr 14 13:17:19.283303 containerd[1593]: time="2026-04-14T13:17:19.282046863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:17:19.285897 containerd[1593]: time="2026-04-14T13:17:19.283984691Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.10: active requests=0, bytes read=31828657" Apr 14 13:17:19.297440 containerd[1593]: time="2026-04-14T13:17:19.297146814Z" level=info msg="ImageCreate event name:\"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:17:19.303563 containerd[1593]: time="2026-04-14T13:17:19.303327083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:17:19.304213 containerd[1593]: time="2026-04-14T13:17:19.303971823Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.10\" with image id \"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\", repo tag \"registry.k8s.io/kube-proxy:v1.33.10\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\", size \"31827782\" in 2.744717437s" Apr 14 13:17:19.304213 containerd[1593]: time="2026-04-14T13:17:19.304000154Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\" returns image reference \"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\"" Apr 14 13:17:19.306300 containerd[1593]: time="2026-04-14T13:17:19.306062152Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 14 13:17:19.940387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3707757957.mount: Deactivated successfully. Apr 14 13:17:21.427779 containerd[1593]: time="2026-04-14T13:17:21.427492348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:17:21.429812 containerd[1593]: time="2026-04-14T13:17:21.428375097Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20941714" Apr 14 13:17:21.430558 containerd[1593]: time="2026-04-14T13:17:21.430487878Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:17:21.439033 containerd[1593]: time="2026-04-14T13:17:21.438705136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:17:21.440349 containerd[1593]: time="2026-04-14T13:17:21.440276222Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.13411405s" Apr 14 13:17:21.440349 containerd[1593]: time="2026-04-14T13:17:21.440338644Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Apr 14 13:17:21.443590 containerd[1593]: time="2026-04-14T13:17:21.443399557Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 14 13:17:22.094994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1144207787.mount: Deactivated successfully. Apr 14 13:17:22.111519 containerd[1593]: time="2026-04-14T13:17:22.111292281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:17:22.112321 containerd[1593]: time="2026-04-14T13:17:22.112265236Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Apr 14 13:17:22.113728 containerd[1593]: time="2026-04-14T13:17:22.113691927Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:17:22.115791 containerd[1593]: time="2026-04-14T13:17:22.115711493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:17:22.116573 containerd[1593]: time="2026-04-14T13:17:22.116517245Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 673.015206ms" Apr 14 
13:17:22.116573 containerd[1593]: time="2026-04-14T13:17:22.116559519Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 14 13:17:22.117551 containerd[1593]: time="2026-04-14T13:17:22.117455983Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 14 13:17:22.863663 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount147320867.mount: Deactivated successfully. Apr 14 13:17:25.665611 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 14 13:17:25.698515 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 13:17:26.356547 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 13:17:26.362058 (kubelet)[2165]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 13:17:27.361760 kubelet[2165]: E0414 13:17:27.361494 2165 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 13:17:27.367141 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 13:17:27.369332 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
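The repeated `kubelet.service` failures above all have the same cause: `/var/lib/kubelet/config.yaml` does not exist yet. That file is normally written by `kubeadm init` or `kubeadm join`, so the crash-loop is expected until the node is bootstrapped. For illustration only, a minimal hand-written KubeletConfiguration is sketched below; the field values are assumptions, and the script writes to a local path rather than the real `/var/lib/kubelet/config.yaml`.

```shell
#!/bin/sh
# Sketch of the KubeletConfiguration file the kubelet failed to load.
# kubeadm normally generates it; values here are illustrative assumptions.
cfg="${KUBELET_CFG:-./config.yaml}"   # real path: /var/lib/kubelet/config.yaml
cat > "$cfg" <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
address: 0.0.0.0
port: 10250
staticPodPath: /etc/kubernetes/manifests
EOF
echo "wrote $cfg"
```

Note that `cgroupDriver: systemd` would conflict with the `SystemdCgroup:false` runc option shown in the CRI plugin config earlier; in practice the two must agree.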
Apr 14 13:17:27.432059 containerd[1593]: time="2026-04-14T13:17:27.431707103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:17:27.437255 containerd[1593]: time="2026-04-14T13:17:27.435572129Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718278" Apr 14 13:17:27.439496 containerd[1593]: time="2026-04-14T13:17:27.439258926Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:17:27.671463 containerd[1593]: time="2026-04-14T13:17:27.671280584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:17:27.686719 containerd[1593]: time="2026-04-14T13:17:27.686107447Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 5.568570496s" Apr 14 13:17:27.686719 containerd[1593]: time="2026-04-14T13:17:27.686231629Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Apr 14 13:17:31.252905 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 13:17:31.268228 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 13:17:31.366454 systemd[1]: Reloading requested from client PID 2214 ('systemctl') (unit session-7.scope)... Apr 14 13:17:31.366494 systemd[1]: Reloading... 
Apr 14 13:17:31.534451 zram_generator::config[2253]: No configuration found.
Apr 14 13:17:32.040300 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 14 13:17:32.182864 systemd[1]: Reloading finished in 815 ms.
Apr 14 13:17:32.321519 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 14 13:17:32.340654 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 14 13:17:32.354344 systemd[1]: kubelet.service: Deactivated successfully.
Apr 14 13:17:32.355212 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 14 13:17:32.393727 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 14 13:17:32.863325 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 14 13:17:32.871194 (kubelet)[2316]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 14 13:17:33.377519 kubelet[2316]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 14 13:17:33.377519 kubelet[2316]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 14 13:17:33.377519 kubelet[2316]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 14 13:17:33.378697 kubelet[2316]: I0414 13:17:33.378036 2316 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 14 13:17:34.172375 kubelet[2316]: I0414 13:17:34.171983 2316 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 14 13:17:34.172375 kubelet[2316]: I0414 13:17:34.172220 2316 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 14 13:17:34.174879 kubelet[2316]: I0414 13:17:34.174738 2316 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 14 13:17:34.385006 kubelet[2316]: E0414 13:17:34.384042 2316 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 14 13:17:34.389618 kubelet[2316]: I0414 13:17:34.388607 2316 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 14 13:17:34.460544 kubelet[2316]: E0414 13:17:34.460142 2316 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 14 13:17:34.460544 kubelet[2316]: I0414 13:17:34.460284 2316 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 14 13:17:34.518261 kubelet[2316]: I0414 13:17:34.517369 2316 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 14 13:17:34.527279 kubelet[2316]: I0414 13:17:34.526319 2316 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 14 13:17:34.533190 kubelet[2316]: I0414 13:17:34.527399 2316 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Apr 14 13:17:34.535106 kubelet[2316]: I0414 13:17:34.534562 2316 topology_manager.go:138] "Creating topology manager with none policy"
Apr 14 13:17:34.536047 kubelet[2316]: I0414 13:17:34.535558 2316 container_manager_linux.go:303] "Creating device plugin manager"
Apr 14 13:17:34.542142 kubelet[2316]: I0414 13:17:34.541557 2316 state_mem.go:36] "Initialized new in-memory state store"
Apr 14 13:17:34.553039 kubelet[2316]: I0414 13:17:34.552847 2316 kubelet.go:480] "Attempting to sync node with API server"
Apr 14 13:17:34.553476 kubelet[2316]: I0414 13:17:34.553177 2316 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 14 13:17:34.553476 kubelet[2316]: I0414 13:17:34.553403 2316 kubelet.go:386] "Adding apiserver pod source"
Apr 14 13:17:34.559178 kubelet[2316]: I0414 13:17:34.557176 2316 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 14 13:17:34.569281 kubelet[2316]: E0414 13:17:34.567405 2316 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 14 13:17:34.570969 kubelet[2316]: E0414 13:17:34.570911 2316 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 14 13:17:34.572515 kubelet[2316]: I0414 13:17:34.572485 2316 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 14 13:17:34.580442 kubelet[2316]: I0414 13:17:34.579901 2316 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 14 13:17:34.586010 kubelet[2316]: W0414 13:17:34.585452 2316 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 14 13:17:34.597412 kubelet[2316]: I0414 13:17:34.597368 2316 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 14 13:17:34.597572 kubelet[2316]: I0414 13:17:34.597493 2316 server.go:1289] "Started kubelet"
Apr 14 13:17:34.599212 kubelet[2316]: I0414 13:17:34.598990 2316 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 14 13:17:34.599786 kubelet[2316]: I0414 13:17:34.599752 2316 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 14 13:17:34.599905 kubelet[2316]: I0414 13:17:34.599867 2316 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 14 13:17:34.602064 kubelet[2316]: I0414 13:17:34.600626 2316 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 14 13:17:34.602064 kubelet[2316]: I0414 13:17:34.601067 2316 server.go:317] "Adding debug handlers to kubelet server"
Apr 14 13:17:34.602064 kubelet[2316]: I0414 13:17:34.602070 2316 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 14 13:17:34.603897 kubelet[2316]: E0414 13:17:34.603880 2316 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:17:34.608205 kubelet[2316]: I0414 13:17:34.605294 2316 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 14 13:17:34.608796 kubelet[2316]: I0414 13:17:34.608783 2316 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 14 13:17:34.609023 kubelet[2316]: I0414 13:17:34.609016 2316 reconciler.go:26] "Reconciler: start to sync state"
Apr 14 13:17:34.609638 kubelet[2316]: E0414 13:17:34.609622 2316 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 14 13:17:34.610004 kubelet[2316]: E0414 13:17:34.608423 2316 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.15:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.15:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a63b9e690aa5d1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 13:17:34.597416401 +0000 UTC m=+1.518126311,LastTimestamp:2026-04-14 13:17:34.597416401 +0000 UTC m=+1.518126311,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 14 13:17:34.610462 kubelet[2316]: E0414 13:17:34.610382 2316 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="200ms"
Apr 14 13:17:34.611325 kubelet[2316]: I0414 13:17:34.611310 2316 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 14 13:17:34.612555 kubelet[2316]: I0414 13:17:34.612531 2316 factory.go:223] Registration of the containerd container factory successfully
Apr 14 13:17:34.612555 kubelet[2316]: I0414 13:17:34.612557 2316 factory.go:223] Registration of the systemd container factory successfully
Apr 14 13:17:34.614191 kubelet[2316]: E0414 13:17:34.614164 2316 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 14 13:17:34.615877 kubelet[2316]: I0414 13:17:34.615825 2316 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 14 13:17:34.682523 kubelet[2316]: I0414 13:17:34.682226 2316 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 14 13:17:34.682523 kubelet[2316]: I0414 13:17:34.682529 2316 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 14 13:17:34.683262 kubelet[2316]: I0414 13:17:34.683165 2316 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 14 13:17:34.683262 kubelet[2316]: I0414 13:17:34.683187 2316 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 14 13:17:34.687053 kubelet[2316]: E0414 13:17:34.686700 2316 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 14 13:17:34.693701 kubelet[2316]: E0414 13:17:34.693401 2316 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 14 13:17:34.710496 kubelet[2316]: E0414 13:17:34.710014 2316 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:17:34.720769 kubelet[2316]: I0414 13:17:34.720390 2316 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 14 13:17:34.720769 kubelet[2316]: I0414 13:17:34.720508 2316 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 14 13:17:34.720769 kubelet[2316]: I0414 13:17:34.720647 2316 state_mem.go:36] "Initialized new in-memory state store"
Apr 14 13:17:34.765128 kubelet[2316]: I0414 13:17:34.764543 2316 policy_none.go:49] "None policy: Start"
Apr 14 13:17:34.765771 kubelet[2316]: I0414 13:17:34.765264 2316 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 14 13:17:34.765771 kubelet[2316]: I0414 13:17:34.765407 2316 state_mem.go:35] "Initializing new in-memory state store"
Apr 14 13:17:34.793482 kubelet[2316]: E0414 13:17:34.792993 2316 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 14 13:17:34.812286 kubelet[2316]: E0414 13:17:34.810832 2316 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 14 13:17:34.818892 kubelet[2316]: E0414 13:17:34.818707 2316 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 14 13:17:34.818892 kubelet[2316]: E0414 13:17:34.818824 2316 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="400ms"
Apr 14 13:17:34.851343 kubelet[2316]: I0414 13:17:34.829792 2316 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 14 13:17:34.856889 kubelet[2316]: I0414 13:17:34.852044 2316 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 14 13:17:34.861301 kubelet[2316]: I0414 13:17:34.859927 2316 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 14 13:17:35.048778 kubelet[2316]: E0414 13:17:35.048617 2316 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 14 13:17:35.051710 kubelet[2316]: E0414 13:17:35.051520 2316 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 14 13:17:35.060042 kubelet[2316]: I0414 13:17:35.058520 2316 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 14 13:17:35.061254 kubelet[2316]: E0414 13:17:35.061206 2316 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost"
Apr 14 13:17:35.071677 kubelet[2316]: E0414 13:17:35.071386 2316 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 13:17:35.095328 kubelet[2316]: E0414 13:17:35.092719 2316 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 13:17:35.103397 kubelet[2316]: E0414 13:17:35.103250 2316 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 13:17:35.149999 kubelet[2316]: I0414 13:17:35.149618 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 13:17:35.149999 kubelet[2316]: I0414 13:17:35.149896 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 13:17:35.149999 kubelet[2316]: I0414 13:17:35.149918 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 13:17:35.149999 kubelet[2316]: I0414 13:17:35.149953 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/39798d73a6894e44ae801eb773bf9a39-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"39798d73a6894e44ae801eb773bf9a39\") " pod="kube-system/kube-scheduler-localhost"
Apr 14 13:17:35.149999 kubelet[2316]: I0414 13:17:35.149966 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f089f903a688ef2fe3c76cc6acaee4a5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f089f903a688ef2fe3c76cc6acaee4a5\") " pod="kube-system/kube-apiserver-localhost"
Apr 14 13:17:35.150943 kubelet[2316]: I0414 13:17:35.150122 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f089f903a688ef2fe3c76cc6acaee4a5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f089f903a688ef2fe3c76cc6acaee4a5\") " pod="kube-system/kube-apiserver-localhost"
Apr 14 13:17:35.150943 kubelet[2316]: I0414 13:17:35.150163 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 13:17:35.150943 kubelet[2316]: I0414 13:17:35.150175 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 13:17:35.150943 kubelet[2316]: I0414 13:17:35.150202 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f089f903a688ef2fe3c76cc6acaee4a5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f089f903a688ef2fe3c76cc6acaee4a5\") " pod="kube-system/kube-apiserver-localhost"
Apr 14 13:17:35.224421 kubelet[2316]: E0414 13:17:35.223917 2316 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="800ms"
Apr 14 13:17:35.310011 kubelet[2316]: I0414 13:17:35.304009 2316 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 14 13:17:35.356006 kubelet[2316]: E0414 13:17:35.355668 2316 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost"
Apr 14 13:17:35.386794 kubelet[2316]: E0414 13:17:35.383560 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:17:35.400036 kubelet[2316]: E0414 13:17:35.399953 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:17:35.405347 containerd[1593]: time="2026-04-14T13:17:35.405140569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f089f903a688ef2fe3c76cc6acaee4a5,Namespace:kube-system,Attempt:0,}"
Apr 14 13:17:35.406347 containerd[1593]: time="2026-04-14T13:17:35.405056796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ebf8e820819e4b80bc03d078b9ba80f5,Namespace:kube-system,Attempt:0,}"
Apr 14 13:17:35.406466 kubelet[2316]: E0414 13:17:35.406406 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:17:35.420215 containerd[1593]: time="2026-04-14T13:17:35.420026990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:39798d73a6894e44ae801eb773bf9a39,Namespace:kube-system,Attempt:0,}"
Apr 14 13:17:35.539146 kubelet[2316]: E0414 13:17:35.538866 2316 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 14 13:17:35.645222 kubelet[2316]: E0414 13:17:35.643460 2316 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 14 13:17:35.769881 kubelet[2316]: I0414 13:17:35.769458 2316 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 14 13:17:35.776360 kubelet[2316]: E0414 13:17:35.776234 2316 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost"
Apr 14 13:17:35.961487 kubelet[2316]: E0414 13:17:35.961407 2316 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 14 13:17:36.047403 kubelet[2316]: E0414 13:17:36.047070 2316 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="1.6s"
Apr 14 13:17:36.061705 kubelet[2316]: E0414 13:17:36.061452 2316 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 14 13:17:36.161134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1622221596.mount: Deactivated successfully.
Apr 14 13:17:36.268540 containerd[1593]: time="2026-04-14T13:17:36.267687477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 14 13:17:36.270035 containerd[1593]: time="2026-04-14T13:17:36.269758944Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988"
Apr 14 13:17:36.279483 containerd[1593]: time="2026-04-14T13:17:36.278807214Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 14 13:17:36.291492 containerd[1593]: time="2026-04-14T13:17:36.291171425Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 14 13:17:36.294854 containerd[1593]: time="2026-04-14T13:17:36.294748448Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 14 13:17:36.295834 containerd[1593]: time="2026-04-14T13:17:36.295724855Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 14 13:17:36.301967 containerd[1593]: time="2026-04-14T13:17:36.301272655Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 14 13:17:36.305867 containerd[1593]: time="2026-04-14T13:17:36.305363356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 14 13:17:36.307698 containerd[1593]: time="2026-04-14T13:17:36.307593603Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 887.080276ms"
Apr 14 13:17:36.311520 containerd[1593]: time="2026-04-14T13:17:36.311306347Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 905.973501ms"
Apr 14 13:17:36.314221 containerd[1593]: time="2026-04-14T13:17:36.313224139Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 907.029682ms"
Apr 14 13:17:36.534479 kubelet[2316]: E0414 13:17:36.534013 2316 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 14 13:17:36.605613 kubelet[2316]: I0414 13:17:36.605332 2316 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 14 13:17:36.606964 kubelet[2316]: E0414 13:17:36.606184 2316 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost"
Apr 14 13:17:37.313501 containerd[1593]: time="2026-04-14T13:17:37.312690692Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 14 13:17:37.313501 containerd[1593]: time="2026-04-14T13:17:37.312879130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 14 13:17:37.313501 containerd[1593]: time="2026-04-14T13:17:37.312891762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 13:17:37.313501 containerd[1593]: time="2026-04-14T13:17:37.313138963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 13:17:37.317491 containerd[1593]: time="2026-04-14T13:17:37.315401368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 14 13:17:37.317491 containerd[1593]: time="2026-04-14T13:17:37.315440426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 14 13:17:37.317491 containerd[1593]: time="2026-04-14T13:17:37.315452839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 13:17:37.317491 containerd[1593]: time="2026-04-14T13:17:37.316038767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 13:17:37.340520 containerd[1593]: time="2026-04-14T13:17:37.334321752Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 14 13:17:37.341844 containerd[1593]: time="2026-04-14T13:17:37.341525669Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 14 13:17:37.341844 containerd[1593]: time="2026-04-14T13:17:37.341650474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 13:17:37.343579 containerd[1593]: time="2026-04-14T13:17:37.343155247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 13:17:37.671148 kubelet[2316]: E0414 13:17:37.670895 2316 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="3.2s"
Apr 14 13:17:37.770865 kubelet[2316]: E0414 13:17:37.770541 2316 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 14 13:17:37.865869 containerd[1593]: time="2026-04-14T13:17:37.865667354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f089f903a688ef2fe3c76cc6acaee4a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a4234c6e51ca2bd04717114861ba137a9ce14d7fd7dfa7711167ceb1e70dc3d\""
Apr 14 13:17:37.886528 containerd[1593]: time="2026-04-14T13:17:37.886272866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:39798d73a6894e44ae801eb773bf9a39,Namespace:kube-system,Attempt:0,} returns sandbox id \"f3e6973b214880ab161cc4c485407363ab9c0d5e9c44f9984ce852d2f1d16415\""
Apr 14 13:17:37.908058 kubelet[2316]: E0414 13:17:37.907321 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:17:37.911785 kubelet[2316]: E0414 13:17:37.911410 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:17:37.915785 containerd[1593]: time="2026-04-14T13:17:37.915458779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ebf8e820819e4b80bc03d078b9ba80f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff98d78c4020fc8d7abd9fd7aa79ee03d748d4d072cf21ac8ff93a44d24d7a30\""
Apr 14 13:17:37.922909 kubelet[2316]: E0414 13:17:37.921054 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:17:37.942991 containerd[1593]: time="2026-04-14T13:17:37.941825380Z" level=info msg="CreateContainer within sandbox \"f3e6973b214880ab161cc4c485407363ab9c0d5e9c44f9984ce852d2f1d16415\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Apr 14 13:17:37.942991 containerd[1593]: time="2026-04-14T13:17:37.941924435Z" level=info msg="CreateContainer within sandbox \"2a4234c6e51ca2bd04717114861ba137a9ce14d7fd7dfa7711167ceb1e70dc3d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Apr 14 13:17:37.949976 containerd[1593]: time="2026-04-14T13:17:37.948002592Z" level=info msg="CreateContainer within sandbox \"ff98d78c4020fc8d7abd9fd7aa79ee03d748d4d072cf21ac8ff93a44d24d7a30\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Apr 14 13:17:38.112910 containerd[1593]: time="2026-04-14T13:17:38.111718849Z" level=info msg="CreateContainer within sandbox \"2a4234c6e51ca2bd04717114861ba137a9ce14d7fd7dfa7711167ceb1e70dc3d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"081d5bd0a569c0238a3bc1c67c78cc6b88de62963159822a549d9c15213fa5e6\""
Apr 14 13:17:38.118898 containerd[1593]: time="2026-04-14T13:17:38.118526409Z" level=info msg="CreateContainer within sandbox \"f3e6973b214880ab161cc4c485407363ab9c0d5e9c44f9984ce852d2f1d16415\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4ba36ec4dc3157ae25536e57af758f739c6f8f4ed64279d5c654ee0aee3d0337\""
Apr 14 13:17:38.121237 containerd[1593]: time="2026-04-14T13:17:38.121135546Z" level=info msg="StartContainer for \"081d5bd0a569c0238a3bc1c67c78cc6b88de62963159822a549d9c15213fa5e6\""
Apr 14 13:17:38.121334 containerd[1593]: time="2026-04-14T13:17:38.121301130Z" level=info msg="StartContainer for \"4ba36ec4dc3157ae25536e57af758f739c6f8f4ed64279d5c654ee0aee3d0337\""
Apr 14 13:17:38.154883 kubelet[2316]: E0414 13:17:38.154662 2316 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 14 13:17:38.155300 containerd[1593]: time="2026-04-14T13:17:38.154843911Z" level=info msg="CreateContainer within sandbox \"ff98d78c4020fc8d7abd9fd7aa79ee03d748d4d072cf21ac8ff93a44d24d7a30\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b9b5b4643903b27b7a6c10e476d45d1a898dc14eca66db1d53a33f11305729a4\""
Apr 14 13:17:38.179459 containerd[1593]: time="2026-04-14T13:17:38.176642967Z" level=info msg="StartContainer for \"b9b5b4643903b27b7a6c10e476d45d1a898dc14eca66db1d53a33f11305729a4\""
Apr 14 13:17:38.480270 kubelet[2316]: I0414 13:17:38.479948 2316 kubelet_node_status.go:75] "Attempting 
to register node" node="localhost" Apr 14 13:17:38.481953 kubelet[2316]: E0414 13:17:38.480462 2316 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 14 13:17:38.482660 kubelet[2316]: E0414 13:17:38.482593 2316 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Apr 14 13:17:38.509331 kubelet[2316]: E0414 13:17:38.509253 2316 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 14 13:17:38.801046 containerd[1593]: time="2026-04-14T13:17:38.798391403Z" level=info msg="StartContainer for \"081d5bd0a569c0238a3bc1c67c78cc6b88de62963159822a549d9c15213fa5e6\" returns successfully" Apr 14 13:17:38.900752 containerd[1593]: time="2026-04-14T13:17:38.898919619Z" level=info msg="StartContainer for \"b9b5b4643903b27b7a6c10e476d45d1a898dc14eca66db1d53a33f11305729a4\" returns successfully" Apr 14 13:17:38.915440 containerd[1593]: time="2026-04-14T13:17:38.913512925Z" level=info msg="StartContainer for \"4ba36ec4dc3157ae25536e57af758f739c6f8f4ed64279d5c654ee0aee3d0337\" returns successfully" Apr 14 13:17:39.150199 kubelet[2316]: E0414 13:17:39.149052 2316 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:17:39.150199 kubelet[2316]: E0414 13:17:39.149422 2316 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:17:39.652117 kubelet[2316]: E0414 13:17:39.651799 2316 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:17:39.672692 kubelet[2316]: E0414 13:17:39.672529 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:17:39.814828 kubelet[2316]: E0414 13:17:39.814383 2316 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:17:39.826696 kubelet[2316]: E0414 13:17:39.826381 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:17:41.362453 kubelet[2316]: E0414 13:17:41.361262 2316 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:17:41.367627 kubelet[2316]: E0414 13:17:41.366474 2316 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:17:41.391789 kubelet[2316]: E0414 13:17:41.377928 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:17:41.443618 kubelet[2316]: E0414 13:17:41.441355 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:17:41.737225 kubelet[2316]: I0414 
13:17:41.736811 2316 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 13:17:42.402210 kubelet[2316]: E0414 13:17:42.397981 2316 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:17:42.402210 kubelet[2316]: E0414 13:17:42.399823 2316 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:17:42.402210 kubelet[2316]: E0414 13:17:42.400142 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:17:42.402210 kubelet[2316]: E0414 13:17:42.400371 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:17:43.051693 update_engine[1551]: I20260414 13:17:42.995391 1551 update_attempter.cc:509] Updating boot flags... 
Apr 14 13:17:43.310358 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2609) Apr 14 13:17:43.480704 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2611) Apr 14 13:17:43.508439 kubelet[2316]: E0414 13:17:43.507705 2316 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:17:43.511263 kubelet[2316]: E0414 13:17:43.511219 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:17:45.053273 kubelet[2316]: E0414 13:17:45.052989 2316 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 14 13:17:46.353161 kubelet[2316]: E0414 13:17:46.352853 2316 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 14 13:17:46.358133 kubelet[2316]: E0414 13:17:46.357253 2316 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18a63b9e690aa5d1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 13:17:34.597416401 +0000 UTC m=+1.518126311,LastTimestamp:2026-04-14 13:17:34.597416401 +0000 UTC m=+1.518126311,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 13:17:46.433629 kubelet[2316]: I0414 13:17:46.431375 2316 kubelet_node_status.go:78] "Successfully 
registered node" node="localhost" Apr 14 13:17:46.433629 kubelet[2316]: E0414 13:17:46.431805 2316 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 14 13:17:46.517685 kubelet[2316]: I0414 13:17:46.517269 2316 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 14 13:17:46.626953 kubelet[2316]: I0414 13:17:46.624932 2316 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 14 13:17:47.135310 kubelet[2316]: I0414 13:17:47.130050 2316 apiserver.go:52] "Watching apiserver" Apr 14 13:17:47.179368 kubelet[2316]: E0414 13:17:47.167018 2316 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Apr 14 13:17:47.212720 kubelet[2316]: E0414 13:17:47.212068 2316 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Apr 14 13:17:47.220314 kubelet[2316]: I0414 13:17:47.212770 2316 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 14 13:17:47.220314 kubelet[2316]: E0414 13:17:47.213389 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:17:47.228725 kubelet[2316]: I0414 13:17:47.224927 2316 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 14 13:17:47.364892 kubelet[2316]: I0414 13:17:47.364132 2316 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 14 13:17:47.364892 kubelet[2316]: E0414 
13:17:47.364244 2316 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 14 13:17:47.364892 kubelet[2316]: I0414 13:17:47.364342 2316 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 14 13:17:47.370981 kubelet[2316]: E0414 13:17:47.370872 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:17:47.410584 kubelet[2316]: E0414 13:17:47.410223 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:17:48.334588 kubelet[2316]: E0414 13:17:48.334029 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:17:53.550957 kubelet[2316]: E0414 13:17:53.549702 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:17:53.908313 kubelet[2316]: I0414 13:17:53.907990 2316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=6.907782758 podStartE2EDuration="6.907782758s" podCreationTimestamp="2026-04-14 13:17:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 13:17:53.907761004 +0000 UTC m=+20.828470915" watchObservedRunningTime="2026-04-14 13:17:53.907782758 +0000 UTC m=+20.828492676" Apr 14 13:17:57.151517 kubelet[2316]: I0414 13:17:57.149795 2316 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-localhost" Apr 14 13:17:57.410744 kubelet[2316]: E0414 13:17:57.404595 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:17:57.417540 kubelet[2316]: I0414 13:17:57.417278 2316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=10.417142266 podStartE2EDuration="10.417142266s" podCreationTimestamp="2026-04-14 13:17:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 13:17:54.059667162 +0000 UTC m=+20.980377076" watchObservedRunningTime="2026-04-14 13:17:57.417142266 +0000 UTC m=+24.337852194" Apr 14 13:17:57.491648 kubelet[2316]: E0414 13:17:57.490545 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:17:57.640662 kubelet[2316]: E0414 13:17:57.640547 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:17:57.934931 systemd[1]: Reloading requested from client PID 2622 ('systemctl') (unit session-7.scope)... Apr 14 13:17:57.935281 systemd[1]: Reloading... 
Apr 14 13:17:58.338721 kubelet[2316]: E0414 13:17:58.338286 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:17:58.580889 kubelet[2316]: I0414 13:17:58.580161 2316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.580023776 podStartE2EDuration="1.580023776s" podCreationTimestamp="2026-04-14 13:17:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 13:17:58.578728493 +0000 UTC m=+25.499438430" watchObservedRunningTime="2026-04-14 13:17:58.580023776 +0000 UTC m=+25.500733707" Apr 14 13:17:59.260373 zram_generator::config[2667]: No configuration found. Apr 14 13:17:59.533482 kubelet[2316]: E0414 13:17:59.532792 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:18:01.151700 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 14 13:18:01.471523 systemd[1]: Reloading finished in 3532 ms. Apr 14 13:18:01.820689 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 13:18:01.939559 systemd[1]: kubelet.service: Deactivated successfully. Apr 14 13:18:01.940415 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 13:18:01.980656 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 13:18:03.266170 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 14 13:18:03.378963 (kubelet)[2716]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 14 13:18:07.174774 kubelet[2716]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 14 13:18:07.174774 kubelet[2716]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 14 13:18:07.174774 kubelet[2716]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 14 13:18:07.174774 kubelet[2716]: I0414 13:18:07.174674 2716 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 14 13:18:08.080500 kubelet[2716]: I0414 13:18:08.075248 2716 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 14 13:18:08.097741 kubelet[2716]: I0414 13:18:08.086469 2716 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 14 13:18:08.107763 kubelet[2716]: I0414 13:18:08.104964 2716 server.go:956] "Client rotation is on, will bootstrap in background" Apr 14 13:18:08.254738 kubelet[2716]: I0414 13:18:08.253542 2716 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 14 13:18:08.580437 kubelet[2716]: I0414 13:18:08.580285 2716 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 14 13:18:08.976910 kubelet[2716]: E0414 13:18:08.971180 2716 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = 
Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 14 13:18:09.002855 kubelet[2716]: I0414 13:18:09.001784 2716 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 14 13:18:09.361418 kubelet[2716]: I0414 13:18:09.360225 2716 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 14 13:18:09.361955 kubelet[2716]: I0414 13:18:09.361831 2716 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 14 13:18:09.362631 kubelet[2716]: I0414 13:18:09.361948 2716 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPoli
cy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 14 13:18:09.362631 kubelet[2716]: I0414 13:18:09.362562 2716 topology_manager.go:138] "Creating topology manager with none policy" Apr 14 13:18:09.370356 kubelet[2716]: I0414 13:18:09.369974 2716 container_manager_linux.go:303] "Creating device plugin manager" Apr 14 13:18:09.413954 kubelet[2716]: I0414 13:18:09.382031 2716 state_mem.go:36] "Initialized new in-memory state store" Apr 14 13:18:09.436402 kubelet[2716]: I0414 13:18:09.436255 2716 kubelet.go:480] "Attempting to sync node with API server" Apr 14 13:18:09.441005 kubelet[2716]: I0414 13:18:09.436846 2716 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 14 13:18:09.441005 kubelet[2716]: I0414 13:18:09.436888 2716 kubelet.go:386] "Adding apiserver pod source" Apr 14 13:18:09.441005 kubelet[2716]: I0414 13:18:09.436904 2716 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 14 13:18:09.546737 kubelet[2716]: I0414 13:18:09.538407 2716 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 14 13:18:09.554726 kubelet[2716]: I0414 13:18:09.553980 2716 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 14 13:18:09.717926 kubelet[2716]: I0414 13:18:09.714012 2716 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 14 13:18:09.717926 kubelet[2716]: I0414 13:18:09.715026 2716 server.go:1289] "Started kubelet" Apr 14 13:18:09.717926 kubelet[2716]: I0414 13:18:09.717659 2716 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 14 
13:18:09.765069 kubelet[2716]: I0414 13:18:09.761413 2716 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 14 13:18:09.780832 kubelet[2716]: I0414 13:18:09.778220 2716 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 14 13:18:09.804188 kubelet[2716]: I0414 13:18:09.803731 2716 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 14 13:18:09.804188 kubelet[2716]: I0414 13:18:09.804065 2716 reconciler.go:26] "Reconciler: start to sync state" Apr 14 13:18:09.804659 kubelet[2716]: I0414 13:18:09.804514 2716 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 14 13:18:09.815459 kubelet[2716]: I0414 13:18:09.815153 2716 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 14 13:18:09.835415 kubelet[2716]: I0414 13:18:09.830849 2716 server.go:317] "Adding debug handlers to kubelet server" Apr 14 13:18:09.933608 kubelet[2716]: I0414 13:18:09.933500 2716 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 14 13:18:10.335756 kubelet[2716]: E0414 13:18:10.331320 2716 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 14 13:18:10.352267 kubelet[2716]: I0414 13:18:10.352004 2716 factory.go:223] Registration of the containerd container factory successfully Apr 14 13:18:10.352267 kubelet[2716]: I0414 13:18:10.352138 2716 factory.go:223] Registration of the systemd container factory successfully Apr 14 13:18:10.354062 kubelet[2716]: I0414 13:18:10.354014 2716 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 14 13:18:10.485480 kubelet[2716]: I0414 13:18:10.473953 2716 apiserver.go:52] "Watching apiserver" Apr 14 13:18:10.572839 kubelet[2716]: I0414 13:18:10.463012 2716 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 14 13:18:10.626762 kubelet[2716]: I0414 13:18:10.626363 2716 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 14 13:18:10.634502 kubelet[2716]: I0414 13:18:10.633414 2716 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 14 13:18:10.634502 kubelet[2716]: I0414 13:18:10.633573 2716 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 14 13:18:10.634502 kubelet[2716]: I0414 13:18:10.633580 2716 kubelet.go:2436] "Starting kubelet main sync loop" Apr 14 13:18:10.634502 kubelet[2716]: E0414 13:18:10.633707 2716 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 14 13:18:10.810381 kubelet[2716]: E0414 13:18:10.808196 2716 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 14 13:18:11.056700 kubelet[2716]: E0414 13:18:11.053469 2716 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 14 13:18:11.564952 kubelet[2716]: E0414 13:18:11.564786 2716 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 14 13:18:12.383778 kubelet[2716]: E0414 13:18:12.377942 2716 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 14 13:18:14.341919 kubelet[2716]: E0414 13:18:14.341577 2716 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 14 13:18:17.629809 kubelet[2716]: E0414 13:18:17.626770 2716 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 14 13:18:18.572468 kubelet[2716]: I0414 13:18:18.572330 2716 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 14 13:18:18.588776 kubelet[2716]: I0414 13:18:18.586358 2716 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 14 13:18:18.640311 kubelet[2716]: I0414 13:18:18.627383 2716 state_mem.go:36] "Initialized new in-memory state store" Apr 14 13:18:18.750533 kubelet[2716]: I0414 13:18:18.747635 2716 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 14 13:18:18.750533 kubelet[2716]: I0414 13:18:18.747828 2716 state_mem.go:96] "Updated CPUSet 
assignments" assignments={} Apr 14 13:18:18.750533 kubelet[2716]: I0414 13:18:18.748242 2716 policy_none.go:49] "None policy: Start" Apr 14 13:18:18.750533 kubelet[2716]: I0414 13:18:18.748360 2716 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 14 13:18:18.750533 kubelet[2716]: I0414 13:18:18.748613 2716 state_mem.go:35] "Initializing new in-memory state store" Apr 14 13:18:18.755956 kubelet[2716]: I0414 13:18:18.753674 2716 state_mem.go:75] "Updated machine memory state" Apr 14 13:18:18.855842 kubelet[2716]: E0414 13:18:18.841985 2716 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 14 13:18:18.881115 kubelet[2716]: I0414 13:18:18.880812 2716 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 14 13:18:18.941546 kubelet[2716]: I0414 13:18:18.920278 2716 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 14 13:18:19.141928 kubelet[2716]: I0414 13:18:19.137865 2716 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 14 13:18:19.216517 kubelet[2716]: E0414 13:18:19.216453 2716 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 14 13:18:19.537509 kubelet[2716]: I0414 13:18:19.536909 2716 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 13:18:20.290340 kubelet[2716]: I0414 13:18:20.286844 2716 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Apr 14 13:18:20.327156 kubelet[2716]: I0414 13:18:20.325625 2716 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 14 13:18:20.530524 kubelet[2716]: I0414 13:18:20.530053 2716 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 14 13:18:20.638534 containerd[1593]: time="2026-04-14T13:18:20.637664409Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 14 13:18:20.816528 kubelet[2716]: I0414 13:18:20.815383 2716 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 14 13:18:22.661280 kubelet[2716]: I0414 13:18:22.659049 2716 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 14 13:18:22.966901 kubelet[2716]: I0414 13:18:22.939284 2716 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 14 13:18:23.042837 kubelet[2716]: I0414 13:18:23.041310 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 13:18:23.046664 kubelet[2716]: I0414 13:18:23.044656 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-k8s-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 13:18:23.046664 kubelet[2716]: I0414 13:18:23.044973 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fc1fe35d-a33c-44d9-a9cd-51a1980b178d-kube-proxy\") pod \"kube-proxy-2vj6p\" (UID: \"fc1fe35d-a33c-44d9-a9cd-51a1980b178d\") " pod="kube-system/kube-proxy-2vj6p" Apr 14 13:18:23.046664 kubelet[2716]: I0414 13:18:23.045046 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wjbv\" (UniqueName: \"kubernetes.io/projected/fc1fe35d-a33c-44d9-a9cd-51a1980b178d-kube-api-access-9wjbv\") pod \"kube-proxy-2vj6p\" (UID: \"fc1fe35d-a33c-44d9-a9cd-51a1980b178d\") " pod="kube-system/kube-proxy-2vj6p" Apr 14 13:18:23.046664 kubelet[2716]: I0414 13:18:23.045187 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f089f903a688ef2fe3c76cc6acaee4a5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f089f903a688ef2fe3c76cc6acaee4a5\") " pod="kube-system/kube-apiserver-localhost" Apr 14 13:18:23.046664 kubelet[2716]: I0414 13:18:23.045203 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f089f903a688ef2fe3c76cc6acaee4a5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f089f903a688ef2fe3c76cc6acaee4a5\") " pod="kube-system/kube-apiserver-localhost" Apr 14 13:18:23.046873 kubelet[2716]: I0414 13:18:23.045214 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f089f903a688ef2fe3c76cc6acaee4a5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: 
\"f089f903a688ef2fe3c76cc6acaee4a5\") " pod="kube-system/kube-apiserver-localhost" Apr 14 13:18:23.046873 kubelet[2716]: I0414 13:18:23.045226 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 13:18:23.046873 kubelet[2716]: I0414 13:18:23.045309 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 13:18:23.046873 kubelet[2716]: I0414 13:18:23.045345 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 13:18:23.046873 kubelet[2716]: I0414 13:18:23.045357 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/39798d73a6894e44ae801eb773bf9a39-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"39798d73a6894e44ae801eb773bf9a39\") " pod="kube-system/kube-scheduler-localhost" Apr 14 13:18:23.047015 kubelet[2716]: I0414 13:18:23.045385 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc1fe35d-a33c-44d9-a9cd-51a1980b178d-xtables-lock\") pod \"kube-proxy-2vj6p\" (UID: 
\"fc1fe35d-a33c-44d9-a9cd-51a1980b178d\") " pod="kube-system/kube-proxy-2vj6p" Apr 14 13:18:23.047015 kubelet[2716]: I0414 13:18:23.045397 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc1fe35d-a33c-44d9-a9cd-51a1980b178d-lib-modules\") pod \"kube-proxy-2vj6p\" (UID: \"fc1fe35d-a33c-44d9-a9cd-51a1980b178d\") " pod="kube-system/kube-proxy-2vj6p" Apr 14 13:18:23.151699 kubelet[2716]: E0414 13:18:23.145774 2716 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 14 13:18:23.264044 kubelet[2716]: E0414 13:18:23.248354 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:18:23.428001 kubelet[2716]: E0414 13:18:23.427472 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:18:23.463453 kubelet[2716]: E0414 13:18:23.463320 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:18:23.463859 kubelet[2716]: E0414 13:18:23.463844 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:18:23.530330 containerd[1593]: time="2026-04-14T13:18:23.476062031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2vj6p,Uid:fc1fe35d-a33c-44d9-a9cd-51a1980b178d,Namespace:kube-system,Attempt:0,}" Apr 14 13:18:24.619813 kubelet[2716]: E0414 13:18:24.619642 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:18:25.045259 kubelet[2716]: E0414 13:18:25.044784 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:18:25.457499 kubelet[2716]: E0414 13:18:25.454696 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:18:26.015219 kubelet[2716]: E0414 13:18:26.015023 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.339s" Apr 14 13:18:26.422072 containerd[1593]: time="2026-04-14T13:18:26.264983874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:18:26.428339 containerd[1593]: time="2026-04-14T13:18:26.419170215Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:18:26.428339 containerd[1593]: time="2026-04-14T13:18:26.419240148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:18:26.448878 containerd[1593]: time="2026-04-14T13:18:26.419607480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:18:27.836165 kubelet[2716]: E0414 13:18:27.836054 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:18:28.378532 containerd[1593]: time="2026-04-14T13:18:28.378442370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2vj6p,Uid:fc1fe35d-a33c-44d9-a9cd-51a1980b178d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e24114026c3bafb6108cb63a1481ece20422a821dd4c3a5d644609655e28909e\"" Apr 14 13:18:28.880356 kubelet[2716]: E0414 13:18:28.728041 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:18:30.109349 kubelet[2716]: E0414 13:18:30.105880 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.378s" Apr 14 13:18:30.238830 containerd[1593]: time="2026-04-14T13:18:30.238203653Z" level=info msg="CreateContainer within sandbox \"e24114026c3bafb6108cb63a1481ece20422a821dd4c3a5d644609655e28909e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 14 13:18:30.569304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2193697985.mount: Deactivated successfully. Apr 14 13:18:30.652043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3777725194.mount: Deactivated successfully. 
Apr 14 13:18:30.750981 containerd[1593]: time="2026-04-14T13:18:30.749907833Z" level=info msg="CreateContainer within sandbox \"e24114026c3bafb6108cb63a1481ece20422a821dd4c3a5d644609655e28909e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"11ad1e4b0e3ca53e0619c9125f16d4af1c793dcd7187cfc335f20fc80516d47e\"" Apr 14 13:18:30.857588 containerd[1593]: time="2026-04-14T13:18:30.851329784Z" level=info msg="StartContainer for \"11ad1e4b0e3ca53e0619c9125f16d4af1c793dcd7187cfc335f20fc80516d47e\"" Apr 14 13:18:32.540181 kubelet[2716]: I0414 13:18:32.539733 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0946a41e-7e03-41a0-8dd8-549abdcdc5a2-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-w6b9v\" (UID: \"0946a41e-7e03-41a0-8dd8-549abdcdc5a2\") " pod="tigera-operator/tigera-operator-6bf85f8dd-w6b9v" Apr 14 13:18:32.540181 kubelet[2716]: I0414 13:18:32.539866 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25cns\" (UniqueName: \"kubernetes.io/projected/0946a41e-7e03-41a0-8dd8-549abdcdc5a2-kube-api-access-25cns\") pod \"tigera-operator-6bf85f8dd-w6b9v\" (UID: \"0946a41e-7e03-41a0-8dd8-549abdcdc5a2\") " pod="tigera-operator/tigera-operator-6bf85f8dd-w6b9v" Apr 14 13:18:33.385553 containerd[1593]: time="2026-04-14T13:18:33.379628504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-w6b9v,Uid:0946a41e-7e03-41a0-8dd8-549abdcdc5a2,Namespace:tigera-operator,Attempt:0,}" Apr 14 13:18:33.613925 kubelet[2716]: E0414 13:18:33.606441 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:18:33.806530 containerd[1593]: time="2026-04-14T13:18:33.806158048Z" level=info msg="StartContainer for 
\"11ad1e4b0e3ca53e0619c9125f16d4af1c793dcd7187cfc335f20fc80516d47e\" returns successfully" Apr 14 13:18:34.213627 kubelet[2716]: E0414 13:18:34.213545 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:18:34.214420 kubelet[2716]: E0414 13:18:34.214252 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:18:34.360918 containerd[1593]: time="2026-04-14T13:18:34.357890528Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:18:34.360918 containerd[1593]: time="2026-04-14T13:18:34.358337533Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:18:34.360918 containerd[1593]: time="2026-04-14T13:18:34.358370977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:18:34.360918 containerd[1593]: time="2026-04-14T13:18:34.358513445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:18:35.541516 kubelet[2716]: E0414 13:18:35.540855 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:18:35.573551 kubelet[2716]: E0414 13:18:35.569991 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:18:35.604985 kubelet[2716]: E0414 13:18:35.604685 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:18:35.733065 containerd[1593]: time="2026-04-14T13:18:35.728039983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-w6b9v,Uid:0946a41e-7e03-41a0-8dd8-549abdcdc5a2,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"55636687ab4158ad9e6b2dc567b56e4adf95551dca2e49c96f3ef41b32bc27ea\"" Apr 14 13:18:36.026064 containerd[1593]: time="2026-04-14T13:18:36.025909275Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 14 13:18:37.201056 kubelet[2716]: E0414 13:18:37.200531 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:18:39.744625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3507908403.mount: Deactivated successfully. 
Apr 14 13:18:53.062261 containerd[1593]: time="2026-04-14T13:18:53.061959355Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:18:53.138576 containerd[1593]: time="2026-04-14T13:18:53.138134398Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 14 13:18:53.141690 containerd[1593]: time="2026-04-14T13:18:53.141630047Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:18:53.144535 containerd[1593]: time="2026-04-14T13:18:53.144355003Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:18:53.146982 containerd[1593]: time="2026-04-14T13:18:53.144912759Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 17.118888486s" Apr 14 13:18:53.146982 containerd[1593]: time="2026-04-14T13:18:53.144951189Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 14 13:18:53.280622 containerd[1593]: time="2026-04-14T13:18:53.280145701Z" level=info msg="CreateContainer within sandbox \"55636687ab4158ad9e6b2dc567b56e4adf95551dca2e49c96f3ef41b32bc27ea\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 14 13:18:53.905559 containerd[1593]: time="2026-04-14T13:18:53.905484360Z" level=info msg="CreateContainer within sandbox 
\"55636687ab4158ad9e6b2dc567b56e4adf95551dca2e49c96f3ef41b32bc27ea\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"982eb30a46df62c511ccd4ae8a7a6d6aa6ce6d1f39048188c646d737926a8a46\"" Apr 14 13:18:53.913318 containerd[1593]: time="2026-04-14T13:18:53.913275704Z" level=info msg="StartContainer for \"982eb30a46df62c511ccd4ae8a7a6d6aa6ce6d1f39048188c646d737926a8a46\"" Apr 14 13:18:54.478656 containerd[1593]: time="2026-04-14T13:18:54.389042088Z" level=info msg="StartContainer for \"982eb30a46df62c511ccd4ae8a7a6d6aa6ce6d1f39048188c646d737926a8a46\" returns successfully" Apr 14 13:18:55.044655 kubelet[2716]: I0414 13:18:55.044437 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2vj6p" podStartSLOduration=39.044398466 podStartE2EDuration="39.044398466s" podCreationTimestamp="2026-04-14 13:18:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 13:18:37.183939094 +0000 UTC m=+33.631051129" watchObservedRunningTime="2026-04-14 13:18:55.044398466 +0000 UTC m=+51.491510508" Apr 14 13:18:55.044655 kubelet[2716]: I0414 13:18:55.044586 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-w6b9v" podStartSLOduration=6.829629458 podStartE2EDuration="24.044582434s" podCreationTimestamp="2026-04-14 13:18:31 +0000 UTC" firstStartedPulling="2026-04-14 13:18:35.951024778 +0000 UTC m=+32.398136810" lastFinishedPulling="2026-04-14 13:18:53.165977745 +0000 UTC m=+49.613089786" observedRunningTime="2026-04-14 13:18:55.044139321 +0000 UTC m=+51.491251368" watchObservedRunningTime="2026-04-14 13:18:55.044582434 +0000 UTC m=+51.491694477" Apr 14 13:19:17.863583 kubelet[2716]: E0414 13:19:17.861171 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.168s" Apr 14 13:19:23.875419 
kubelet[2716]: E0414 13:19:23.875252 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.197s" Apr 14 13:19:35.080772 kubelet[2716]: E0414 13:19:35.080493 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:19:46.773892 kubelet[2716]: E0414 13:19:46.773620 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.063s" Apr 14 13:19:53.720310 kubelet[2716]: E0414 13:19:53.711740 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.076s" Apr 14 13:19:54.643587 kubelet[2716]: E0414 13:19:54.642584 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:19:55.648883 kubelet[2716]: E0414 13:19:55.648794 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:19:58.108474 sudo[1779]: pam_unix(sudo:session): session closed for user root Apr 14 13:19:58.131574 sshd[1772]: pam_unix(sshd:session): session closed for user core Apr 14 13:19:58.298774 systemd[1]: sshd@6-10.0.0.15:22-10.0.0.1:60704.service: Deactivated successfully. Apr 14 13:19:58.319565 systemd[1]: session-7.scope: Deactivated successfully. Apr 14 13:19:58.320309 systemd-logind[1549]: Session 7 logged out. Waiting for processes to exit. Apr 14 13:19:58.540464 systemd-logind[1549]: Removed session 7. 
Apr 14 13:19:58.744738 kubelet[2716]: E0414 13:19:58.739665 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:20:10.054072 kubelet[2716]: E0414 13:20:10.039974 2716 kubelet_node_status.go:460] "Node not becoming ready in time after startup" Apr 14 13:20:15.551329 kubelet[2716]: E0414 13:20:15.548392 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 13:20:20.762741 kubelet[2716]: E0414 13:20:20.759439 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 13:20:25.224688 kubelet[2716]: E0414 13:20:25.220523 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.582s" Apr 14 13:20:27.017113 kubelet[2716]: E0414 13:20:27.001258 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.78s" Apr 14 13:20:27.176751 kubelet[2716]: E0414 13:20:27.171276 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 13:20:32.510672 kubelet[2716]: E0414 13:20:32.509778 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 13:20:36.362673 kubelet[2716]: E0414 13:20:36.357662 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.494s" Apr 14 13:20:36.362673 kubelet[2716]: E0414 13:20:36.359427 2716 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:20:38.031582 kubelet[2716]: E0414 13:20:38.030891 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.373s" Apr 14 13:20:38.031582 kubelet[2716]: E0414 13:20:38.031394 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 13:20:43.554903 kubelet[2716]: E0414 13:20:43.548064 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 13:20:46.528789 kubelet[2716]: E0414 13:20:46.519718 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.832s" Apr 14 13:20:50.249508 kubelet[2716]: E0414 13:20:50.246808 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 13:20:51.119577 kubelet[2716]: E0414 13:20:51.119495 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.381s" Apr 14 13:20:55.558173 kubelet[2716]: E0414 13:20:55.556566 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 13:20:56.221352 kubelet[2716]: E0414 13:20:56.171259 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.536s" Apr 14 13:21:01.032770 kubelet[2716]: E0414 13:21:01.028690 2716 kubelet.go:3117] 
"Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 13:21:02.349501 kubelet[2716]: E0414 13:21:02.304729 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.535s" Apr 14 13:21:03.982349 kubelet[2716]: E0414 13:21:03.966010 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.267s" Apr 14 13:21:04.480056 systemd[1]: Started sshd@7-10.0.0.15:22-10.0.0.1:53132.service - OpenSSH per-connection server daemon (10.0.0.1:53132). Apr 14 13:21:07.146772 sshd[3160]: Accepted publickey for core from 10.0.0.1 port 53132 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s Apr 14 13:21:07.275964 kubelet[2716]: I0414 13:21:07.257830 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e5de7976-2d57-4f44-8a4b-ef2127456c13-typha-certs\") pod \"calico-typha-5cdcd8fb9d-2crff\" (UID: \"e5de7976-2d57-4f44-8a4b-ef2127456c13\") " pod="calico-system/calico-typha-5cdcd8fb9d-2crff" Apr 14 13:21:07.357461 sshd[3160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:21:07.920348 systemd-logind[1549]: New session 8 of user core. Apr 14 13:21:07.973890 systemd[1]: Started session-8.scope - Session 8 of User core. 
Apr 14 13:21:08.141721 kubelet[2716]: E0414 13:21:08.141682 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 13:21:08.267506 kubelet[2716]: I0414 13:21:08.141838 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2mfx\" (UniqueName: \"kubernetes.io/projected/e5de7976-2d57-4f44-8a4b-ef2127456c13-kube-api-access-r2mfx\") pod \"calico-typha-5cdcd8fb9d-2crff\" (UID: \"e5de7976-2d57-4f44-8a4b-ef2127456c13\") " pod="calico-system/calico-typha-5cdcd8fb9d-2crff" Apr 14 13:21:08.633420 kubelet[2716]: I0414 13:21:08.392738 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5de7976-2d57-4f44-8a4b-ef2127456c13-tigera-ca-bundle\") pod \"calico-typha-5cdcd8fb9d-2crff\" (UID: \"e5de7976-2d57-4f44-8a4b-ef2127456c13\") " pod="calico-system/calico-typha-5cdcd8fb9d-2crff" Apr 14 13:21:09.538650 kubelet[2716]: E0414 13:21:09.531015 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.652s" Apr 14 13:21:11.140194 kubelet[2716]: E0414 13:21:11.070547 2716 secret.go:189] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition Apr 14 13:21:11.415743 kubelet[2716]: E0414 13:21:11.407907 2716 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Apr 14 13:21:11.582462 kubelet[2716]: E0414 13:21:11.582038 2716 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e5de7976-2d57-4f44-8a4b-ef2127456c13-tigera-ca-bundle podName:e5de7976-2d57-4f44-8a4b-ef2127456c13 nodeName:}" failed. 
No retries permitted until 2026-04-14 13:21:11.929863505 +0000 UTC m=+188.376975547 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/e5de7976-2d57-4f44-8a4b-ef2127456c13-tigera-ca-bundle") pod "calico-typha-5cdcd8fb9d-2crff" (UID: "e5de7976-2d57-4f44-8a4b-ef2127456c13") : failed to sync configmap cache: timed out waiting for the condition Apr 14 13:21:11.654394 kubelet[2716]: E0414 13:21:11.646729 2716 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e5de7976-2d57-4f44-8a4b-ef2127456c13-typha-certs podName:e5de7976-2d57-4f44-8a4b-ef2127456c13 nodeName:}" failed. No retries permitted until 2026-04-14 13:21:12.146592306 +0000 UTC m=+188.593704337 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/e5de7976-2d57-4f44-8a4b-ef2127456c13-typha-certs") pod "calico-typha-5cdcd8fb9d-2crff" (UID: "e5de7976-2d57-4f44-8a4b-ef2127456c13") : failed to sync secret cache: timed out waiting for the condition Apr 14 13:21:15.252978 kubelet[2716]: E0414 13:21:15.238612 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 13:21:16.502643 kubelet[2716]: E0414 13:21:16.484958 2716 projected.go:289] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 14 13:21:17.012890 kubelet[2716]: E0414 13:21:17.012199 2716 projected.go:194] Error preparing data for projected volume kube-api-access-r2mfx for pod calico-system/calico-typha-5cdcd8fb9d-2crff: failed to sync configmap cache: timed out waiting for the condition Apr 14 13:21:18.027979 kubelet[2716]: E0414 13:21:18.027655 2716 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/e5de7976-2d57-4f44-8a4b-ef2127456c13-kube-api-access-r2mfx podName:e5de7976-2d57-4f44-8a4b-ef2127456c13 nodeName:}" failed. No retries permitted until 2026-04-14 13:21:18.358878384 +0000 UTC m=+194.805990438 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-r2mfx" (UniqueName: "kubernetes.io/projected/e5de7976-2d57-4f44-8a4b-ef2127456c13-kube-api-access-r2mfx") pod "calico-typha-5cdcd8fb9d-2crff" (UID: "e5de7976-2d57-4f44-8a4b-ef2127456c13") : failed to sync configmap cache: timed out waiting for the condition Apr 14 13:21:18.874244 sshd[3160]: pam_unix(sshd:session): session closed for user core Apr 14 13:21:18.949377 systemd[1]: sshd@7-10.0.0.15:22-10.0.0.1:53132.service: Deactivated successfully. Apr 14 13:21:19.085908 systemd-logind[1549]: Session 8 logged out. Waiting for processes to exit. Apr 14 13:21:19.092863 systemd[1]: session-8.scope: Deactivated successfully. Apr 14 13:21:19.735481 systemd-logind[1549]: Removed session 8. Apr 14 13:21:22.595788 kubelet[2716]: E0414 13:21:22.595205 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 13:21:23.074853 kubelet[2716]: E0414 13:21:23.073282 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="13.541s" Apr 14 13:21:24.098127 systemd[1]: Started sshd@8-10.0.0.15:22-10.0.0.1:60332.service - OpenSSH per-connection server daemon (10.0.0.1:60332). 
Apr 14 13:21:25.142698 kubelet[2716]: E0414 13:21:25.142411 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:21:25.954067 sshd[3186]: Accepted publickey for core from 10.0.0.1 port 60332 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:21:26.217564 containerd[1593]: time="2026-04-14T13:21:26.212854623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5cdcd8fb9d-2crff,Uid:e5de7976-2d57-4f44-8a4b-ef2127456c13,Namespace:calico-system,Attempt:0,}"
Apr 14 13:21:26.387453 sshd[3186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:21:27.930528 systemd-logind[1549]: New session 9 of user core.
Apr 14 13:21:27.959592 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 14 13:21:31.325510 kubelet[2716]: E0414 13:21:31.308568 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:21:37.080615 kubelet[2716]: E0414 13:21:37.045131 2716 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 14 13:21:40.744371 containerd[1593]: time="2026-04-14T13:21:40.727459772Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 14 13:21:40.744371 containerd[1593]: time="2026-04-14T13:21:40.727836161Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 14 13:21:40.744371 containerd[1593]: time="2026-04-14T13:21:40.727849757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 13:21:40.984848 containerd[1593]: time="2026-04-14T13:21:40.932666795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 13:21:44.007540 sshd[3186]: pam_unix(sshd:session): session closed for user core
Apr 14 13:21:44.635646 systemd[1]: sshd@8-10.0.0.15:22-10.0.0.1:60332.service: Deactivated successfully.
Apr 14 13:21:44.733008 systemd[1]: session-9.scope: Deactivated successfully.
Apr 14 13:21:44.892400 systemd-logind[1549]: Session 9 logged out. Waiting for processes to exit.
Apr 14 13:21:45.594998 systemd-logind[1549]: Removed session 9.
Apr 14 13:21:49.941601 systemd[1]: Started sshd@9-10.0.0.15:22-10.0.0.1:44824.service - OpenSSH per-connection server daemon (10.0.0.1:44824).
Apr 14 13:21:53.474401 kubelet[2716]: E0414 13:21:53.463061 2716 request.go:1360] "Unexpected error when reading response body" err="net/http: request canceled (Client.Timeout or context cancellation while reading body)"
Apr 14 13:21:55.744834 containerd[1593]: time="2026-04-14T13:21:55.741230382Z" level=error msg="ttrpc: received message on inactive stream" stream=53
Apr 14 13:21:56.273363 containerd[1593]: time="2026-04-14T13:21:56.246728049Z" level=error msg="failed to handle container TaskExit event container_id:\"b9b5b4643903b27b7a6c10e476d45d1a898dc14eca66db1d53a33f11305729a4\" id:\"b9b5b4643903b27b7a6c10e476d45d1a898dc14eca66db1d53a33f11305729a4\" pid:2548 exit_status:1 exited_at:{seconds:1776172903 nanos:137538134}" error="failed to stop container: context deadline exceeded: unknown"
Apr 14 13:21:56.741002 containerd[1593]: time="2026-04-14T13:21:56.734844613Z" level=error msg="ttrpc: received message on inactive stream" stream=55
Apr 14 13:21:59.026771 containerd[1593]: time="2026-04-14T13:21:59.023704513Z" level=info msg="TaskExit event container_id:\"b9b5b4643903b27b7a6c10e476d45d1a898dc14eca66db1d53a33f11305729a4\" id:\"b9b5b4643903b27b7a6c10e476d45d1a898dc14eca66db1d53a33f11305729a4\" pid:2548 exit_status:1 exited_at:{seconds:1776172903 nanos:137538134}"
Apr 14 13:22:07.082881 containerd[1593]: time="2026-04-14T13:22:07.081325985Z" level=error msg="failed to handle container TaskExit event container_id:\"4ba36ec4dc3157ae25536e57af758f739c6f8f4ed64279d5c654ee0aee3d0337\" id:\"4ba36ec4dc3157ae25536e57af758f739c6f8f4ed64279d5c654ee0aee3d0337\" pid:2538 exit_status:1 exited_at:{seconds:1776172915 nanos:631152089}" error="failed to stop container: context deadline exceeded: unknown"
Apr 14 13:22:09.499816 sshd[3246]: Accepted publickey for core from 10.0.0.1 port 44824 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:22:09.839773 containerd[1593]: time="2026-04-14T13:22:09.023053191Z" level=error msg="ttrpc: received message on inactive stream" stream=51
Apr 14 13:22:10.328844 containerd[1593]: time="2026-04-14T13:22:09.908754110Z" level=error msg="ttrpc: received message on inactive stream" stream=55
Apr 14 13:22:11.165538 sshd[3246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:22:11.503569 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9b5b4643903b27b7a6c10e476d45d1a898dc14eca66db1d53a33f11305729a4-rootfs.mount: Deactivated successfully.
Apr 14 13:22:11.761792 containerd[1593]: time="2026-04-14T13:22:11.642624731Z" level=error msg="Failed to handle backOff event container_id:\"b9b5b4643903b27b7a6c10e476d45d1a898dc14eca66db1d53a33f11305729a4\" id:\"b9b5b4643903b27b7a6c10e476d45d1a898dc14eca66db1d53a33f11305729a4\" pid:2548 exit_status:1 exited_at:{seconds:1776172903 nanos:137538134} for b9b5b4643903b27b7a6c10e476d45d1a898dc14eca66db1d53a33f11305729a4" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded: unknown"
Apr 14 13:22:11.761792 containerd[1593]: time="2026-04-14T13:22:11.642984648Z" level=info msg="TaskExit event container_id:\"4ba36ec4dc3157ae25536e57af758f739c6f8f4ed64279d5c654ee0aee3d0337\" id:\"4ba36ec4dc3157ae25536e57af758f739c6f8f4ed64279d5c654ee0aee3d0337\" pid:2538 exit_status:1 exited_at:{seconds:1776172915 nanos:631152089}"
Apr 14 13:22:12.035545 containerd[1593]: time="2026-04-14T13:22:12.033217194Z" level=error msg="ttrpc: received message on inactive stream" stream=67
Apr 14 13:22:12.327507 kubelet[2716]: E0414 13:22:09.002858 2716 controller.go:195] "Failed to update lease" err="unexpected error when reading response body. Please retry. Original error: net/http: request canceled (Client.Timeout or context cancellation while reading body)"
Apr 14 13:22:12.995924 systemd-logind[1549]: New session 10 of user core.
Apr 14 13:22:13.304477 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 14 13:22:14.484364 kubelet[2716]: E0414 13:22:14.478663 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:22:18.057640 containerd[1593]: time="2026-04-14T13:22:18.041790766Z" level=info msg="shim disconnected" id=4ba36ec4dc3157ae25536e57af758f739c6f8f4ed64279d5c654ee0aee3d0337 namespace=k8s.io
Apr 14 13:22:18.057640 containerd[1593]: time="2026-04-14T13:22:18.054154969Z" level=warning msg="cleaning up after shim disconnected" id=4ba36ec4dc3157ae25536e57af758f739c6f8f4ed64279d5c654ee0aee3d0337 namespace=k8s.io
Apr 14 13:22:18.156966 containerd[1593]: time="2026-04-14T13:22:18.056001486Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 14 13:22:18.215910 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ba36ec4dc3157ae25536e57af758f739c6f8f4ed64279d5c654ee0aee3d0337-rootfs.mount: Deactivated successfully.
Apr 14 13:22:22.020983 containerd[1593]: time="2026-04-14T13:22:21.975042100Z" level=error msg="failed to delete shim" error="1 error occurred:\n\t* close wait error: context deadline exceeded\n\n" id=4ba36ec4dc3157ae25536e57af758f739c6f8f4ed64279d5c654ee0aee3d0337
Apr 14 13:22:22.636062 containerd[1593]: time="2026-04-14T13:22:22.635741498Z" level=info msg="TaskExit event container_id:\"b9b5b4643903b27b7a6c10e476d45d1a898dc14eca66db1d53a33f11305729a4\" id:\"b9b5b4643903b27b7a6c10e476d45d1a898dc14eca66db1d53a33f11305729a4\" pid:2548 exit_status:1 exited_at:{seconds:1776172903 nanos:137538134}"
Apr 14 13:22:22.944969 kubelet[2716]: E0414 13:22:22.937630 2716 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded"
Apr 14 13:22:29.571560 containerd[1593]: time="2026-04-14T13:22:29.546065417Z" level=info msg="shim disconnected" id=b9b5b4643903b27b7a6c10e476d45d1a898dc14eca66db1d53a33f11305729a4 namespace=k8s.io
Apr 14 13:22:29.710701 containerd[1593]: time="2026-04-14T13:22:29.709444934Z" level=warning msg="cleaning up after shim disconnected" id=b9b5b4643903b27b7a6c10e476d45d1a898dc14eca66db1d53a33f11305729a4 namespace=k8s.io
Apr 14 13:22:29.728646 containerd[1593]: time="2026-04-14T13:22:29.726581503Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 14 13:22:32.910622 containerd[1593]: time="2026-04-14T13:22:32.905912337Z" level=error msg="failed to delete shim" error="1 error occurred:\n\t* close wait error: context deadline exceeded\n\n" id=b9b5b4643903b27b7a6c10e476d45d1a898dc14eca66db1d53a33f11305729a4
Apr 14 13:22:33.921818 kubelet[2716]: E0414 13:22:31.288945 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:22:34.940878 containerd[1593]: time="2026-04-14T13:22:34.937771043Z" level=error msg="failed to delete" cmd="/usr/bin/containerd-shim-runc-v2 -namespace k8s.io -address /run/containerd/containerd.sock -publish-binary /usr/bin/containerd -id b9b5b4643903b27b7a6c10e476d45d1a898dc14eca66db1d53a33f11305729a4 -bundle /run/containerd/io.containerd.runtime.v2.task/k8s.io/b9b5b4643903b27b7a6c10e476d45d1a898dc14eca66db1d53a33f11305729a4 delete" error="signal: killed" namespace=k8s.io
Apr 14 13:22:35.234802 containerd[1593]: time="2026-04-14T13:22:34.947978577Z" level=warning msg="failed to clean up after shim disconnected" error=": signal: killed" id=b9b5b4643903b27b7a6c10e476d45d1a898dc14eca66db1d53a33f11305729a4 namespace=k8s.io
Apr 14 13:22:39.394179 containerd[1593]: time="2026-04-14T13:22:39.044675198Z" level=error msg="failed to handle container TaskExit event container_id:\"982eb30a46df62c511ccd4ae8a7a6d6aa6ce6d1f39048188c646d737926a8a46\" id:\"982eb30a46df62c511ccd4ae8a7a6d6aa6ce6d1f39048188c646d737926a8a46\" pid:3053 exit_status:1 exited_at:{seconds:1776172948 nanos:397627691}" error="failed to stop container: failed to delete task: context deadline exceeded: unknown"
Apr 14 13:22:41.361262 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-982eb30a46df62c511ccd4ae8a7a6d6aa6ce6d1f39048188c646d737926a8a46-rootfs.mount: Deactivated successfully.
Apr 14 13:22:41.648929 containerd[1593]: time="2026-04-14T13:22:41.645164017Z" level=info msg="TaskExit event container_id:\"982eb30a46df62c511ccd4ae8a7a6d6aa6ce6d1f39048188c646d737926a8a46\" id:\"982eb30a46df62c511ccd4ae8a7a6d6aa6ce6d1f39048188c646d737926a8a46\" pid:3053 exit_status:1 exited_at:{seconds:1776172948 nanos:397627691}"
Apr 14 13:22:41.753661 containerd[1593]: time="2026-04-14T13:22:41.684723622Z" level=error msg="ttrpc: received message on inactive stream" stream=47
Apr 14 13:22:43.689386 kubelet[2716]: E0414 13:22:42.594351 2716 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded"
Apr 14 13:22:47.760952 kubelet[2716]: E0414 13:22:47.756483 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:22:51.350605 containerd[1593]: time="2026-04-14T13:22:51.348472064Z" level=error msg="Failed to handle backOff event container_id:\"982eb30a46df62c511ccd4ae8a7a6d6aa6ce6d1f39048188c646d737926a8a46\" id:\"982eb30a46df62c511ccd4ae8a7a6d6aa6ce6d1f39048188c646d737926a8a46\" pid:3053 exit_status:1 exited_at:{seconds:1776172948 nanos:397627691} for 982eb30a46df62c511ccd4ae8a7a6d6aa6ce6d1f39048188c646d737926a8a46" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded: unknown"
Apr 14 13:22:51.452797 kubelet[2716]: E0414 13:22:50.546914 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1m27.468s"
Apr 14 13:22:51.875789 containerd[1593]: time="2026-04-14T13:22:51.850200888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5cdcd8fb9d-2crff,Uid:e5de7976-2d57-4f44-8a4b-ef2127456c13,Namespace:calico-system,Attempt:0,} returns sandbox id \"2f089032047f66f3fb95b411c98a19e3bc54b8da53be70ce15fb45202d66a9a0\""
Apr 14 13:22:52.395048 containerd[1593]: time="2026-04-14T13:22:52.388425298Z" level=error msg="ttrpc: received message on inactive stream" stream=59
Apr 14 13:22:53.317383 kubelet[2716]: E0414 13:22:53.298624 2716 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"localhost\": the object has been modified; please apply your changes to the latest version and try again"
Apr 14 13:22:53.622519 kubelet[2716]: E0414 13:22:53.523643 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:22:54.399371 containerd[1593]: time="2026-04-14T13:22:54.398567615Z" level=info msg="TaskExit event container_id:\"982eb30a46df62c511ccd4ae8a7a6d6aa6ce6d1f39048188c646d737926a8a46\" id:\"982eb30a46df62c511ccd4ae8a7a6d6aa6ce6d1f39048188c646d737926a8a46\" pid:3053 exit_status:1 exited_at:{seconds:1776172948 nanos:397627691}"
Apr 14 13:22:55.492371 kubelet[2716]: E0414 13:22:55.481470 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.164s"
Apr 14 13:22:55.558783 sshd[3246]: pam_unix(sshd:session): session closed for user core
Apr 14 13:22:56.039663 containerd[1593]: time="2026-04-14T13:22:56.039335130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\""
Apr 14 13:22:56.156781 systemd[1]: sshd@9-10.0.0.15:22-10.0.0.1:44824.service: Deactivated successfully.
Apr 14 13:22:56.487972 systemd[1]: session-10.scope: Deactivated successfully.
Apr 14 13:22:56.735985 systemd-logind[1549]: Session 10 logged out. Waiting for processes to exit.
Apr 14 13:22:56.956274 systemd-logind[1549]: Removed session 10.
Apr 14 13:22:57.174777 kubelet[2716]: I0414 13:22:57.170605 2716 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Apr 14 13:22:57.848380 kubelet[2716]: E0414 13:22:57.846292 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:23:00.681350 containerd[1593]: time="2026-04-14T13:23:00.680439299Z" level=info msg="shim disconnected" id=982eb30a46df62c511ccd4ae8a7a6d6aa6ce6d1f39048188c646d737926a8a46 namespace=k8s.io
Apr 14 13:23:00.753347 containerd[1593]: time="2026-04-14T13:23:00.751650055Z" level=warning msg="cleaning up after shim disconnected" id=982eb30a46df62c511ccd4ae8a7a6d6aa6ce6d1f39048188c646d737926a8a46 namespace=k8s.io
Apr 14 13:23:00.761634 containerd[1593]: time="2026-04-14T13:23:00.760951738Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 14 13:23:01.032719 systemd[1]: Started sshd@10-10.0.0.15:22-10.0.0.1:50594.service - OpenSSH per-connection server daemon (10.0.0.1:50594).
Apr 14 13:23:03.760048 kubelet[2716]: E0414 13:23:03.759868 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.272s"
Apr 14 13:23:03.930969 kubelet[2716]: E0414 13:23:03.930845 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:23:04.401747 kubelet[2716]: I0414 13:23:04.165761 2716 scope.go:117] "RemoveContainer" containerID="4ba36ec4dc3157ae25536e57af758f739c6f8f4ed64279d5c654ee0aee3d0337"
Apr 14 13:23:04.438717 sshd[3376]: Accepted publickey for core from 10.0.0.1 port 50594 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:23:04.529781 sshd[3376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:23:04.567704 containerd[1593]: time="2026-04-14T13:23:04.539553357Z" level=error msg="failed to delete shim" error="1 error occurred:\n\t* close wait error: context deadline exceeded\n\n" id=982eb30a46df62c511ccd4ae8a7a6d6aa6ce6d1f39048188c646d737926a8a46
Apr 14 13:23:04.654626 kubelet[2716]: E0414 13:23:04.642725 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:23:05.387926 systemd-logind[1549]: New session 11 of user core.
Apr 14 13:23:05.479147 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 14 13:23:05.480626 kubelet[2716]: E0414 13:23:05.479962 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.72s"
Apr 14 13:23:06.047264 containerd[1593]: time="2026-04-14T13:23:06.041281479Z" level=info msg="CreateContainer within sandbox \"f3e6973b214880ab161cc4c485407363ab9c0d5e9c44f9984ce852d2f1d16415\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 14 13:23:06.399659 kubelet[2716]: E0414 13:23:06.071779 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:23:07.466569 kubelet[2716]: I0414 13:23:07.454654 2716 scope.go:117] "RemoveContainer" containerID="b9b5b4643903b27b7a6c10e476d45d1a898dc14eca66db1d53a33f11305729a4"
Apr 14 13:23:08.014708 kubelet[2716]: E0414 13:23:08.014258 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:23:08.946772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4116212249.mount: Deactivated successfully.
Apr 14 13:23:08.979866 kubelet[2716]: E0414 13:23:08.970105 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:23:09.063857 kubelet[2716]: E0414 13:23:09.039055 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.504s"
Apr 14 13:23:09.761045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2757063202.mount: Deactivated successfully.
Apr 14 13:23:10.239963 containerd[1593]: time="2026-04-14T13:23:10.238752413Z" level=info msg="CreateContainer within sandbox \"f3e6973b214880ab161cc4c485407363ab9c0d5e9c44f9984ce852d2f1d16415\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"8c8514212d0393aac6466db1e24b950cd6e3c3dbf77b223fd7900d970e8909c3\""
Apr 14 13:23:13.689376 containerd[1593]: time="2026-04-14T13:23:13.673334746Z" level=info msg="StartContainer for \"8c8514212d0393aac6466db1e24b950cd6e3c3dbf77b223fd7900d970e8909c3\""
Apr 14 13:23:14.166599 kubelet[2716]: E0414 13:23:14.045294 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:23:15.783763 sshd[3376]: pam_unix(sshd:session): session closed for user core
Apr 14 13:23:16.038501 systemd[1]: sshd@10-10.0.0.15:22-10.0.0.1:50594.service: Deactivated successfully.
Apr 14 13:23:16.142861 containerd[1593]: time="2026-04-14T13:23:16.137019131Z" level=info msg="CreateContainer within sandbox \"ff98d78c4020fc8d7abd9fd7aa79ee03d748d4d072cf21ac8ff93a44d24d7a30\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 14 13:23:16.300326 systemd[1]: session-11.scope: Deactivated successfully.
Apr 14 13:23:16.532512 systemd-logind[1549]: Session 11 logged out. Waiting for processes to exit.
Apr 14 13:23:16.689250 systemd-logind[1549]: Removed session 11.
Apr 14 13:23:17.609020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount837452845.mount: Deactivated successfully.
Apr 14 13:23:17.867391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1988986597.mount: Deactivated successfully.
Apr 14 13:23:18.228664 kubelet[2716]: E0414 13:23:18.173810 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="9.115s"
Apr 14 13:23:19.023539 containerd[1593]: time="2026-04-14T13:23:19.022956609Z" level=info msg="CreateContainer within sandbox \"ff98d78c4020fc8d7abd9fd7aa79ee03d748d4d072cf21ac8ff93a44d24d7a30\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"ad16724f35edb3b8ca6c24ed0b291c5cd1c47741fc2a1f346f13c9ccead1d6e9\""
Apr 14 13:23:19.447758 kubelet[2716]: E0414 13:23:19.439329 2716 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod0946a41e-7e03-41a0-8dd8-549abdcdc5a2/982eb30a46df62c511ccd4ae8a7a6d6aa6ce6d1f39048188c646d737926a8a46: task 982eb30a46df62c511ccd4ae8a7a6d6aa6ce6d1f39048188c646d737926a8a46 not found
Apr 14 13:23:19.464681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1584390587.mount: Deactivated successfully.
Apr 14 13:23:19.738614 kubelet[2716]: E0414 13:23:19.729609 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.55s"
Apr 14 13:23:19.757689 kubelet[2716]: E0414 13:23:19.755965 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:23:19.794604 containerd[1593]: time="2026-04-14T13:23:19.782180205Z" level=info msg="StartContainer for \"ad16724f35edb3b8ca6c24ed0b291c5cd1c47741fc2a1f346f13c9ccead1d6e9\""
Apr 14 13:23:19.850807 kubelet[2716]: I0414 13:23:19.829873 2716 scope.go:117] "RemoveContainer" containerID="982eb30a46df62c511ccd4ae8a7a6d6aa6ce6d1f39048188c646d737926a8a46"
Apr 14 13:23:20.955958 systemd[1]: Started sshd@11-10.0.0.15:22-10.0.0.1:58054.service - OpenSSH per-connection server daemon (10.0.0.1:58054).
Apr 14 13:23:21.066990 kubelet[2716]: E0414 13:23:20.861760 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.121s"
Apr 14 13:23:22.833855 sshd[3442]: Accepted publickey for core from 10.0.0.1 port 58054 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:23:23.181940 containerd[1593]: time="2026-04-14T13:23:23.164066810Z" level=info msg="CreateContainer within sandbox \"55636687ab4158ad9e6b2dc567b56e4adf95551dca2e49c96f3ef41b32bc27ea\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Apr 14 13:23:23.165256 sshd[3442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:23:23.247025 containerd[1593]: time="2026-04-14T13:23:23.237287790Z" level=info msg="StartContainer for \"8c8514212d0393aac6466db1e24b950cd6e3c3dbf77b223fd7900d970e8909c3\" returns successfully"
Apr 14 13:23:23.533538 systemd-logind[1549]: New session 12 of user core.
Apr 14 13:23:23.576304 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 14 13:23:24.843823 kubelet[2716]: E0414 13:23:24.838277 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.793s"
Apr 14 13:23:27.316399 containerd[1593]: time="2026-04-14T13:23:27.314992247Z" level=info msg="StartContainer for \"ad16724f35edb3b8ca6c24ed0b291c5cd1c47741fc2a1f346f13c9ccead1d6e9\" returns successfully"
Apr 14 13:23:27.591941 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount206032760.mount: Deactivated successfully.
Apr 14 13:23:28.859645 kubelet[2716]: E0414 13:23:28.852105 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:23:29.293814 containerd[1593]: time="2026-04-14T13:23:29.289971956Z" level=info msg="CreateContainer within sandbox \"55636687ab4158ad9e6b2dc567b56e4adf95551dca2e49c96f3ef41b32bc27ea\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"08bea41936ed2ff345ba6e605ca49bb15f4701cea3225733d0a55a798cd6c8c3\""
Apr 14 13:23:34.924208 containerd[1593]: time="2026-04-14T13:23:34.919196109Z" level=info msg="StartContainer for \"08bea41936ed2ff345ba6e605ca49bb15f4701cea3225733d0a55a798cd6c8c3\""
Apr 14 13:23:36.428559 sshd[3442]: pam_unix(sshd:session): session closed for user core
Apr 14 13:23:36.774699 systemd[1]: sshd@11-10.0.0.15:22-10.0.0.1:58054.service: Deactivated successfully.
Apr 14 13:23:36.966954 systemd[1]: session-12.scope: Deactivated successfully.
Apr 14 13:23:37.250067 systemd-logind[1549]: Session 12 logged out. Waiting for processes to exit.
Apr 14 13:23:37.475042 systemd-logind[1549]: Removed session 12.
Apr 14 13:23:39.978166 kubelet[2716]: E0414 13:23:39.977825 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:23:41.040537 systemd[1]: run-containerd-runc-k8s.io-08bea41936ed2ff345ba6e605ca49bb15f4701cea3225733d0a55a798cd6c8c3-runc.N1739P.mount: Deactivated successfully.
Apr 14 13:23:41.701555 systemd[1]: Started sshd@12-10.0.0.15:22-10.0.0.1:48628.service - OpenSSH per-connection server daemon (10.0.0.1:48628).
Apr 14 13:23:44.470618 kubelet[2716]: E0414 13:23:44.078517 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="19.224s"
Apr 14 13:23:44.665324 sshd[3529]: Accepted publickey for core from 10.0.0.1 port 48628 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:23:44.884936 sshd[3529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:23:45.736880 systemd-logind[1549]: New session 13 of user core.
Apr 14 13:23:45.957935 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 14 13:23:47.982616 containerd[1593]: time="2026-04-14T13:23:47.981556110Z" level=info msg="StartContainer for \"08bea41936ed2ff345ba6e605ca49bb15f4701cea3225733d0a55a798cd6c8c3\" returns successfully"
Apr 14 13:23:48.047776 kubelet[2716]: E0414 13:23:48.042442 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:23:52.758865 kubelet[2716]: E0414 13:23:52.753586 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:23:58.874261 kubelet[2716]: E0414 13:23:58.853743 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="14.289s"
Apr 14 13:23:59.051766 kubelet[2716]: E0414 13:23:59.037959 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:23:59.831554 sshd[3529]: pam_unix(sshd:session): session closed for user core
Apr 14 13:24:00.238802 systemd[1]: sshd@12-10.0.0.15:22-10.0.0.1:48628.service: Deactivated successfully.
Apr 14 13:24:00.696768 systemd[1]: session-13.scope: Deactivated successfully.
Apr 14 13:24:00.755763 systemd-logind[1549]: Session 13 logged out. Waiting for processes to exit.
Apr 14 13:24:01.143714 systemd-logind[1549]: Removed session 13.
Apr 14 13:24:05.407233 systemd[1]: Started sshd@13-10.0.0.15:22-10.0.0.1:36398.service - OpenSSH per-connection server daemon (10.0.0.1:36398).
Apr 14 13:24:08.727824 sshd[3566]: Accepted publickey for core from 10.0.0.1 port 36398 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:24:08.850442 sshd[3566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:24:09.841384 kubelet[2716]: E0414 13:24:09.840987 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:24:09.865369 systemd-logind[1549]: New session 14 of user core.
Apr 14 13:24:10.182918 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 14 13:24:13.580036 kubelet[2716]: E0414 13:24:13.576973 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="13.821s"
Apr 14 13:24:17.064707 sshd[3566]: pam_unix(sshd:session): session closed for user core
Apr 14 13:24:17.377714 systemd[1]: sshd@13-10.0.0.15:22-10.0.0.1:36398.service: Deactivated successfully.
Apr 14 13:24:17.640784 systemd[1]: session-14.scope: Deactivated successfully.
Apr 14 13:24:17.828635 systemd-logind[1549]: Session 14 logged out. Waiting for processes to exit.
Apr 14 13:24:18.353606 systemd-logind[1549]: Removed session 14.
Apr 14 13:24:20.256270 kubelet[2716]: E0414 13:24:20.046373 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:24:22.772144 systemd[1]: Started sshd@14-10.0.0.15:22-10.0.0.1:60920.service - OpenSSH per-connection server daemon (10.0.0.1:60920).
Apr 14 13:24:23.692701 sshd[3587]: Accepted publickey for core from 10.0.0.1 port 60920 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:24:23.868024 sshd[3587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:24:24.737469 systemd-logind[1549]: New session 15 of user core.
Apr 14 13:24:24.771524 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 14 13:24:24.875982 kubelet[2716]: E0414 13:24:24.875737 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="11.236s"
Apr 14 13:24:26.865439 kubelet[2716]: E0414 13:24:26.845517 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:24:27.894435 kubelet[2716]: E0414 13:24:27.893578 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.009s"
Apr 14 13:24:28.982673 kubelet[2716]: E0414 13:24:28.982451 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:24:29.950904 sshd[3587]: pam_unix(sshd:session): session closed for user core
Apr 14 13:24:30.085650 systemd[1]: sshd@14-10.0.0.15:22-10.0.0.1:60920.service: Deactivated successfully.
Apr 14 13:24:30.333012 systemd[1]: session-15.scope: Deactivated successfully.
Apr 14 13:24:30.661842 systemd-logind[1549]: Session 15 logged out. Waiting for processes to exit.
Apr 14 13:24:30.938510 systemd-logind[1549]: Removed session 15.
Apr 14 13:24:35.015762 kubelet[2716]: E0414 13:24:35.011032 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:24:35.835990 systemd[1]: Started sshd@15-10.0.0.15:22-10.0.0.1:51104.service - OpenSSH per-connection server daemon (10.0.0.1:51104).
Apr 14 13:24:39.095737 sshd[3605]: Accepted publickey for core from 10.0.0.1 port 51104 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:24:39.334994 sshd[3605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:24:40.836851 systemd-logind[1549]: New session 16 of user core.
Apr 14 13:24:40.884626 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 14 13:24:55.802910 kubelet[2716]: E0414 13:24:55.782335 2716 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Apr 14 13:24:55.869164 kubelet[2716]: E0414 13:24:55.837568 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:24:55.945293 kubelet[2716]: E0414 13:24:55.942353 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="27.881s"
Apr 14 13:24:55.998524 sshd[3605]: pam_unix(sshd:session): session closed for user core
Apr 14 13:24:56.059648 systemd[1]: sshd@15-10.0.0.15:22-10.0.0.1:51104.service: Deactivated successfully.
Apr 14 13:24:56.160417 systemd[1]: session-16.scope: Deactivated successfully.
Apr 14 13:24:56.328953 systemd-logind[1549]: Session 16 logged out. Waiting for processes to exit.
Apr 14 13:24:56.539652 systemd-logind[1549]: Removed session 16.
Apr 14 13:24:56.573733 kubelet[2716]: E0414 13:24:56.573644 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:24:56.576225 kubelet[2716]: E0414 13:24:56.574513 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:24:56.576225 kubelet[2716]: E0414 13:24:56.574596 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:24:56.706800 containerd[1593]: time="2026-04-14T13:24:56.673423625Z" level=info msg="StopContainer for \"8c8514212d0393aac6466db1e24b950cd6e3c3dbf77b223fd7900d970e8909c3\" with timeout 30 (s)"
Apr 14 13:24:56.745172 containerd[1593]: time="2026-04-14T13:24:56.742712431Z" level=info msg="Stop container \"8c8514212d0393aac6466db1e24b950cd6e3c3dbf77b223fd7900d970e8909c3\" with signal terminated"
Apr 14 13:24:56.859741 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad16724f35edb3b8ca6c24ed0b291c5cd1c47741fc2a1f346f13c9ccead1d6e9-rootfs.mount: Deactivated successfully.
Apr 14 13:24:56.980938 containerd[1593]: time="2026-04-14T13:24:56.962367110Z" level=info msg="shim disconnected" id=ad16724f35edb3b8ca6c24ed0b291c5cd1c47741fc2a1f346f13c9ccead1d6e9 namespace=k8s.io
Apr 14 13:24:56.980938 containerd[1593]: time="2026-04-14T13:24:56.972832030Z" level=warning msg="cleaning up after shim disconnected" id=ad16724f35edb3b8ca6c24ed0b291c5cd1c47741fc2a1f346f13c9ccead1d6e9 namespace=k8s.io
Apr 14 13:24:56.980938 containerd[1593]: time="2026-04-14T13:24:56.972847622Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 14 13:24:56.984784 containerd[1593]: time="2026-04-14T13:24:56.964783196Z" level=error msg="collecting metrics for ad16724f35edb3b8ca6c24ed0b291c5cd1c47741fc2a1f346f13c9ccead1d6e9" error="ttrpc: closed: unknown"
Apr 14 13:24:57.847759 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c8514212d0393aac6466db1e24b950cd6e3c3dbf77b223fd7900d970e8909c3-rootfs.mount: Deactivated successfully.
Apr 14 13:24:57.885056 containerd[1593]: time="2026-04-14T13:24:57.884913296Z" level=info msg="shim disconnected" id=8c8514212d0393aac6466db1e24b950cd6e3c3dbf77b223fd7900d970e8909c3 namespace=k8s.io
Apr 14 13:24:57.887526 containerd[1593]: time="2026-04-14T13:24:57.885061943Z" level=warning msg="cleaning up after shim disconnected" id=8c8514212d0393aac6466db1e24b950cd6e3c3dbf77b223fd7900d970e8909c3 namespace=k8s.io
Apr 14 13:24:57.887526 containerd[1593]: time="2026-04-14T13:24:57.886470146Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 14 13:24:58.082753 containerd[1593]: time="2026-04-14T13:24:58.073013087Z" level=warning msg="cleanup warnings time=\"2026-04-14T13:24:58Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 14 13:24:58.191204 containerd[1593]: time="2026-04-14T13:24:58.190448038Z" level=info msg="StopContainer for \"8c8514212d0393aac6466db1e24b950cd6e3c3dbf77b223fd7900d970e8909c3\" returns successfully"
Apr 14 13:24:58.263874 kubelet[2716]: E0414 13:24:58.258241 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:24:58.295798 kubelet[2716]: I0414 13:24:58.295616 2716 scope.go:117] "RemoveContainer" containerID="b9b5b4643903b27b7a6c10e476d45d1a898dc14eca66db1d53a33f11305729a4"
Apr 14 13:24:58.525627 containerd[1593]: time="2026-04-14T13:24:58.485009237Z" level=info msg="CreateContainer within sandbox \"f3e6973b214880ab161cc4c485407363ab9c0d5e9c44f9984ce852d2f1d16415\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:2,}"
Apr 14 13:24:58.533799 kubelet[2716]: I0414 13:24:58.533601 2716 scope.go:117] "RemoveContainer" containerID="ad16724f35edb3b8ca6c24ed0b291c5cd1c47741fc2a1f346f13c9ccead1d6e9"
Apr 14 13:24:58.534203 kubelet[2716]: E0414 13:24:58.533939 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:24:58.534279 kubelet[2716]: E0414 13:24:58.534229 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(ebf8e820819e4b80bc03d078b9ba80f5)\"" pod="kube-system/kube-controller-manager-localhost" podUID="ebf8e820819e4b80bc03d078b9ba80f5"
Apr 14 13:24:58.535681 containerd[1593]: time="2026-04-14T13:24:58.535636953Z" level=info msg="RemoveContainer for \"b9b5b4643903b27b7a6c10e476d45d1a898dc14eca66db1d53a33f11305729a4\""
Apr 14 13:24:58.610845 containerd[1593]: time="2026-04-14T13:24:58.610578330Z" level=info msg="RemoveContainer for \"b9b5b4643903b27b7a6c10e476d45d1a898dc14eca66db1d53a33f11305729a4\" returns successfully"
Apr 14 13:24:58.938486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount907115509.mount: Deactivated successfully.
Apr 14 13:24:59.252164 containerd[1593]: time="2026-04-14T13:24:59.249974294Z" level=info msg="CreateContainer within sandbox \"f3e6973b214880ab161cc4c485407363ab9c0d5e9c44f9984ce852d2f1d16415\" for &ContainerMetadata{Name:kube-scheduler,Attempt:2,} returns container id \"ad63c135498eb2410c28d8f94080be8c732dc5a80b2d8eeac1d7962ed97eba41\""
Apr 14 13:24:59.334358 containerd[1593]: time="2026-04-14T13:24:59.330415393Z" level=info msg="StartContainer for \"ad63c135498eb2410c28d8f94080be8c732dc5a80b2d8eeac1d7962ed97eba41\""
Apr 14 13:25:00.141033 kubelet[2716]: I0414 13:25:00.140329 2716 scope.go:117] "RemoveContainer" containerID="4ba36ec4dc3157ae25536e57af758f739c6f8f4ed64279d5c654ee0aee3d0337"
Apr 14 13:25:00.220942 containerd[1593]: time="2026-04-14T13:25:00.220393883Z" level=info msg="RemoveContainer for \"4ba36ec4dc3157ae25536e57af758f739c6f8f4ed64279d5c654ee0aee3d0337\""
Apr 14 13:25:00.307612 containerd[1593]: time="2026-04-14T13:25:00.300832791Z" level=info msg="RemoveContainer for \"4ba36ec4dc3157ae25536e57af758f739c6f8f4ed64279d5c654ee0aee3d0337\" returns successfully"
Apr 14 13:25:00.496613 containerd[1593]: time="2026-04-14T13:25:00.496348682Z" level=info msg="StartContainer for \"ad63c135498eb2410c28d8f94080be8c732dc5a80b2d8eeac1d7962ed97eba41\" returns successfully"
Apr 14 13:25:00.863369 kubelet[2716]: E0414 13:25:00.863122 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:25:00.928855 containerd[1593]: time="2026-04-14T13:25:00.928705968Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 13:25:00.933386 containerd[1593]: time="2026-04-14T13:25:00.933303741Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596"
Apr 14 13:25:00.939440 containerd[1593]: time="2026-04-14T13:25:00.938124196Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 13:25:00.946297 containerd[1593]: time="2026-04-14T13:25:00.945551538Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 13:25:00.946297 containerd[1593]: time="2026-04-14T13:25:00.946027179Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2m4.893501772s"
Apr 14 13:25:00.946297 containerd[1593]: time="2026-04-14T13:25:00.946056549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\""
Apr 14 13:25:01.109636 systemd[1]: Started sshd@16-10.0.0.15:22-10.0.0.1:50616.service - OpenSSH per-connection server daemon (10.0.0.1:50616).
Apr 14 13:25:01.110477 containerd[1593]: time="2026-04-14T13:25:01.110386462Z" level=info msg="CreateContainer within sandbox \"2f089032047f66f3fb95b411c98a19e3bc54b8da53be70ce15fb45202d66a9a0\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Apr 14 13:25:01.138349 containerd[1593]: time="2026-04-14T13:25:01.134698300Z" level=info msg="CreateContainer within sandbox \"2f089032047f66f3fb95b411c98a19e3bc54b8da53be70ce15fb45202d66a9a0\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"92c7ad6f63c1bd549105e393a6b917b61535bc07f8e52965065ae323eac6dc6f\""
Apr 14 13:25:01.145238 containerd[1593]: time="2026-04-14T13:25:01.141784681Z" level=info msg="StartContainer for \"92c7ad6f63c1bd549105e393a6b917b61535bc07f8e52965065ae323eac6dc6f\""
Apr 14 13:25:01.507893 sshd[3724]: Accepted publickey for core from 10.0.0.1 port 50616 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:25:01.518432 sshd[3724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:25:01.600169 systemd-logind[1549]: New session 17 of user core.
Apr 14 13:25:01.619347 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 14 13:25:01.624542 kubelet[2716]: E0414 13:25:01.624206 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:25:02.000341 systemd[1]: run-containerd-runc-k8s.io-92c7ad6f63c1bd549105e393a6b917b61535bc07f8e52965065ae323eac6dc6f-runc.pGovgc.mount: Deactivated successfully.
Apr 14 13:25:02.608348 containerd[1593]: time="2026-04-14T13:25:02.608141702Z" level=info msg="StartContainer for \"92c7ad6f63c1bd549105e393a6b917b61535bc07f8e52965065ae323eac6dc6f\" returns successfully"
Apr 14 13:25:03.361351 kubelet[2716]: I0414 13:25:03.357965 2716 scope.go:117] "RemoveContainer" containerID="ad16724f35edb3b8ca6c24ed0b291c5cd1c47741fc2a1f346f13c9ccead1d6e9"
Apr 14 13:25:03.461979 kubelet[2716]: E0414 13:25:03.458858 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:25:03.647820 kubelet[2716]: E0414 13:25:03.644219 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:25:03.707935 containerd[1593]: time="2026-04-14T13:25:03.692883607Z" level=info msg="CreateContainer within sandbox \"ff98d78c4020fc8d7abd9fd7aa79ee03d748d4d072cf21ac8ff93a44d24d7a30\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:2,}"
Apr 14 13:25:03.695425 sshd[3724]: pam_unix(sshd:session): session closed for user core
Apr 14 13:25:03.891456 systemd[1]: sshd@16-10.0.0.15:22-10.0.0.1:50616.service: Deactivated successfully.
Apr 14 13:25:04.029894 systemd[1]: session-17.scope: Deactivated successfully.
Apr 14 13:25:04.083903 kubelet[2716]: E0414 13:25:04.072978 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:25:04.073805 systemd-logind[1549]: Session 17 logged out. Waiting for processes to exit.
Apr 14 13:25:04.146427 containerd[1593]: time="2026-04-14T13:25:04.144933845Z" level=info msg="CreateContainer within sandbox \"ff98d78c4020fc8d7abd9fd7aa79ee03d748d4d072cf21ac8ff93a44d24d7a30\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:2,} returns container id \"90882a6a4f97f7926ef397ea90f1af60fb0bbb41418ba0407f4b2f8d19648d7e\""
Apr 14 13:25:04.334780 systemd-logind[1549]: Removed session 17.
Apr 14 13:25:04.337208 containerd[1593]: time="2026-04-14T13:25:04.335732692Z" level=info msg="StartContainer for \"90882a6a4f97f7926ef397ea90f1af60fb0bbb41418ba0407f4b2f8d19648d7e\""
Apr 14 13:25:05.954320 kubelet[2716]: I0414 13:25:05.951817 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5cdcd8fb9d-2crff" podStartSLOduration=119.739465174 podStartE2EDuration="4m4.951794946s" podCreationTimestamp="2026-04-14 13:21:01 +0000 UTC" firstStartedPulling="2026-04-14 13:22:55.736385355 +0000 UTC m=+292.183497399" lastFinishedPulling="2026-04-14 13:25:00.948715137 +0000 UTC m=+417.395827171" observedRunningTime="2026-04-14 13:25:05.951575687 +0000 UTC m=+422.398687739" watchObservedRunningTime="2026-04-14 13:25:05.951794946 +0000 UTC m=+422.398906990"
Apr 14 13:25:06.490763 kubelet[2716]: I0414 13:25:06.489891 2716 scope.go:117] "RemoveContainer" containerID="ad16724f35edb3b8ca6c24ed0b291c5cd1c47741fc2a1f346f13c9ccead1d6e9"
Apr 14 13:25:06.650504 kubelet[2716]: E0414 13:25:06.487054 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:25:06.831183 containerd[1593]: time="2026-04-14T13:25:06.823448713Z" level=info msg="StartContainer for \"90882a6a4f97f7926ef397ea90f1af60fb0bbb41418ba0407f4b2f8d19648d7e\" returns successfully"
Apr 14 13:25:07.728406 kubelet[2716]: E0414 13:25:07.728276 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.082s"
Apr 14 13:25:07.910172 containerd[1593]: time="2026-04-14T13:25:07.908700604Z" level=info msg="RemoveContainer for \"ad16724f35edb3b8ca6c24ed0b291c5cd1c47741fc2a1f346f13c9ccead1d6e9\""
Apr 14 13:25:07.995946 containerd[1593]: time="2026-04-14T13:25:07.990846301Z" level=info msg="RemoveContainer for \"ad16724f35edb3b8ca6c24ed0b291c5cd1c47741fc2a1f346f13c9ccead1d6e9\" returns successfully"
Apr 14 13:25:08.076643 kubelet[2716]: E0414 13:25:08.075602 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:25:08.121261 kubelet[2716]: E0414 13:25:08.119231 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:25:08.761942 systemd[1]: Started sshd@17-10.0.0.15:22-10.0.0.1:50628.service - OpenSSH per-connection server daemon (10.0.0.1:50628).
Apr 14 13:25:09.236562 sshd[3824]: Accepted publickey for core from 10.0.0.1 port 50628 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:25:09.279028 sshd[3824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:25:09.480995 systemd-logind[1549]: New session 18 of user core.
Apr 14 13:25:09.562735 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 14 13:25:10.365863 kubelet[2716]: E0414 13:25:10.364742 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:25:10.874540 sshd[3824]: pam_unix(sshd:session): session closed for user core
Apr 14 13:25:10.983244 systemd[1]: sshd@17-10.0.0.15:22-10.0.0.1:50628.service: Deactivated successfully.
Apr 14 13:25:10.990610 systemd[1]: session-18.scope: Deactivated successfully.
Apr 14 13:25:10.994244 systemd-logind[1549]: Session 18 logged out. Waiting for processes to exit.
Apr 14 13:25:10.996166 systemd-logind[1549]: Removed session 18.
Apr 14 13:25:11.330379 kubelet[2716]: E0414 13:25:11.330245 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:25:11.663409 kubelet[2716]: E0414 13:25:11.663186 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:25:13.284573 kubelet[2716]: E0414 13:25:13.284055 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:25:13.438757 kubelet[2716]: E0414 13:25:13.438437 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:25:14.348676 kubelet[2716]: E0414 13:25:14.348557 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:25:15.900136 systemd[1]: Started sshd@18-10.0.0.15:22-10.0.0.1:35882.service - OpenSSH per-connection server daemon (10.0.0.1:35882).
Apr 14 13:25:15.962186 sshd[3849]: Accepted publickey for core from 10.0.0.1 port 35882 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:25:15.966862 sshd[3849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:25:15.994472 systemd-logind[1549]: New session 19 of user core.
Apr 14 13:25:16.006018 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 14 13:25:16.524282 sshd[3849]: pam_unix(sshd:session): session closed for user core
Apr 14 13:25:16.546815 systemd[1]: sshd@18-10.0.0.15:22-10.0.0.1:35882.service: Deactivated successfully.
Apr 14 13:25:16.572562 systemd[1]: session-19.scope: Deactivated successfully.
Apr 14 13:25:16.669870 systemd-logind[1549]: Session 19 logged out. Waiting for processes to exit.
Apr 14 13:25:16.685373 kubelet[2716]: E0414 13:25:16.684510 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:25:16.685355 systemd-logind[1549]: Removed session 19.
Apr 14 13:25:21.549225 systemd[1]: Started sshd@19-10.0.0.15:22-10.0.0.1:58450.service - OpenSSH per-connection server daemon (10.0.0.1:58450).
Apr 14 13:25:21.700238 kubelet[2716]: E0414 13:25:21.699887 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:25:21.714597 sshd[3866]: Accepted publickey for core from 10.0.0.1 port 58450 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:25:21.716474 sshd[3866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:25:21.763576 systemd-logind[1549]: New session 20 of user core.
Apr 14 13:25:21.781244 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 14 13:25:22.770871 sshd[3866]: pam_unix(sshd:session): session closed for user core
Apr 14 13:25:22.777428 systemd[1]: sshd@19-10.0.0.15:22-10.0.0.1:58450.service: Deactivated successfully.
Apr 14 13:25:22.786977 systemd[1]: session-20.scope: Deactivated successfully.
Apr 14 13:25:22.800332 systemd-logind[1549]: Session 20 logged out. Waiting for processes to exit.
Apr 14 13:25:22.805617 systemd-logind[1549]: Removed session 20.
Apr 14 13:25:23.329300 kubelet[2716]: E0414 13:25:23.329188 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:25:26.824871 kubelet[2716]: E0414 13:25:26.796016 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:25:27.915940 systemd[1]: Started sshd@20-10.0.0.15:22-10.0.0.1:58454.service - OpenSSH per-connection server daemon (10.0.0.1:58454).
Apr 14 13:25:28.048981 sshd[3884]: Accepted publickey for core from 10.0.0.1 port 58454 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:25:28.050889 sshd[3884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:25:28.469003 systemd-logind[1549]: New session 21 of user core.
Apr 14 13:25:28.547295 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 14 13:25:30.137683 sshd[3884]: pam_unix(sshd:session): session closed for user core
Apr 14 13:25:30.329065 systemd[1]: sshd@20-10.0.0.15:22-10.0.0.1:58454.service: Deactivated successfully.
Apr 14 13:25:30.475059 systemd[1]: session-21.scope: Deactivated successfully.
Apr 14 13:25:30.508722 systemd-logind[1549]: Session 21 logged out. Waiting for processes to exit.
Apr 14 13:25:30.527453 systemd-logind[1549]: Removed session 21.
Apr 14 13:25:31.971408 kubelet[2716]: E0414 13:25:31.968753 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:25:35.164563 systemd[1]: Started sshd@21-10.0.0.15:22-10.0.0.1:54360.service - OpenSSH per-connection server daemon (10.0.0.1:54360).
Apr 14 13:25:35.627591 sshd[3904]: Accepted publickey for core from 10.0.0.1 port 54360 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:25:35.720632 sshd[3904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:25:35.908813 systemd-logind[1549]: New session 22 of user core.
Apr 14 13:25:35.918887 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 14 13:25:37.429017 kubelet[2716]: E0414 13:25:37.428688 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:25:37.840373 kubelet[2716]: E0414 13:25:37.827712 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.165s"
Apr 14 13:25:39.974264 systemd-journald[1180]: Under memory pressure, flushing caches.
Apr 14 13:25:39.954192 systemd-resolved[1467]: Under memory pressure, flushing caches.
Apr 14 13:25:39.954436 systemd-resolved[1467]: Flushed all caches.
Apr 14 13:25:41.045461 sshd[3904]: pam_unix(sshd:session): session closed for user core
Apr 14 13:25:41.285535 systemd[1]: sshd@21-10.0.0.15:22-10.0.0.1:54360.service: Deactivated successfully.
Apr 14 13:25:41.396358 systemd[1]: session-22.scope: Deactivated successfully.
Apr 14 13:25:41.480905 systemd-logind[1549]: Session 22 logged out. Waiting for processes to exit.
Apr 14 13:25:41.661908 systemd-logind[1549]: Removed session 22.
Apr 14 13:25:41.814927 kubelet[2716]: E0414 13:25:41.765253 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.498s"
Apr 14 13:25:42.692253 kubelet[2716]: E0414 13:25:42.682892 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:25:42.843384 kubelet[2716]: I0414 13:25:42.841278 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/46e6eba7-2b7d-4fe5-a472-1f43193af109-cni-bin-dir\") pod \"calico-node-g58ml\" (UID: \"46e6eba7-2b7d-4fe5-a472-1f43193af109\") " pod="calico-system/calico-node-g58ml"
Apr 14 13:25:42.843384 kubelet[2716]: I0414 13:25:42.841454 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/46e6eba7-2b7d-4fe5-a472-1f43193af109-nodeproc\") pod \"calico-node-g58ml\" (UID: \"46e6eba7-2b7d-4fe5-a472-1f43193af109\") " pod="calico-system/calico-node-g58ml"
Apr 14 13:25:42.843384 kubelet[2716]: I0414 13:25:42.841490 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/46e6eba7-2b7d-4fe5-a472-1f43193af109-var-run-calico\") pod \"calico-node-g58ml\" (UID: \"46e6eba7-2b7d-4fe5-a472-1f43193af109\") " pod="calico-system/calico-node-g58ml"
Apr 14 13:25:42.843384 kubelet[2716]: I0414 13:25:42.841520 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/46e6eba7-2b7d-4fe5-a472-1f43193af109-bpffs\") pod \"calico-node-g58ml\" (UID: \"46e6eba7-2b7d-4fe5-a472-1f43193af109\") " pod="calico-system/calico-node-g58ml"
Apr 14 13:25:42.843384 kubelet[2716]: I0414 13:25:42.841541 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/46e6eba7-2b7d-4fe5-a472-1f43193af109-sys-fs\") pod \"calico-node-g58ml\" (UID: \"46e6eba7-2b7d-4fe5-a472-1f43193af109\") " pod="calico-system/calico-node-g58ml"
Apr 14 13:25:42.843384 kubelet[2716]: I0414 13:25:42.841560 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/46e6eba7-2b7d-4fe5-a472-1f43193af109-cni-log-dir\") pod \"calico-node-g58ml\" (UID: \"46e6eba7-2b7d-4fe5-a472-1f43193af109\") " pod="calico-system/calico-node-g58ml"
Apr 14 13:25:42.898052 kubelet[2716]: I0414 13:25:42.841579 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/46e6eba7-2b7d-4fe5-a472-1f43193af109-flexvol-driver-host\") pod \"calico-node-g58ml\" (UID: \"46e6eba7-2b7d-4fe5-a472-1f43193af109\") " pod="calico-system/calico-node-g58ml"
Apr 14 13:25:42.898052 kubelet[2716]: I0414 13:25:42.841602 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs9db\" (UniqueName: \"kubernetes.io/projected/46e6eba7-2b7d-4fe5-a472-1f43193af109-kube-api-access-bs9db\") pod \"calico-node-g58ml\" (UID: \"46e6eba7-2b7d-4fe5-a472-1f43193af109\") " pod="calico-system/calico-node-g58ml"
Apr 14 13:25:42.898052 kubelet[2716]: I0414 13:25:42.841622 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/46e6eba7-2b7d-4fe5-a472-1f43193af109-cni-net-dir\") pod \"calico-node-g58ml\" (UID: \"46e6eba7-2b7d-4fe5-a472-1f43193af109\") " pod="calico-system/calico-node-g58ml"
Apr 14 13:25:42.898052 kubelet[2716]: I0414 13:25:42.841639 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/46e6eba7-2b7d-4fe5-a472-1f43193af109-policysync\") pod \"calico-node-g58ml\" (UID: \"46e6eba7-2b7d-4fe5-a472-1f43193af109\") " pod="calico-system/calico-node-g58ml"
Apr 14 13:25:42.898052 kubelet[2716]: I0414 13:25:42.841656 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46e6eba7-2b7d-4fe5-a472-1f43193af109-tigera-ca-bundle\") pod \"calico-node-g58ml\" (UID: \"46e6eba7-2b7d-4fe5-a472-1f43193af109\") " pod="calico-system/calico-node-g58ml"
Apr 14 13:25:42.906738 kubelet[2716]: I0414 13:25:42.841673 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/46e6eba7-2b7d-4fe5-a472-1f43193af109-var-lib-calico\") pod \"calico-node-g58ml\" (UID: \"46e6eba7-2b7d-4fe5-a472-1f43193af109\") " pod="calico-system/calico-node-g58ml"
Apr 14 13:25:42.906738 kubelet[2716]: I0414 13:25:42.904695 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/46e6eba7-2b7d-4fe5-a472-1f43193af109-node-certs\") pod \"calico-node-g58ml\" (UID: \"46e6eba7-2b7d-4fe5-a472-1f43193af109\") " pod="calico-system/calico-node-g58ml"
Apr 14 13:25:42.906738 kubelet[2716]: I0414 13:25:42.904843 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46e6eba7-2b7d-4fe5-a472-1f43193af109-lib-modules\") pod \"calico-node-g58ml\" (UID: \"46e6eba7-2b7d-4fe5-a472-1f43193af109\") " pod="calico-system/calico-node-g58ml"
Apr 14 13:25:42.911587 kubelet[2716]: I0414 13:25:42.910893 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/46e6eba7-2b7d-4fe5-a472-1f43193af109-xtables-lock\") pod \"calico-node-g58ml\" (UID: \"46e6eba7-2b7d-4fe5-a472-1f43193af109\") " pod="calico-system/calico-node-g58ml"
Apr 14 13:25:42.915376 kubelet[2716]: E0414 13:25:42.915153 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:25:43.110605 kubelet[2716]: E0414 13:25:43.106703 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 14 13:25:43.110605 kubelet[2716]: W0414 13:25:43.107244 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 14 13:25:43.110605 kubelet[2716]: E0414 13:25:43.107766 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 14 13:25:43.140470 kubelet[2716]: E0414 13:25:43.137417 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 14 13:25:43.140470 kubelet[2716]: W0414 13:25:43.139549 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 14 13:25:43.147126 kubelet[2716]: E0414 13:25:43.144828 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 14 13:25:43.158146 kubelet[2716]: E0414 13:25:43.157946 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 14 13:25:43.244896 kubelet[2716]: W0414 13:25:43.230070 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 14 13:25:43.244896 kubelet[2716]: E0414 13:25:43.230585 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 14 13:25:43.247306 kubelet[2716]: E0414 13:25:43.247243 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 14 13:25:43.247495 kubelet[2716]: W0414 13:25:43.247447 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 14 13:25:43.247593 kubelet[2716]: E0414 13:25:43.247583 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 14 13:25:43.248517 kubelet[2716]: E0414 13:25:43.248463 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 14 13:25:43.248694 kubelet[2716]: W0414 13:25:43.248652 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 14 13:25:43.248809 kubelet[2716]: E0414 13:25:43.248797 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 14 13:25:43.249271 kubelet[2716]: E0414 13:25:43.249217 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 14 13:25:43.249399 kubelet[2716]: W0414 13:25:43.249335 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 14 13:25:43.249482 kubelet[2716]: E0414 13:25:43.249470 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 14 13:25:43.573505 kubelet[2716]: E0414 13:25:43.571603 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 14 13:25:43.573505 kubelet[2716]: W0414 13:25:43.571676 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 14 13:25:43.573505 kubelet[2716]: E0414 13:25:43.571814 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 14 13:25:43.637655 kubelet[2716]: E0414 13:25:43.637243 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 14 13:25:43.638996 kubelet[2716]: W0414 13:25:43.638505 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 14 13:25:43.638996 kubelet[2716]: E0414 13:25:43.638717 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 14 13:25:43.675219 containerd[1593]: time="2026-04-14T13:25:43.673881186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g58ml,Uid:46e6eba7-2b7d-4fe5-a472-1f43193af109,Namespace:calico-system,Attempt:0,}"
Apr 14 13:25:44.865147 containerd[1593]: time="2026-04-14T13:25:44.862124973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 14 13:25:44.865147 containerd[1593]: time="2026-04-14T13:25:44.862313027Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..."
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:25:44.865147 containerd[1593]: time="2026-04-14T13:25:44.862322686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:25:44.865147 containerd[1593]: time="2026-04-14T13:25:44.862994858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:25:46.281422 systemd[1]: Started sshd@22-10.0.0.15:22-10.0.0.1:54846.service - OpenSSH per-connection server daemon (10.0.0.1:54846). Apr 14 13:25:47.048415 sshd[3985]: Accepted publickey for core from 10.0.0.1 port 54846 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s Apr 14 13:25:47.039109 sshd[3985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:25:47.276656 containerd[1593]: time="2026-04-14T13:25:47.273325262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g58ml,Uid:46e6eba7-2b7d-4fe5-a472-1f43193af109,Namespace:calico-system,Attempt:0,} returns sandbox id \"97158ed13bb82e9d1d9f01544760488e6c1913a8061df3a89c9cbf4f09a43f1a\"" Apr 14 13:25:48.124795 systemd-logind[1549]: New session 23 of user core. Apr 14 13:25:48.148842 systemd[1]: Started session-23.scope - Session 23 of User core. 
Apr 14 13:25:48.659764 containerd[1593]: time="2026-04-14T13:25:48.659001710Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\""
Apr 14 13:25:48.682900 kubelet[2716]: E0414 13:25:48.676621 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:25:48.773649 kubelet[2716]: E0414 13:25:48.768796 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.013s"
Apr 14 13:25:50.070450 systemd-journald[1180]: Under memory pressure, flushing caches.
Apr 14 13:25:50.111965 systemd-resolved[1467]: Under memory pressure, flushing caches.
Apr 14 13:25:50.112846 systemd-resolved[1467]: Flushed all caches.
Apr 14 13:25:50.833818 kubelet[2716]: E0414 13:25:50.833681 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.996s"
Apr 14 13:25:51.714967 sshd[3985]: pam_unix(sshd:session): session closed for user core
Apr 14 13:25:51.728900 systemd[1]: sshd@22-10.0.0.15:22-10.0.0.1:54846.service: Deactivated successfully.
Apr 14 13:25:51.758653 systemd[1]: session-23.scope: Deactivated successfully.
Apr 14 13:25:51.924830 systemd-logind[1549]: Session 23 logged out. Waiting for processes to exit.
Apr 14 13:25:51.959340 systemd-logind[1549]: Removed session 23.
Apr 14 13:25:52.062322 systemd-journald[1180]: Under memory pressure, flushing caches.
Apr 14 13:25:52.052604 systemd-resolved[1467]: Under memory pressure, flushing caches.
Apr 14 13:25:52.052730 systemd-resolved[1467]: Flushed all caches.
Apr 14 13:25:54.037711 kubelet[2716]: E0414 13:25:53.972506 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:25:56.840275 systemd[1]: Started sshd@23-10.0.0.15:22-10.0.0.1:55226.service - OpenSSH per-connection server daemon (10.0.0.1:55226).
Apr 14 13:25:57.477825 sshd[4040]: Accepted publickey for core from 10.0.0.1 port 55226 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:25:57.482397 sshd[4040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:25:57.935938 systemd-logind[1549]: New session 24 of user core.
Apr 14 13:25:57.977717 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 14 13:25:59.355407 kubelet[2716]: E0414 13:25:59.354601 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:25:59.742111 kubelet[2716]: E0414 13:25:59.741781 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.103s"
Apr 14 13:26:00.947774 containerd[1593]: time="2026-04-14T13:26:00.935685043Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 13:26:01.116501 containerd[1593]: time="2026-04-14T13:26:01.108183197Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250"
Apr 14 13:26:01.330709 containerd[1593]: time="2026-04-14T13:26:01.318298760Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 13:26:01.633369 kubelet[2716]: E0414 13:26:01.631165 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 14 13:26:01.640295 containerd[1593]: time="2026-04-14T13:26:01.637787678Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 13:26:01.655456 kubelet[2716]: W0414 13:26:01.650488 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 14 13:26:01.655456 kubelet[2716]: E0414 13:26:01.651646 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 14 13:26:01.719983 containerd[1593]: time="2026-04-14T13:26:01.651655054Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 12.970136488s"
Apr 14 13:26:01.719983 containerd[1593]: time="2026-04-14T13:26:01.651813247Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\""
Apr 14 13:26:01.773218 sshd[4040]: pam_unix(sshd:session): session closed for user core
Apr 14 13:26:01.945293 kubelet[2716]: E0414 13:26:01.934793 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 14 13:26:01.945293 kubelet[2716]: W0414 13:26:01.935425 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 14 13:26:01.945293 kubelet[2716]: E0414 13:26:01.944019 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 14 13:26:01.947925 kubelet[2716]: E0414 13:26:01.947831 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 14 13:26:01.948217 kubelet[2716]: W0414 13:26:01.948132 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 14 13:26:01.948418 kubelet[2716]: E0414 13:26:01.948409 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 14 13:26:02.039605 systemd[1]: sshd@23-10.0.0.15:22-10.0.0.1:55226.service: Deactivated successfully.
Apr 14 13:26:02.157160 systemd[1]: session-24.scope: Deactivated successfully.
Apr 14 13:26:02.261226 kubelet[2716]: E0414 13:26:02.183536 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 14 13:26:02.259220 systemd-logind[1549]: Session 24 logged out. Waiting for processes to exit.
Apr 14 13:26:02.335872 kubelet[2716]: W0414 13:26:02.327773 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 14 13:26:02.336217 systemd-logind[1549]: Removed session 24.
Apr 14 13:26:02.368253 kubelet[2716]: E0414 13:26:02.366744 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 14 13:26:02.447275 kubelet[2716]: E0414 13:26:02.443858 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 14 13:26:02.452150 kubelet[2716]: W0414 13:26:02.450777 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 14 13:26:02.534567 kubelet[2716]: E0414 13:26:02.525114 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 14 13:26:02.569738 kubelet[2716]: E0414 13:26:02.546802 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 14 13:26:02.569738 kubelet[2716]: W0414 13:26:02.546961 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 14 13:26:02.569738 kubelet[2716]: E0414 13:26:02.547231 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 14 13:26:02.569738 kubelet[2716]: E0414 13:26:02.547589 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 14 13:26:02.569738 kubelet[2716]: W0414 13:26:02.547597 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 14 13:26:02.569738 kubelet[2716]: E0414 13:26:02.547607 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 14 13:26:02.569738 kubelet[2716]: E0414 13:26:02.547741 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 14 13:26:02.569738 kubelet[2716]: W0414 13:26:02.547807 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 14 13:26:02.569738 kubelet[2716]: E0414 13:26:02.547815 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 14 13:26:02.570504 kubelet[2716]: E0414 13:26:02.570030 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 14 13:26:02.570504 kubelet[2716]: W0414 13:26:02.570237 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 14 13:26:02.570565 kubelet[2716]: E0414 13:26:02.570474 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 14 13:26:02.619582 kubelet[2716]: E0414 13:26:02.593900 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 14 13:26:02.619582 kubelet[2716]: W0414 13:26:02.594216 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 14 13:26:02.619582 kubelet[2716]: E0414 13:26:02.594479 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 14 13:26:02.952632 kubelet[2716]: E0414 13:26:02.952448 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 14 13:26:02.962702 kubelet[2716]: W0414 13:26:02.962330 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 14 13:26:02.968334 kubelet[2716]: E0414 13:26:02.967028 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 14 13:26:03.102987 kubelet[2716]: E0414 13:26:03.093279 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 14 13:26:03.147670 kubelet[2716]: W0414 13:26:03.147582 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 14 13:26:03.148516 kubelet[2716]: E0414 13:26:03.148499 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 14 13:26:03.149779 kubelet[2716]: E0414 13:26:03.149764 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 14 13:26:03.149906 kubelet[2716]: W0414 13:26:03.149895 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 14 13:26:03.157645 kubelet[2716]: E0414 13:26:03.157246 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 14 13:26:03.247498 containerd[1593]: time="2026-04-14T13:26:03.246464529Z" level=info msg="CreateContainer within sandbox \"97158ed13bb82e9d1d9f01544760488e6c1913a8061df3a89c9cbf4f09a43f1a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Apr 14 13:26:03.258741 kubelet[2716]: E0414 13:26:03.257009 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 14 13:26:03.357596 kubelet[2716]: W0414 13:26:03.356432 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 14 13:26:03.358186 kubelet[2716]: E0414 13:26:03.358004 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 14 13:26:03.382342 kubelet[2716]: E0414 13:26:03.382060 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 14 13:26:03.387324 kubelet[2716]: W0414 13:26:03.382890 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 14 13:26:03.387324 kubelet[2716]: E0414 13:26:03.383106 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 14 13:26:03.530873 kubelet[2716]: E0414 13:26:03.527388 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 14 13:26:03.530873 kubelet[2716]: W0414 13:26:03.527528 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 14 13:26:03.664799 kubelet[2716]: E0414 13:26:03.555942 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 14 13:26:03.804151 containerd[1593]: time="2026-04-14T13:26:03.802945311Z" level=info msg="CreateContainer within sandbox \"97158ed13bb82e9d1d9f01544760488e6c1913a8061df3a89c9cbf4f09a43f1a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3eb91d0bba7c63a95ed67d3f3bbb64d2e83188d42141766f9e87ee527d4a71f0\""
Apr 14 13:26:03.823523 containerd[1593]: time="2026-04-14T13:26:03.823231578Z" level=info msg="StartContainer for \"3eb91d0bba7c63a95ed67d3f3bbb64d2e83188d42141766f9e87ee527d4a71f0\""
Apr 14 13:26:04.394066 containerd[1593]: time="2026-04-14T13:26:04.392725360Z" level=info msg="StartContainer for \"3eb91d0bba7c63a95ed67d3f3bbb64d2e83188d42141766f9e87ee527d4a71f0\" returns successfully"
Apr 14 13:26:04.456926 kubelet[2716]: E0414 13:26:04.456311 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:26:04.926815 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3eb91d0bba7c63a95ed67d3f3bbb64d2e83188d42141766f9e87ee527d4a71f0-rootfs.mount: Deactivated successfully.
Apr 14 13:26:05.050986 containerd[1593]: time="2026-04-14T13:26:05.042674727Z" level=info msg="shim disconnected" id=3eb91d0bba7c63a95ed67d3f3bbb64d2e83188d42141766f9e87ee527d4a71f0 namespace=k8s.io
Apr 14 13:26:05.062815 containerd[1593]: time="2026-04-14T13:26:05.061780038Z" level=warning msg="cleaning up after shim disconnected" id=3eb91d0bba7c63a95ed67d3f3bbb64d2e83188d42141766f9e87ee527d4a71f0 namespace=k8s.io
Apr 14 13:26:05.062815 containerd[1593]: time="2026-04-14T13:26:05.062117969Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 14 13:26:05.629584 containerd[1593]: time="2026-04-14T13:26:05.587388876Z" level=warning msg="cleanup warnings time=\"2026-04-14T13:26:05Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 14 13:26:06.408320 containerd[1593]: time="2026-04-14T13:26:06.406360185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\""
Apr 14 13:26:06.824711 systemd[1]: Started sshd@24-10.0.0.15:22-10.0.0.1:46164.service - OpenSSH per-connection server daemon (10.0.0.1:46164).
Apr 14 13:26:07.129734 sshd[4142]: Accepted publickey for core from 10.0.0.1 port 46164 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:26:07.131032 sshd[4142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:26:07.327574 systemd-logind[1549]: New session 25 of user core.
Apr 14 13:26:07.334803 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 14 13:26:07.636199 kubelet[2716]: E0414 13:26:07.635131 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:26:08.162400 sshd[4142]: pam_unix(sshd:session): session closed for user core
Apr 14 13:26:08.214329 systemd[1]: sshd@24-10.0.0.15:22-10.0.0.1:46164.service: Deactivated successfully.
Apr 14 13:26:08.225053 systemd[1]: session-25.scope: Deactivated successfully.
Apr 14 13:26:08.233120 systemd-logind[1549]: Session 25 logged out. Waiting for processes to exit.
Apr 14 13:26:08.234826 systemd-logind[1549]: Removed session 25.
Apr 14 13:26:09.563744 kubelet[2716]: E0414 13:26:09.563069 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:26:13.243898 systemd[1]: Started sshd@25-10.0.0.15:22-10.0.0.1:40096.service - OpenSSH per-connection server daemon (10.0.0.1:40096).
Apr 14 13:26:13.932685 sshd[4163]: Accepted publickey for core from 10.0.0.1 port 40096 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:26:14.023280 sshd[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:26:14.162222 systemd-logind[1549]: New session 26 of user core.
Apr 14 13:26:14.171234 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 14 13:26:14.635812 kubelet[2716]: E0414 13:26:14.632551 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:26:14.905688 sshd[4163]: pam_unix(sshd:session): session closed for user core
Apr 14 13:26:14.913201 systemd[1]: sshd@25-10.0.0.15:22-10.0.0.1:40096.service: Deactivated successfully.
Apr 14 13:26:14.916897 systemd-logind[1549]: Session 26 logged out. Waiting for processes to exit.
Apr 14 13:26:14.917488 systemd[1]: session-26.scope: Deactivated successfully.
Apr 14 13:26:14.923812 systemd-logind[1549]: Removed session 26.
Apr 14 13:26:19.742847 kubelet[2716]: E0414 13:26:19.735309 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:26:20.071675 systemd[1]: Started sshd@26-10.0.0.15:22-10.0.0.1:37718.service - OpenSSH per-connection server daemon (10.0.0.1:37718).
Apr 14 13:26:20.425263 sshd[4184]: Accepted publickey for core from 10.0.0.1 port 37718 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:26:20.428876 sshd[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:26:20.442996 systemd-logind[1549]: New session 27 of user core.
Apr 14 13:26:20.455856 systemd[1]: Started session-27.scope - Session 27 of User core.
Apr 14 13:26:21.302546 sshd[4184]: pam_unix(sshd:session): session closed for user core
Apr 14 13:26:21.336964 systemd[1]: sshd@26-10.0.0.15:22-10.0.0.1:37718.service: Deactivated successfully.
Apr 14 13:26:21.354461 systemd[1]: session-27.scope: Deactivated successfully.
Apr 14 13:26:21.380148 systemd-logind[1549]: Session 27 logged out. Waiting for processes to exit.
Apr 14 13:26:21.401580 systemd-logind[1549]: Removed session 27.
Apr 14 13:26:23.947767 kubelet[2716]: E0414 13:26:23.946884 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qjkwf" podUID="292f49bd-6819-414a-9097-fb8dcd762594"
Apr 14 13:26:24.306780 kubelet[2716]: I0414 13:26:24.302880 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9wz8\" (UniqueName: \"kubernetes.io/projected/292f49bd-6819-414a-9097-fb8dcd762594-kube-api-access-h9wz8\") pod \"csi-node-driver-qjkwf\" (UID: \"292f49bd-6819-414a-9097-fb8dcd762594\") " pod="calico-system/csi-node-driver-qjkwf"
Apr 14 13:26:24.306780 kubelet[2716]: I0414 13:26:24.302994 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/292f49bd-6819-414a-9097-fb8dcd762594-kubelet-dir\") pod \"csi-node-driver-qjkwf\" (UID: \"292f49bd-6819-414a-9097-fb8dcd762594\") " pod="calico-system/csi-node-driver-qjkwf"
Apr 14 13:26:24.306780 kubelet[2716]: I0414 13:26:24.303069 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/292f49bd-6819-414a-9097-fb8dcd762594-registration-dir\") pod \"csi-node-driver-qjkwf\" (UID: \"292f49bd-6819-414a-9097-fb8dcd762594\") " pod="calico-system/csi-node-driver-qjkwf"
Apr 14 13:26:24.306780 kubelet[2716]: I0414 13:26:24.303120 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/292f49bd-6819-414a-9097-fb8dcd762594-socket-dir\") pod \"csi-node-driver-qjkwf\" (UID: \"292f49bd-6819-414a-9097-fb8dcd762594\") " pod="calico-system/csi-node-driver-qjkwf"
Apr 14 13:26:24.306780 kubelet[2716]: I0414 13:26:24.303134 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/292f49bd-6819-414a-9097-fb8dcd762594-varrun\") pod \"csi-node-driver-qjkwf\" (UID: \"292f49bd-6819-414a-9097-fb8dcd762594\") " pod="calico-system/csi-node-driver-qjkwf"
Apr 14 13:26:24.864118 kubelet[2716]: E0414 13:26:24.863516 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:26:25.731604 kubelet[2716]: E0414 13:26:25.731414 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qjkwf" podUID="292f49bd-6819-414a-9097-fb8dcd762594"
Apr 14 13:26:25.851681 kubelet[2716]: E0414 13:26:25.851578 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:26:26.494181 systemd[1]: Started sshd@27-10.0.0.15:22-10.0.0.1:37724.service - OpenSSH per-connection server daemon (10.0.0.1:37724).
Apr 14 13:26:26.702379 sshd[4201]: Accepted publickey for core from 10.0.0.1 port 37724 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:26:26.707780 sshd[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:26:26.739054 systemd-logind[1549]: New session 28 of user core.
Apr 14 13:26:26.755266 systemd[1]: Started session-28.scope - Session 28 of User core.
Apr 14 13:26:27.763387 kubelet[2716]: E0414 13:26:27.761268 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qjkwf" podUID="292f49bd-6819-414a-9097-fb8dcd762594"
Apr 14 13:26:27.864216 sshd[4201]: pam_unix(sshd:session): session closed for user core
Apr 14 13:26:27.984172 systemd[1]: sshd@27-10.0.0.15:22-10.0.0.1:37724.service: Deactivated successfully.
Apr 14 13:26:28.005763 systemd[1]: session-28.scope: Deactivated successfully.
Apr 14 13:26:28.006199 systemd-logind[1549]: Session 28 logged out. Waiting for processes to exit.
Apr 14 13:26:28.007943 systemd-logind[1549]: Removed session 28.
Apr 14 13:26:28.652310 kubelet[2716]: E0414 13:26:28.649206 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:26:29.679431 kubelet[2716]: E0414 13:26:29.675505 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qjkwf" podUID="292f49bd-6819-414a-9097-fb8dcd762594"
Apr 14 13:26:29.875274 kubelet[2716]: E0414 13:26:29.873550 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:26:31.641321 kubelet[2716]: E0414 13:26:31.639265 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qjkwf" podUID="292f49bd-6819-414a-9097-fb8dcd762594"
Apr 14 13:26:32.962695 systemd[1]: Started sshd@28-10.0.0.15:22-10.0.0.1:40496.service - OpenSSH per-connection server daemon (10.0.0.1:40496).
Apr 14 13:26:33.388856 sshd[4218]: Accepted publickey for core from 10.0.0.1 port 40496 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:26:33.459995 sshd[4218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:26:33.564949 systemd-logind[1549]: New session 29 of user core.
Apr 14 13:26:33.571371 systemd[1]: Started session-29.scope - Session 29 of User core.
Apr 14 13:26:33.647157 kubelet[2716]: E0414 13:26:33.639140 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qjkwf" podUID="292f49bd-6819-414a-9097-fb8dcd762594"
Apr 14 13:26:34.070491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount64343961.mount: Deactivated successfully.
Apr 14 13:26:34.324828 sshd[4218]: pam_unix(sshd:session): session closed for user core
Apr 14 13:26:34.384894 systemd[1]: sshd@28-10.0.0.15:22-10.0.0.1:40496.service: Deactivated successfully.
Apr 14 13:26:34.446293 systemd[1]: session-29.scope: Deactivated successfully.
Apr 14 13:26:34.475828 containerd[1593]: time="2026-04-14T13:26:34.468906446Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 13:26:34.500604 containerd[1593]: time="2026-04-14T13:26:34.485107071Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564"
Apr 14 13:26:34.504361 systemd-logind[1549]: Session 29 logged out. Waiting for processes to exit.
Apr 14 13:26:34.504655 containerd[1593]: time="2026-04-14T13:26:34.504410824Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 13:26:34.521647 systemd-logind[1549]: Removed session 29.
Apr 14 13:26:34.524175 containerd[1593]: time="2026-04-14T13:26:34.523891295Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 13:26:34.538298 containerd[1593]: time="2026-04-14T13:26:34.538199556Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 28.131778788s"
Apr 14 13:26:34.538298 containerd[1593]: time="2026-04-14T13:26:34.538258483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\""
Apr 14 13:26:34.555718 containerd[1593]: time="2026-04-14T13:26:34.555310332Z" level=info msg="CreateContainer within sandbox \"97158ed13bb82e9d1d9f01544760488e6c1913a8061df3a89c9cbf4f09a43f1a\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Apr 14 13:26:34.724825 containerd[1593]: time="2026-04-14T13:26:34.724555442Z" level=info msg="CreateContainer within sandbox \"97158ed13bb82e9d1d9f01544760488e6c1913a8061df3a89c9cbf4f09a43f1a\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"2706b37d8da9d532304eb0432fd7ec7d4be66f41744623ec5cb4afbae520ca1c\""
Apr 14 13:26:34.730784 containerd[1593]: time="2026-04-14T13:26:34.730628450Z" level=info msg="StartContainer for \"2706b37d8da9d532304eb0432fd7ec7d4be66f41744623ec5cb4afbae520ca1c\""
Apr 14 13:26:35.023034 kubelet[2716]: E0414 13:26:35.015875 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:26:35.423472 containerd[1593]: time="2026-04-14T13:26:35.423289912Z" level=info msg="StartContainer for \"2706b37d8da9d532304eb0432fd7ec7d4be66f41744623ec5cb4afbae520ca1c\" returns successfully"
Apr 14 13:26:35.753837 kubelet[2716]: E0414 13:26:35.753157 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qjkwf" podUID="292f49bd-6819-414a-9097-fb8dcd762594"
Apr 14 13:26:36.139196 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2706b37d8da9d532304eb0432fd7ec7d4be66f41744623ec5cb4afbae520ca1c-rootfs.mount: Deactivated successfully.
Apr 14 13:26:36.185256 containerd[1593]: time="2026-04-14T13:26:36.184647770Z" level=info msg="shim disconnected" id=2706b37d8da9d532304eb0432fd7ec7d4be66f41744623ec5cb4afbae520ca1c namespace=k8s.io
Apr 14 13:26:36.185256 containerd[1593]: time="2026-04-14T13:26:36.185134567Z" level=warning msg="cleaning up after shim disconnected" id=2706b37d8da9d532304eb0432fd7ec7d4be66f41744623ec5cb4afbae520ca1c namespace=k8s.io
Apr 14 13:26:36.185256 containerd[1593]: time="2026-04-14T13:26:36.185174146Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 14 13:26:36.416998 containerd[1593]: time="2026-04-14T13:26:36.408986737Z" level=warning msg="cleanup warnings time=\"2026-04-14T13:26:36Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 14 13:26:36.649410 kubelet[2716]: E0414 13:26:36.649234 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:26:37.453767 containerd[1593]: time="2026-04-14T13:26:37.445212524Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\""
Apr 14 13:26:37.769785 kubelet[2716]: E0414 13:26:37.761512 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qjkwf" podUID="292f49bd-6819-414a-9097-fb8dcd762594"
Apr 14 13:26:39.403701 systemd[1]: Started sshd@29-10.0.0.15:22-10.0.0.1:40508.service - OpenSSH per-connection server daemon (10.0.0.1:40508).
Apr 14 13:26:39.572136 sshd[4305]: Accepted publickey for core from 10.0.0.1 port 40508 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:26:39.633621 sshd[4305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:26:39.640006 kubelet[2716]: E0414 13:26:39.639298 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qjkwf" podUID="292f49bd-6819-414a-9097-fb8dcd762594"
Apr 14 13:26:39.751806 systemd-logind[1549]: New session 30 of user core.
Apr 14 13:26:39.754020 systemd[1]: Started session-30.scope - Session 30 of User core.
Apr 14 13:26:40.101242 kubelet[2716]: E0414 13:26:40.100475 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:26:41.414547 sshd[4305]: pam_unix(sshd:session): session closed for user core
Apr 14 13:26:41.421394 systemd-logind[1549]: Session 30 logged out. Waiting for processes to exit.
Apr 14 13:26:41.436582 systemd[1]: sshd@29-10.0.0.15:22-10.0.0.1:40508.service: Deactivated successfully.
Apr 14 13:26:41.476948 systemd[1]: session-30.scope: Deactivated successfully.
Apr 14 13:26:41.497945 systemd-logind[1549]: Removed session 30.
Apr 14 13:26:41.660562 kubelet[2716]: E0414 13:26:41.654810 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qjkwf" podUID="292f49bd-6819-414a-9097-fb8dcd762594"
Apr 14 13:26:43.653441 kubelet[2716]: E0414 13:26:43.653131 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qjkwf" podUID="292f49bd-6819-414a-9097-fb8dcd762594"
Apr 14 13:26:45.232325 kubelet[2716]: E0414 13:26:45.212298 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:26:45.873103 kubelet[2716]: E0414 13:26:45.872708 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qjkwf" podUID="292f49bd-6819-414a-9097-fb8dcd762594"
Apr 14 13:26:46.780594 systemd[1]: Started sshd@30-10.0.0.15:22-10.0.0.1:58346.service - OpenSSH per-connection server daemon (10.0.0.1:58346).
Apr 14 13:26:48.056413 sshd[4326]: Accepted publickey for core from 10.0.0.1 port 58346 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:26:48.095888 sshd[4326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:26:48.442600 systemd-logind[1549]: New session 31 of user core.
Apr 14 13:26:48.449760 systemd[1]: Started session-31.scope - Session 31 of User core.
Apr 14 13:26:48.858633 kubelet[2716]: E0414 13:26:48.856745 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.158s"
Apr 14 13:26:49.457745 kubelet[2716]: E0414 13:26:49.456322 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qjkwf" podUID="292f49bd-6819-414a-9097-fb8dcd762594"
Apr 14 13:26:50.290737 kubelet[2716]: E0414 13:26:50.290602 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:26:50.316757 sshd[4326]: pam_unix(sshd:session): session closed for user core
Apr 14 13:26:50.346040 systemd[1]: sshd@30-10.0.0.15:22-10.0.0.1:58346.service: Deactivated successfully.
Apr 14 13:26:50.439680 systemd[1]: session-31.scope: Deactivated successfully.
Apr 14 13:26:50.449247 systemd-logind[1549]: Session 31 logged out. Waiting for processes to exit.
Apr 14 13:26:50.483280 systemd-logind[1549]: Removed session 31.
Apr 14 13:26:50.892231 kubelet[2716]: E0414 13:26:50.887834 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qjkwf" podUID="292f49bd-6819-414a-9097-fb8dcd762594"
Apr 14 13:26:51.821151 containerd[1593]: time="2026-04-14T13:26:51.820391042Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 13:26:51.821151 containerd[1593]: time="2026-04-14T13:26:51.821037546Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671"
Apr 14 13:26:51.822852 containerd[1593]: time="2026-04-14T13:26:51.822808202Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 13:26:51.833827 containerd[1593]: time="2026-04-14T13:26:51.833547514Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 13:26:51.835099 containerd[1593]: time="2026-04-14T13:26:51.835032790Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 14.389758221s"
Apr 14 13:26:51.835147 containerd[1593]: time="2026-04-14T13:26:51.835114228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\""
Apr 14 13:26:52.036702 containerd[1593]: time="2026-04-14T13:26:52.036119919Z" level=info msg="CreateContainer within sandbox \"97158ed13bb82e9d1d9f01544760488e6c1913a8061df3a89c9cbf4f09a43f1a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Apr 14 13:26:52.188152 containerd[1593]: time="2026-04-14T13:26:52.187629325Z" level=info msg="CreateContainer within sandbox \"97158ed13bb82e9d1d9f01544760488e6c1913a8061df3a89c9cbf4f09a43f1a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"834df1735edd1ad1a3021c5b82df4ad8c0b4ea689d15ad3e216ee1e33f028be6\""
Apr 14 13:26:52.246589 containerd[1593]: time="2026-04-14T13:26:52.246424758Z" level=info msg="StartContainer for \"834df1735edd1ad1a3021c5b82df4ad8c0b4ea689d15ad3e216ee1e33f028be6\""
Apr 14 13:26:52.699553 kubelet[2716]: E0414 13:26:52.698859 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qjkwf" podUID="292f49bd-6819-414a-9097-fb8dcd762594"
Apr 14 13:26:52.865474 containerd[1593]: time="2026-04-14T13:26:52.863132179Z" level=info msg="StartContainer for \"834df1735edd1ad1a3021c5b82df4ad8c0b4ea689d15ad3e216ee1e33f028be6\" returns successfully"
Apr 14 13:26:54.652151 kubelet[2716]: E0414 13:26:54.649035 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qjkwf" podUID="292f49bd-6819-414a-9097-fb8dcd762594"
Apr 14 13:26:55.297770 kubelet[2716]: E0414 13:26:55.297568 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:26:55.339964 systemd[1]: Started sshd@31-10.0.0.15:22-10.0.0.1:42696.service - OpenSSH per-connection server daemon (10.0.0.1:42696).
Apr 14 13:26:55.417757 sshd[4383]: Accepted publickey for core from 10.0.0.1 port 42696 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:26:55.421330 sshd[4383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:26:55.459869 systemd-logind[1549]: New session 32 of user core.
Apr 14 13:26:55.472423 systemd[1]: Started session-32.scope - Session 32 of User core.
Apr 14 13:26:55.662766 containerd[1593]: time="2026-04-14T13:26:55.662459703Z" level=info msg="shim disconnected" id=834df1735edd1ad1a3021c5b82df4ad8c0b4ea689d15ad3e216ee1e33f028be6 namespace=k8s.io
Apr 14 13:26:55.662766 containerd[1593]: time="2026-04-14T13:26:55.662676206Z" level=warning msg="cleaning up after shim disconnected" id=834df1735edd1ad1a3021c5b82df4ad8c0b4ea689d15ad3e216ee1e33f028be6 namespace=k8s.io
Apr 14 13:26:55.662766 containerd[1593]: time="2026-04-14T13:26:55.662713856Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 14 13:26:55.662665 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-834df1735edd1ad1a3021c5b82df4ad8c0b4ea689d15ad3e216ee1e33f028be6-rootfs.mount: Deactivated successfully.
Apr 14 13:26:55.822422 containerd[1593]: time="2026-04-14T13:26:55.822040474Z" level=warning msg="cleanup warnings time=\"2026-04-14T13:26:55Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 14 13:26:56.124440 containerd[1593]: time="2026-04-14T13:26:56.123446074Z" level=info msg="CreateContainer within sandbox \"97158ed13bb82e9d1d9f01544760488e6c1913a8061df3a89c9cbf4f09a43f1a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Apr 14 13:26:56.323469 containerd[1593]: time="2026-04-14T13:26:56.323244419Z" level=info msg="CreateContainer within sandbox \"97158ed13bb82e9d1d9f01544760488e6c1913a8061df3a89c9cbf4f09a43f1a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"82da48d6414986a3420b6788d9ddff0654e378ee988e8ab1ce467bd86f14c917\""
Apr 14 13:26:56.335629 sshd[4383]: pam_unix(sshd:session): session closed for user core
Apr 14 13:26:56.344700 containerd[1593]: time="2026-04-14T13:26:56.342985656Z" level=info msg="StartContainer for \"82da48d6414986a3420b6788d9ddff0654e378ee988e8ab1ce467bd86f14c917\""
Apr 14 13:26:56.355960 systemd[1]: sshd@31-10.0.0.15:22-10.0.0.1:42696.service: Deactivated successfully.
Apr 14 13:26:56.360303 systemd-logind[1549]: Session 32 logged out. Waiting for processes to exit.
Apr 14 13:26:56.392499 systemd[1]: session-32.scope: Deactivated successfully.
Apr 14 13:26:56.407450 systemd-logind[1549]: Removed session 32.
Apr 14 13:26:56.635494 kubelet[2716]: E0414 13:26:56.634626 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qjkwf" podUID="292f49bd-6819-414a-9097-fb8dcd762594"
Apr 14 13:26:56.816470 containerd[1593]: time="2026-04-14T13:26:56.816296468Z" level=info msg="StartContainer for \"82da48d6414986a3420b6788d9ddff0654e378ee988e8ab1ce467bd86f14c917\" returns successfully"
Apr 14 13:26:57.641943 kubelet[2716]: E0414 13:26:57.640777 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:26:58.682787 kubelet[2716]: E0414 13:26:58.680234 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qjkwf" podUID="292f49bd-6819-414a-9097-fb8dcd762594"
Apr 14 13:27:00.954685 containerd[1593]: time="2026-04-14T13:27:00.953472640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qjkwf,Uid:292f49bd-6819-414a-9097-fb8dcd762594,Namespace:calico-system,Attempt:0,}"
Apr 14 13:27:01.039541 update_engine[1551]: I20260414 13:27:01.039345 1551 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Apr 14 13:27:01.039541 update_engine[1551]: I20260414 13:27:01.039433 1551 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Apr 14 13:27:01.051393 update_engine[1551]: I20260414 13:27:01.051049 1551 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Apr 14 13:27:01.159632 update_engine[1551]: I20260414 13:27:01.132673 1551 omaha_request_params.cc:62] Current group set to lts
Apr 14 13:27:01.159632 update_engine[1551]: I20260414 13:27:01.137828 1551 update_attempter.cc:499] Already updated boot flags. Skipping.
Apr 14 13:27:01.159632 update_engine[1551]: I20260414 13:27:01.137854 1551 update_attempter.cc:643] Scheduling an action processor start.
Apr 14 13:27:01.159632 update_engine[1551]: I20260414 13:27:01.138028 1551 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 14 13:27:01.159632 update_engine[1551]: I20260414 13:27:01.138595 1551 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Apr 14 13:27:01.159632 update_engine[1551]: I20260414 13:27:01.138720 1551 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 14 13:27:01.159632 update_engine[1551]: I20260414 13:27:01.138726 1551 omaha_request_action.cc:272] Request:
Apr 14 13:27:01.159632 update_engine[1551]:
Apr 14 13:27:01.159632 update_engine[1551]:
Apr 14 13:27:01.159632 update_engine[1551]:
Apr 14 13:27:01.159632 update_engine[1551]:
Apr 14 13:27:01.159632 update_engine[1551]:
Apr 14 13:27:01.159632 update_engine[1551]:
Apr 14 13:27:01.159632 update_engine[1551]:
Apr 14 13:27:01.159632 update_engine[1551]:
Apr 14 13:27:01.159632 update_engine[1551]: I20260414 13:27:01.138733 1551 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 14 13:27:01.202400 update_engine[1551]: I20260414 13:27:01.201620 1551 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 14 13:27:01.234716 update_engine[1551]: I20260414 13:27:01.231150 1551 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 14 13:27:01.247515 update_engine[1551]: E20260414 13:27:01.245060 1551 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 14 13:27:01.247515 update_engine[1551]: I20260414 13:27:01.245430 1551 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Apr 14 13:27:01.252543 locksmithd[1625]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Apr 14 13:27:01.389949 systemd[1]: Started sshd@32-10.0.0.15:22-10.0.0.1:39354.service - OpenSSH per-connection server daemon (10.0.0.1:39354).
Apr 14 13:27:01.811529 sshd[4469]: Accepted publickey for core from 10.0.0.1 port 39354 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:27:01.822047 sshd[4469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:27:01.903832 systemd-logind[1549]: New session 33 of user core.
Apr 14 13:27:01.929822 systemd[1]: Started session-33.scope - Session 33 of User core.
Apr 14 13:27:02.755058 sshd[4469]: pam_unix(sshd:session): session closed for user core
Apr 14 13:27:02.858477 systemd[1]: sshd@32-10.0.0.15:22-10.0.0.1:39354.service: Deactivated successfully.
Apr 14 13:27:02.898942 systemd[1]: session-33.scope: Deactivated successfully.
Apr 14 13:27:02.904353 systemd-logind[1549]: Session 33 logged out. Waiting for processes to exit.
Apr 14 13:27:02.905713 systemd-logind[1549]: Removed session 33.
Apr 14 13:27:03.332250 kubelet[2716]: I0414 13:27:03.331937 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-g58ml" podStartSLOduration=21.144921465 podStartE2EDuration="1m24.331803512s" podCreationTimestamp="2026-04-14 13:25:39 +0000 UTC" firstStartedPulling="2026-04-14 13:25:48.65856299 +0000 UTC m=+465.105675028" lastFinishedPulling="2026-04-14 13:26:51.84544504 +0000 UTC m=+528.292557075" observedRunningTime="2026-04-14 13:26:57.133704426 +0000 UTC m=+533.580816468" watchObservedRunningTime="2026-04-14 13:27:03.331803512 +0000 UTC m=+539.778915563"
Apr 14 13:27:04.537271 containerd[1593]: 2026-04-14 13:27:03.331 [INFO][4509] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6f72d9900c75d6070fd4dc2a7d800cd8646ae445d41d7ea80af21d8e5ac57722"
Apr 14 13:27:04.537271 containerd[1593]: 2026-04-14 13:27:03.332 [INFO][4509] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6f72d9900c75d6070fd4dc2a7d800cd8646ae445d41d7ea80af21d8e5ac57722" iface="eth0" netns="/var/run/netns/cni-003f0bf2-52c5-3764-2b62-b4a06f115d0d"
Apr 14 13:27:04.537271 containerd[1593]: 2026-04-14 13:27:03.333 [INFO][4509] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6f72d9900c75d6070fd4dc2a7d800cd8646ae445d41d7ea80af21d8e5ac57722" iface="eth0" netns="/var/run/netns/cni-003f0bf2-52c5-3764-2b62-b4a06f115d0d"
Apr 14 13:27:04.537271 containerd[1593]: 2026-04-14 13:27:03.360 [INFO][4509] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6f72d9900c75d6070fd4dc2a7d800cd8646ae445d41d7ea80af21d8e5ac57722" iface="eth0" netns="/var/run/netns/cni-003f0bf2-52c5-3764-2b62-b4a06f115d0d"
Apr 14 13:27:04.537271 containerd[1593]: 2026-04-14 13:27:03.367 [INFO][4509] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6f72d9900c75d6070fd4dc2a7d800cd8646ae445d41d7ea80af21d8e5ac57722"
Apr 14 13:27:04.537271 containerd[1593]: 2026-04-14 13:27:03.371 [INFO][4509] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6f72d9900c75d6070fd4dc2a7d800cd8646ae445d41d7ea80af21d8e5ac57722"
Apr 14 13:27:04.537271 containerd[1593]: 2026-04-14 13:27:03.937 [INFO][4532] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6f72d9900c75d6070fd4dc2a7d800cd8646ae445d41d7ea80af21d8e5ac57722" HandleID="k8s-pod-network.6f72d9900c75d6070fd4dc2a7d800cd8646ae445d41d7ea80af21d8e5ac57722" Workload="localhost-k8s-csi--node--driver--qjkwf-eth0"
Apr 14 13:27:04.537271 containerd[1593]: 2026-04-14 13:27:03.938 [INFO][4532] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 14 13:27:04.537271 containerd[1593]: 2026-04-14 13:27:03.938 [INFO][4532] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 14 13:27:04.537271 containerd[1593]: 2026-04-14 13:27:04.231 [WARNING][4532] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="6f72d9900c75d6070fd4dc2a7d800cd8646ae445d41d7ea80af21d8e5ac57722" HandleID="k8s-pod-network.6f72d9900c75d6070fd4dc2a7d800cd8646ae445d41d7ea80af21d8e5ac57722" Workload="localhost-k8s-csi--node--driver--qjkwf-eth0"
Apr 14 13:27:04.537271 containerd[1593]: 2026-04-14 13:27:04.232 [INFO][4532] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6f72d9900c75d6070fd4dc2a7d800cd8646ae445d41d7ea80af21d8e5ac57722" HandleID="k8s-pod-network.6f72d9900c75d6070fd4dc2a7d800cd8646ae445d41d7ea80af21d8e5ac57722" Workload="localhost-k8s-csi--node--driver--qjkwf-eth0"
Apr 14 13:27:04.537271 containerd[1593]: 2026-04-14 13:27:04.295 [INFO][4532] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 14 13:27:04.537271 containerd[1593]: 2026-04-14 13:27:04.457 [INFO][4509] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6f72d9900c75d6070fd4dc2a7d800cd8646ae445d41d7ea80af21d8e5ac57722"
Apr 14 13:27:04.659643 systemd[1]: run-netns-cni\x2d003f0bf2\x2d52c5\x2d3764\x2d2b62\x2db4a06f115d0d.mount: Deactivated successfully.
Apr 14 13:27:04.729395 containerd[1593]: time="2026-04-14T13:27:04.725675107Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qjkwf,Uid:292f49bd-6819-414a-9097-fb8dcd762594,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6f72d9900c75d6070fd4dc2a7d800cd8646ae445d41d7ea80af21d8e5ac57722\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 14 13:27:04.730540 kubelet[2716]: E0414 13:27:04.729743 2716 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f72d9900c75d6070fd4dc2a7d800cd8646ae445d41d7ea80af21d8e5ac57722\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 14 13:27:04.737362 kubelet[2716]: E0414 13:27:04.731822 2716 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f72d9900c75d6070fd4dc2a7d800cd8646ae445d41d7ea80af21d8e5ac57722\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qjkwf"
Apr 14 13:27:04.737362 kubelet[2716]: E0414 13:27:04.734004 2716 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f72d9900c75d6070fd4dc2a7d800cd8646ae445d41d7ea80af21d8e5ac57722\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qjkwf"
Apr 14 13:27:04.737623 kubelet[2716]: E0414 13:27:04.737294 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qjkwf_calico-system(292f49bd-6819-414a-9097-fb8dcd762594)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qjkwf_calico-system(292f49bd-6819-414a-9097-fb8dcd762594)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6f72d9900c75d6070fd4dc2a7d800cd8646ae445d41d7ea80af21d8e5ac57722\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qjkwf" podUID="292f49bd-6819-414a-9097-fb8dcd762594"
Apr 14 13:27:04.738494 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6f72d9900c75d6070fd4dc2a7d800cd8646ae445d41d7ea80af21d8e5ac57722-shm.mount: Deactivated successfully.
Apr 14 13:27:05.481301 containerd[1593]: time="2026-04-14T13:27:05.480869730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qjkwf,Uid:292f49bd-6819-414a-9097-fb8dcd762594,Namespace:calico-system,Attempt:0,}"
Apr 14 13:27:08.173966 systemd[1]: Started sshd@33-10.0.0.15:22-10.0.0.1:39360.service - OpenSSH per-connection server daemon (10.0.0.1:39360).
Apr 14 13:27:09.348420 sshd[4551]: Accepted publickey for core from 10.0.0.1 port 39360 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:27:09.350979 sshd[4551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:27:09.756039 systemd-logind[1549]: New session 34 of user core.
Apr 14 13:27:09.844770 systemd[1]: Started session-34.scope - Session 34 of User core.
Apr 14 13:27:10.133246 kubelet[2716]: E0414 13:27:10.131518 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.343s"
Apr 14 13:27:11.965274 update_engine[1551]: I20260414 13:27:11.964877 1551 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 14 13:27:12.017735 update_engine[1551]: I20260414 13:27:11.995015 1551 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 14 13:27:12.087626 update_engine[1551]: I20260414 13:27:12.037042 1551 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 14 13:27:12.203272 update_engine[1551]: E20260414 13:27:12.188763 1551 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 14 13:27:12.248632 update_engine[1551]: I20260414 13:27:12.226718 1551 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Apr 14 13:27:15.319786 kubelet[2716]: E0414 13:27:15.319692 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.579s"
Apr 14 13:27:16.041832 sshd[4551]: pam_unix(sshd:session): session closed for user core
Apr 14 13:27:16.126193 systemd[1]: Started sshd@34-10.0.0.15:22-10.0.0.1:59076.service - OpenSSH per-connection server daemon (10.0.0.1:59076).
Apr 14 13:27:16.489984 systemd[1]: sshd@33-10.0.0.15:22-10.0.0.1:39360.service: Deactivated successfully.
Apr 14 13:27:16.553785 systemd[1]: session-34.scope: Deactivated successfully.
Apr 14 13:27:16.676161 systemd-logind[1549]: Session 34 logged out. Waiting for processes to exit.
Apr 14 13:27:16.759493 systemd-logind[1549]: Removed session 34.
Apr 14 13:27:17.250652 kubelet[2716]: E0414 13:27:17.250019 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.926s" Apr 14 13:27:18.144311 sshd[4577]: Accepted publickey for core from 10.0.0.1 port 59076 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s Apr 14 13:27:18.430829 sshd[4577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:27:19.079495 kubelet[2716]: E0414 13:27:19.076406 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:27:19.148743 systemd-logind[1549]: New session 35 of user core. Apr 14 13:27:19.159877 systemd[1]: Started session-35.scope - Session 35 of User core. Apr 14 13:27:22.987810 update_engine[1551]: I20260414 13:27:22.985699 1551 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 14 13:27:23.056963 update_engine[1551]: I20260414 13:27:23.056218 1551 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 14 13:27:23.057226 update_engine[1551]: I20260414 13:27:23.057199 1551 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 14 13:27:23.083831 update_engine[1551]: E20260414 13:27:23.082733 1551 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 14 13:27:23.083831 update_engine[1551]: I20260414 13:27:23.083315 1551 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 14 13:27:25.414760 kubelet[2716]: E0414 13:27:25.397383 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.144s" Apr 14 13:27:26.657198 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-08bea41936ed2ff345ba6e605ca49bb15f4701cea3225733d0a55a798cd6c8c3-rootfs.mount: Deactivated successfully. 
Apr 14 13:27:27.070780 containerd[1593]: time="2026-04-14T13:27:27.043589870Z" level=info msg="shim disconnected" id=08bea41936ed2ff345ba6e605ca49bb15f4701cea3225733d0a55a798cd6c8c3 namespace=k8s.io Apr 14 13:27:27.070780 containerd[1593]: time="2026-04-14T13:27:27.065220377Z" level=warning msg="cleaning up after shim disconnected" id=08bea41936ed2ff345ba6e605ca49bb15f4701cea3225733d0a55a798cd6c8c3 namespace=k8s.io Apr 14 13:27:27.070780 containerd[1593]: time="2026-04-14T13:27:27.065482400Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 13:27:29.579779 kubelet[2716]: E0414 13:27:29.576336 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.714s" Apr 14 13:27:32.983783 update_engine[1551]: I20260414 13:27:32.981227 1551 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 14 13:27:33.116716 update_engine[1551]: I20260414 13:27:33.114176 1551 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 14 13:27:33.160948 sshd[4577]: pam_unix(sshd:session): session closed for user core Apr 14 13:27:33.338521 update_engine[1551]: I20260414 13:27:33.294665 1551 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 14 13:27:33.338521 update_engine[1551]: E20260414 13:27:33.296391 1551 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 14 13:27:33.338521 update_engine[1551]: I20260414 13:27:33.296489 1551 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 14 13:27:33.338521 update_engine[1551]: I20260414 13:27:33.296496 1551 omaha_request_action.cc:617] Omaha request response: Apr 14 13:27:33.338521 update_engine[1551]: E20260414 13:27:33.296823 1551 omaha_request_action.cc:636] Omaha request network transfer failed. 
Apr 14 13:27:33.339479 kubelet[2716]: E0414 13:27:33.307146 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.7s" Apr 14 13:27:33.360549 update_engine[1551]: I20260414 13:27:33.347849 1551 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Apr 14 13:27:33.360549 update_engine[1551]: I20260414 13:27:33.347975 1551 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 14 13:27:33.360549 update_engine[1551]: I20260414 13:27:33.347981 1551 update_attempter.cc:306] Processing Done. Apr 14 13:27:33.360549 update_engine[1551]: E20260414 13:27:33.348055 1551 update_attempter.cc:619] Update failed. Apr 14 13:27:33.376378 update_engine[1551]: I20260414 13:27:33.364130 1551 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 14 13:27:33.376378 update_engine[1551]: I20260414 13:27:33.364440 1551 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 14 13:27:33.376378 update_engine[1551]: I20260414 13:27:33.364528 1551 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Apr 14 13:27:33.502507 update_engine[1551]: I20260414 13:27:33.461295 1551 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 14 13:27:33.502507 update_engine[1551]: I20260414 13:27:33.466465 1551 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 14 13:27:33.502507 update_engine[1551]: I20260414 13:27:33.466573 1551 omaha_request_action.cc:272] Request: Apr 14 13:27:33.502507 update_engine[1551]: Apr 14 13:27:33.502507 update_engine[1551]: Apr 14 13:27:33.502507 update_engine[1551]: Apr 14 13:27:33.502507 update_engine[1551]: Apr 14 13:27:33.502507 update_engine[1551]: Apr 14 13:27:33.502507 update_engine[1551]: Apr 14 13:27:33.502507 update_engine[1551]: I20260414 13:27:33.466584 1551 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 14 13:27:33.502507 update_engine[1551]: I20260414 13:27:33.474436 1551 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 14 13:27:33.502507 update_engine[1551]: I20260414 13:27:33.476097 1551 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 14 13:27:33.502507 update_engine[1551]: E20260414 13:27:33.489392 1551 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 14 13:27:33.502507 update_engine[1551]: I20260414 13:27:33.489569 1551 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 14 13:27:33.502507 update_engine[1551]: I20260414 13:27:33.489579 1551 omaha_request_action.cc:617] Omaha request response: Apr 14 13:27:33.502507 update_engine[1551]: I20260414 13:27:33.489657 1551 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 14 13:27:33.502507 update_engine[1551]: I20260414 13:27:33.489701 1551 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 14 13:27:33.502507 update_engine[1551]: I20260414 13:27:33.489705 1551 update_attempter.cc:306] Processing Done. Apr 14 13:27:33.502507 update_engine[1551]: I20260414 13:27:33.489712 1551 update_attempter.cc:310] Error event sent. Apr 14 13:27:33.502507 update_engine[1551]: I20260414 13:27:33.489802 1551 update_check_scheduler.cc:74] Next update check in 47m28s Apr 14 13:27:33.504883 systemd[1]: Started sshd@35-10.0.0.15:22-10.0.0.1:50996.service - OpenSSH per-connection server daemon (10.0.0.1:50996). Apr 14 13:27:33.544279 systemd[1]: sshd@34-10.0.0.15:22-10.0.0.1:59076.service: Deactivated successfully. Apr 14 13:27:33.700456 systemd[1]: session-35.scope: Deactivated successfully. Apr 14 13:27:33.892867 systemd-logind[1549]: Session 35 logged out. Waiting for processes to exit. Apr 14 13:27:33.939536 systemd-logind[1549]: Removed session 35. 
Apr 14 13:27:33.995027 containerd[1593]: time="2026-04-14T13:27:33.467043513Z" level=error msg="failed to delete" cmd="/usr/bin/containerd-shim-runc-v2 -namespace k8s.io -address /run/containerd/containerd.sock -publish-binary /usr/bin/containerd -id 08bea41936ed2ff345ba6e605ca49bb15f4701cea3225733d0a55a798cd6c8c3 -bundle /run/containerd/io.containerd.runtime.v2.task/k8s.io/08bea41936ed2ff345ba6e605ca49bb15f4701cea3225733d0a55a798cd6c8c3 delete" error="signal: killed" namespace=k8s.io Apr 14 13:27:34.076626 containerd[1593]: time="2026-04-14T13:27:33.762275016Z" level=error msg="failed to delete shim" error="1 error occurred:\n\t* close wait error: context deadline exceeded\n\n" id=08bea41936ed2ff345ba6e605ca49bb15f4701cea3225733d0a55a798cd6c8c3 Apr 14 13:27:34.449797 locksmithd[1625]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 14 13:27:34.449797 locksmithd[1625]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Apr 14 13:27:35.433190 containerd[1593]: time="2026-04-14T13:27:35.076652392Z" level=warning msg="failed to clean up after shim disconnected" error=": signal: killed" id=08bea41936ed2ff345ba6e605ca49bb15f4701cea3225733d0a55a798cd6c8c3 namespace=k8s.io Apr 14 13:27:36.107531 sshd[4635]: Accepted publickey for core from 10.0.0.1 port 50996 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s Apr 14 13:27:36.577591 sshd[4635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:27:37.190453 systemd-logind[1549]: New session 36 of user core. Apr 14 13:27:37.307286 systemd[1]: Started session-36.scope - Session 36 of User core. 
Apr 14 13:27:42.008001 kubelet[2716]: E0414 13:27:42.007862 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.7s" Apr 14 13:27:42.860945 sshd[4635]: pam_unix(sshd:session): session closed for user core Apr 14 13:27:43.125905 systemd[1]: sshd@35-10.0.0.15:22-10.0.0.1:50996.service: Deactivated successfully. Apr 14 13:27:43.241241 systemd[1]: session-36.scope: Deactivated successfully. Apr 14 13:27:43.331522 systemd-logind[1549]: Session 36 logged out. Waiting for processes to exit. Apr 14 13:27:43.540267 systemd-logind[1549]: Removed session 36. Apr 14 13:27:43.871956 kubelet[2716]: E0414 13:27:43.869579 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.826s" Apr 14 13:27:43.994359 kubelet[2716]: I0414 13:27:43.990419 2716 scope.go:117] "RemoveContainer" containerID="982eb30a46df62c511ccd4ae8a7a6d6aa6ce6d1f39048188c646d737926a8a46" Apr 14 13:27:44.325282 kubelet[2716]: I0414 13:27:44.322330 2716 scope.go:117] "RemoveContainer" containerID="08bea41936ed2ff345ba6e605ca49bb15f4701cea3225733d0a55a798cd6c8c3" Apr 14 13:27:45.426597 kubelet[2716]: E0414 13:27:45.426422 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.499s" Apr 14 13:27:45.747911 containerd[1593]: time="2026-04-14T13:27:45.664943433Z" level=info msg="CreateContainer within sandbox \"55636687ab4158ad9e6b2dc567b56e4adf95551dca2e49c96f3ef41b32bc27ea\" for container &ContainerMetadata{Name:tigera-operator,Attempt:2,}" Apr 14 13:27:46.341788 containerd[1593]: time="2026-04-14T13:27:46.227718789Z" level=info msg="RemoveContainer for \"982eb30a46df62c511ccd4ae8a7a6d6aa6ce6d1f39048188c646d737926a8a46\"" Apr 14 13:27:46.640797 kubelet[2716]: E0414 13:27:46.623499 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.167s" Apr 14 
13:27:47.840744 containerd[1593]: time="2026-04-14T13:27:47.840247365Z" level=info msg="RemoveContainer for \"982eb30a46df62c511ccd4ae8a7a6d6aa6ce6d1f39048188c646d737926a8a46\" returns successfully" Apr 14 13:27:48.196921 kubelet[2716]: E0414 13:27:48.194392 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.416s" Apr 14 13:27:48.216294 kubelet[2716]: E0414 13:27:48.216015 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:27:48.222700 kubelet[2716]: E0414 13:27:48.222534 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:27:48.647673 containerd[1593]: time="2026-04-14T13:27:48.622804966Z" level=info msg="CreateContainer within sandbox \"55636687ab4158ad9e6b2dc567b56e4adf95551dca2e49c96f3ef41b32bc27ea\" for &ContainerMetadata{Name:tigera-operator,Attempt:2,} returns container id \"7de588c0c54ad3978119bef3fc73ec1c029b3bd880839b363acbb3f30c760ba5\"" Apr 14 13:27:48.734969 systemd[1]: Started sshd@36-10.0.0.15:22-10.0.0.1:52932.service - OpenSSH per-connection server daemon (10.0.0.1:52932). Apr 14 13:27:50.011558 containerd[1593]: time="2026-04-14T13:27:50.005649216Z" level=info msg="StartContainer for \"7de588c0c54ad3978119bef3fc73ec1c029b3bd880839b363acbb3f30c760ba5\"" Apr 14 13:27:50.333918 sshd[4672]: Accepted publickey for core from 10.0.0.1 port 52932 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s Apr 14 13:27:50.339038 sshd[4672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:27:50.681763 systemd-logind[1549]: New session 37 of user core. Apr 14 13:27:50.696817 systemd[1]: Started session-37.scope - Session 37 of User core. 
Apr 14 13:27:50.791036 kubelet[2716]: E0414 13:27:50.789974 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.513s" Apr 14 13:27:51.045400 systemd[1]: run-containerd-runc-k8s.io-7de588c0c54ad3978119bef3fc73ec1c029b3bd880839b363acbb3f30c760ba5-runc.gV1wqO.mount: Deactivated successfully. Apr 14 13:27:51.663167 systemd-networkd[1253]: calie7be5fa7599: Link UP Apr 14 13:27:51.747626 systemd-networkd[1253]: calie7be5fa7599: Gained carrier Apr 14 13:27:52.117637 containerd[1593]: time="2026-04-14T13:27:52.114643616Z" level=info msg="StartContainer for \"7de588c0c54ad3978119bef3fc73ec1c029b3bd880839b363acbb3f30c760ba5\" returns successfully" Apr 14 13:27:52.358426 sshd[4672]: pam_unix(sshd:session): session closed for user core Apr 14 13:27:52.387116 containerd[1593]: 2026-04-14 13:27:18.927 [ERROR][4541] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 14 13:27:52.387116 containerd[1593]: 2026-04-14 13:27:23.611 [INFO][4541] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--qjkwf-eth0 csi-node-driver- calico-system 292f49bd-6819-414a-9097-fb8dcd762594 1843 0 2026-04-14 13:26:23 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-qjkwf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie7be5fa7599 [] [] }} ContainerID="b906a2b37e590e1e7c17e0f45530346417421f0e739aebe5a9c02ee64ad586fc" Namespace="calico-system" Pod="csi-node-driver-qjkwf" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--qjkwf-" Apr 14 13:27:52.387116 containerd[1593]: 2026-04-14 13:27:23.613 [INFO][4541] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b906a2b37e590e1e7c17e0f45530346417421f0e739aebe5a9c02ee64ad586fc" Namespace="calico-system" Pod="csi-node-driver-qjkwf" WorkloadEndpoint="localhost-k8s-csi--node--driver--qjkwf-eth0" Apr 14 13:27:52.387116 containerd[1593]: 2026-04-14 13:27:42.738 [INFO][4607] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b906a2b37e590e1e7c17e0f45530346417421f0e739aebe5a9c02ee64ad586fc" HandleID="k8s-pod-network.b906a2b37e590e1e7c17e0f45530346417421f0e739aebe5a9c02ee64ad586fc" Workload="localhost-k8s-csi--node--driver--qjkwf-eth0" Apr 14 13:27:52.387116 containerd[1593]: 2026-04-14 13:27:43.257 [INFO][4607] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b906a2b37e590e1e7c17e0f45530346417421f0e739aebe5a9c02ee64ad586fc" HandleID="k8s-pod-network.b906a2b37e590e1e7c17e0f45530346417421f0e739aebe5a9c02ee64ad586fc" Workload="localhost-k8s-csi--node--driver--qjkwf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000528ec0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-qjkwf", "timestamp":"2026-04-14 13:27:42.738544313 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000494000)} Apr 14 13:27:52.387116 containerd[1593]: 2026-04-14 13:27:43.623 [INFO][4607] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 13:27:52.387116 containerd[1593]: 2026-04-14 13:27:43.727 [INFO][4607] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 14 13:27:52.387116 containerd[1593]: 2026-04-14 13:27:43.727 [INFO][4607] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 14 13:27:52.387116 containerd[1593]: 2026-04-14 13:27:44.314 [INFO][4607] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b906a2b37e590e1e7c17e0f45530346417421f0e739aebe5a9c02ee64ad586fc" host="localhost" Apr 14 13:27:52.387116 containerd[1593]: 2026-04-14 13:27:46.237 [INFO][4607] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 14 13:27:52.387116 containerd[1593]: 2026-04-14 13:27:49.439 [INFO][4607] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 14 13:27:52.387116 containerd[1593]: 2026-04-14 13:27:50.144 [INFO][4607] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 14 13:27:52.387116 containerd[1593]: 2026-04-14 13:27:50.655 [INFO][4607] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 14 13:27:52.387116 containerd[1593]: 2026-04-14 13:27:50.660 [INFO][4607] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b906a2b37e590e1e7c17e0f45530346417421f0e739aebe5a9c02ee64ad586fc" host="localhost" Apr 14 13:27:52.387116 containerd[1593]: 2026-04-14 13:27:50.835 [INFO][4607] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b906a2b37e590e1e7c17e0f45530346417421f0e739aebe5a9c02ee64ad586fc Apr 14 13:27:52.387116 containerd[1593]: 2026-04-14 13:27:51.004 [INFO][4607] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b906a2b37e590e1e7c17e0f45530346417421f0e739aebe5a9c02ee64ad586fc" host="localhost" Apr 14 13:27:52.387116 containerd[1593]: 2026-04-14 13:27:51.040 [INFO][4607] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.128/26] block=192.168.88.128/26 
handle="k8s-pod-network.b906a2b37e590e1e7c17e0f45530346417421f0e739aebe5a9c02ee64ad586fc" host="localhost" Apr 14 13:27:52.387116 containerd[1593]: 2026-04-14 13:27:51.044 [INFO][4607] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.128/26] handle="k8s-pod-network.b906a2b37e590e1e7c17e0f45530346417421f0e739aebe5a9c02ee64ad586fc" host="localhost" Apr 14 13:27:52.387116 containerd[1593]: 2026-04-14 13:27:51.044 [INFO][4607] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 13:27:52.387116 containerd[1593]: 2026-04-14 13:27:51.044 [INFO][4607] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.128/26] IPv6=[] ContainerID="b906a2b37e590e1e7c17e0f45530346417421f0e739aebe5a9c02ee64ad586fc" HandleID="k8s-pod-network.b906a2b37e590e1e7c17e0f45530346417421f0e739aebe5a9c02ee64ad586fc" Workload="localhost-k8s-csi--node--driver--qjkwf-eth0" Apr 14 13:27:52.387816 containerd[1593]: 2026-04-14 13:27:51.378 [INFO][4541] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b906a2b37e590e1e7c17e0f45530346417421f0e739aebe5a9c02ee64ad586fc" Namespace="calico-system" Pod="csi-node-driver-qjkwf" WorkloadEndpoint="localhost-k8s-csi--node--driver--qjkwf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qjkwf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"292f49bd-6819-414a-9097-fb8dcd762594", ResourceVersion:"1843", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 26, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-qjkwf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.128/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie7be5fa7599", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:27:52.387816 containerd[1593]: 2026-04-14 13:27:51.463 [INFO][4541] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.128/32] ContainerID="b906a2b37e590e1e7c17e0f45530346417421f0e739aebe5a9c02ee64ad586fc" Namespace="calico-system" Pod="csi-node-driver-qjkwf" WorkloadEndpoint="localhost-k8s-csi--node--driver--qjkwf-eth0" Apr 14 13:27:52.387816 containerd[1593]: 2026-04-14 13:27:51.541 [INFO][4541] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie7be5fa7599 ContainerID="b906a2b37e590e1e7c17e0f45530346417421f0e739aebe5a9c02ee64ad586fc" Namespace="calico-system" Pod="csi-node-driver-qjkwf" WorkloadEndpoint="localhost-k8s-csi--node--driver--qjkwf-eth0" Apr 14 13:27:52.387816 containerd[1593]: 2026-04-14 13:27:51.843 [INFO][4541] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b906a2b37e590e1e7c17e0f45530346417421f0e739aebe5a9c02ee64ad586fc" Namespace="calico-system" Pod="csi-node-driver-qjkwf" WorkloadEndpoint="localhost-k8s-csi--node--driver--qjkwf-eth0" Apr 14 13:27:52.387816 containerd[1593]: 2026-04-14 13:27:51.847 [INFO][4541] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b906a2b37e590e1e7c17e0f45530346417421f0e739aebe5a9c02ee64ad586fc" 
Namespace="calico-system" Pod="csi-node-driver-qjkwf" WorkloadEndpoint="localhost-k8s-csi--node--driver--qjkwf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qjkwf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"292f49bd-6819-414a-9097-fb8dcd762594", ResourceVersion:"1843", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 13, 26, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b906a2b37e590e1e7c17e0f45530346417421f0e739aebe5a9c02ee64ad586fc", Pod:"csi-node-driver-qjkwf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.128/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie7be5fa7599", MAC:"66:af:ce:12:cf:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 13:27:52.387816 containerd[1593]: 2026-04-14 13:27:52.292 [INFO][4541] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b906a2b37e590e1e7c17e0f45530346417421f0e739aebe5a9c02ee64ad586fc" Namespace="calico-system" Pod="csi-node-driver-qjkwf" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--qjkwf-eth0" Apr 14 13:27:52.453367 systemd[1]: sshd@36-10.0.0.15:22-10.0.0.1:52932.service: Deactivated successfully. Apr 14 13:27:52.488533 systemd[1]: session-37.scope: Deactivated successfully. Apr 14 13:27:52.519768 systemd-logind[1549]: Session 37 logged out. Waiting for processes to exit. Apr 14 13:27:52.533854 systemd-logind[1549]: Removed session 37. Apr 14 13:27:52.862015 kubelet[2716]: E0414 13:27:52.848742 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:27:52.862452 systemd-networkd[1253]: calie7be5fa7599: Gained IPv6LL Apr 14 13:27:52.919539 containerd[1593]: time="2026-04-14T13:27:52.760711554Z" level=error msg="ExecSync for \"82da48d6414986a3420b6788d9ddff0654e378ee988e8ab1ce467bd86f14c917\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" Apr 14 13:27:52.924192 kubelet[2716]: E0414 13:27:52.923418 2716 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" containerID="82da48d6414986a3420b6788d9ddff0654e378ee988e8ab1ce467bd86f14c917" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Apr 14 13:27:53.662209 systemd[1]: run-containerd-runc-k8s.io-82da48d6414986a3420b6788d9ddff0654e378ee988e8ab1ce467bd86f14c917-runc.EeTvWD.mount: Deactivated successfully. Apr 14 13:27:53.705242 containerd[1593]: time="2026-04-14T13:27:53.663952228Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:27:53.731802 containerd[1593]: time="2026-04-14T13:27:53.730455302Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:27:53.773306 containerd[1593]: time="2026-04-14T13:27:53.745713057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:27:53.864826 containerd[1593]: time="2026-04-14T13:27:53.855323772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:27:54.880017 systemd-resolved[1467]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 14 13:27:55.125010 containerd[1593]: time="2026-04-14T13:27:55.123215388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qjkwf,Uid:292f49bd-6819-414a-9097-fb8dcd762594,Namespace:calico-system,Attempt:0,} returns sandbox id \"b906a2b37e590e1e7c17e0f45530346417421f0e739aebe5a9c02ee64ad586fc\"" Apr 14 13:27:55.136924 containerd[1593]: time="2026-04-14T13:27:55.136645214Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 14 13:27:57.518401 systemd[1]: Started sshd@37-10.0.0.15:22-10.0.0.1:42996.service - OpenSSH per-connection server daemon (10.0.0.1:42996). Apr 14 13:27:58.619670 sshd[4864]: Accepted publickey for core from 10.0.0.1 port 42996 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s Apr 14 13:27:58.780513 sshd[4864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:27:58.897141 systemd-logind[1549]: New session 38 of user core. Apr 14 13:27:59.014861 systemd[1]: Started session-38.scope - Session 38 of User core. Apr 14 13:28:01.902106 sshd[4864]: pam_unix(sshd:session): session closed for user core Apr 14 13:28:01.948732 systemd[1]: sshd@37-10.0.0.15:22-10.0.0.1:42996.service: Deactivated successfully. Apr 14 13:28:01.957703 systemd[1]: session-38.scope: Deactivated successfully. Apr 14 13:28:01.958196 systemd-logind[1549]: Session 38 logged out. 
Waiting for processes to exit. Apr 14 13:28:01.982143 systemd-logind[1549]: Removed session 38. Apr 14 13:28:03.748520 kubelet[2716]: E0414 13:28:03.747935 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:28:05.334240 containerd[1593]: time="2026-04-14T13:28:05.333539655Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:28:05.337484 containerd[1593]: time="2026-04-14T13:28:05.337376785Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 14 13:28:05.342844 containerd[1593]: time="2026-04-14T13:28:05.342687961Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:28:05.358243 containerd[1593]: time="2026-04-14T13:28:05.357390986Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:28:05.358243 containerd[1593]: time="2026-04-14T13:28:05.358002834Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 10.221317341s" Apr 14 13:28:05.358243 containerd[1593]: time="2026-04-14T13:28:05.358032014Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 14 13:28:05.474327 containerd[1593]: 
time="2026-04-14T13:28:05.472548039Z" level=info msg="CreateContainer within sandbox \"b906a2b37e590e1e7c17e0f45530346417421f0e739aebe5a9c02ee64ad586fc\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 14 13:28:05.849122 containerd[1593]: time="2026-04-14T13:28:05.848868654Z" level=info msg="CreateContainer within sandbox \"b906a2b37e590e1e7c17e0f45530346417421f0e739aebe5a9c02ee64ad586fc\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"74fa16f43e93024901a97780535013f2241679dbea57af7a26e3f1df87b2a10f\"" Apr 14 13:28:05.859396 containerd[1593]: time="2026-04-14T13:28:05.859199730Z" level=info msg="StartContainer for \"74fa16f43e93024901a97780535013f2241679dbea57af7a26e3f1df87b2a10f\"" Apr 14 13:28:06.874845 containerd[1593]: time="2026-04-14T13:28:06.874354955Z" level=info msg="StartContainer for \"74fa16f43e93024901a97780535013f2241679dbea57af7a26e3f1df87b2a10f\" returns successfully" Apr 14 13:28:07.006698 systemd[1]: Started sshd@38-10.0.0.15:22-10.0.0.1:42934.service - OpenSSH per-connection server daemon (10.0.0.1:42934). Apr 14 13:28:07.024396 containerd[1593]: time="2026-04-14T13:28:07.017594535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 14 13:28:07.354153 sshd[5018]: Accepted publickey for core from 10.0.0.1 port 42934 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s Apr 14 13:28:07.458699 sshd[5018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:28:07.469594 systemd-logind[1549]: New session 39 of user core. Apr 14 13:28:07.488487 systemd[1]: Started session-39.scope - Session 39 of User core. Apr 14 13:28:07.562212 kernel: calico-node[4889]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 14 13:28:08.083114 systemd-journald[1180]: Under memory pressure, flushing caches. Apr 14 13:28:08.071994 systemd-resolved[1467]: Under memory pressure, flushing caches. 
Apr 14 13:28:08.072716 systemd-resolved[1467]: Flushed all caches.
Apr 14 13:28:10.146840 systemd-journald[1180]: Under memory pressure, flushing caches.
Apr 14 13:28:10.138830 systemd-resolved[1467]: Under memory pressure, flushing caches.
Apr 14 13:28:10.139012 systemd-resolved[1467]: Flushed all caches.
Apr 14 13:28:11.127678 systemd[1]: Started sshd@39-10.0.0.15:22-10.0.0.1:56228.service - OpenSSH per-connection server daemon (10.0.0.1:56228).
Apr 14 13:28:11.138329 sshd[5018]: pam_unix(sshd:session): session closed for user core
Apr 14 13:28:11.197752 systemd[1]: sshd@38-10.0.0.15:22-10.0.0.1:42934.service: Deactivated successfully.
Apr 14 13:28:11.266834 systemd-logind[1549]: Session 39 logged out. Waiting for processes to exit.
Apr 14 13:28:11.266991 systemd[1]: session-39.scope: Deactivated successfully.
Apr 14 13:28:11.348255 systemd-logind[1549]: Removed session 39.
Apr 14 13:28:12.071268 sshd[5049]: Accepted publickey for core from 10.0.0.1 port 56228 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:28:12.071813 sshd[5049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:28:12.240655 systemd-logind[1549]: New session 40 of user core.
Apr 14 13:28:12.370947 systemd[1]: Started session-40.scope - Session 40 of User core.
Apr 14 13:28:16.168225 sshd[5049]: pam_unix(sshd:session): session closed for user core
Apr 14 13:28:16.288311 systemd[1]: Started sshd@40-10.0.0.15:22-10.0.0.1:56236.service - OpenSSH per-connection server daemon (10.0.0.1:56236).
Apr 14 13:28:16.337348 systemd[1]: sshd@39-10.0.0.15:22-10.0.0.1:56228.service: Deactivated successfully.
Apr 14 13:28:16.359868 systemd[1]: session-40.scope: Deactivated successfully.
Apr 14 13:28:16.429546 systemd-logind[1549]: Session 40 logged out. Waiting for processes to exit.
Apr 14 13:28:16.432131 systemd-logind[1549]: Removed session 40.
Apr 14 13:28:17.569950 sshd[5079]: Accepted publickey for core from 10.0.0.1 port 56236 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:28:17.669142 sshd[5079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:28:17.948023 systemd-logind[1549]: New session 41 of user core.
Apr 14 13:28:17.989017 systemd[1]: Started session-41.scope - Session 41 of User core.
Apr 14 13:28:18.930488 systemd-networkd[1253]: vxlan.calico: Link UP
Apr 14 13:28:18.930744 systemd-networkd[1253]: vxlan.calico: Gained carrier
Apr 14 13:28:20.661737 systemd-networkd[1253]: vxlan.calico: Gained IPv6LL
Apr 14 13:28:21.359950 containerd[1593]: time="2026-04-14T13:28:21.350649215Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 13:28:21.378752 containerd[1593]: time="2026-04-14T13:28:21.374225908Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317"
Apr 14 13:28:22.124659 containerd[1593]: time="2026-04-14T13:28:22.124368413Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 13:28:23.228568 containerd[1593]: time="2026-04-14T13:28:23.226194681Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 13:28:23.258700 containerd[1593]: time="2026-04-14T13:28:23.258446308Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 16.240708147s"
Apr 14 13:28:23.258700 containerd[1593]: time="2026-04-14T13:28:23.258568032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\""
Apr 14 13:28:23.760340 kubelet[2716]: E0414 13:28:23.752390 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.103s"
Apr 14 13:28:24.370987 containerd[1593]: time="2026-04-14T13:28:24.363680975Z" level=info msg="CreateContainer within sandbox \"b906a2b37e590e1e7c17e0f45530346417421f0e739aebe5a9c02ee64ad586fc\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Apr 14 13:28:25.036214 containerd[1593]: time="2026-04-14T13:28:25.036063980Z" level=info msg="CreateContainer within sandbox \"b906a2b37e590e1e7c17e0f45530346417421f0e739aebe5a9c02ee64ad586fc\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"03349e7975630e6834ba2e700099da3bcfc62d94f08848f596287fc321918da1\""
Apr 14 13:28:25.416519 containerd[1593]: time="2026-04-14T13:28:25.416355758Z" level=info msg="StartContainer for \"03349e7975630e6834ba2e700099da3bcfc62d94f08848f596287fc321918da1\""
Apr 14 13:28:28.156878 kubelet[2716]: E0414 13:28:28.156468 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.443s"
Apr 14 13:28:28.546025 containerd[1593]: time="2026-04-14T13:28:28.545479520Z" level=info msg="StartContainer for \"03349e7975630e6834ba2e700099da3bcfc62d94f08848f596287fc321918da1\" returns successfully"
Apr 14 13:28:30.407418 kubelet[2716]: E0414 13:28:30.405599 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.541s"
Apr 14 13:28:32.164802 kubelet[2716]: E0414 13:28:32.156284 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.51s"
Apr 14 13:28:35.235227 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7de588c0c54ad3978119bef3fc73ec1c029b3bd880839b363acbb3f30c760ba5-rootfs.mount: Deactivated successfully.
Apr 14 13:28:36.126010 containerd[1593]: time="2026-04-14T13:28:36.031579035Z" level=info msg="shim disconnected" id=7de588c0c54ad3978119bef3fc73ec1c029b3bd880839b363acbb3f30c760ba5 namespace=k8s.io
Apr 14 13:28:36.132831 containerd[1593]: time="2026-04-14T13:28:36.131214808Z" level=warning msg="cleaning up after shim disconnected" id=7de588c0c54ad3978119bef3fc73ec1c029b3bd880839b363acbb3f30c760ba5 namespace=k8s.io
Apr 14 13:28:36.132831 containerd[1593]: time="2026-04-14T13:28:36.131549075Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 14 13:28:37.182523 kubelet[2716]: I0414 13:28:36.181810 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-qjkwf" podStartSLOduration=104.56715036 podStartE2EDuration="2m13.181534663s" podCreationTimestamp="2026-04-14 13:26:23 +0000 UTC" firstStartedPulling="2026-04-14 13:27:55.135467197 +0000 UTC m=+591.582579228" lastFinishedPulling="2026-04-14 13:28:23.749851484 +0000 UTC m=+620.196963531" observedRunningTime="2026-04-14 13:28:34.983849755 +0000 UTC m=+631.430961797" watchObservedRunningTime="2026-04-14 13:28:36.181534663 +0000 UTC m=+632.628646705"
Apr 14 13:28:41.492068 sshd[5079]: pam_unix(sshd:session): session closed for user core
Apr 14 13:28:41.633241 systemd[1]: Started sshd@41-10.0.0.15:22-10.0.0.1:41224.service - OpenSSH per-connection server daemon (10.0.0.1:41224).
Apr 14 13:28:41.722169 systemd[1]: sshd@40-10.0.0.15:22-10.0.0.1:56236.service: Deactivated successfully.
Apr 14 13:28:42.017377 systemd[1]: session-41.scope: Deactivated successfully.
Apr 14 13:28:42.170390 systemd-journald[1180]: Under memory pressure, flushing caches.
Apr 14 13:28:42.171347 systemd-logind[1549]: Session 41 logged out. Waiting for processes to exit.
Apr 14 13:28:42.213234 systemd-resolved[1467]: Under memory pressure, flushing caches.
Apr 14 13:28:42.213413 systemd-resolved[1467]: Flushed all caches.
Apr 14 13:28:42.229146 systemd-logind[1549]: Removed session 41.
Apr 14 13:28:43.403734 kubelet[2716]: E0414 13:28:43.402404 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.56s"
Apr 14 13:28:43.435044 kubelet[2716]: E0414 13:28:43.435019 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:28:43.594581 containerd[1593]: time="2026-04-14T13:28:43.594320927Z" level=error msg="ExecSync for \"82da48d6414986a3420b6788d9ddff0654e378ee988e8ab1ce467bd86f14c917\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded"
Apr 14 13:28:43.932309 kubelet[2716]: I0414 13:28:43.926808 2716 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Apr 14 13:28:43.934103 sshd[5220]: Accepted publickey for core from 10.0.0.1 port 41224 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:28:43.952162 kubelet[2716]: E0414 13:28:43.933412 2716 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" containerID="82da48d6414986a3420b6788d9ddff0654e378ee988e8ab1ce467bd86f14c917" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 14 13:28:44.122695 sshd[5220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:28:44.638932 kubelet[2716]: I0414 13:28:44.632812 2716 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Apr 14 13:28:44.634716 systemd-logind[1549]: New session 42 of user core.
Apr 14 13:28:44.683027 systemd[1]: Started session-42.scope - Session 42 of User core.
Apr 14 13:28:45.825458 kubelet[2716]: E0414 13:28:45.792445 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.159s"
Apr 14 13:28:48.041830 kubelet[2716]: E0414 13:28:48.041732 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.408s"
Apr 14 13:28:48.137597 kubelet[2716]: I0414 13:28:48.123371 2716 scope.go:117] "RemoveContainer" containerID="08bea41936ed2ff345ba6e605ca49bb15f4701cea3225733d0a55a798cd6c8c3"
Apr 14 13:28:48.137597 kubelet[2716]: I0414 13:28:48.135168 2716 scope.go:117] "RemoveContainer" containerID="7de588c0c54ad3978119bef3fc73ec1c029b3bd880839b363acbb3f30c760ba5"
Apr 14 13:28:48.138586 kubelet[2716]: E0414 13:28:48.138460 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=tigera-operator pod=tigera-operator-6bf85f8dd-w6b9v_tigera-operator(0946a41e-7e03-41a0-8dd8-549abdcdc5a2)\"" pod="tigera-operator/tigera-operator-6bf85f8dd-w6b9v" podUID="0946a41e-7e03-41a0-8dd8-549abdcdc5a2"
Apr 14 13:28:48.252298 containerd[1593]: time="2026-04-14T13:28:48.251563528Z" level=info msg="RemoveContainer for \"08bea41936ed2ff345ba6e605ca49bb15f4701cea3225733d0a55a798cd6c8c3\""
Apr 14 13:28:48.394462 containerd[1593]: time="2026-04-14T13:28:48.386174789Z" level=info msg="RemoveContainer for \"08bea41936ed2ff345ba6e605ca49bb15f4701cea3225733d0a55a798cd6c8c3\" returns successfully"
Apr 14 13:28:50.669767 kubelet[2716]: E0414 13:28:50.664678 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.979s"
Apr 14 13:28:55.621331 kubelet[2716]: E0414 13:28:55.620116 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.965s"
Apr 14 13:28:56.006244 kubelet[2716]: E0414 13:28:55.999002 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:28:58.219453 kubelet[2716]: E0414 13:28:58.219128 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.599s"
Apr 14 13:29:00.308655 kubelet[2716]: E0414 13:29:00.291964 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.341s"
Apr 14 13:29:01.749291 sshd[5220]: pam_unix(sshd:session): session closed for user core
Apr 14 13:29:01.935438 systemd[1]: Started sshd@42-10.0.0.15:22-10.0.0.1:39350.service - OpenSSH per-connection server daemon (10.0.0.1:39350).
Apr 14 13:29:02.329500 systemd[1]: sshd@41-10.0.0.15:22-10.0.0.1:41224.service: Deactivated successfully.
Apr 14 13:29:02.576770 systemd[1]: session-42.scope: Deactivated successfully.
Apr 14 13:29:02.587868 systemd-logind[1549]: Session 42 logged out. Waiting for processes to exit.
Apr 14 13:29:02.720931 kubelet[2716]: I0414 13:29:02.720665 2716 scope.go:117] "RemoveContainer" containerID="7de588c0c54ad3978119bef3fc73ec1c029b3bd880839b363acbb3f30c760ba5"
Apr 14 13:29:02.754824 kubelet[2716]: E0414 13:29:02.724189 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:29:02.943996 systemd-logind[1549]: Removed session 42.
Apr 14 13:29:04.607792 sshd[5267]: Accepted publickey for core from 10.0.0.1 port 39350 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:29:04.717690 sshd[5267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:29:05.123985 systemd-logind[1549]: New session 43 of user core.
Apr 14 13:29:05.255485 systemd[1]: Started session-43.scope - Session 43 of User core.
Apr 14 13:29:08.849270 kubelet[2716]: E0414 13:29:08.849175 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.195s"
Apr 14 13:29:12.353291 containerd[1593]: time="2026-04-14T13:29:12.351895413Z" level=error msg="ExecSync for \"82da48d6414986a3420b6788d9ddff0654e378ee988e8ab1ce467bd86f14c917\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded"
Apr 14 13:29:13.368480 containerd[1593]: time="2026-04-14T13:29:13.289454852Z" level=info msg="CreateContainer within sandbox \"55636687ab4158ad9e6b2dc567b56e4adf95551dca2e49c96f3ef41b32bc27ea\" for container &ContainerMetadata{Name:tigera-operator,Attempt:3,}"
Apr 14 13:29:13.920764 kubelet[2716]: E0414 13:29:13.919865 2716 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" containerID="82da48d6414986a3420b6788d9ddff0654e378ee988e8ab1ce467bd86f14c917" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 14 13:29:14.166861 sshd[5267]: pam_unix(sshd:session): session closed for user core
Apr 14 13:29:14.608597 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2359047455.mount: Deactivated successfully.
Apr 14 13:29:14.609831 systemd[1]: sshd@42-10.0.0.15:22-10.0.0.1:39350.service: Deactivated successfully.
Apr 14 13:29:14.779448 systemd[1]: session-43.scope: Deactivated successfully.
Apr 14 13:29:14.902877 systemd-logind[1549]: Session 43 logged out. Waiting for processes to exit.
Apr 14 13:29:15.209472 systemd-logind[1549]: Removed session 43.
Apr 14 13:29:16.634857 containerd[1593]: time="2026-04-14T13:29:16.630474283Z" level=info msg="CreateContainer within sandbox \"55636687ab4158ad9e6b2dc567b56e4adf95551dca2e49c96f3ef41b32bc27ea\" for &ContainerMetadata{Name:tigera-operator,Attempt:3,} returns container id \"7884ca8c28fd254b0d6e534f0d7e75f60e8b1989390a981bee344a265a237e06\""
Apr 14 13:29:17.032127 containerd[1593]: time="2026-04-14T13:29:17.031900880Z" level=info msg="StartContainer for \"7884ca8c28fd254b0d6e534f0d7e75f60e8b1989390a981bee344a265a237e06\""
Apr 14 13:29:19.786744 systemd[1]: Started sshd@43-10.0.0.15:22-10.0.0.1:49860.service - OpenSSH per-connection server daemon (10.0.0.1:49860).
Apr 14 13:29:20.977521 kubelet[2716]: E0414 13:29:20.966258 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="12.116s"
Apr 14 13:29:22.239816 sshd[5307]: Accepted publickey for core from 10.0.0.1 port 49860 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:29:22.324784 sshd[5307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:29:22.780402 systemd-logind[1549]: New session 44 of user core.
Apr 14 13:29:22.850919 systemd[1]: Started session-44.scope - Session 44 of User core.
Apr 14 13:29:23.553389 kubelet[2716]: E0414 13:29:23.483933 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.511s"
Apr 14 13:29:24.071629 kubelet[2716]: E0414 13:29:24.050625 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:29:24.771528 kubelet[2716]: E0414 13:29:24.756897 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:29:25.616143 kubelet[2716]: E0414 13:29:25.586020 2716 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.991s"
Apr 14 13:29:26.122301 containerd[1593]: time="2026-04-14T13:29:26.121213988Z" level=error msg="get state for 7884ca8c28fd254b0d6e534f0d7e75f60e8b1989390a981bee344a265a237e06" error="context deadline exceeded: unknown"
Apr 14 13:29:26.122301 containerd[1593]: time="2026-04-14T13:29:26.121488432Z" level=warning msg="unknown status" status=0
Apr 14 13:29:26.938704 containerd[1593]: time="2026-04-14T13:29:26.938608828Z" level=error msg="ttrpc: received message on inactive stream" stream=3
Apr 14 13:29:28.716203 containerd[1593]: time="2026-04-14T13:29:28.712802077Z" level=info msg="StartContainer for \"7884ca8c28fd254b0d6e534f0d7e75f60e8b1989390a981bee344a265a237e06\" returns successfully"
Apr 14 13:29:28.735442 sshd[5307]: pam_unix(sshd:session): session closed for user core
Apr 14 13:29:29.146646 systemd[1]: sshd@43-10.0.0.15:22-10.0.0.1:49860.service: Deactivated successfully.
Apr 14 13:29:29.270968 systemd[1]: session-44.scope: Deactivated successfully.
Apr 14 13:29:29.271827 systemd-logind[1549]: Session 44 logged out. Waiting for processes to exit.
Apr 14 13:29:29.367749 systemd-logind[1549]: Removed session 44.
Apr 14 13:29:32.780676 containerd[1593]: time="2026-04-14T13:29:32.760058992Z" level=error msg="ExecSync for \"82da48d6414986a3420b6788d9ddff0654e378ee988e8ab1ce467bd86f14c917\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded"
Apr 14 13:29:32.881469 kubelet[2716]: E0414 13:29:32.863417 2716 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" containerID="82da48d6414986a3420b6788d9ddff0654e378ee988e8ab1ce467bd86f14c917" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 14 13:29:33.873826 systemd[1]: Started sshd@44-10.0.0.15:22-10.0.0.1:40874.service - OpenSSH per-connection server daemon (10.0.0.1:40874).
Apr 14 13:29:34.159817 sshd[5397]: Accepted publickey for core from 10.0.0.1 port 40874 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:29:34.154797 sshd[5397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:29:34.313306 systemd-logind[1549]: New session 45 of user core.
Apr 14 13:29:34.418164 systemd[1]: Started session-45.scope - Session 45 of User core.
Apr 14 13:29:37.625340 sshd[5397]: pam_unix(sshd:session): session closed for user core
Apr 14 13:29:37.864105 systemd[1]: sshd@44-10.0.0.15:22-10.0.0.1:40874.service: Deactivated successfully.
Apr 14 13:29:37.930036 systemd[1]: session-45.scope: Deactivated successfully.
Apr 14 13:29:37.931883 systemd-logind[1549]: Session 45 logged out. Waiting for processes to exit.
Apr 14 13:29:37.950901 systemd-logind[1549]: Removed session 45.
Apr 14 13:29:42.638359 systemd[1]: Started sshd@45-10.0.0.15:22-10.0.0.1:38020.service - OpenSSH per-connection server daemon (10.0.0.1:38020).
Apr 14 13:29:42.695046 sshd[5452]: Accepted publickey for core from 10.0.0.1 port 38020 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:29:42.701663 sshd[5452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:29:42.718800 systemd-logind[1549]: New session 46 of user core.
Apr 14 13:29:42.766363 systemd[1]: Started session-46.scope - Session 46 of User core.
Apr 14 13:29:43.095743 sshd[5452]: pam_unix(sshd:session): session closed for user core
Apr 14 13:29:43.187481 systemd[1]: sshd@45-10.0.0.15:22-10.0.0.1:38020.service: Deactivated successfully.
Apr 14 13:29:43.195407 systemd[1]: session-46.scope: Deactivated successfully.
Apr 14 13:29:43.200285 systemd-logind[1549]: Session 46 logged out. Waiting for processes to exit.
Apr 14 13:29:43.210640 systemd-logind[1549]: Removed session 46.
Apr 14 13:29:48.111237 systemd[1]: Started sshd@46-10.0.0.15:22-10.0.0.1:38028.service - OpenSSH per-connection server daemon (10.0.0.1:38028).
Apr 14 13:29:48.146110 sshd[5509]: Accepted publickey for core from 10.0.0.1 port 38028 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:29:48.147345 sshd[5509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:29:48.154549 systemd-logind[1549]: New session 47 of user core.
Apr 14 13:29:48.169527 systemd[1]: Started session-47.scope - Session 47 of User core.
Apr 14 13:29:48.298169 sshd[5509]: pam_unix(sshd:session): session closed for user core
Apr 14 13:29:48.301501 systemd[1]: sshd@46-10.0.0.15:22-10.0.0.1:38028.service: Deactivated successfully.
Apr 14 13:29:48.305547 systemd[1]: session-47.scope: Deactivated successfully.
Apr 14 13:29:48.306351 systemd-logind[1549]: Session 47 logged out. Waiting for processes to exit.
Apr 14 13:29:48.307352 systemd-logind[1549]: Removed session 47.
Apr 14 13:29:53.317223 systemd[1]: Started sshd@47-10.0.0.15:22-10.0.0.1:36310.service - OpenSSH per-connection server daemon (10.0.0.1:36310).
Apr 14 13:29:53.525300 sshd[5550]: Accepted publickey for core from 10.0.0.1 port 36310 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:29:53.530122 sshd[5550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:29:53.566162 systemd-logind[1549]: New session 48 of user core.
Apr 14 13:29:53.591489 systemd[1]: Started session-48.scope - Session 48 of User core.
Apr 14 13:29:53.996268 sshd[5550]: pam_unix(sshd:session): session closed for user core
Apr 14 13:29:54.004326 systemd[1]: sshd@47-10.0.0.15:22-10.0.0.1:36310.service: Deactivated successfully.
Apr 14 13:29:54.009950 systemd[1]: session-48.scope: Deactivated successfully.
Apr 14 13:29:54.011431 systemd-logind[1549]: Session 48 logged out. Waiting for processes to exit.
Apr 14 13:29:54.012444 systemd-logind[1549]: Removed session 48.