Apr 16 04:51:43.465365 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Apr 15 22:39:17 -00 2026
Apr 16 04:51:43.465421 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=15b40c09f238fba45b5bb3e18ef7e289d4e557e0500075f5731dd7eaa53962ae
Apr 16 04:51:43.465432 kernel: BIOS-provided physical RAM map:
Apr 16 04:51:43.465438 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 16 04:51:43.465444 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 16 04:51:43.465449 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 16 04:51:43.465456 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Apr 16 04:51:43.465462 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Apr 16 04:51:43.465476 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 16 04:51:43.465482 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 16 04:51:43.465488 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 16 04:51:43.465495 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 16 04:51:43.465501 kernel: NX (Execute Disable) protection: active
Apr 16 04:51:43.465507 kernel: APIC: Static calls initialized
Apr 16 04:51:43.465514 kernel: SMBIOS 2.8 present.
Apr 16 04:51:43.465520 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Apr 16 04:51:43.465536 kernel: DMI: Memory slots populated: 1/1
Apr 16 04:51:43.465542 kernel: Hypervisor detected: KVM
Apr 16 04:51:43.465548 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 16 04:51:43.465554 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 16 04:51:43.465560 kernel: kvm-clock: using sched offset of 8146304630 cycles
Apr 16 04:51:43.465567 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 16 04:51:43.465573 kernel: tsc: Detected 2793.438 MHz processor
Apr 16 04:51:43.465580 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 16 04:51:43.465586 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 16 04:51:43.465593 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 16 04:51:43.465601 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 16 04:51:43.465608 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 16 04:51:43.465614 kernel: Using GB pages for direct mapping
Apr 16 04:51:43.465621 kernel: ACPI: Early table checksum verification disabled
Apr 16 04:51:43.465627 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Apr 16 04:51:43.465633 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 04:51:43.465640 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 04:51:43.465646 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 04:51:43.465652 kernel: ACPI: FACS 0x000000009CFE0000 000040
Apr 16 04:51:43.465660 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 04:51:43.465666 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 04:51:43.465672 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 04:51:43.465679 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 04:51:43.465685 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Apr 16 04:51:43.465694 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Apr 16 04:51:43.465702 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Apr 16 04:51:43.465708 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Apr 16 04:51:43.465715 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Apr 16 04:51:43.465722 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Apr 16 04:51:43.465728 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Apr 16 04:51:43.465734 kernel: No NUMA configuration found
Apr 16 04:51:43.465741 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Apr 16 04:51:43.465747 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Apr 16 04:51:43.465755 kernel: Zone ranges:
Apr 16 04:51:43.465762 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 16 04:51:43.465768 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Apr 16 04:51:43.465775 kernel: Normal empty
Apr 16 04:51:43.465781 kernel: Device empty
Apr 16 04:51:43.465788 kernel: Movable zone start for each node
Apr 16 04:51:43.465794 kernel: Early memory node ranges
Apr 16 04:51:43.465801 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 16 04:51:43.465809 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Apr 16 04:51:43.465829 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Apr 16 04:51:43.465841 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 16 04:51:43.465851 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 16 04:51:43.465862 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Apr 16 04:51:43.465879 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 16 04:51:43.465907 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 16 04:51:43.465914 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 16 04:51:43.465921 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 16 04:51:43.465927 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 16 04:51:43.465941 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 16 04:51:43.465949 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 16 04:51:43.465956 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 16 04:51:43.465963 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 16 04:51:43.465969 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 16 04:51:43.465976 kernel: TSC deadline timer available
Apr 16 04:51:43.465982 kernel: CPU topo: Max. logical packages: 1
Apr 16 04:51:43.465989 kernel: CPU topo: Max. logical dies: 1
Apr 16 04:51:43.465995 kernel: CPU topo: Max. dies per package: 1
Apr 16 04:51:43.466002 kernel: CPU topo: Max. threads per core: 1
Apr 16 04:51:43.466010 kernel: CPU topo: Num. cores per package: 4
Apr 16 04:51:43.466016 kernel: CPU topo: Num. threads per package: 4
Apr 16 04:51:43.466023 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Apr 16 04:51:43.466029 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 16 04:51:43.466035 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 16 04:51:43.466042 kernel: kvm-guest: setup PV sched yield
Apr 16 04:51:43.466049 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 16 04:51:43.466055 kernel: Booting paravirtualized kernel on KVM
Apr 16 04:51:43.466062 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 16 04:51:43.466070 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 16 04:51:43.466076 kernel: percpu: Embedded 60 pages/cpu s207448 r8192 d30120 u524288
Apr 16 04:51:43.466083 kernel: pcpu-alloc: s207448 r8192 d30120 u524288 alloc=1*2097152
Apr 16 04:51:43.466090 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 16 04:51:43.466096 kernel: kvm-guest: PV spinlocks enabled
Apr 16 04:51:43.466102 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 16 04:51:43.466110 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=15b40c09f238fba45b5bb3e18ef7e289d4e557e0500075f5731dd7eaa53962ae
Apr 16 04:51:43.466117 kernel: random: crng init done
Apr 16 04:51:43.466125 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 16 04:51:43.466132 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 16 04:51:43.466138 kernel: Fallback order for Node 0: 0
Apr 16 04:51:43.466144 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Apr 16 04:51:43.466151 kernel: Policy zone: DMA32
Apr 16 04:51:43.466158 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 16 04:51:43.466164 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 16 04:51:43.466171 kernel: ftrace: allocating 40126 entries in 157 pages
Apr 16 04:51:43.466177 kernel: ftrace: allocated 157 pages with 5 groups
Apr 16 04:51:43.466185 kernel: Dynamic Preempt: voluntary
Apr 16 04:51:43.466192 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 16 04:51:43.466199 kernel: rcu: RCU event tracing is enabled.
Apr 16 04:51:43.466206 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 16 04:51:43.466213 kernel: Trampoline variant of Tasks RCU enabled.
Apr 16 04:51:43.466219 kernel: Rude variant of Tasks RCU enabled.
Apr 16 04:51:43.466232 kernel: Tracing variant of Tasks RCU enabled.
Apr 16 04:51:43.466239 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 16 04:51:43.466246 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 16 04:51:43.466252 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 16 04:51:43.466261 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 16 04:51:43.466267 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 16 04:51:43.466274 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 16 04:51:43.466280 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 16 04:51:43.466287 kernel: Console: colour VGA+ 80x25
Apr 16 04:51:43.466305 kernel: printk: legacy console [ttyS0] enabled
Apr 16 04:51:43.466318 kernel: ACPI: Core revision 20240827
Apr 16 04:51:43.466329 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 16 04:51:43.466339 kernel: APIC: Switch to symmetric I/O mode setup
Apr 16 04:51:43.466349 kernel: x2apic enabled
Apr 16 04:51:43.466359 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 16 04:51:43.466371 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 16 04:51:43.466406 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 16 04:51:43.466417 kernel: kvm-guest: setup PV IPIs
Apr 16 04:51:43.466427 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 16 04:51:43.466438 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 16 04:51:43.466450 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 16 04:51:43.466461 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 16 04:51:43.466471 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 16 04:51:43.466481 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 16 04:51:43.466492 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 16 04:51:43.466503 kernel: Spectre V2 : Mitigation: Retpolines
Apr 16 04:51:43.466511 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 16 04:51:43.466518 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 16 04:51:43.466525 kernel: RETBleed: Vulnerable
Apr 16 04:51:43.466534 kernel: Speculative Store Bypass: Vulnerable
Apr 16 04:51:43.466541 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 16 04:51:43.466549 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 16 04:51:43.466555 kernel: active return thunk: its_return_thunk
Apr 16 04:51:43.466563 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 16 04:51:43.466570 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 16 04:51:43.466577 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 16 04:51:43.466584 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 16 04:51:43.466591 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 16 04:51:43.466600 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 16 04:51:43.466606 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 16 04:51:43.466614 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 16 04:51:43.466621 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 16 04:51:43.466628 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 16 04:51:43.466635 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 16 04:51:43.466641 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 16 04:51:43.466649 kernel: Freeing SMP alternatives memory: 32K
Apr 16 04:51:43.466666 kernel: pid_max: default: 32768 minimum: 301
Apr 16 04:51:43.466674 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Apr 16 04:51:43.466681 kernel: landlock: Up and running.
Apr 16 04:51:43.466688 kernel: SELinux: Initializing.
Apr 16 04:51:43.466698 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 16 04:51:43.466705 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 16 04:51:43.466712 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 16 04:51:43.466719 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 16 04:51:43.466726 kernel: signal: max sigframe size: 3632
Apr 16 04:51:43.466735 kernel: rcu: Hierarchical SRCU implementation.
Apr 16 04:51:43.466742 kernel: rcu: Max phase no-delay instances is 400.
Apr 16 04:51:43.466749 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Apr 16 04:51:43.466756 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 16 04:51:43.466763 kernel: smp: Bringing up secondary CPUs ...
Apr 16 04:51:43.466770 kernel: smpboot: x86: Booting SMP configuration:
Apr 16 04:51:43.466777 kernel: .... node #0, CPUs: #1 #2 #3
Apr 16 04:51:43.466784 kernel: smp: Brought up 1 node, 4 CPUs
Apr 16 04:51:43.466791 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 16 04:51:43.466799 kernel: Memory: 2419752K/2571752K available (14336K kernel code, 2453K rwdata, 26076K rodata, 46224K init, 2524K bss, 146108K reserved, 0K cma-reserved)
Apr 16 04:51:43.466810 kernel: devtmpfs: initialized
Apr 16 04:51:43.466821 kernel: x86/mm: Memory block size: 128MB
Apr 16 04:51:43.466832 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 16 04:51:43.466843 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 16 04:51:43.466854 kernel: pinctrl core: initialized pinctrl subsystem
Apr 16 04:51:43.466861 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 16 04:51:43.466868 kernel: audit: initializing netlink subsys (disabled)
Apr 16 04:51:43.466875 kernel: audit: type=2000 audit(1776315098.847:1): state=initialized audit_enabled=0 res=1
Apr 16 04:51:43.466912 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 16 04:51:43.466920 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 16 04:51:43.466927 kernel: cpuidle: using governor menu
Apr 16 04:51:43.466934 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 16 04:51:43.466941 kernel: dca service started, version 1.12.1
Apr 16 04:51:43.466949 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Apr 16 04:51:43.466956 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 16 04:51:43.466963 kernel: PCI: Using configuration type 1 for base access
Apr 16 04:51:43.466970 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 16 04:51:43.466990 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 16 04:51:43.466998 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 16 04:51:43.467005 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 16 04:51:43.467012 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 16 04:51:43.467019 kernel: ACPI: Added _OSI(Module Device)
Apr 16 04:51:43.467026 kernel: ACPI: Added _OSI(Processor Device)
Apr 16 04:51:43.467033 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 16 04:51:43.467040 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 16 04:51:43.467048 kernel: ACPI: Interpreter enabled
Apr 16 04:51:43.467064 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 16 04:51:43.467070 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 16 04:51:43.467083 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 16 04:51:43.467089 kernel: PCI: Using E820 reservations for host bridge windows
Apr 16 04:51:43.467095 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 16 04:51:43.467101 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 16 04:51:43.467307 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 16 04:51:43.467422 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 16 04:51:43.467527 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 16 04:51:43.467539 kernel: PCI host bridge to bus 0000:00
Apr 16 04:51:43.467674 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 16 04:51:43.467762 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 16 04:51:43.467819 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 16 04:51:43.467933 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 16 04:51:43.468531 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 16 04:51:43.468702 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Apr 16 04:51:43.468757 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 16 04:51:43.468993 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Apr 16 04:51:43.469128 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Apr 16 04:51:43.469191 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Apr 16 04:51:43.469248 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Apr 16 04:51:43.469326 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Apr 16 04:51:43.469383 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 16 04:51:43.469501 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Apr 16 04:51:43.469581 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Apr 16 04:51:43.469638 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Apr 16 04:51:43.469695 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 16 04:51:43.469783 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Apr 16 04:51:43.469924 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Apr 16 04:51:43.470025 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Apr 16 04:51:43.470103 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 16 04:51:43.472283 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Apr 16 04:51:43.472379 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Apr 16 04:51:43.473669 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Apr 16 04:51:43.473760 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Apr 16 04:51:43.473881 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Apr 16 04:51:43.476245 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Apr 16 04:51:43.476381 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 16 04:51:43.476529 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Apr 16 04:51:43.476592 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Apr 16 04:51:43.476660 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Apr 16 04:51:43.476754 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Apr 16 04:51:43.476850 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Apr 16 04:51:43.476864 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 16 04:51:43.476874 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 16 04:51:43.476880 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 16 04:51:43.476915 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 16 04:51:43.476923 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 16 04:51:43.476945 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 16 04:51:43.476955 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 16 04:51:43.476986 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 16 04:51:43.476996 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 16 04:51:43.477006 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 16 04:51:43.477014 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 16 04:51:43.477020 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 16 04:51:43.477026 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 16 04:51:43.477032 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 16 04:51:43.477041 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 16 04:51:43.477051 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 16 04:51:43.477078 kernel: iommu: Default domain type: Translated
Apr 16 04:51:43.477088 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 16 04:51:43.477099 kernel: PCI: Using ACPI for IRQ routing
Apr 16 04:51:43.477109 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 16 04:51:43.477119 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 16 04:51:43.477126 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Apr 16 04:51:43.477203 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 16 04:51:43.477294 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 16 04:51:43.477428 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 16 04:51:43.477439 kernel: vgaarb: loaded
Apr 16 04:51:43.477446 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 16 04:51:43.477456 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 16 04:51:43.477465 kernel: clocksource: Switched to clocksource kvm-clock
Apr 16 04:51:43.477475 kernel: VFS: Disk quotas dquot_6.6.0
Apr 16 04:51:43.477485 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 16 04:51:43.477495 kernel: pnp: PnP ACPI init
Apr 16 04:51:43.478544 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 16 04:51:43.478602 kernel: pnp: PnP ACPI: found 6 devices
Apr 16 04:51:43.478613 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 16 04:51:43.478625 kernel: NET: Registered PF_INET protocol family
Apr 16 04:51:43.478635 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 16 04:51:43.478646 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 16 04:51:43.478657 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 16 04:51:43.478669 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 16 04:51:43.478680 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 16 04:51:43.478706 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 16 04:51:43.478714 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 16 04:51:43.478724 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 16 04:51:43.478734 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 16 04:51:43.478745 kernel: NET: Registered PF_XDP protocol family
Apr 16 04:51:43.478853 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 16 04:51:43.478960 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 16 04:51:43.479031 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 16 04:51:43.479083 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 16 04:51:43.479178 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 16 04:51:43.479260 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Apr 16 04:51:43.479274 kernel: PCI: CLS 0 bytes, default 64
Apr 16 04:51:43.479281 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 16 04:51:43.479289 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 16 04:51:43.479300 kernel: Initialise system trusted keyrings
Apr 16 04:51:43.479309 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 16 04:51:43.479319 kernel: Key type asymmetric registered
Apr 16 04:51:43.479436 kernel: Asymmetric key parser 'x509' registered
Apr 16 04:51:43.479442 kernel: hrtimer: interrupt took 22206397 ns
Apr 16 04:51:43.479449 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 16 04:51:43.479455 kernel: io scheduler mq-deadline registered
Apr 16 04:51:43.479461 kernel: io scheduler kyber registered
Apr 16 04:51:43.479467 kernel: io scheduler bfq registered
Apr 16 04:51:43.479473 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 16 04:51:43.479481 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 16 04:51:43.479487 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 16 04:51:43.479504 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 16 04:51:43.479510 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 16 04:51:43.479516 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 16 04:51:43.479522 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 16 04:51:43.479528 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 16 04:51:43.479534 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 16 04:51:43.479689 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 16 04:51:43.479699 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 16 04:51:43.479847 kernel: rtc_cmos 00:04: registered as rtc0
Apr 16 04:51:43.479993 kernel: rtc_cmos 00:04: setting system clock to 2026-04-16T04:51:42 UTC (1776315102)
Apr 16 04:51:43.480079 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 16 04:51:43.480091 kernel: intel_pstate: CPU model not supported
Apr 16 04:51:43.480102 kernel: NET: Registered PF_INET6 protocol family
Apr 16 04:51:43.480112 kernel: Segment Routing with IPv6
Apr 16 04:51:43.480118 kernel: In-situ OAM (IOAM) with IPv6
Apr 16 04:51:43.480124 kernel: NET: Registered PF_PACKET protocol family
Apr 16 04:51:43.480130 kernel: Key type dns_resolver registered
Apr 16 04:51:43.480156 kernel: IPI shorthand broadcast: enabled
Apr 16 04:51:43.480162 kernel: sched_clock: Marking stable (4465009021, 290419589)->(4837197145, -81768535)
Apr 16 04:51:43.480168 kernel: registered taskstats version 1
Apr 16 04:51:43.480174 kernel: Loading compiled-in X.509 certificates
Apr 16 04:51:43.480180 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.81-flatcar: 25c2b596b475a2918f2ba6f953b0a89c09a0d0ab'
Apr 16 04:51:43.480186 kernel: Demotion targets for Node 0: null
Apr 16 04:51:43.480192 kernel: Key type .fscrypt registered
Apr 16 04:51:43.480198 kernel: Key type fscrypt-provisioning registered
Apr 16 04:51:43.480204 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 16 04:51:43.480219 kernel: ima: Allocated hash algorithm: sha1
Apr 16 04:51:43.480226 kernel: ima: No architecture policies found
Apr 16 04:51:43.480232 kernel: clk: Disabling unused clocks
Apr 16 04:51:43.480238 kernel: Warning: unable to open an initial console.
Apr 16 04:51:43.480244 kernel: Freeing unused kernel image (initmem) memory: 46224K
Apr 16 04:51:43.480250 kernel: Write protecting the kernel read-only data: 40960k
Apr 16 04:51:43.480255 kernel: Freeing unused kernel image (rodata/data gap) memory: 548K
Apr 16 04:51:43.480261 kernel: Run /init as init process
Apr 16 04:51:43.480267 kernel: with arguments:
Apr 16 04:51:43.480283 kernel: /init
Apr 16 04:51:43.480289 kernel: with environment:
Apr 16 04:51:43.480294 kernel: HOME=/
Apr 16 04:51:43.480300 kernel: TERM=linux
Apr 16 04:51:43.480307 systemd[1]: Successfully made /usr/ read-only.
Apr 16 04:51:43.480316 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 16 04:51:43.480334 systemd[1]: Detected virtualization kvm.
Apr 16 04:51:43.480378 systemd[1]: Detected architecture x86-64.
Apr 16 04:51:43.480408 systemd[1]: Running in initrd.
Apr 16 04:51:43.480414 systemd[1]: No hostname configured, using default hostname.
Apr 16 04:51:43.480421 systemd[1]: Hostname set to .
Apr 16 04:51:43.480427 systemd[1]: Initializing machine ID from VM UUID.
Apr 16 04:51:43.480433 systemd[1]: Queued start job for default target initrd.target.
Apr 16 04:51:43.480440 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 04:51:43.480457 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 04:51:43.480464 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 16 04:51:43.480471 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 16 04:51:43.480478 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 16 04:51:43.480485 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 16 04:51:43.480492 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 16 04:51:43.480500 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 16 04:51:43.480551 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 04:51:43.480563 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 16 04:51:43.480574 systemd[1]: Reached target paths.target - Path Units.
Apr 16 04:51:43.480598 systemd[1]: Reached target slices.target - Slice Units.
Apr 16 04:51:43.480608 systemd[1]: Reached target swap.target - Swaps.
Apr 16 04:51:43.480615 systemd[1]: Reached target timers.target - Timer Units.
Apr 16 04:51:43.480627 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 16 04:51:43.480637 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 16 04:51:43.480665 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 16 04:51:43.480677 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Apr 16 04:51:43.480688 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 16 04:51:43.480699 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 16 04:51:43.480710 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 16 04:51:43.480721 systemd[1]: Reached target sockets.target - Socket Units. Apr 16 04:51:43.480732 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 16 04:51:43.480753 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 16 04:51:43.480760 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 16 04:51:43.480767 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Apr 16 04:51:43.480774 systemd[1]: Starting systemd-fsck-usr.service... Apr 16 04:51:43.480780 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 16 04:51:43.480787 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 16 04:51:43.480793 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 16 04:51:43.480813 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 16 04:51:43.480825 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 16 04:51:43.480877 systemd-journald[201]: Collecting audit messages is disabled. Apr 16 04:51:43.480946 systemd[1]: Finished systemd-fsck-usr.service. Apr 16 04:51:43.480958 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Apr 16 04:51:43.480970 systemd-journald[201]: Journal started Apr 16 04:51:43.481008 systemd-journald[201]: Runtime Journal (/run/log/journal/f0834370c08a41268a8e45bce738101b) is 6M, max 48.2M, 42.2M free. Apr 16 04:51:43.478449 systemd-modules-load[203]: Inserted module 'overlay' Apr 16 04:51:43.486726 systemd[1]: Started systemd-journald.service - Journal Service. Apr 16 04:51:43.490792 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 16 04:51:43.684946 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 16 04:51:43.684977 kernel: Bridge firewalling registered Apr 16 04:51:43.524730 systemd-modules-load[203]: Inserted module 'br_netfilter' Apr 16 04:51:43.686608 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 16 04:51:43.701304 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 16 04:51:43.705919 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 04:51:43.717658 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 16 04:51:43.717970 systemd-tmpfiles[218]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Apr 16 04:51:43.725612 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 16 04:51:43.735339 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 16 04:51:43.749430 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 16 04:51:43.764239 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 16 04:51:43.767231 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Apr 16 04:51:43.771804 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 16 04:51:43.772411 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 16 04:51:43.783557 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 16 04:51:43.878148 dracut-cmdline[243]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=15b40c09f238fba45b5bb3e18ef7e289d4e557e0500075f5731dd7eaa53962ae Apr 16 04:51:43.897373 systemd-resolved[242]: Positive Trust Anchors: Apr 16 04:51:43.897416 systemd-resolved[242]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 16 04:51:43.897442 systemd-resolved[242]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 16 04:51:43.900530 systemd-resolved[242]: Defaulting to hostname 'linux'. Apr 16 04:51:43.902757 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 16 04:51:43.919264 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 16 04:51:44.047137 kernel: SCSI subsystem initialized Apr 16 04:51:44.065591 kernel: Loading iSCSI transport class v2.0-870. 
Apr 16 04:51:44.093629 kernel: iscsi: registered transport (tcp) Apr 16 04:51:44.135499 kernel: iscsi: registered transport (qla4xxx) Apr 16 04:51:44.135649 kernel: QLogic iSCSI HBA Driver Apr 16 04:51:44.198190 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 16 04:51:44.289001 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 16 04:51:44.316459 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 16 04:51:44.539247 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 16 04:51:44.549469 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 16 04:51:44.700827 kernel: raid6: avx512x4 gen() 21372 MB/s Apr 16 04:51:44.724290 kernel: raid6: avx512x2 gen() 25894 MB/s Apr 16 04:51:44.742534 kernel: raid6: avx512x1 gen() 31494 MB/s Apr 16 04:51:44.760766 kernel: raid6: avx2x4 gen() 29607 MB/s Apr 16 04:51:44.784007 kernel: raid6: avx2x2 gen() 32069 MB/s Apr 16 04:51:44.802442 kernel: raid6: avx2x1 gen() 20356 MB/s Apr 16 04:51:44.802658 kernel: raid6: using algorithm avx2x2 gen() 32069 MB/s Apr 16 04:51:44.823697 kernel: raid6: .... xor() 15720 MB/s, rmw enabled Apr 16 04:51:44.823876 kernel: raid6: using avx512x2 recovery algorithm Apr 16 04:51:44.861162 kernel: xor: automatically using best checksumming function avx Apr 16 04:51:45.169262 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 16 04:51:45.196986 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 16 04:51:45.203029 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 16 04:51:45.261488 systemd-udevd[452]: Using default interface naming scheme 'v255'. Apr 16 04:51:45.264967 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 16 04:51:45.271643 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Apr 16 04:51:45.391284 dracut-pre-trigger[461]: rd.md=0: removing MD RAID activation Apr 16 04:51:45.430099 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 16 04:51:45.431358 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 16 04:51:45.531183 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 16 04:51:45.542129 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 16 04:51:45.578955 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 16 04:51:45.586721 kernel: cryptd: max_cpu_qlen set to 1000 Apr 16 04:51:45.593219 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 16 04:51:45.610498 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 16 04:51:45.610674 kernel: GPT:9289727 != 19775487 Apr 16 04:51:45.610709 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 16 04:51:45.610717 kernel: GPT:9289727 != 19775487 Apr 16 04:51:45.610724 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 16 04:51:45.610732 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 16 04:51:45.610740 kernel: AES CTR mode by8 optimization enabled Apr 16 04:51:45.680955 kernel: libata version 3.00 loaded. Apr 16 04:51:45.699668 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Apr 16 04:51:45.732691 kernel: ahci 0000:00:1f.2: version 3.0 Apr 16 04:51:45.737528 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 16 04:51:45.745545 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Apr 16 04:51:45.749845 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Apr 16 04:51:45.750210 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 16 04:51:45.749557 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Apr 16 04:51:45.749654 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 04:51:45.759504 kernel: scsi host0: ahci Apr 16 04:51:45.753585 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 16 04:51:45.766561 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 16 04:51:45.770021 kernel: scsi host1: ahci Apr 16 04:51:45.771447 kernel: scsi host2: ahci Apr 16 04:51:45.772461 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Apr 16 04:51:45.776507 kernel: scsi host3: ahci Apr 16 04:51:45.776718 kernel: scsi host4: ahci Apr 16 04:51:45.779441 kernel: scsi host5: ahci Apr 16 04:51:45.779796 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 33 lpm-pol 1 Apr 16 04:51:45.779806 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 33 lpm-pol 1 Apr 16 04:51:45.782811 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 33 lpm-pol 1 Apr 16 04:51:45.782954 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 33 lpm-pol 1 Apr 16 04:51:45.786650 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 33 lpm-pol 1 Apr 16 04:51:45.787799 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 33 lpm-pol 1 Apr 16 04:51:45.814581 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 16 04:51:45.825134 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 16 04:51:45.995077 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 04:51:46.010601 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 16 04:51:46.022174 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Apr 16 04:51:46.024632 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 16 04:51:46.030192 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 16 04:51:46.129291 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 16 04:51:46.129489 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 16 04:51:46.129499 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 16 04:51:46.155728 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 16 04:51:46.157492 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 16 04:51:46.157633 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 16 04:51:46.167687 kernel: ata3.00: LPM support broken, forcing max_power Apr 16 04:51:46.168017 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 16 04:51:46.168169 kernel: ata3.00: applying bridge limits Apr 16 04:51:46.172813 kernel: ata3.00: LPM support broken, forcing max_power Apr 16 04:51:46.173021 kernel: ata3.00: configured for UDMA/100 Apr 16 04:51:46.179638 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 16 04:51:46.210932 disk-uuid[644]: Primary Header is updated. Apr 16 04:51:46.210932 disk-uuid[644]: Secondary Entries is updated. Apr 16 04:51:46.210932 disk-uuid[644]: Secondary Header is updated. Apr 16 04:51:46.261972 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 16 04:51:46.273372 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 16 04:51:46.332687 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 16 04:51:46.333016 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 16 04:51:46.349935 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 16 04:51:46.783222 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 16 04:51:46.798196 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Apr 16 04:51:46.804019 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 16 04:51:46.817713 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 16 04:51:46.877726 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 16 04:51:46.921799 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 16 04:51:47.288927 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 16 04:51:47.289247 disk-uuid[645]: The operation has completed successfully. Apr 16 04:51:47.337331 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 16 04:51:47.337484 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 16 04:51:47.387671 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 16 04:51:47.428320 sh[674]: Success Apr 16 04:51:47.465352 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 16 04:51:47.465551 kernel: device-mapper: uevent: version 1.0.3 Apr 16 04:51:47.469397 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Apr 16 04:51:47.489325 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 16 04:51:47.639737 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 16 04:51:47.648463 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 16 04:51:47.667648 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 16 04:51:47.675035 kernel: BTRFS: device fsid 20ab7e7c-5d1e-4cd5-bec1-5b111d7138f2 devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (686) Apr 16 04:51:47.678655 kernel: BTRFS info (device dm-0): first mount of filesystem 20ab7e7c-5d1e-4cd5-bec1-5b111d7138f2 Apr 16 04:51:47.678711 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 16 04:51:47.689507 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Apr 16 04:51:47.689674 kernel: BTRFS info (device dm-0 state E): enabling free space tree Apr 16 04:51:47.692839 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 16 04:51:47.697695 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Apr 16 04:51:47.710572 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 16 04:51:47.715719 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 16 04:51:47.721608 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 16 04:51:47.760965 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (719) Apr 16 04:51:47.761083 kernel: BTRFS info (device vda6): first mount of filesystem 5c620efa-1d1f-4cb4-918b-5f9d92be2a74 Apr 16 04:51:47.763621 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 16 04:51:47.768609 kernel: BTRFS info (device vda6): turning on async discard Apr 16 04:51:47.768734 kernel: BTRFS info (device vda6): enabling free space tree Apr 16 04:51:47.774007 kernel: BTRFS info (device vda6): last unmount of filesystem 5c620efa-1d1f-4cb4-918b-5f9d92be2a74 Apr 16 04:51:47.775380 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 16 04:51:47.784807 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 16 04:51:48.220559 ignition[766]: Ignition 2.22.0 Apr 16 04:51:48.220577 ignition[766]: Stage: fetch-offline Apr 16 04:51:48.220612 ignition[766]: no configs at "/usr/lib/ignition/base.d" Apr 16 04:51:48.220619 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 16 04:51:48.220694 ignition[766]: parsed url from cmdline: "" Apr 16 04:51:48.220696 ignition[766]: no config URL provided Apr 16 04:51:48.220700 ignition[766]: reading system config file "/usr/lib/ignition/user.ign" Apr 16 04:51:48.220706 ignition[766]: no config at "/usr/lib/ignition/user.ign" Apr 16 04:51:48.220767 ignition[766]: op(1): [started] loading QEMU firmware config module Apr 16 04:51:48.220770 ignition[766]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 16 04:51:48.242166 ignition[766]: op(1): [finished] loading QEMU firmware config module Apr 16 04:51:48.250344 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 16 04:51:48.257626 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 16 04:51:48.327592 systemd-networkd[866]: lo: Link UP Apr 16 04:51:48.327612 systemd-networkd[866]: lo: Gained carrier Apr 16 04:51:48.328633 systemd-networkd[866]: Enumeration completed Apr 16 04:51:48.328972 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 16 04:51:48.329246 systemd-networkd[866]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 16 04:51:48.329249 systemd-networkd[866]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 16 04:51:48.329878 systemd-networkd[866]: eth0: Link UP Apr 16 04:51:48.330067 systemd-networkd[866]: eth0: Gained carrier Apr 16 04:51:48.330077 systemd-networkd[866]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 16 04:51:48.332814 systemd[1]: Reached target network.target - Network. Apr 16 04:51:48.391861 systemd-networkd[866]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 16 04:51:48.485381 ignition[766]: parsing config with SHA512: 09093923cf999e96f40ca4bb94b2deb9ba430fffd00ae84e3f749afec399815607a6975d26aea9c57d98fd3ac342251a5d58f52f2d11e64faeed7050c9ae451e Apr 16 04:51:48.497034 unknown[766]: fetched base config from "system" Apr 16 04:51:48.497820 unknown[766]: fetched user config from "qemu" Apr 16 04:51:48.498506 ignition[766]: fetch-offline: fetch-offline passed Apr 16 04:51:48.500732 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 16 04:51:48.498601 ignition[766]: Ignition finished successfully Apr 16 04:51:48.503310 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 16 04:51:48.504234 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 16 04:51:48.583359 ignition[871]: Ignition 2.22.0 Apr 16 04:51:48.583385 ignition[871]: Stage: kargs Apr 16 04:51:48.583539 ignition[871]: no configs at "/usr/lib/ignition/base.d" Apr 16 04:51:48.583547 ignition[871]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 16 04:51:48.595440 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 16 04:51:48.585365 ignition[871]: kargs: kargs passed Apr 16 04:51:48.601162 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 16 04:51:48.585440 ignition[871]: Ignition finished successfully Apr 16 04:51:48.663216 ignition[880]: Ignition 2.22.0 Apr 16 04:51:48.663238 ignition[880]: Stage: disks Apr 16 04:51:48.663601 ignition[880]: no configs at "/usr/lib/ignition/base.d" Apr 16 04:51:48.663616 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 16 04:51:48.672396 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Apr 16 04:51:48.665389 ignition[880]: disks: disks passed Apr 16 04:51:48.665544 ignition[880]: Ignition finished successfully Apr 16 04:51:48.677127 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 16 04:51:48.680928 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 16 04:51:48.684497 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 16 04:51:48.687771 systemd[1]: Reached target sysinit.target - System Initialization. Apr 16 04:51:48.693077 systemd[1]: Reached target basic.target - Basic System. Apr 16 04:51:48.703066 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 16 04:51:48.753747 systemd-fsck[890]: ROOT: clean, 15/553520 files, 52789/553472 blocks Apr 16 04:51:48.767766 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 16 04:51:48.769742 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 16 04:51:48.998980 kernel: EXT4-fs (vda9): mounted filesystem 75cd5b5e-229f-474b-8de5-870bc4bccaf1 r/w with ordered data mode. Quota mode: none. Apr 16 04:51:49.000588 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 16 04:51:49.005720 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 16 04:51:49.012814 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 16 04:51:49.030662 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 16 04:51:49.032498 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 16 04:51:49.032545 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Apr 16 04:51:49.046625 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (898) Apr 16 04:51:49.032570 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 16 04:51:49.051708 kernel: BTRFS info (device vda6): first mount of filesystem 5c620efa-1d1f-4cb4-918b-5f9d92be2a74 Apr 16 04:51:49.051728 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 16 04:51:49.055325 kernel: BTRFS info (device vda6): turning on async discard Apr 16 04:51:49.055383 kernel: BTRFS info (device vda6): enabling free space tree Apr 16 04:51:49.056773 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 16 04:51:49.064691 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 16 04:51:49.071482 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 16 04:51:49.136185 initrd-setup-root[922]: cut: /sysroot/etc/passwd: No such file or directory Apr 16 04:51:49.145194 initrd-setup-root[929]: cut: /sysroot/etc/group: No such file or directory Apr 16 04:51:49.150118 initrd-setup-root[936]: cut: /sysroot/etc/shadow: No such file or directory Apr 16 04:51:49.155678 initrd-setup-root[943]: cut: /sysroot/etc/gshadow: No such file or directory Apr 16 04:51:49.263834 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 16 04:51:49.269678 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 16 04:51:49.272375 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 16 04:51:49.296793 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 16 04:51:49.298587 kernel: BTRFS info (device vda6): last unmount of filesystem 5c620efa-1d1f-4cb4-918b-5f9d92be2a74 Apr 16 04:51:49.312243 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Apr 16 04:51:49.418477 ignition[1012]: INFO : Ignition 2.22.0 Apr 16 04:51:49.420552 ignition[1012]: INFO : Stage: mount Apr 16 04:51:49.421634 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 16 04:51:49.421634 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 16 04:51:49.426651 ignition[1012]: INFO : mount: mount passed Apr 16 04:51:49.428329 ignition[1012]: INFO : Ignition finished successfully Apr 16 04:51:49.431565 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 16 04:51:49.434594 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 16 04:51:50.007318 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 16 04:51:50.044365 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1024) Apr 16 04:51:50.048017 kernel: BTRFS info (device vda6): first mount of filesystem 5c620efa-1d1f-4cb4-918b-5f9d92be2a74 Apr 16 04:51:50.048150 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 16 04:51:50.052219 kernel: BTRFS info (device vda6): turning on async discard Apr 16 04:51:50.052236 kernel: BTRFS info (device vda6): enabling free space tree Apr 16 04:51:50.054118 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 16 04:51:50.100352 ignition[1041]: INFO : Ignition 2.22.0 Apr 16 04:51:50.100352 ignition[1041]: INFO : Stage: files Apr 16 04:51:50.100352 ignition[1041]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 16 04:51:50.100352 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 16 04:51:50.107039 ignition[1041]: DEBUG : files: compiled without relabeling support, skipping Apr 16 04:51:50.107039 ignition[1041]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 16 04:51:50.107039 ignition[1041]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 16 04:51:50.107039 ignition[1041]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 16 04:51:50.107039 ignition[1041]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 16 04:51:50.107039 ignition[1041]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 16 04:51:50.107039 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 16 04:51:50.107039 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 16 04:51:50.105393 unknown[1041]: wrote ssh authorized keys file for user: core Apr 16 04:51:50.177688 systemd-networkd[866]: eth0: Gained IPv6LL Apr 16 04:51:50.180134 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 16 04:51:50.270213 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 16 04:51:50.270213 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 16 04:51:50.276937 ignition[1041]: INFO : files: createFilesystemsFiles: 
createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Apr 16 04:51:50.276937 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 16 04:51:50.276937 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 16 04:51:50.276937 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 16 04:51:50.276937 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 16 04:51:50.276937 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 16 04:51:50.276937 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 16 04:51:50.276937 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 16 04:51:50.276937 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 16 04:51:50.276937 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 16 04:51:50.276937 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 16 04:51:50.276937 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 16 04:51:50.276937 ignition[1041]: INFO : files: 
createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Apr 16 04:51:50.404098 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 16 04:51:50.681048 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 16 04:51:50.681048 ignition[1041]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 16 04:51:50.689789 ignition[1041]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 16 04:51:50.689789 ignition[1041]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 16 04:51:50.689789 ignition[1041]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 16 04:51:50.689789 ignition[1041]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Apr 16 04:51:50.689789 ignition[1041]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 16 04:51:50.689789 ignition[1041]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 16 04:51:50.689789 ignition[1041]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Apr 16 04:51:50.689789 ignition[1041]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Apr 16 04:51:50.716812 ignition[1041]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Apr 16 04:51:50.720873 ignition[1041]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Apr 16 04:51:50.724203 
ignition[1041]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Apr 16 04:51:50.724203 ignition[1041]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Apr 16 04:51:50.724203 ignition[1041]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Apr 16 04:51:50.724203 ignition[1041]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 16 04:51:50.724203 ignition[1041]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 16 04:51:50.724203 ignition[1041]: INFO : files: files passed Apr 16 04:51:50.724203 ignition[1041]: INFO : Ignition finished successfully Apr 16 04:51:50.731875 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 16 04:51:50.734707 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 16 04:51:50.758165 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 16 04:51:50.767760 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 16 04:51:50.786881 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 16 04:51:50.799537 initrd-setup-root-after-ignition[1070]: grep: /sysroot/oem/oem-release: No such file or directory Apr 16 04:51:50.804587 initrd-setup-root-after-ignition[1072]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 16 04:51:50.806985 initrd-setup-root-after-ignition[1076]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 16 04:51:50.809089 initrd-setup-root-after-ignition[1072]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 16 04:51:50.815312 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
Apr 16 04:51:50.819469 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 16 04:51:50.824408 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 16 04:51:50.961691 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 16 04:51:50.961950 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 16 04:51:50.970477 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 16 04:51:50.974753 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 16 04:51:50.983449 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 16 04:51:50.993783 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 16 04:51:51.037279 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 16 04:51:51.040448 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 16 04:51:51.073394 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 16 04:51:51.080146 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 04:51:51.082033 systemd[1]: Stopped target timers.target - Timer Units.
Apr 16 04:51:51.088520 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 16 04:51:51.094156 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 16 04:51:51.106991 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 16 04:51:51.109748 systemd[1]: Stopped target basic.target - Basic System.
Apr 16 04:51:51.116295 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 16 04:51:51.121850 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 16 04:51:51.132781 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 16 04:51:51.137219 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Apr 16 04:51:51.147260 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 16 04:51:51.153191 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 16 04:51:51.154372 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 16 04:51:51.165161 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 16 04:51:51.170859 systemd[1]: Stopped target swap.target - Swaps.
Apr 16 04:51:51.174559 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 16 04:51:51.174823 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 16 04:51:51.182686 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 16 04:51:51.182883 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 04:51:51.188023 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 16 04:51:51.190477 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 04:51:51.196714 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 16 04:51:51.197019 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 16 04:51:51.202440 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 16 04:51:51.203338 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 16 04:51:51.207975 systemd[1]: Stopped target paths.target - Path Units.
Apr 16 04:51:51.211645 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 16 04:51:51.218562 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 04:51:51.227493 systemd[1]: Stopped target slices.target - Slice Units.
Apr 16 04:51:51.227771 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 16 04:51:51.234021 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 16 04:51:51.234454 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 16 04:51:51.238532 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 16 04:51:51.240151 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 16 04:51:51.245490 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 16 04:51:51.246589 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 16 04:51:51.253804 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 16 04:51:51.255716 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 16 04:51:51.263783 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 16 04:51:51.272724 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 16 04:51:51.274866 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 16 04:51:51.275278 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 04:51:51.277753 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 16 04:51:51.277982 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 16 04:51:51.301362 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 16 04:51:51.302938 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 16 04:51:51.320954 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 16 04:51:51.330535 ignition[1096]: INFO : Ignition 2.22.0
Apr 16 04:51:51.330535 ignition[1096]: INFO : Stage: umount
Apr 16 04:51:51.333469 ignition[1096]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 04:51:51.333469 ignition[1096]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 16 04:51:51.333469 ignition[1096]: INFO : umount: umount passed
Apr 16 04:51:51.333469 ignition[1096]: INFO : Ignition finished successfully
Apr 16 04:51:51.340813 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 16 04:51:51.340998 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 16 04:51:51.349624 systemd[1]: Stopped target network.target - Network.
Apr 16 04:51:51.349777 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 16 04:51:51.349846 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 16 04:51:51.352118 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 16 04:51:51.352158 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 16 04:51:51.358417 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 16 04:51:51.359953 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 16 04:51:51.363083 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 16 04:51:51.363123 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 16 04:51:51.366796 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 16 04:51:51.367843 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 16 04:51:51.374288 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 16 04:51:51.374387 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 16 04:51:51.386331 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 16 04:51:51.386498 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 16 04:51:51.398611 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Apr 16 04:51:51.402449 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 16 04:51:51.402661 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 16 04:51:51.458549 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Apr 16 04:51:51.469624 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Apr 16 04:51:51.469881 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 16 04:51:51.469966 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 04:51:51.475303 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 16 04:51:51.475366 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 16 04:51:51.482654 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 16 04:51:51.484707 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 16 04:51:51.484851 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 16 04:51:51.488498 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 16 04:51:51.488560 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 16 04:51:51.501377 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 16 04:51:51.501465 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 16 04:51:51.508498 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 16 04:51:51.508580 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 04:51:51.519239 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 04:51:51.527716 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Apr 16 04:51:51.528049 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Apr 16 04:51:51.550137 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 16 04:51:51.554506 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 04:51:51.561672 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 16 04:51:51.561798 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 16 04:51:51.571323 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 16 04:51:51.571451 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 16 04:51:51.575814 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 16 04:51:51.575968 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 04:51:51.581507 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 16 04:51:51.581609 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 16 04:51:51.588268 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 16 04:51:51.588393 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 16 04:51:51.595270 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 16 04:51:51.595445 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 04:51:51.611843 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 16 04:51:51.617059 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Apr 16 04:51:51.619601 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Apr 16 04:51:51.624515 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 16 04:51:51.624600 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 04:51:51.632594 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 16 04:51:51.632671 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 16 04:51:51.638822 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 16 04:51:51.638950 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 04:51:51.643570 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 16 04:51:51.643609 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 04:51:51.655630 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Apr 16 04:51:51.655678 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Apr 16 04:51:51.655722 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Apr 16 04:51:51.655748 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 16 04:51:51.668418 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 16 04:51:51.668598 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 16 04:51:51.673302 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 16 04:51:51.681987 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 16 04:51:51.768779 systemd[1]: Switching root.
Apr 16 04:51:51.827360 systemd-journald[201]: Received SIGTERM from PID 1 (systemd).
Apr 16 04:51:51.827551 systemd-journald[201]: Journal stopped
Apr 16 04:51:54.052800 kernel: SELinux: policy capability network_peer_controls=1
Apr 16 04:51:54.052862 kernel: SELinux: policy capability open_perms=1
Apr 16 04:51:54.052877 kernel: SELinux: policy capability extended_socket_class=1
Apr 16 04:51:54.053358 kernel: SELinux: policy capability always_check_network=0
Apr 16 04:51:54.053413 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 16 04:51:54.053465 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 16 04:51:54.053479 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 16 04:51:54.054152 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 16 04:51:54.054172 kernel: SELinux: policy capability userspace_initial_context=0
Apr 16 04:51:54.054186 kernel: audit: type=1403 audit(1776315112.160:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 16 04:51:54.054208 systemd[1]: Successfully loaded SELinux policy in 128.911ms.
Apr 16 04:51:54.054244 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.544ms.
Apr 16 04:51:54.054260 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 16 04:51:54.054274 systemd[1]: Detected virtualization kvm.
Apr 16 04:51:54.054289 systemd[1]: Detected architecture x86-64.
Apr 16 04:51:54.054303 systemd[1]: Detected first boot.
Apr 16 04:51:54.054336 systemd[1]: Initializing machine ID from VM UUID.
Apr 16 04:51:54.054350 zram_generator::config[1141]: No configuration found.
Apr 16 04:51:54.054370 kernel: Guest personality initialized and is inactive
Apr 16 04:51:54.054383 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Apr 16 04:51:54.054396 kernel: Initialized host personality
Apr 16 04:51:54.054409 kernel: NET: Registered PF_VSOCK protocol family
Apr 16 04:51:54.054423 systemd[1]: Populated /etc with preset unit settings.
Apr 16 04:51:54.054455 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Apr 16 04:51:54.054481 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 16 04:51:54.054495 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 16 04:51:54.054509 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 16 04:51:54.054525 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 16 04:51:54.054539 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 16 04:51:54.054552 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 16 04:51:54.054565 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 16 04:51:54.054579 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 16 04:51:54.054593 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 16 04:51:54.054620 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 16 04:51:54.054635 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 16 04:51:54.054649 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 04:51:54.054662 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 04:51:54.054678 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 16 04:51:54.054692 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 16 04:51:54.054707 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 16 04:51:54.054721 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 16 04:51:54.054746 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 16 04:51:54.054760 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 04:51:54.054774 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 16 04:51:54.054788 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 16 04:51:54.054814 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 16 04:51:54.054828 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 16 04:51:54.054842 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 16 04:51:54.054855 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 04:51:54.054882 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 16 04:51:54.054950 systemd[1]: Reached target slices.target - Slice Units.
Apr 16 04:51:54.054964 systemd[1]: Reached target swap.target - Swaps.
Apr 16 04:51:54.054978 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 16 04:51:54.054992 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 16 04:51:54.055006 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Apr 16 04:51:54.055020 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 04:51:54.055034 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 16 04:51:54.055047 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 04:51:54.055062 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 16 04:51:54.055094 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 16 04:51:54.055107 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 16 04:51:54.055121 systemd[1]: Mounting media.mount - External Media Directory...
Apr 16 04:51:54.055134 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 04:51:54.055149 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 16 04:51:54.055164 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 16 04:51:54.055177 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 16 04:51:54.055191 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 16 04:51:54.055216 systemd[1]: Reached target machines.target - Containers.
Apr 16 04:51:54.055230 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 16 04:51:54.055244 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 04:51:54.055258 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 16 04:51:54.055272 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 16 04:51:54.055285 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 16 04:51:54.055298 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 16 04:51:54.055311 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 16 04:51:54.055325 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 16 04:51:54.055352 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 16 04:51:54.055367 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 16 04:51:54.055380 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 16 04:51:54.055394 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 16 04:51:54.055409 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 16 04:51:54.055423 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 16 04:51:54.055452 kernel: fuse: init (API version 7.41)
Apr 16 04:51:54.055466 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 16 04:51:54.055492 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 16 04:51:54.055506 kernel: ACPI: bus type drm_connector registered
Apr 16 04:51:54.055519 kernel: loop: module loaded
Apr 16 04:51:54.055533 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 16 04:51:54.055546 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 16 04:51:54.055560 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 16 04:51:54.055602 systemd-journald[1226]: Collecting audit messages is disabled.
Apr 16 04:51:54.055645 systemd-journald[1226]: Journal started
Apr 16 04:51:54.055671 systemd-journald[1226]: Runtime Journal (/run/log/journal/f0834370c08a41268a8e45bce738101b) is 6M, max 48.2M, 42.2M free.
Apr 16 04:51:53.400140 systemd[1]: Queued start job for default target multi-user.target.
Apr 16 04:51:53.482850 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 16 04:51:53.485865 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 16 04:51:54.060878 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Apr 16 04:51:54.063949 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 16 04:51:54.067365 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 16 04:51:54.071249 systemd[1]: Stopped verity-setup.service.
Apr 16 04:51:54.074974 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 04:51:54.083557 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 16 04:51:54.087765 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 16 04:51:54.089852 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 16 04:51:54.091464 systemd[1]: Mounted media.mount - External Media Directory.
Apr 16 04:51:54.094415 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 16 04:51:54.096114 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 16 04:51:54.100756 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 16 04:51:54.106665 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 16 04:51:54.110975 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 04:51:54.122191 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 16 04:51:54.123507 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 16 04:51:54.127071 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 16 04:51:54.127262 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 16 04:51:54.131988 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 16 04:51:54.132297 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 16 04:51:54.137096 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 16 04:51:54.139410 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 16 04:51:54.150879 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 16 04:51:54.152656 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 16 04:51:54.162319 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 16 04:51:54.162661 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 16 04:51:54.168718 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 16 04:51:54.179842 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 16 04:51:54.187802 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 16 04:51:54.194070 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Apr 16 04:51:54.225998 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 04:51:54.233387 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 16 04:51:54.240028 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 16 04:51:54.250521 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 16 04:51:54.252385 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 16 04:51:54.253021 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 16 04:51:54.257504 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Apr 16 04:51:54.262351 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 16 04:51:54.266168 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 16 04:51:54.270867 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 16 04:51:54.276255 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 16 04:51:54.280159 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 16 04:51:54.281485 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 16 04:51:54.286140 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 16 04:51:54.292589 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 16 04:51:54.292771 systemd-journald[1226]: Time spent on flushing to /var/log/journal/f0834370c08a41268a8e45bce738101b is 61.220ms for 986 entries.
Apr 16 04:51:54.292771 systemd-journald[1226]: System Journal (/var/log/journal/f0834370c08a41268a8e45bce738101b) is 8M, max 195.6M, 187.6M free.
Apr 16 04:51:54.364594 systemd-journald[1226]: Received client request to flush runtime journal.
Apr 16 04:51:54.364628 kernel: loop0: detected capacity change from 0 to 110984
Apr 16 04:51:54.317532 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 16 04:51:54.326459 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 16 04:51:54.333186 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 16 04:51:54.335742 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 16 04:51:54.346534 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 16 04:51:54.353011 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 16 04:51:54.359353 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Apr 16 04:51:54.372355 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 16 04:51:54.391690 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 16 04:51:54.401961 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Apr 16 04:51:54.402165 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Apr 16 04:51:54.405506 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Apr 16 04:51:54.411346 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 16 04:51:54.458956 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 16 04:51:54.461673 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 16 04:51:54.484992 kernel: loop1: detected capacity change from 0 to 228704
Apr 16 04:51:54.500720 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 16 04:51:54.534944 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 16 04:51:54.539101 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 16 04:51:54.541791 kernel: loop2: detected capacity change from 0 to 128560
Apr 16 04:51:54.578221 systemd-tmpfiles[1284]: ACLs are not supported, ignoring.
Apr 16 04:51:54.578278 systemd-tmpfiles[1284]: ACLs are not supported, ignoring.
Apr 16 04:51:54.586678 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 04:51:54.595265 kernel: loop3: detected capacity change from 0 to 110984
Apr 16 04:51:54.618946 kernel: loop4: detected capacity change from 0 to 228704
Apr 16 04:51:54.632075 kernel: loop5: detected capacity change from 0 to 128560
Apr 16 04:51:54.647566 (sd-merge)[1288]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 16 04:51:54.648314 (sd-merge)[1288]: Merged extensions into '/usr'.
Apr 16 04:51:54.652259 systemd[1]: Reload requested from client PID 1261 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 16 04:51:54.652272 systemd[1]: Reloading...
Apr 16 04:51:54.746404 zram_generator::config[1314]: No configuration found.
Apr 16 04:51:54.948480 ldconfig[1256]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 16 04:51:55.101782 systemd[1]: Reloading finished in 449 ms.
Apr 16 04:51:55.205355 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 16 04:51:55.210061 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 16 04:51:55.223069 systemd[1]: Starting ensure-sysext.service...
Apr 16 04:51:55.225409 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 16 04:51:55.250696 systemd-tmpfiles[1353]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Apr 16 04:51:55.250734 systemd-tmpfiles[1353]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Apr 16 04:51:55.250917 systemd-tmpfiles[1353]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 16 04:51:55.251064 systemd-tmpfiles[1353]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 16 04:51:55.251561 systemd[1]: Reload requested from client PID 1352 ('systemctl') (unit ensure-sysext.service)...
Apr 16 04:51:55.251584 systemd[1]: Reloading...
Apr 16 04:51:55.251845 systemd-tmpfiles[1353]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 16 04:51:55.252059 systemd-tmpfiles[1353]: ACLs are not supported, ignoring. Apr 16 04:51:55.252110 systemd-tmpfiles[1353]: ACLs are not supported, ignoring. Apr 16 04:51:55.254405 systemd-tmpfiles[1353]: Detected autofs mount point /boot during canonicalization of boot. Apr 16 04:51:55.254420 systemd-tmpfiles[1353]: Skipping /boot Apr 16 04:51:55.259318 systemd-tmpfiles[1353]: Detected autofs mount point /boot during canonicalization of boot. Apr 16 04:51:55.259339 systemd-tmpfiles[1353]: Skipping /boot Apr 16 04:51:55.296017 zram_generator::config[1380]: No configuration found. Apr 16 04:51:55.454722 systemd[1]: Reloading finished in 202 ms. Apr 16 04:51:55.471935 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 16 04:51:55.480055 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 16 04:51:55.504562 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 16 04:51:55.518479 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 16 04:51:55.531936 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 16 04:51:55.539029 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 16 04:51:55.542545 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 16 04:51:55.550071 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 16 04:51:55.558414 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 16 04:51:55.562989 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Apr 16 04:51:55.563114 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 16 04:51:55.571594 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 16 04:51:55.573873 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 16 04:51:55.581212 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 16 04:51:55.582830 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 16 04:51:55.582956 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 16 04:51:55.583037 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 04:51:55.584642 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 16 04:51:55.591076 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 16 04:51:55.591224 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 16 04:51:55.593648 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 16 04:51:55.594960 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 16 04:51:55.602770 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 16 04:51:55.604363 systemd-udevd[1429]: Using default interface naming scheme 'v255'. Apr 16 04:51:55.604797 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Apr 16 04:51:55.605534 augenrules[1449]: No rules Apr 16 04:51:55.610938 systemd[1]: audit-rules.service: Deactivated successfully. Apr 16 04:51:55.612870 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 16 04:51:55.616127 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 16 04:51:55.627179 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 16 04:51:55.629404 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 16 04:51:55.637331 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 16 04:51:55.637521 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 16 04:51:55.639501 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 16 04:51:55.642725 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 16 04:51:55.656564 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 04:51:55.656676 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 16 04:51:55.659129 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 16 04:51:55.664061 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 16 04:51:55.669749 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 16 04:51:55.681155 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 16 04:51:55.681421 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Apr 16 04:51:55.684183 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 16 04:51:55.684270 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 16 04:51:55.684332 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 04:51:55.687830 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 16 04:51:55.692051 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 16 04:51:55.697014 systemd-resolved[1423]: Positive Trust Anchors: Apr 16 04:51:55.697044 systemd-resolved[1423]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 16 04:51:55.697070 systemd-resolved[1423]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 16 04:51:55.704238 systemd-resolved[1423]: Defaulting to hostname 'linux'. Apr 16 04:51:55.705586 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 16 04:51:55.712333 systemd[1]: Finished ensure-sysext.service. Apr 16 04:51:55.713864 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 16 04:51:55.716935 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Apr 16 04:51:55.719254 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 16 04:51:55.719649 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 16 04:51:55.730226 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 16 04:51:55.739934 kernel: mousedev: PS/2 mouse device common for all mice Apr 16 04:51:55.746235 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 16 04:51:55.749807 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 16 04:51:55.752089 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 04:51:55.754808 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 16 04:51:55.756237 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 16 04:51:55.757321 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 16 04:51:55.760126 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 16 04:51:55.761697 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 16 04:51:55.766292 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 16 04:51:55.767994 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 16 04:51:55.768050 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 16 04:51:55.771132 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Apr 16 04:51:55.773105 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 16 04:51:55.773175 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 04:51:55.773635 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 16 04:51:55.773781 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 16 04:51:55.779486 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 16 04:51:55.781322 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 16 04:51:55.781551 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 16 04:51:55.789705 augenrules[1511]: /sbin/augenrules: No change Apr 16 04:51:55.792254 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 16 04:51:55.801165 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 16 04:51:55.802972 augenrules[1536]: No rules Apr 16 04:51:55.803210 systemd[1]: audit-rules.service: Deactivated successfully. Apr 16 04:51:55.804707 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 16 04:51:55.822073 kernel: ACPI: button: Power Button [PWRF] Apr 16 04:51:55.844011 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 16 04:51:55.847052 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 16 04:51:55.850503 systemd-networkd[1497]: lo: Link UP Apr 16 04:51:55.850519 systemd-networkd[1497]: lo: Gained carrier Apr 16 04:51:55.852599 systemd-networkd[1497]: Enumeration completed Apr 16 04:51:55.852987 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Apr 16 04:51:55.855915 systemd[1]: Reached target network.target - Network. Apr 16 04:51:55.860033 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Apr 16 04:51:55.862807 systemd-networkd[1497]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 16 04:51:55.862810 systemd-networkd[1497]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 16 04:51:55.863149 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 16 04:51:55.866205 systemd-networkd[1497]: eth0: Link UP Apr 16 04:51:55.866291 systemd-networkd[1497]: eth0: Gained carrier Apr 16 04:51:55.866335 systemd-networkd[1497]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 16 04:51:55.882972 systemd-networkd[1497]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 16 04:51:55.919362 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 16 04:51:55.937844 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Apr 16 04:51:56.048941 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 16 04:51:56.049113 systemd[1]: Reached target time-set.target - System Time Set. Apr 16 04:51:57.062677 systemd-resolved[1423]: Clock change detected. Flushing caches. Apr 16 04:51:57.064078 systemd-timesyncd[1517]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 16 04:51:57.064168 systemd-timesyncd[1517]: Initial clock synchronization to Thu 2026-04-16 04:51:57.062533 UTC. Apr 16 04:51:57.176120 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 04:51:57.178295 systemd[1]: Reached target sysinit.target - System Initialization. 
Apr 16 04:51:57.179790 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 16 04:51:57.181412 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 16 04:51:57.183105 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Apr 16 04:51:57.184806 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 16 04:51:57.190555 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 16 04:51:57.194306 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 16 04:51:57.199053 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 16 04:51:57.199115 systemd[1]: Reached target paths.target - Path Units. Apr 16 04:51:57.200398 systemd[1]: Reached target timers.target - Timer Units. Apr 16 04:51:57.203080 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 16 04:51:57.205801 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 16 04:51:57.214249 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Apr 16 04:51:57.216176 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Apr 16 04:51:57.217860 systemd[1]: Reached target ssh-access.target - SSH Access Available. Apr 16 04:51:57.224128 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 16 04:51:57.225856 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Apr 16 04:51:57.230260 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 16 04:51:57.235083 systemd[1]: Reached target sockets.target - Socket Units. Apr 16 04:51:57.238677 systemd[1]: Reached target basic.target - Basic System. 
Apr 16 04:51:57.240202 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 16 04:51:57.240260 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 16 04:51:57.241445 systemd[1]: Starting containerd.service - containerd container runtime... Apr 16 04:51:57.243753 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 16 04:51:57.245827 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 16 04:51:57.258658 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 16 04:51:57.261882 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 16 04:51:57.263287 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 16 04:51:57.264106 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Apr 16 04:51:57.267978 jq[1575]: false Apr 16 04:51:57.268078 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 16 04:51:57.270755 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 16 04:51:57.273796 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 16 04:51:57.274819 extend-filesystems[1576]: Found /dev/vda6 Apr 16 04:51:57.276838 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 16 04:51:57.279538 google_oslogin_nss_cache[1577]: oslogin_cache_refresh[1577]: Refreshing passwd entry cache Apr 16 04:51:57.279537 oslogin_cache_refresh[1577]: Refreshing passwd entry cache Apr 16 04:51:57.281579 systemd[1]: Starting systemd-logind.service - User Login Management... 
Apr 16 04:51:57.285544 extend-filesystems[1576]: Found /dev/vda9 Apr 16 04:51:57.285836 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 16 04:51:57.288686 google_oslogin_nss_cache[1577]: oslogin_cache_refresh[1577]: Failure getting users, quitting Apr 16 04:51:57.288641 oslogin_cache_refresh[1577]: Failure getting users, quitting Apr 16 04:51:57.288838 google_oslogin_nss_cache[1577]: oslogin_cache_refresh[1577]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Apr 16 04:51:57.288838 google_oslogin_nss_cache[1577]: oslogin_cache_refresh[1577]: Refreshing group entry cache Apr 16 04:51:57.288717 oslogin_cache_refresh[1577]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Apr 16 04:51:57.288797 oslogin_cache_refresh[1577]: Refreshing group entry cache Apr 16 04:51:57.289113 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 16 04:51:57.289342 extend-filesystems[1576]: Checking size of /dev/vda9 Apr 16 04:51:57.301946 google_oslogin_nss_cache[1577]: oslogin_cache_refresh[1577]: Failure getting groups, quitting Apr 16 04:51:57.301946 google_oslogin_nss_cache[1577]: oslogin_cache_refresh[1577]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Apr 16 04:51:57.299172 systemd[1]: Starting update-engine.service - Update Engine... Apr 16 04:51:57.298841 oslogin_cache_refresh[1577]: Failure getting groups, quitting Apr 16 04:51:57.298853 oslogin_cache_refresh[1577]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Apr 16 04:51:57.303464 extend-filesystems[1576]: Resized partition /dev/vda9 Apr 16 04:51:57.303867 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Apr 16 04:51:57.313936 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 16 04:51:57.315849 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 16 04:51:57.316054 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 16 04:51:57.316248 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Apr 16 04:51:57.316416 jq[1595]: true Apr 16 04:51:57.316406 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Apr 16 04:51:57.320394 extend-filesystems[1599]: resize2fs 1.47.3 (8-Jul-2025) Apr 16 04:51:57.322431 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 16 04:51:57.322618 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 16 04:51:57.327554 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 16 04:51:57.334854 systemd[1]: motdgen.service: Deactivated successfully. Apr 16 04:51:57.335368 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 16 04:51:57.349306 jq[1605]: true Apr 16 04:51:57.362956 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 16 04:51:57.363136 tar[1603]: linux-amd64/LICENSE Apr 16 04:51:57.386643 tar[1603]: linux-amd64/helm Apr 16 04:51:57.386687 update_engine[1589]: I20260416 04:51:57.372989 1589 main.cc:92] Flatcar Update Engine starting Apr 16 04:51:57.386687 update_engine[1589]: I20260416 04:51:57.376451 1589 update_check_scheduler.cc:74] Next update check in 10m38s Apr 16 04:51:57.372651 dbus-daemon[1573]: [system] SELinux support is enabled Apr 16 04:51:57.367856 (ntainerd)[1618]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 16 04:51:57.374320 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Apr 16 04:51:57.381759 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 16 04:51:57.381803 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 16 04:51:57.386425 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 16 04:51:57.386439 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 16 04:51:57.392007 extend-filesystems[1599]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 16 04:51:57.392007 extend-filesystems[1599]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 16 04:51:57.392007 extend-filesystems[1599]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 16 04:51:57.404817 extend-filesystems[1576]: Resized filesystem in /dev/vda9 Apr 16 04:51:57.393079 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 16 04:51:57.394369 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 16 04:51:57.399353 systemd[1]: Started update-engine.service - Update Engine. Apr 16 04:51:57.407157 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 16 04:51:57.442409 systemd-logind[1586]: Watching system buttons on /dev/input/event2 (Power Button) Apr 16 04:51:57.442744 systemd-logind[1586]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 16 04:51:57.443170 systemd-logind[1586]: New seat seat0. Apr 16 04:51:57.444872 systemd[1]: Started systemd-logind.service - User Login Management. 
Apr 16 04:51:57.448145 bash[1640]: Updated "/home/core/.ssh/authorized_keys" Apr 16 04:51:57.449135 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 16 04:51:57.452783 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 16 04:51:57.487004 locksmithd[1634]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 16 04:51:57.554689 containerd[1618]: time="2026-04-16T04:51:57Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Apr 16 04:51:57.556221 containerd[1618]: time="2026-04-16T04:51:57.556182488Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Apr 16 04:51:57.571850 containerd[1618]: time="2026-04-16T04:51:57.571390066Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.069µs" Apr 16 04:51:57.571850 containerd[1618]: time="2026-04-16T04:51:57.571698655Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Apr 16 04:51:57.571850 containerd[1618]: time="2026-04-16T04:51:57.571727964Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Apr 16 04:51:57.572101 containerd[1618]: time="2026-04-16T04:51:57.572071011Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Apr 16 04:51:57.572101 containerd[1618]: time="2026-04-16T04:51:57.572086438Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Apr 16 04:51:57.572142 containerd[1618]: time="2026-04-16T04:51:57.572106843Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 16 04:51:57.572156 containerd[1618]: time="2026-04-16T04:51:57.572143806Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 16 04:51:57.572156 containerd[1618]: time="2026-04-16T04:51:57.572151937Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 16 04:51:57.572375 containerd[1618]: time="2026-04-16T04:51:57.572334669Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 16 04:51:57.572375 containerd[1618]: time="2026-04-16T04:51:57.572364139Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 16 04:51:57.572375 containerd[1618]: time="2026-04-16T04:51:57.572372392Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 16 04:51:57.572424 containerd[1618]: time="2026-04-16T04:51:57.572378938Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Apr 16 04:51:57.572439 containerd[1618]: time="2026-04-16T04:51:57.572428211Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Apr 16 04:51:57.572644 containerd[1618]: time="2026-04-16T04:51:57.572611404Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 16 04:51:57.572663 containerd[1618]: time="2026-04-16T04:51:57.572648080Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 16 04:51:57.572663 containerd[1618]: time="2026-04-16T04:51:57.572656191Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Apr 16 04:51:57.572713 containerd[1618]: time="2026-04-16T04:51:57.572698972Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Apr 16 04:51:57.572983 containerd[1618]: time="2026-04-16T04:51:57.572966514Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Apr 16 04:51:57.573044 containerd[1618]: time="2026-04-16T04:51:57.573025271Z" level=info msg="metadata content store policy set" policy=shared Apr 16 04:51:57.577886 containerd[1618]: time="2026-04-16T04:51:57.577767312Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Apr 16 04:51:57.577886 containerd[1618]: time="2026-04-16T04:51:57.577835409Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Apr 16 04:51:57.577886 containerd[1618]: time="2026-04-16T04:51:57.577848471Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Apr 16 04:51:57.577886 containerd[1618]: time="2026-04-16T04:51:57.577857898Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Apr 16 04:51:57.577886 containerd[1618]: time="2026-04-16T04:51:57.577875215Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Apr 16 04:51:57.577886 containerd[1618]: time="2026-04-16T04:51:57.577884713Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Apr 16 04:51:57.577886 containerd[1618]: time="2026-04-16T04:51:57.577896419Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Apr 16 04:51:57.578175 containerd[1618]: time="2026-04-16T04:51:57.577928072Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Apr 16 04:51:57.578175 containerd[1618]: time="2026-04-16T04:51:57.577936344Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Apr 16 04:51:57.578175 containerd[1618]: time="2026-04-16T04:51:57.577944111Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Apr 16 04:51:57.578175 containerd[1618]: time="2026-04-16T04:51:57.577958022Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Apr 16 04:51:57.578175 containerd[1618]: time="2026-04-16T04:51:57.577972617Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Apr 16 04:51:57.578175 containerd[1618]: time="2026-04-16T04:51:57.578158481Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Apr 16 04:51:57.578175 containerd[1618]: time="2026-04-16T04:51:57.578172621Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Apr 16 04:51:57.578266 containerd[1618]: time="2026-04-16T04:51:57.578182307Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Apr 16 04:51:57.578266 containerd[1618]: time="2026-04-16T04:51:57.578190537Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Apr 16 04:51:57.578266 containerd[1618]: time="2026-04-16T04:51:57.578198303Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Apr 16 04:51:57.578266 containerd[1618]: time="2026-04-16T04:51:57.578205434Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Apr 16 04:51:57.578266 containerd[1618]: time="2026-04-16T04:51:57.578212694Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Apr 16 04:51:57.578266 containerd[1618]: time="2026-04-16T04:51:57.578219346Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Apr 16 04:51:57.578266 containerd[1618]: time="2026-04-16T04:51:57.578226895Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Apr 16 04:51:57.578266 containerd[1618]: time="2026-04-16T04:51:57.578233885Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Apr 16 04:51:57.578266 containerd[1618]: time="2026-04-16T04:51:57.578240771Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Apr 16 04:51:57.578383 containerd[1618]: time="2026-04-16T04:51:57.578312488Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Apr 16 04:51:57.578383 containerd[1618]: time="2026-04-16T04:51:57.578323198Z" level=info msg="Start snapshots syncer" Apr 16 04:51:57.578383 containerd[1618]: time="2026-04-16T04:51:57.578369326Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Apr 16 04:51:57.578727 containerd[1618]: time="2026-04-16T04:51:57.578677873Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Apr 16 04:51:57.578825 containerd[1618]: time="2026-04-16T04:51:57.578731496Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Apr 16 04:51:57.578825 containerd[1618]: time="2026-04-16T04:51:57.578776536Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Apr 16 04:51:57.578873 containerd[1618]: time="2026-04-16T04:51:57.578849812Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Apr 16 04:51:57.578889 containerd[1618]: time="2026-04-16T04:51:57.578878994Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Apr 16 04:51:57.578889 containerd[1618]: time="2026-04-16T04:51:57.578886834Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Apr 16 04:51:57.578938 containerd[1618]: time="2026-04-16T04:51:57.578893757Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Apr 16 04:51:57.579074 containerd[1618]: time="2026-04-16T04:51:57.579057120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Apr 16 04:51:57.579093 containerd[1618]: time="2026-04-16T04:51:57.579080046Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Apr 16 04:51:57.579150 containerd[1618]: time="2026-04-16T04:51:57.579124251Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Apr 16 04:51:57.579177 containerd[1618]: time="2026-04-16T04:51:57.579154036Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Apr 16 04:51:57.579177 containerd[1618]: time="2026-04-16T04:51:57.579163798Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Apr 16 04:51:57.579218 containerd[1618]: time="2026-04-16T04:51:57.579187326Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Apr 16 04:51:57.580001 containerd[1618]: time="2026-04-16T04:51:57.579838647Z" level=info msg="loading plugin"
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 16 04:51:57.580001 containerd[1618]: time="2026-04-16T04:51:57.579900329Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 16 04:51:57.580001 containerd[1618]: time="2026-04-16T04:51:57.579931477Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 16 04:51:57.580001 containerd[1618]: time="2026-04-16T04:51:57.579938939Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 16 04:51:57.580001 containerd[1618]: time="2026-04-16T04:51:57.579944695Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Apr 16 04:51:57.580001 containerd[1618]: time="2026-04-16T04:51:57.579951347Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Apr 16 04:51:57.580001 containerd[1618]: time="2026-04-16T04:51:57.579985482Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Apr 16 04:51:57.580001 containerd[1618]: time="2026-04-16T04:51:57.579998898Z" level=info msg="runtime interface created" Apr 16 04:51:57.580001 containerd[1618]: time="2026-04-16T04:51:57.580003035Z" level=info msg="created NRI interface" Apr 16 04:51:57.580001 containerd[1618]: time="2026-04-16T04:51:57.580008616Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Apr 16 04:51:57.580321 containerd[1618]: time="2026-04-16T04:51:57.580020923Z" level=info msg="Connect containerd service" Apr 16 04:51:57.580321 containerd[1618]: time="2026-04-16T04:51:57.580045213Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 16 04:51:57.581267 
containerd[1618]: time="2026-04-16T04:51:57.581051499Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 16 04:51:57.723775 containerd[1618]: time="2026-04-16T04:51:57.722585790Z" level=info msg="Start subscribing containerd event" Apr 16 04:51:57.723775 containerd[1618]: time="2026-04-16T04:51:57.722656195Z" level=info msg="Start recovering state" Apr 16 04:51:57.723775 containerd[1618]: time="2026-04-16T04:51:57.722868164Z" level=info msg="Start event monitor" Apr 16 04:51:57.723775 containerd[1618]: time="2026-04-16T04:51:57.722883997Z" level=info msg="Start cni network conf syncer for default" Apr 16 04:51:57.723775 containerd[1618]: time="2026-04-16T04:51:57.722892039Z" level=info msg="Start streaming server" Apr 16 04:51:57.723775 containerd[1618]: time="2026-04-16T04:51:57.722942974Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Apr 16 04:51:57.723775 containerd[1618]: time="2026-04-16T04:51:57.722949522Z" level=info msg="runtime interface starting up..." Apr 16 04:51:57.723775 containerd[1618]: time="2026-04-16T04:51:57.722957881Z" level=info msg="starting plugins..." Apr 16 04:51:57.723775 containerd[1618]: time="2026-04-16T04:51:57.722970893Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Apr 16 04:51:57.728784 containerd[1618]: time="2026-04-16T04:51:57.725825306Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 16 04:51:57.728784 containerd[1618]: time="2026-04-16T04:51:57.725887468Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 16 04:51:57.728784 containerd[1618]: time="2026-04-16T04:51:57.725997932Z" level=info msg="containerd successfully booted in 0.175425s" Apr 16 04:51:57.726555 systemd[1]: Started containerd.service - containerd container runtime. 
Apr 16 04:51:57.729136 sshd_keygen[1604]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 16 04:51:57.752660 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 16 04:51:57.759786 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 16 04:51:57.777316 systemd[1]: issuegen.service: Deactivated successfully.
Apr 16 04:51:57.777683 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 16 04:51:57.781284 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 16 04:51:57.796652 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 16 04:51:57.800083 tar[1603]: linux-amd64/README.md
Apr 16 04:51:57.800777 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 16 04:51:57.804415 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 16 04:51:57.807263 systemd[1]: Reached target getty.target - Login Prompts.
Apr 16 04:51:57.832995 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 16 04:51:58.101627 systemd-networkd[1497]: eth0: Gained IPv6LL
Apr 16 04:51:58.111564 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 16 04:51:58.121458 systemd[1]: Reached target network-online.target - Network is Online.
Apr 16 04:51:58.136981 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Apr 16 04:51:58.141963 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:51:58.158655 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 16 04:51:58.217121 systemd[1]: coreos-metadata.service: Deactivated successfully.
Apr 16 04:51:58.219439 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Apr 16 04:51:58.222804 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 16 04:51:58.235269 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 16 04:51:59.422495 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 04:51:59.480360 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 16 04:51:59.482078 systemd[1]: Startup finished in 4.583s (kernel) + 9.391s (initrd) + 6.440s (userspace) = 20.416s.
Apr 16 04:51:59.551245 (kubelet)[1711]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 04:52:00.032250 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 16 04:52:00.033377 systemd[1]: Started sshd@0-10.0.0.15:22-10.0.0.1:35774.service - OpenSSH per-connection server daemon (10.0.0.1:35774).
Apr 16 04:52:00.119826 sshd[1723]: Accepted publickey for core from 10.0.0.1 port 35774 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s
Apr 16 04:52:00.124945 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:52:00.131784 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 16 04:52:00.132735 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 16 04:52:00.138182 systemd-logind[1586]: New session 1 of user core.
Apr 16 04:52:00.159247 kubelet[1711]: E0416 04:52:00.159186 1711 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 04:52:00.163310 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 16 04:52:00.166343 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 04:52:00.167993 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 04:52:00.168437 systemd[1]: kubelet.service: Consumed 1.225s CPU time, 266.3M memory peak.
Apr 16 04:52:00.175742 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 16 04:52:00.193759 (systemd)[1729]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 16 04:52:00.198471 systemd-logind[1586]: New session c1 of user core.
Apr 16 04:52:00.345655 systemd[1729]: Queued start job for default target default.target.
Apr 16 04:52:00.398198 systemd[1729]: Created slice app.slice - User Application Slice.
Apr 16 04:52:00.398297 systemd[1729]: Reached target paths.target - Paths.
Apr 16 04:52:00.398345 systemd[1729]: Reached target timers.target - Timers.
Apr 16 04:52:00.400066 systemd[1729]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 16 04:52:00.430359 systemd[1729]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 16 04:52:00.430502 systemd[1729]: Reached target sockets.target - Sockets.
Apr 16 04:52:00.430541 systemd[1729]: Reached target basic.target - Basic System.
Apr 16 04:52:00.430563 systemd[1729]: Reached target default.target - Main User Target.
Apr 16 04:52:00.430583 systemd[1729]: Startup finished in 223ms.
Apr 16 04:52:00.430971 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 16 04:52:00.442822 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 16 04:52:00.471170 systemd[1]: Started sshd@1-10.0.0.15:22-10.0.0.1:35790.service - OpenSSH per-connection server daemon (10.0.0.1:35790).
Apr 16 04:52:00.541869 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 35790 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s
Apr 16 04:52:00.543220 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:52:00.561091 systemd-logind[1586]: New session 2 of user core.
Apr 16 04:52:00.568069 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 16 04:52:00.577988 sshd[1743]: Connection closed by 10.0.0.1 port 35790
Apr 16 04:52:00.578250 sshd-session[1740]: pam_unix(sshd:session): session closed for user core
Apr 16 04:52:00.587740 systemd[1]: sshd@1-10.0.0.15:22-10.0.0.1:35790.service: Deactivated successfully.
Apr 16 04:52:00.590223 systemd[1]: session-2.scope: Deactivated successfully.
Apr 16 04:52:00.590846 systemd-logind[1586]: Session 2 logged out. Waiting for processes to exit.
Apr 16 04:52:00.592779 systemd[1]: Started sshd@2-10.0.0.15:22-10.0.0.1:35798.service - OpenSSH per-connection server daemon (10.0.0.1:35798).
Apr 16 04:52:00.593210 systemd-logind[1586]: Removed session 2.
Apr 16 04:52:00.651793 sshd[1749]: Accepted publickey for core from 10.0.0.1 port 35798 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s
Apr 16 04:52:00.653055 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:52:00.662118 systemd-logind[1586]: New session 3 of user core.
Apr 16 04:52:00.672096 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 16 04:52:00.678365 sshd[1753]: Connection closed by 10.0.0.1 port 35798
Apr 16 04:52:00.678660 sshd-session[1749]: pam_unix(sshd:session): session closed for user core
Apr 16 04:52:00.694297 systemd[1]: sshd@2-10.0.0.15:22-10.0.0.1:35798.service: Deactivated successfully.
Apr 16 04:52:00.695616 systemd[1]: session-3.scope: Deactivated successfully.
Apr 16 04:52:00.696243 systemd-logind[1586]: Session 3 logged out. Waiting for processes to exit.
Apr 16 04:52:00.700617 systemd[1]: Started sshd@3-10.0.0.15:22-10.0.0.1:35802.service - OpenSSH per-connection server daemon (10.0.0.1:35802).
Apr 16 04:52:00.701105 systemd-logind[1586]: Removed session 3.
Apr 16 04:52:00.758370 sshd[1759]: Accepted publickey for core from 10.0.0.1 port 35802 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s
Apr 16 04:52:00.759602 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:52:00.764735 systemd-logind[1586]: New session 4 of user core.
Apr 16 04:52:00.772751 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 16 04:52:00.784745 sshd[1763]: Connection closed by 10.0.0.1 port 35802
Apr 16 04:52:00.785069 sshd-session[1759]: pam_unix(sshd:session): session closed for user core
Apr 16 04:52:00.792793 systemd[1]: sshd@3-10.0.0.15:22-10.0.0.1:35802.service: Deactivated successfully.
Apr 16 04:52:00.794070 systemd[1]: session-4.scope: Deactivated successfully.
Apr 16 04:52:00.798192 systemd-logind[1586]: Session 4 logged out. Waiting for processes to exit.
Apr 16 04:52:00.800341 systemd[1]: Started sshd@4-10.0.0.15:22-10.0.0.1:35806.service - OpenSSH per-connection server daemon (10.0.0.1:35806).
Apr 16 04:52:00.800720 systemd-logind[1586]: Removed session 4.
Apr 16 04:52:00.858355 sshd[1769]: Accepted publickey for core from 10.0.0.1 port 35806 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s
Apr 16 04:52:00.860377 sshd-session[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:52:00.864644 systemd-logind[1586]: New session 5 of user core.
Apr 16 04:52:00.879067 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 16 04:52:00.902615 sudo[1773]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 16 04:52:00.903011 sudo[1773]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 16 04:52:00.924799 sudo[1773]: pam_unix(sudo:session): session closed for user root
Apr 16 04:52:00.926280 sshd[1772]: Connection closed by 10.0.0.1 port 35806
Apr 16 04:52:00.926702 sshd-session[1769]: pam_unix(sshd:session): session closed for user core
Apr 16 04:52:00.947852 systemd[1]: sshd@4-10.0.0.15:22-10.0.0.1:35806.service: Deactivated successfully.
Apr 16 04:52:00.949238 systemd[1]: session-5.scope: Deactivated successfully.
Apr 16 04:52:00.950852 systemd-logind[1586]: Session 5 logged out. Waiting for processes to exit.
Apr 16 04:52:00.953604 systemd[1]: Started sshd@5-10.0.0.15:22-10.0.0.1:35814.service - OpenSSH per-connection server daemon (10.0.0.1:35814).
Apr 16 04:52:00.955280 systemd-logind[1586]: Removed session 5.
Apr 16 04:52:01.019780 sshd[1779]: Accepted publickey for core from 10.0.0.1 port 35814 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s
Apr 16 04:52:01.020861 sshd-session[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:52:01.027705 systemd-logind[1586]: New session 6 of user core.
Apr 16 04:52:01.044776 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 16 04:52:01.058649 sudo[1784]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 16 04:52:01.058868 sudo[1784]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 16 04:52:01.063201 sudo[1784]: pam_unix(sudo:session): session closed for user root
Apr 16 04:52:01.067762 sudo[1783]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Apr 16 04:52:01.067998 sudo[1783]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 16 04:52:01.084200 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 16 04:52:01.137141 augenrules[1806]: No rules
Apr 16 04:52:01.138803 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 16 04:52:01.139089 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 16 04:52:01.142661 sudo[1783]: pam_unix(sudo:session): session closed for user root
Apr 16 04:52:01.147368 sshd[1782]: Connection closed by 10.0.0.1 port 35814
Apr 16 04:52:01.147944 sshd-session[1779]: pam_unix(sshd:session): session closed for user core
Apr 16 04:52:01.162326 systemd[1]: sshd@5-10.0.0.15:22-10.0.0.1:35814.service: Deactivated successfully.
Apr 16 04:52:01.165794 systemd[1]: session-6.scope: Deactivated successfully.
Apr 16 04:52:01.166834 systemd-logind[1586]: Session 6 logged out. Waiting for processes to exit.
Apr 16 04:52:01.175889 systemd[1]: Started sshd@6-10.0.0.15:22-10.0.0.1:35820.service - OpenSSH per-connection server daemon (10.0.0.1:35820).
Apr 16 04:52:01.176688 systemd-logind[1586]: Removed session 6.
Apr 16 04:52:01.256180 sshd[1815]: Accepted publickey for core from 10.0.0.1 port 35820 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s
Apr 16 04:52:01.257418 sshd-session[1815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:52:01.262763 systemd-logind[1586]: New session 7 of user core.
Apr 16 04:52:01.278248 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 16 04:52:01.304654 sudo[1819]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 16 04:52:01.304889 sudo[1819]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 16 04:52:01.731435 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 16 04:52:01.756772 (dockerd)[1839]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 16 04:52:02.049138 dockerd[1839]: time="2026-04-16T04:52:02.048877393Z" level=info msg="Starting up"
Apr 16 04:52:02.050120 dockerd[1839]: time="2026-04-16T04:52:02.050077515Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Apr 16 04:52:02.066388 dockerd[1839]: time="2026-04-16T04:52:02.066262247Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Apr 16 04:52:02.310889 dockerd[1839]: time="2026-04-16T04:52:02.310553564Z" level=info msg="Loading containers: start."
Apr 16 04:52:02.320958 kernel: Initializing XFRM netlink socket
Apr 16 04:52:02.620093 systemd-networkd[1497]: docker0: Link UP
Apr 16 04:52:02.626729 dockerd[1839]: time="2026-04-16T04:52:02.626565850Z" level=info msg="Loading containers: done."
Apr 16 04:52:02.643134 dockerd[1839]: time="2026-04-16T04:52:02.643076782Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 16 04:52:02.643349 dockerd[1839]: time="2026-04-16T04:52:02.643183008Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Apr 16 04:52:02.643349 dockerd[1839]: time="2026-04-16T04:52:02.643244932Z" level=info msg="Initializing buildkit"
Apr 16 04:52:02.668080 dockerd[1839]: time="2026-04-16T04:52:02.667968615Z" level=info msg="Completed buildkit initialization"
Apr 16 04:52:02.674324 dockerd[1839]: time="2026-04-16T04:52:02.674115806Z" level=info msg="Daemon has completed initialization"
Apr 16 04:52:02.674646 dockerd[1839]: time="2026-04-16T04:52:02.674588493Z" level=info msg="API listen on /run/docker.sock"
Apr 16 04:52:02.675106 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 16 04:52:03.440019 containerd[1618]: time="2026-04-16T04:52:03.439780641Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\""
Apr 16 04:52:04.114846 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3742657573.mount: Deactivated successfully.
Apr 16 04:52:05.145542 containerd[1618]: time="2026-04-16T04:52:05.145277912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:52:05.146397 containerd[1618]: time="2026-04-16T04:52:05.145616779Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30193427"
Apr 16 04:52:05.146613 containerd[1618]: time="2026-04-16T04:52:05.146559243Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:52:05.149433 containerd[1618]: time="2026-04-16T04:52:05.149288665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:52:05.150259 containerd[1618]: time="2026-04-16T04:52:05.150232451Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 1.710414408s"
Apr 16 04:52:05.150308 containerd[1618]: time="2026-04-16T04:52:05.150265963Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\""
Apr 16 04:52:05.150950 containerd[1618]: time="2026-04-16T04:52:05.150933352Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\""
Apr 16 04:52:06.348724 containerd[1618]: time="2026-04-16T04:52:06.348434274Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:52:06.349709 containerd[1618]: time="2026-04-16T04:52:06.349661210Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171379"
Apr 16 04:52:06.350820 containerd[1618]: time="2026-04-16T04:52:06.350673884Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:52:06.357776 containerd[1618]: time="2026-04-16T04:52:06.357558978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:52:06.358551 containerd[1618]: time="2026-04-16T04:52:06.358490699Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 1.207537656s"
Apr 16 04:52:06.358590 containerd[1618]: time="2026-04-16T04:52:06.358541019Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\""
Apr 16 04:52:06.359203 containerd[1618]: time="2026-04-16T04:52:06.359166297Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\""
Apr 16 04:52:07.390594 containerd[1618]: time="2026-04-16T04:52:07.390280582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:52:07.392401 containerd[1618]: time="2026-04-16T04:52:07.392214795Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289688"
Apr 16 04:52:07.396044 containerd[1618]: time="2026-04-16T04:52:07.395724674Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:52:07.398358 containerd[1618]: time="2026-04-16T04:52:07.398323143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:52:07.399041 containerd[1618]: time="2026-04-16T04:52:07.399013289Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 1.039807201s"
Apr 16 04:52:07.399066 containerd[1618]: time="2026-04-16T04:52:07.399046641Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\""
Apr 16 04:52:07.399566 containerd[1618]: time="2026-04-16T04:52:07.399541707Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\""
Apr 16 04:52:08.293434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount543778610.mount: Deactivated successfully.
Apr 16 04:52:08.798718 containerd[1618]: time="2026-04-16T04:52:08.798472775Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:52:08.799324 containerd[1618]: time="2026-04-16T04:52:08.799207793Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010605"
Apr 16 04:52:08.800023 containerd[1618]: time="2026-04-16T04:52:08.799989847Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:52:08.803076 containerd[1618]: time="2026-04-16T04:52:08.802773723Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:52:08.803779 containerd[1618]: time="2026-04-16T04:52:08.803739853Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 1.404166522s"
Apr 16 04:52:08.803779 containerd[1618]: time="2026-04-16T04:52:08.803772407Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\""
Apr 16 04:52:08.804753 containerd[1618]: time="2026-04-16T04:52:08.804336829Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Apr 16 04:52:09.330740 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3623463701.mount: Deactivated successfully.
Apr 16 04:52:10.134866 containerd[1618]: time="2026-04-16T04:52:10.134669732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:52:10.135408 containerd[1618]: time="2026-04-16T04:52:10.135187332Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20941714"
Apr 16 04:52:10.137339 containerd[1618]: time="2026-04-16T04:52:10.137176331Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:52:10.139593 containerd[1618]: time="2026-04-16T04:52:10.139559435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:52:10.140319 containerd[1618]: time="2026-04-16T04:52:10.140291630Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.335931975s"
Apr 16 04:52:10.140319 containerd[1618]: time="2026-04-16T04:52:10.140319185Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Apr 16 04:52:10.140963 containerd[1618]: time="2026-04-16T04:52:10.140941292Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 16 04:52:10.272566 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 16 04:52:10.278403 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 04:52:10.516365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 04:52:10.548682 (kubelet)[2192]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 04:52:10.711853 kubelet[2192]: E0416 04:52:10.711598 2192 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 04:52:10.721126 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 04:52:10.721353 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 04:52:10.721780 systemd[1]: kubelet.service: Consumed 317ms CPU time, 110.8M memory peak. Apr 16 04:52:10.769065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2089411787.mount: Deactivated successfully. 
Apr 16 04:52:10.778395 containerd[1618]: time="2026-04-16T04:52:10.778202499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 04:52:10.778769 containerd[1618]: time="2026-04-16T04:52:10.778720592Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Apr 16 04:52:10.787512 containerd[1618]: time="2026-04-16T04:52:10.787256162Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 04:52:10.798464 containerd[1618]: time="2026-04-16T04:52:10.798295241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 04:52:10.798823 containerd[1618]: time="2026-04-16T04:52:10.798795112Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 657.827826ms" Apr 16 04:52:10.798875 containerd[1618]: time="2026-04-16T04:52:10.798829021Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 16 04:52:10.799374 containerd[1618]: time="2026-04-16T04:52:10.799348229Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 16 04:52:11.350634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount287336264.mount: Deactivated 
successfully. Apr 16 04:52:12.287145 containerd[1618]: time="2026-04-16T04:52:12.287012489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:52:12.287696 containerd[1618]: time="2026-04-16T04:52:12.287670698Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718826" Apr 16 04:52:12.290444 containerd[1618]: time="2026-04-16T04:52:12.290271510Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:52:12.292458 containerd[1618]: time="2026-04-16T04:52:12.292414069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:52:12.293123 containerd[1618]: time="2026-04-16T04:52:12.293100218Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.493725046s" Apr 16 04:52:12.293164 containerd[1618]: time="2026-04-16T04:52:12.293127122Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Apr 16 04:52:16.737811 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 04:52:16.738133 systemd[1]: kubelet.service: Consumed 317ms CPU time, 110.8M memory peak. Apr 16 04:52:16.742813 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 16 04:52:16.787778 systemd[1]: Reload requested from client PID 2300 ('systemctl') (unit session-7.scope)... Apr 16 04:52:16.787849 systemd[1]: Reloading... Apr 16 04:52:16.994001 zram_generator::config[2346]: No configuration found. Apr 16 04:52:17.292518 systemd[1]: Reloading finished in 504 ms. Apr 16 04:52:17.398512 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 16 04:52:17.398736 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 16 04:52:17.400495 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 04:52:17.400763 systemd[1]: kubelet.service: Consumed 183ms CPU time, 98.1M memory peak. Apr 16 04:52:17.408989 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 04:52:17.780596 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 04:52:17.804505 (kubelet)[2391]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 16 04:52:17.908978 kubelet[2391]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 16 04:52:17.908978 kubelet[2391]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 16 04:52:17.908978 kubelet[2391]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 16 04:52:17.909608 kubelet[2391]: I0416 04:52:17.908861 2391 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 16 04:52:18.750269 kubelet[2391]: I0416 04:52:18.750109 2391 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 16 04:52:18.750269 kubelet[2391]: I0416 04:52:18.750184 2391 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 16 04:52:18.750597 kubelet[2391]: I0416 04:52:18.750367 2391 server.go:956] "Client rotation is on, will bootstrap in background" Apr 16 04:52:18.780132 kubelet[2391]: E0416 04:52:18.779987 2391 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 04:52:18.786528 kubelet[2391]: I0416 04:52:18.786391 2391 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 16 04:52:18.793144 kubelet[2391]: I0416 04:52:18.793010 2391 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 16 04:52:18.800835 kubelet[2391]: I0416 04:52:18.800466 2391 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 16 04:52:18.801947 kubelet[2391]: I0416 04:52:18.801792 2391 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 16 04:52:18.802658 kubelet[2391]: I0416 04:52:18.802426 2391 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 16 04:52:18.802658 kubelet[2391]: I0416 04:52:18.802653 2391 topology_manager.go:138] "Creating topology manager with none policy" Apr 16 04:52:18.802658 
kubelet[2391]: I0416 04:52:18.802662 2391 container_manager_linux.go:303] "Creating device plugin manager" Apr 16 04:52:18.803013 kubelet[2391]: I0416 04:52:18.802995 2391 state_mem.go:36] "Initialized new in-memory state store" Apr 16 04:52:18.806452 kubelet[2391]: I0416 04:52:18.806419 2391 kubelet.go:480] "Attempting to sync node with API server" Apr 16 04:52:18.806452 kubelet[2391]: I0416 04:52:18.806444 2391 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 16 04:52:18.806899 kubelet[2391]: I0416 04:52:18.806871 2391 kubelet.go:386] "Adding apiserver pod source" Apr 16 04:52:18.808893 kubelet[2391]: I0416 04:52:18.808059 2391 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 16 04:52:18.812898 kubelet[2391]: E0416 04:52:18.812821 2391 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 04:52:18.812898 kubelet[2391]: E0416 04:52:18.812900 2391 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 04:52:18.813730 kubelet[2391]: I0416 04:52:18.813676 2391 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 16 04:52:18.814210 kubelet[2391]: I0416 04:52:18.814177 2391 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 16 04:52:18.815022 kubelet[2391]: W0416 
04:52:18.814992 2391 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 16 04:52:18.818673 kubelet[2391]: I0416 04:52:18.818639 2391 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 16 04:52:18.818726 kubelet[2391]: I0416 04:52:18.818683 2391 server.go:1289] "Started kubelet" Apr 16 04:52:18.818800 kubelet[2391]: I0416 04:52:18.818735 2391 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 16 04:52:18.821681 kubelet[2391]: I0416 04:52:18.819511 2391 server.go:317] "Adding debug handlers to kubelet server" Apr 16 04:52:18.821681 kubelet[2391]: I0416 04:52:18.820521 2391 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 16 04:52:18.823006 kubelet[2391]: I0416 04:52:18.822837 2391 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 16 04:52:18.823329 kubelet[2391]: I0416 04:52:18.823311 2391 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 16 04:52:18.824416 kubelet[2391]: E0416 04:52:18.822752 2391 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.15:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.15:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bd351a6f50c6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:52:18.818658502 +0000 UTC m=+1.009778996,LastTimestamp:2026-04-16 04:52:18.818658502 +0000 UTC m=+1.009778996,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:52:18.824416 
kubelet[2391]: E0416 04:52:18.823702 2391 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:52:18.824416 kubelet[2391]: I0416 04:52:18.823735 2391 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 16 04:52:18.824416 kubelet[2391]: I0416 04:52:18.823763 2391 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 16 04:52:18.824416 kubelet[2391]: I0416 04:52:18.823861 2391 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 16 04:52:18.824416 kubelet[2391]: I0416 04:52:18.824320 2391 reconciler.go:26] "Reconciler: start to sync state" Apr 16 04:52:18.824823 kubelet[2391]: E0416 04:52:18.824714 2391 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 04:52:18.825949 kubelet[2391]: E0416 04:52:18.824870 2391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="200ms" Apr 16 04:52:18.826775 kubelet[2391]: E0416 04:52:18.826737 2391 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 16 04:52:18.827494 kubelet[2391]: I0416 04:52:18.827161 2391 factory.go:223] Registration of the containerd container factory successfully Apr 16 04:52:18.827948 kubelet[2391]: I0416 04:52:18.827800 2391 factory.go:223] Registration of the systemd container factory successfully Apr 16 04:52:18.828024 kubelet[2391]: I0416 04:52:18.828012 2391 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 16 04:52:18.839141 kubelet[2391]: I0416 04:52:18.839072 2391 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 16 04:52:18.839141 kubelet[2391]: I0416 04:52:18.839093 2391 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 16 04:52:18.839141 kubelet[2391]: I0416 04:52:18.839106 2391 state_mem.go:36] "Initialized new in-memory state store" Apr 16 04:52:18.918131 kubelet[2391]: I0416 04:52:18.918027 2391 policy_none.go:49] "None policy: Start" Apr 16 04:52:18.918131 kubelet[2391]: I0416 04:52:18.918067 2391 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 16 04:52:18.918131 kubelet[2391]: I0416 04:52:18.918080 2391 state_mem.go:35] "Initializing new in-memory state store" Apr 16 04:52:18.923807 kubelet[2391]: E0416 04:52:18.923780 2391 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:52:18.930518 kubelet[2391]: I0416 04:52:18.930479 2391 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 16 04:52:18.931893 kubelet[2391]: I0416 04:52:18.931870 2391 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Apr 16 04:52:18.931893 kubelet[2391]: I0416 04:52:18.931891 2391 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 16 04:52:18.932031 kubelet[2391]: I0416 04:52:18.931930 2391 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 16 04:52:18.932031 kubelet[2391]: I0416 04:52:18.931936 2391 kubelet.go:2436] "Starting kubelet main sync loop" Apr 16 04:52:18.932031 kubelet[2391]: E0416 04:52:18.931992 2391 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 16 04:52:18.932689 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 16 04:52:18.935114 kubelet[2391]: E0416 04:52:18.935051 2391 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 04:52:18.942291 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 16 04:52:18.944506 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 16 04:52:18.955934 kubelet[2391]: E0416 04:52:18.955862 2391 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 16 04:52:18.956111 kubelet[2391]: I0416 04:52:18.956101 2391 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 16 04:52:18.956140 kubelet[2391]: I0416 04:52:18.956110 2391 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 16 04:52:18.956774 kubelet[2391]: I0416 04:52:18.956266 2391 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 16 04:52:18.957716 kubelet[2391]: E0416 04:52:18.957658 2391 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 16 04:52:18.957716 kubelet[2391]: E0416 04:52:18.957712 2391 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:52:19.029299 kubelet[2391]: E0416 04:52:19.029145 2391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="400ms" Apr 16 04:52:19.047966 systemd[1]: Created slice kubepods-burstable-pode9ca41790ae21be9f4cbd451ade0acec.slice - libcontainer container kubepods-burstable-pode9ca41790ae21be9f4cbd451ade0acec.slice. 
Apr 16 04:52:19.059018 kubelet[2391]: I0416 04:52:19.058940 2391 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:52:19.059383 kubelet[2391]: E0416 04:52:19.059340 2391 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Apr 16 04:52:19.063229 kubelet[2391]: E0416 04:52:19.063192 2391 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 04:52:19.067367 systemd[1]: Created slice kubepods-burstable-pod33fee6ba1581201eda98a989140db110.slice - libcontainer container kubepods-burstable-pod33fee6ba1581201eda98a989140db110.slice. Apr 16 04:52:19.068597 kubelet[2391]: E0416 04:52:19.068574 2391 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 04:52:19.080654 systemd[1]: Created slice kubepods-burstable-podfd582743e02b90df93b255139a83955f.slice - libcontainer container kubepods-burstable-podfd582743e02b90df93b255139a83955f.slice. 
Apr 16 04:52:19.081986 kubelet[2391]: E0416 04:52:19.081957 2391 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 04:52:19.127068 kubelet[2391]: I0416 04:52:19.126874 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd582743e02b90df93b255139a83955f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fd582743e02b90df93b255139a83955f\") " pod="kube-system/kube-apiserver-localhost" Apr 16 04:52:19.127068 kubelet[2391]: I0416 04:52:19.126967 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd582743e02b90df93b255139a83955f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fd582743e02b90df93b255139a83955f\") " pod="kube-system/kube-apiserver-localhost" Apr 16 04:52:19.127068 kubelet[2391]: I0416 04:52:19.126985 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 04:52:19.127068 kubelet[2391]: I0416 04:52:19.126998 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 04:52:19.127068 kubelet[2391]: I0416 04:52:19.127009 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/fd582743e02b90df93b255139a83955f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fd582743e02b90df93b255139a83955f\") " pod="kube-system/kube-apiserver-localhost" Apr 16 04:52:19.127858 kubelet[2391]: I0416 04:52:19.127020 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 04:52:19.127858 kubelet[2391]: I0416 04:52:19.127031 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 04:52:19.127858 kubelet[2391]: I0416 04:52:19.127103 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 04:52:19.127858 kubelet[2391]: I0416 04:52:19.127117 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost" Apr 16 04:52:19.262657 kubelet[2391]: I0416 04:52:19.262508 2391 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:52:19.263006 kubelet[2391]: 
E0416 04:52:19.262974 2391 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Apr 16 04:52:19.367292 kubelet[2391]: E0416 04:52:19.366741 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:52:19.370236 kubelet[2391]: E0416 04:52:19.370125 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:52:19.370717 containerd[1618]: time="2026-04-16T04:52:19.370644022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,}" Apr 16 04:52:19.371094 containerd[1618]: time="2026-04-16T04:52:19.370795319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,}" Apr 16 04:52:19.383298 kubelet[2391]: E0416 04:52:19.383147 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:52:19.383528 containerd[1618]: time="2026-04-16T04:52:19.383504445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fd582743e02b90df93b255139a83955f,Namespace:kube-system,Attempt:0,}" Apr 16 04:52:19.433764 containerd[1618]: time="2026-04-16T04:52:19.433644725Z" level=info msg="connecting to shim 54728d73ce1941db2272d0610c07d2b37c401b8136ff5d4442f1382b0fe3a776" address="unix:///run/containerd/s/8d03fe8566f37abff49c2cd75a2b23a12bb43061161f29e6c7e8a0055a3e0bd1" namespace=k8s.io protocol=ttrpc version=3 Apr 16 04:52:19.435646 
kubelet[2391]: E0416 04:52:19.435528 2391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="800ms" Apr 16 04:52:19.437965 containerd[1618]: time="2026-04-16T04:52:19.437881044Z" level=info msg="connecting to shim d1cad58348c434defdb9f3135c01fb34c5393ac0b6fca5baf3e19aad15957959" address="unix:///run/containerd/s/283af1b396aef7ed210d58c3619814559d553a1b49385c59315b4cd4779a31e4" namespace=k8s.io protocol=ttrpc version=3 Apr 16 04:52:19.445217 containerd[1618]: time="2026-04-16T04:52:19.445178369Z" level=info msg="connecting to shim 19e6f58716eed4b96e130290877edae6d51e48339c546e9645a39f275fcd83a5" address="unix:///run/containerd/s/4ae80216cf07ded1b5f700899883231ea862d15bd657a276c8b678f7878de91d" namespace=k8s.io protocol=ttrpc version=3 Apr 16 04:52:19.503312 systemd[1]: Started cri-containerd-54728d73ce1941db2272d0610c07d2b37c401b8136ff5d4442f1382b0fe3a776.scope - libcontainer container 54728d73ce1941db2272d0610c07d2b37c401b8136ff5d4442f1382b0fe3a776. Apr 16 04:52:19.507387 systemd[1]: Started cri-containerd-19e6f58716eed4b96e130290877edae6d51e48339c546e9645a39f275fcd83a5.scope - libcontainer container 19e6f58716eed4b96e130290877edae6d51e48339c546e9645a39f275fcd83a5. Apr 16 04:52:19.508851 systemd[1]: Started cri-containerd-d1cad58348c434defdb9f3135c01fb34c5393ac0b6fca5baf3e19aad15957959.scope - libcontainer container d1cad58348c434defdb9f3135c01fb34c5393ac0b6fca5baf3e19aad15957959. 
Apr 16 04:52:19.581087 containerd[1618]: time="2026-04-16T04:52:19.580851511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,} returns sandbox id \"54728d73ce1941db2272d0610c07d2b37c401b8136ff5d4442f1382b0fe3a776\"" Apr 16 04:52:19.583072 kubelet[2391]: E0416 04:52:19.582649 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:52:19.589304 containerd[1618]: time="2026-04-16T04:52:19.589137440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,} returns sandbox id \"d1cad58348c434defdb9f3135c01fb34c5393ac0b6fca5baf3e19aad15957959\"" Apr 16 04:52:19.591690 kubelet[2391]: E0416 04:52:19.591393 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:52:19.592842 containerd[1618]: time="2026-04-16T04:52:19.592780164Z" level=info msg="CreateContainer within sandbox \"54728d73ce1941db2272d0610c07d2b37c401b8136ff5d4442f1382b0fe3a776\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 16 04:52:19.595794 containerd[1618]: time="2026-04-16T04:52:19.595468863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fd582743e02b90df93b255139a83955f,Namespace:kube-system,Attempt:0,} returns sandbox id \"19e6f58716eed4b96e130290877edae6d51e48339c546e9645a39f275fcd83a5\"" Apr 16 04:52:19.596234 kubelet[2391]: E0416 04:52:19.596200 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:52:19.597628 containerd[1618]: 
time="2026-04-16T04:52:19.597607155Z" level=info msg="CreateContainer within sandbox \"d1cad58348c434defdb9f3135c01fb34c5393ac0b6fca5baf3e19aad15957959\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 16 04:52:19.599785 containerd[1618]: time="2026-04-16T04:52:19.599738190Z" level=info msg="CreateContainer within sandbox \"19e6f58716eed4b96e130290877edae6d51e48339c546e9645a39f275fcd83a5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 16 04:52:19.608274 containerd[1618]: time="2026-04-16T04:52:19.608144993Z" level=info msg="Container 0f62e327aff837ac8fa39125339afc5daf070301d585c4b75049af1d3ea958a7: CDI devices from CRI Config.CDIDevices: []" Apr 16 04:52:19.612152 containerd[1618]: time="2026-04-16T04:52:19.611897746Z" level=info msg="Container 2c7c45c5104804bb98406304f40d7be0ef7ce9dd9fa4bb9fde2cf5665cdce7e9: CDI devices from CRI Config.CDIDevices: []" Apr 16 04:52:19.621815 containerd[1618]: time="2026-04-16T04:52:19.621534778Z" level=info msg="Container b796c44ad8c74ee0cd15a3eb0e486df0a8d5c7b6be755653b4134079b7d04d7f: CDI devices from CRI Config.CDIDevices: []" Apr 16 04:52:19.622675 containerd[1618]: time="2026-04-16T04:52:19.622626818Z" level=info msg="CreateContainer within sandbox \"54728d73ce1941db2272d0610c07d2b37c401b8136ff5d4442f1382b0fe3a776\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0f62e327aff837ac8fa39125339afc5daf070301d585c4b75049af1d3ea958a7\"" Apr 16 04:52:19.623320 containerd[1618]: time="2026-04-16T04:52:19.623295409Z" level=info msg="StartContainer for \"0f62e327aff837ac8fa39125339afc5daf070301d585c4b75049af1d3ea958a7\"" Apr 16 04:52:19.625478 containerd[1618]: time="2026-04-16T04:52:19.625354697Z" level=info msg="connecting to shim 0f62e327aff837ac8fa39125339afc5daf070301d585c4b75049af1d3ea958a7" address="unix:///run/containerd/s/8d03fe8566f37abff49c2cd75a2b23a12bb43061161f29e6c7e8a0055a3e0bd1" protocol=ttrpc version=3 Apr 16 04:52:19.626070 
containerd[1618]: time="2026-04-16T04:52:19.626022460Z" level=info msg="CreateContainer within sandbox \"d1cad58348c434defdb9f3135c01fb34c5393ac0b6fca5baf3e19aad15957959\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2c7c45c5104804bb98406304f40d7be0ef7ce9dd9fa4bb9fde2cf5665cdce7e9\"" Apr 16 04:52:19.626955 containerd[1618]: time="2026-04-16T04:52:19.626486649Z" level=info msg="StartContainer for \"2c7c45c5104804bb98406304f40d7be0ef7ce9dd9fa4bb9fde2cf5665cdce7e9\"" Apr 16 04:52:19.627318 containerd[1618]: time="2026-04-16T04:52:19.627291924Z" level=info msg="connecting to shim 2c7c45c5104804bb98406304f40d7be0ef7ce9dd9fa4bb9fde2cf5665cdce7e9" address="unix:///run/containerd/s/283af1b396aef7ed210d58c3619814559d553a1b49385c59315b4cd4779a31e4" protocol=ttrpc version=3 Apr 16 04:52:19.630440 containerd[1618]: time="2026-04-16T04:52:19.630392892Z" level=info msg="CreateContainer within sandbox \"19e6f58716eed4b96e130290877edae6d51e48339c546e9645a39f275fcd83a5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b796c44ad8c74ee0cd15a3eb0e486df0a8d5c7b6be755653b4134079b7d04d7f\"" Apr 16 04:52:19.630749 containerd[1618]: time="2026-04-16T04:52:19.630732362Z" level=info msg="StartContainer for \"b796c44ad8c74ee0cd15a3eb0e486df0a8d5c7b6be755653b4134079b7d04d7f\"" Apr 16 04:52:19.633223 containerd[1618]: time="2026-04-16T04:52:19.633071496Z" level=info msg="connecting to shim b796c44ad8c74ee0cd15a3eb0e486df0a8d5c7b6be755653b4134079b7d04d7f" address="unix:///run/containerd/s/4ae80216cf07ded1b5f700899883231ea862d15bd657a276c8b678f7878de91d" protocol=ttrpc version=3 Apr 16 04:52:19.656294 systemd[1]: Started cri-containerd-0f62e327aff837ac8fa39125339afc5daf070301d585c4b75049af1d3ea958a7.scope - libcontainer container 0f62e327aff837ac8fa39125339afc5daf070301d585c4b75049af1d3ea958a7. 
Apr 16 04:52:19.664937 kubelet[2391]: I0416 04:52:19.664758 2391 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:52:19.665371 kubelet[2391]: E0416 04:52:19.665352 2391 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Apr 16 04:52:19.666139 systemd[1]: Started cri-containerd-2c7c45c5104804bb98406304f40d7be0ef7ce9dd9fa4bb9fde2cf5665cdce7e9.scope - libcontainer container 2c7c45c5104804bb98406304f40d7be0ef7ce9dd9fa4bb9fde2cf5665cdce7e9. Apr 16 04:52:19.667466 systemd[1]: Started cri-containerd-b796c44ad8c74ee0cd15a3eb0e486df0a8d5c7b6be755653b4134079b7d04d7f.scope - libcontainer container b796c44ad8c74ee0cd15a3eb0e486df0a8d5c7b6be755653b4134079b7d04d7f. Apr 16 04:52:19.710823 containerd[1618]: time="2026-04-16T04:52:19.710671393Z" level=info msg="StartContainer for \"0f62e327aff837ac8fa39125339afc5daf070301d585c4b75049af1d3ea958a7\" returns successfully" Apr 16 04:52:19.721507 containerd[1618]: time="2026-04-16T04:52:19.721340236Z" level=info msg="StartContainer for \"b796c44ad8c74ee0cd15a3eb0e486df0a8d5c7b6be755653b4134079b7d04d7f\" returns successfully" Apr 16 04:52:19.734495 containerd[1618]: time="2026-04-16T04:52:19.734405152Z" level=info msg="StartContainer for \"2c7c45c5104804bb98406304f40d7be0ef7ce9dd9fa4bb9fde2cf5665cdce7e9\" returns successfully" Apr 16 04:52:19.948879 kubelet[2391]: E0416 04:52:19.947860 2391 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 04:52:19.950210 kubelet[2391]: E0416 04:52:19.950195 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:52:19.951201 kubelet[2391]: E0416 04:52:19.950444 2391 kubelet.go:3305] "No need to 
create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 04:52:19.951201 kubelet[2391]: E0416 04:52:19.951158 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:52:19.952779 kubelet[2391]: E0416 04:52:19.952765 2391 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 04:52:19.953000 kubelet[2391]: E0416 04:52:19.952967 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:52:20.468179 kubelet[2391]: I0416 04:52:20.468057 2391 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:52:20.783264 kubelet[2391]: E0416 04:52:20.783237 2391 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 16 04:52:20.814133 kubelet[2391]: I0416 04:52:20.813993 2391 apiserver.go:52] "Watching apiserver" Apr 16 04:52:20.825708 kubelet[2391]: I0416 04:52:20.824740 2391 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 16 04:52:20.878704 kubelet[2391]: I0416 04:52:20.878053 2391 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 16 04:52:20.878704 kubelet[2391]: E0416 04:52:20.878389 2391 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 16 04:52:20.930158 kubelet[2391]: I0416 04:52:20.928534 2391 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 16 04:52:20.948623 kubelet[2391]: E0416 04:52:20.948354 2391 
kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Apr 16 04:52:20.948623 kubelet[2391]: I0416 04:52:20.948408 2391 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 16 04:52:20.950294 kubelet[2391]: E0416 04:52:20.950228 2391 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 16 04:52:20.950294 kubelet[2391]: I0416 04:52:20.950279 2391 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 16 04:52:20.983772 kubelet[2391]: E0416 04:52:20.983594 2391 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Apr 16 04:52:20.984256 kubelet[2391]: I0416 04:52:20.984236 2391 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 16 04:52:20.984320 kubelet[2391]: I0416 04:52:20.984297 2391 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 16 04:52:20.988807 kubelet[2391]: E0416 04:52:20.988654 2391 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 16 04:52:20.989116 kubelet[2391]: E0416 04:52:20.988927 2391 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Apr 16 04:52:20.989116 kubelet[2391]: E0416 
04:52:20.989073 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:52:20.989392 kubelet[2391]: E0416 04:52:20.989135 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:52:22.002471 kubelet[2391]: I0416 04:52:22.001721 2391 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 16 04:52:22.007512 kubelet[2391]: E0416 04:52:22.007464 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:52:22.991728 kubelet[2391]: E0416 04:52:22.991424 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:52:23.248203 systemd[1]: Reload requested from client PID 2676 ('systemctl') (unit session-7.scope)... Apr 16 04:52:23.248382 systemd[1]: Reloading... Apr 16 04:52:23.328971 zram_generator::config[2716]: No configuration found. Apr 16 04:52:23.665883 systemd[1]: Reloading finished in 417 ms. Apr 16 04:52:23.713689 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 04:52:23.739903 systemd[1]: kubelet.service: Deactivated successfully. Apr 16 04:52:23.740464 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 04:52:23.740529 systemd[1]: kubelet.service: Consumed 1.585s CPU time, 133.4M memory peak. Apr 16 04:52:23.745120 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 04:52:23.975464 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 16 04:52:23.995779 (kubelet)[2764]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 16 04:52:24.081472 kubelet[2764]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 16 04:52:24.081472 kubelet[2764]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 16 04:52:24.081472 kubelet[2764]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 16 04:52:24.082373 kubelet[2764]: I0416 04:52:24.081780 2764 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 16 04:52:24.097224 kubelet[2764]: I0416 04:52:24.096889 2764 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 16 04:52:24.097224 kubelet[2764]: I0416 04:52:24.097039 2764 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 16 04:52:24.097224 kubelet[2764]: I0416 04:52:24.097228 2764 server.go:956] "Client rotation is on, will bootstrap in background" Apr 16 04:52:24.098702 kubelet[2764]: I0416 04:52:24.098633 2764 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 16 04:52:24.106302 kubelet[2764]: I0416 04:52:24.106049 2764 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 16 04:52:24.114764 kubelet[2764]: I0416 04:52:24.114532 2764 server.go:1446] "Using cgroup driver setting received from the CRI runtime" 
cgroupDriver="systemd" Apr 16 04:52:24.120522 kubelet[2764]: I0416 04:52:24.120388 2764 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 16 04:52:24.120758 kubelet[2764]: I0416 04:52:24.120642 2764 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 16 04:52:24.120797 kubelet[2764]: I0416 04:52:24.120671 2764 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVers
ion":2} Apr 16 04:52:24.121005 kubelet[2764]: I0416 04:52:24.120802 2764 topology_manager.go:138] "Creating topology manager with none policy" Apr 16 04:52:24.121005 kubelet[2764]: I0416 04:52:24.120810 2764 container_manager_linux.go:303] "Creating device plugin manager" Apr 16 04:52:24.121005 kubelet[2764]: I0416 04:52:24.120847 2764 state_mem.go:36] "Initialized new in-memory state store" Apr 16 04:52:24.121200 kubelet[2764]: I0416 04:52:24.121165 2764 kubelet.go:480] "Attempting to sync node with API server" Apr 16 04:52:24.121223 kubelet[2764]: I0416 04:52:24.121213 2764 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 16 04:52:24.121266 kubelet[2764]: I0416 04:52:24.121252 2764 kubelet.go:386] "Adding apiserver pod source" Apr 16 04:52:24.121283 kubelet[2764]: I0416 04:52:24.121268 2764 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 16 04:52:24.126039 kubelet[2764]: I0416 04:52:24.125290 2764 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 16 04:52:24.126039 kubelet[2764]: I0416 04:52:24.125721 2764 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 16 04:52:24.129930 kubelet[2764]: I0416 04:52:24.129859 2764 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 16 04:52:24.130002 kubelet[2764]: I0416 04:52:24.129942 2764 server.go:1289] "Started kubelet" Apr 16 04:52:24.130473 kubelet[2764]: I0416 04:52:24.130445 2764 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 16 04:52:24.130845 kubelet[2764]: I0416 04:52:24.130804 2764 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 16 04:52:24.136846 kubelet[2764]: I0416 04:52:24.134813 2764 server.go:317] "Adding debug handlers to kubelet server" Apr 16 04:52:24.136846 kubelet[2764]: I0416 
04:52:24.139552 2764 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 16 04:52:24.136846 kubelet[2764]: I0416 04:52:24.140342 2764 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 16 04:52:24.149201 kubelet[2764]: I0416 04:52:24.149047 2764 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 16 04:52:24.149871 kubelet[2764]: I0416 04:52:24.149838 2764 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 16 04:52:24.156259 kubelet[2764]: I0416 04:52:24.155711 2764 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 16 04:52:24.156259 kubelet[2764]: I0416 04:52:24.156006 2764 reconciler.go:26] "Reconciler: start to sync state" Apr 16 04:52:24.159656 kubelet[2764]: I0416 04:52:24.158601 2764 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 16 04:52:24.161120 kubelet[2764]: E0416 04:52:24.161073 2764 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 16 04:52:24.166272 kubelet[2764]: I0416 04:52:24.166169 2764 factory.go:223] Registration of the containerd container factory successfully Apr 16 04:52:24.166272 kubelet[2764]: I0416 04:52:24.166194 2764 factory.go:223] Registration of the systemd container factory successfully Apr 16 04:52:24.180819 kubelet[2764]: I0416 04:52:24.180705 2764 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 16 04:52:24.181831 kubelet[2764]: I0416 04:52:24.181741 2764 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Apr 16 04:52:24.181831 kubelet[2764]: I0416 04:52:24.181759 2764 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 16 04:52:24.181831 kubelet[2764]: I0416 04:52:24.181775 2764 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 16 04:52:24.181831 kubelet[2764]: I0416 04:52:24.181780 2764 kubelet.go:2436] "Starting kubelet main sync loop" Apr 16 04:52:24.181831 kubelet[2764]: E0416 04:52:24.181813 2764 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 16 04:52:24.208110 kubelet[2764]: I0416 04:52:24.207961 2764 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 16 04:52:24.208110 kubelet[2764]: I0416 04:52:24.207981 2764 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 16 04:52:24.208110 kubelet[2764]: I0416 04:52:24.208001 2764 state_mem.go:36] "Initialized new in-memory state store" Apr 16 04:52:24.208420 kubelet[2764]: I0416 04:52:24.208179 2764 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 16 04:52:24.208420 kubelet[2764]: I0416 04:52:24.208186 2764 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 16 04:52:24.208420 kubelet[2764]: I0416 04:52:24.208201 2764 policy_none.go:49] "None policy: Start" Apr 16 04:52:24.208420 kubelet[2764]: I0416 04:52:24.208208 2764 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 16 04:52:24.208420 kubelet[2764]: I0416 04:52:24.208214 2764 state_mem.go:35] "Initializing new in-memory state store" Apr 16 04:52:24.208420 kubelet[2764]: I0416 04:52:24.208277 2764 state_mem.go:75] "Updated machine memory state" Apr 16 04:52:24.215047 kubelet[2764]: E0416 04:52:24.214397 2764 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 16 04:52:24.215047 kubelet[2764]: I0416 
04:52:24.214673 2764 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 16 04:52:24.215047 kubelet[2764]: I0416 04:52:24.214683 2764 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 16 04:52:24.215047 kubelet[2764]: I0416 04:52:24.214863 2764 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 16 04:52:24.215822 kubelet[2764]: E0416 04:52:24.215763 2764 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 16 04:52:24.283598 kubelet[2764]: I0416 04:52:24.283375 2764 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 16 04:52:24.283598 kubelet[2764]: I0416 04:52:24.283403 2764 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 16 04:52:24.283598 kubelet[2764]: I0416 04:52:24.283380 2764 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 16 04:52:24.297937 kubelet[2764]: E0416 04:52:24.297706 2764 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 16 04:52:24.328013 kubelet[2764]: I0416 04:52:24.327844 2764 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:52:24.338426 kubelet[2764]: I0416 04:52:24.338226 2764 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Apr 16 04:52:24.338781 kubelet[2764]: I0416 04:52:24.338619 2764 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 16 04:52:24.356618 kubelet[2764]: I0416 04:52:24.356438 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/fd582743e02b90df93b255139a83955f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fd582743e02b90df93b255139a83955f\") " pod="kube-system/kube-apiserver-localhost" Apr 16 04:52:24.356618 kubelet[2764]: I0416 04:52:24.356522 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 04:52:24.356618 kubelet[2764]: I0416 04:52:24.356537 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 04:52:24.356618 kubelet[2764]: I0416 04:52:24.356548 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 04:52:24.356618 kubelet[2764]: I0416 04:52:24.356589 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 04:52:24.357295 kubelet[2764]: I0416 04:52:24.356603 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/fd582743e02b90df93b255139a83955f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fd582743e02b90df93b255139a83955f\") " pod="kube-system/kube-apiserver-localhost" Apr 16 04:52:24.357295 kubelet[2764]: I0416 04:52:24.356614 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd582743e02b90df93b255139a83955f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fd582743e02b90df93b255139a83955f\") " pod="kube-system/kube-apiserver-localhost" Apr 16 04:52:24.357295 kubelet[2764]: I0416 04:52:24.356627 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 04:52:24.357295 kubelet[2764]: I0416 04:52:24.356639 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost" Apr 16 04:52:24.598707 kubelet[2764]: E0416 04:52:24.597717 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:52:24.599588 kubelet[2764]: E0416 04:52:24.598743 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:52:24.599588 kubelet[2764]: E0416 04:52:24.599394 2764 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:52:25.124937 kubelet[2764]: I0416 04:52:25.124610 2764 apiserver.go:52] "Watching apiserver" Apr 16 04:52:25.156581 kubelet[2764]: I0416 04:52:25.156307 2764 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 16 04:52:25.198096 kubelet[2764]: I0416 04:52:25.197938 2764 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 16 04:52:25.198096 kubelet[2764]: I0416 04:52:25.198108 2764 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 16 04:52:25.198489 kubelet[2764]: E0416 04:52:25.198436 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:52:25.206491 kubelet[2764]: E0416 04:52:25.206287 2764 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 16 04:52:25.206736 kubelet[2764]: E0416 04:52:25.206621 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:52:25.207548 kubelet[2764]: E0416 04:52:25.207489 2764 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 16 04:52:25.207770 kubelet[2764]: E0416 04:52:25.207615 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:52:25.226450 kubelet[2764]: I0416 04:52:25.226257 2764 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.22624589 podStartE2EDuration="3.22624589s" podCreationTimestamp="2026-04-16 04:52:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 04:52:25.226121023 +0000 UTC m=+1.217855431" watchObservedRunningTime="2026-04-16 04:52:25.22624589 +0000 UTC m=+1.217980303" Apr 16 04:52:25.235275 kubelet[2764]: I0416 04:52:25.235112 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.235095867 podStartE2EDuration="1.235095867s" podCreationTimestamp="2026-04-16 04:52:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 04:52:25.235053079 +0000 UTC m=+1.226787487" watchObservedRunningTime="2026-04-16 04:52:25.235095867 +0000 UTC m=+1.226830282" Apr 16 04:52:25.250083 kubelet[2764]: I0416 04:52:25.249882 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.249864056 podStartE2EDuration="1.249864056s" podCreationTimestamp="2026-04-16 04:52:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 04:52:25.249863244 +0000 UTC m=+1.241597663" watchObservedRunningTime="2026-04-16 04:52:25.249864056 +0000 UTC m=+1.241598468" Apr 16 04:52:26.201746 kubelet[2764]: E0416 04:52:26.201540 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:52:26.201746 kubelet[2764]: E0416 04:52:26.201551 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:52:26.201746 kubelet[2764]: E0416 04:52:26.201739 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:52:27.203581 kubelet[2764]: E0416 04:52:27.203415 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:52:28.206044 kubelet[2764]: E0416 04:52:28.205888 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:52:29.004038 kubelet[2764]: E0416 04:52:29.003862 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:52:29.210142 kubelet[2764]: E0416 04:52:29.209967 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:52:29.295697 kubelet[2764]: I0416 04:52:29.295520 2764 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 16 04:52:29.296232 kubelet[2764]: I0416 04:52:29.296073 2764 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 16 04:52:29.296265 containerd[1618]: time="2026-04-16T04:52:29.295875359Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 16 04:52:29.920394 kubelet[2764]: I0416 04:52:29.919319 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2d224c56-d355-4852-818f-08278c8c73de-kube-proxy\") pod \"kube-proxy-gvvx2\" (UID: \"2d224c56-d355-4852-818f-08278c8c73de\") " pod="kube-system/kube-proxy-gvvx2" Apr 16 04:52:29.921843 systemd[1]: Created slice kubepods-besteffort-pod2d224c56_d355_4852_818f_08278c8c73de.slice - libcontainer container kubepods-besteffort-pod2d224c56_d355_4852_818f_08278c8c73de.slice. Apr 16 04:52:30.020761 kubelet[2764]: I0416 04:52:30.020660 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2d224c56-d355-4852-818f-08278c8c73de-xtables-lock\") pod \"kube-proxy-gvvx2\" (UID: \"2d224c56-d355-4852-818f-08278c8c73de\") " pod="kube-system/kube-proxy-gvvx2" Apr 16 04:52:30.020761 kubelet[2764]: I0416 04:52:30.020696 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d224c56-d355-4852-818f-08278c8c73de-lib-modules\") pod \"kube-proxy-gvvx2\" (UID: \"2d224c56-d355-4852-818f-08278c8c73de\") " pod="kube-system/kube-proxy-gvvx2" Apr 16 04:52:30.020761 kubelet[2764]: I0416 04:52:30.020712 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4dgw\" (UniqueName: \"kubernetes.io/projected/2d224c56-d355-4852-818f-08278c8c73de-kube-api-access-k4dgw\") pod \"kube-proxy-gvvx2\" (UID: \"2d224c56-d355-4852-818f-08278c8c73de\") " pod="kube-system/kube-proxy-gvvx2" Apr 16 04:52:30.126016 kubelet[2764]: E0416 04:52:30.125893 2764 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Apr 16 04:52:30.126016 kubelet[2764]: E0416 04:52:30.125961 2764 projected.go:194] Error 
preparing data for projected volume kube-api-access-k4dgw for pod kube-system/kube-proxy-gvvx2: configmap "kube-root-ca.crt" not found Apr 16 04:52:30.126016 kubelet[2764]: E0416 04:52:30.126012 2764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2d224c56-d355-4852-818f-08278c8c73de-kube-api-access-k4dgw podName:2d224c56-d355-4852-818f-08278c8c73de nodeName:}" failed. No retries permitted until 2026-04-16 04:52:30.625997625 +0000 UTC m=+6.617732029 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-k4dgw" (UniqueName: "kubernetes.io/projected/2d224c56-d355-4852-818f-08278c8c73de-kube-api-access-k4dgw") pod "kube-proxy-gvvx2" (UID: "2d224c56-d355-4852-818f-08278c8c73de") : configmap "kube-root-ca.crt" not found Apr 16 04:52:30.423637 systemd[1]: Created slice kubepods-besteffort-pod57fcdf53_f24b_4889_a764_81e296942440.slice - libcontainer container kubepods-besteffort-pod57fcdf53_f24b_4889_a764_81e296942440.slice. Apr 16 04:52:30.524660 kubelet[2764]: I0416 04:52:30.524363 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgwmj\" (UniqueName: \"kubernetes.io/projected/57fcdf53-f24b-4889-a764-81e296942440-kube-api-access-tgwmj\") pod \"tigera-operator-6bf85f8dd-29pn9\" (UID: \"57fcdf53-f24b-4889-a764-81e296942440\") " pod="tigera-operator/tigera-operator-6bf85f8dd-29pn9" Apr 16 04:52:30.524660 kubelet[2764]: I0416 04:52:30.524535 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/57fcdf53-f24b-4889-a764-81e296942440-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-29pn9\" (UID: \"57fcdf53-f24b-4889-a764-81e296942440\") " pod="tigera-operator/tigera-operator-6bf85f8dd-29pn9" Apr 16 04:52:30.727703 containerd[1618]: time="2026-04-16T04:52:30.727460222Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-29pn9,Uid:57fcdf53-f24b-4889-a764-81e296942440,Namespace:tigera-operator,Attempt:0,}" Apr 16 04:52:30.750317 containerd[1618]: time="2026-04-16T04:52:30.750252974Z" level=info msg="connecting to shim 648985897e0c9917b07303371c6800a0366a09032bc0dea7691100e2eca68ba5" address="unix:///run/containerd/s/bc8c53f4127540783b026cbd2ffe0adc0093832dad06e6b0c9da3b5eefbe9715" namespace=k8s.io protocol=ttrpc version=3 Apr 16 04:52:30.778099 systemd[1]: Started cri-containerd-648985897e0c9917b07303371c6800a0366a09032bc0dea7691100e2eca68ba5.scope - libcontainer container 648985897e0c9917b07303371c6800a0366a09032bc0dea7691100e2eca68ba5. Apr 16 04:52:30.821213 containerd[1618]: time="2026-04-16T04:52:30.821077408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-29pn9,Uid:57fcdf53-f24b-4889-a764-81e296942440,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"648985897e0c9917b07303371c6800a0366a09032bc0dea7691100e2eca68ba5\"" Apr 16 04:52:30.822591 containerd[1618]: time="2026-04-16T04:52:30.822542886Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 16 04:52:30.837649 kubelet[2764]: E0416 04:52:30.837603 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:52:30.838364 containerd[1618]: time="2026-04-16T04:52:30.838335970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gvvx2,Uid:2d224c56-d355-4852-818f-08278c8c73de,Namespace:kube-system,Attempt:0,}" Apr 16 04:52:30.854753 containerd[1618]: time="2026-04-16T04:52:30.854720751Z" level=info msg="connecting to shim 3e64ccc7acd79acfd1d41980350373e2906e5fe371233e626c11a0a9c51ce100" address="unix:///run/containerd/s/d54806492a858146a7d4e5860cc53463543959eeaf4aea3bfd3080e51ea70f39" namespace=k8s.io protocol=ttrpc version=3 Apr 16 04:52:30.880088 systemd[1]: Started 
cri-containerd-3e64ccc7acd79acfd1d41980350373e2906e5fe371233e626c11a0a9c51ce100.scope - libcontainer container 3e64ccc7acd79acfd1d41980350373e2906e5fe371233e626c11a0a9c51ce100. Apr 16 04:52:30.902502 containerd[1618]: time="2026-04-16T04:52:30.902393306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gvvx2,Uid:2d224c56-d355-4852-818f-08278c8c73de,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e64ccc7acd79acfd1d41980350373e2906e5fe371233e626c11a0a9c51ce100\"" Apr 16 04:52:30.903214 kubelet[2764]: E0416 04:52:30.903191 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:52:30.908100 containerd[1618]: time="2026-04-16T04:52:30.907564347Z" level=info msg="CreateContainer within sandbox \"3e64ccc7acd79acfd1d41980350373e2906e5fe371233e626c11a0a9c51ce100\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 16 04:52:30.924402 containerd[1618]: time="2026-04-16T04:52:30.924227576Z" level=info msg="Container a4361b3baeeb830dbc3b1542990f5f73b19bcad9b4f034999a574d06c7f441da: CDI devices from CRI Config.CDIDevices: []" Apr 16 04:52:30.931135 containerd[1618]: time="2026-04-16T04:52:30.931086151Z" level=info msg="CreateContainer within sandbox \"3e64ccc7acd79acfd1d41980350373e2906e5fe371233e626c11a0a9c51ce100\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a4361b3baeeb830dbc3b1542990f5f73b19bcad9b4f034999a574d06c7f441da\"" Apr 16 04:52:30.931670 containerd[1618]: time="2026-04-16T04:52:30.931642804Z" level=info msg="StartContainer for \"a4361b3baeeb830dbc3b1542990f5f73b19bcad9b4f034999a574d06c7f441da\"" Apr 16 04:52:30.932675 containerd[1618]: time="2026-04-16T04:52:30.932621508Z" level=info msg="connecting to shim a4361b3baeeb830dbc3b1542990f5f73b19bcad9b4f034999a574d06c7f441da" 
address="unix:///run/containerd/s/d54806492a858146a7d4e5860cc53463543959eeaf4aea3bfd3080e51ea70f39" protocol=ttrpc version=3 Apr 16 04:52:30.953096 systemd[1]: Started cri-containerd-a4361b3baeeb830dbc3b1542990f5f73b19bcad9b4f034999a574d06c7f441da.scope - libcontainer container a4361b3baeeb830dbc3b1542990f5f73b19bcad9b4f034999a574d06c7f441da. Apr 16 04:52:31.018685 containerd[1618]: time="2026-04-16T04:52:31.018653113Z" level=info msg="StartContainer for \"a4361b3baeeb830dbc3b1542990f5f73b19bcad9b4f034999a574d06c7f441da\" returns successfully" Apr 16 04:52:31.214093 kubelet[2764]: E0416 04:52:31.214068 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:52:32.069048 kubelet[2764]: E0416 04:52:32.068863 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:52:32.081482 kubelet[2764]: I0416 04:52:32.081298 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gvvx2" podStartSLOduration=3.081282123 podStartE2EDuration="3.081282123s" podCreationTimestamp="2026-04-16 04:52:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 04:52:31.22427307 +0000 UTC m=+7.216007476" watchObservedRunningTime="2026-04-16 04:52:32.081282123 +0000 UTC m=+8.073016527" Apr 16 04:52:32.229961 kubelet[2764]: E0416 04:52:32.229416 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:52:32.339391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1380843741.mount: Deactivated successfully. 
Apr 16 04:52:32.962445 containerd[1618]: time="2026-04-16T04:52:32.962200927Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:52:32.963143 containerd[1618]: time="2026-04-16T04:52:32.962815360Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 16 04:52:32.963857 containerd[1618]: time="2026-04-16T04:52:32.963778103Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:52:32.965562 containerd[1618]: time="2026-04-16T04:52:32.965514399Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:52:32.966036 containerd[1618]: time="2026-04-16T04:52:32.966014757Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.14342236s" Apr 16 04:52:32.966104 containerd[1618]: time="2026-04-16T04:52:32.966041958Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 16 04:52:32.969673 containerd[1618]: time="2026-04-16T04:52:32.969649471Z" level=info msg="CreateContainer within sandbox \"648985897e0c9917b07303371c6800a0366a09032bc0dea7691100e2eca68ba5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 16 04:52:32.978145 containerd[1618]: time="2026-04-16T04:52:32.978112400Z" level=info msg="Container 
ae65a16baac5812dd18d21dd727ca2ed16518cd72ec65a532bd8c9e1239db493: CDI devices from CRI Config.CDIDevices: []" Apr 16 04:52:32.984201 containerd[1618]: time="2026-04-16T04:52:32.984156359Z" level=info msg="CreateContainer within sandbox \"648985897e0c9917b07303371c6800a0366a09032bc0dea7691100e2eca68ba5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ae65a16baac5812dd18d21dd727ca2ed16518cd72ec65a532bd8c9e1239db493\"" Apr 16 04:52:32.984611 containerd[1618]: time="2026-04-16T04:52:32.984588842Z" level=info msg="StartContainer for \"ae65a16baac5812dd18d21dd727ca2ed16518cd72ec65a532bd8c9e1239db493\"" Apr 16 04:52:32.985224 containerd[1618]: time="2026-04-16T04:52:32.985201945Z" level=info msg="connecting to shim ae65a16baac5812dd18d21dd727ca2ed16518cd72ec65a532bd8c9e1239db493" address="unix:///run/containerd/s/bc8c53f4127540783b026cbd2ffe0adc0093832dad06e6b0c9da3b5eefbe9715" protocol=ttrpc version=3 Apr 16 04:52:33.023169 systemd[1]: Started cri-containerd-ae65a16baac5812dd18d21dd727ca2ed16518cd72ec65a532bd8c9e1239db493.scope - libcontainer container ae65a16baac5812dd18d21dd727ca2ed16518cd72ec65a532bd8c9e1239db493. 
Apr 16 04:52:33.057862 containerd[1618]: time="2026-04-16T04:52:33.057828147Z" level=info msg="StartContainer for \"ae65a16baac5812dd18d21dd727ca2ed16518cd72ec65a532bd8c9e1239db493\" returns successfully" Apr 16 04:52:33.231379 kubelet[2764]: E0416 04:52:33.230899 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:52:38.116126 kubelet[2764]: E0416 04:52:38.115249 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:52:38.131943 kubelet[2764]: I0416 04:52:38.131803 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-29pn9" podStartSLOduration=5.987310571 podStartE2EDuration="8.131791706s" podCreationTimestamp="2026-04-16 04:52:30 +0000 UTC" firstStartedPulling="2026-04-16 04:52:30.822272794 +0000 UTC m=+6.814007202" lastFinishedPulling="2026-04-16 04:52:32.966753927 +0000 UTC m=+8.958488337" observedRunningTime="2026-04-16 04:52:33.243748329 +0000 UTC m=+9.235482740" watchObservedRunningTime="2026-04-16 04:52:38.131791706 +0000 UTC m=+14.123526178" Apr 16 04:52:38.242348 kubelet[2764]: E0416 04:52:38.242283 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:52:38.270164 sudo[1819]: pam_unix(sudo:session): session closed for user root Apr 16 04:52:38.273316 sshd[1818]: Connection closed by 10.0.0.1 port 35820 Apr 16 04:52:38.273783 sshd-session[1815]: pam_unix(sshd:session): session closed for user core Apr 16 04:52:38.278217 systemd[1]: sshd@6-10.0.0.15:22-10.0.0.1:35820.service: Deactivated successfully. Apr 16 04:52:38.282603 systemd[1]: session-7.scope: Deactivated successfully. 
Apr 16 04:52:38.283368 systemd[1]: session-7.scope: Consumed 6.424s CPU time, 230M memory peak. Apr 16 04:52:38.286881 systemd-logind[1586]: Session 7 logged out. Waiting for processes to exit. Apr 16 04:52:38.288660 systemd-logind[1586]: Removed session 7. Apr 16 04:52:40.372835 systemd[1]: Created slice kubepods-besteffort-pod7d29b6df_3cb3_4afe_a650_71b74b142df1.slice - libcontainer container kubepods-besteffort-pod7d29b6df_3cb3_4afe_a650_71b74b142df1.slice. Apr 16 04:52:40.456159 systemd[1]: Created slice kubepods-besteffort-podba11ac1e_d038_41b4_b00d_97dcddabdb0e.slice - libcontainer container kubepods-besteffort-podba11ac1e_d038_41b4_b00d_97dcddabdb0e.slice. Apr 16 04:52:40.500840 kubelet[2764]: I0416 04:52:40.500494 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7d29b6df-3cb3-4afe-a650-71b74b142df1-typha-certs\") pod \"calico-typha-5589bb8dc8-q74pq\" (UID: \"7d29b6df-3cb3-4afe-a650-71b74b142df1\") " pod="calico-system/calico-typha-5589bb8dc8-q74pq" Apr 16 04:52:40.500840 kubelet[2764]: I0416 04:52:40.500667 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d29b6df-3cb3-4afe-a650-71b74b142df1-tigera-ca-bundle\") pod \"calico-typha-5589bb8dc8-q74pq\" (UID: \"7d29b6df-3cb3-4afe-a650-71b74b142df1\") " pod="calico-system/calico-typha-5589bb8dc8-q74pq" Apr 16 04:52:40.500840 kubelet[2764]: I0416 04:52:40.500697 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc46z\" (UniqueName: \"kubernetes.io/projected/7d29b6df-3cb3-4afe-a650-71b74b142df1-kube-api-access-sc46z\") pod \"calico-typha-5589bb8dc8-q74pq\" (UID: \"7d29b6df-3cb3-4afe-a650-71b74b142df1\") " pod="calico-system/calico-typha-5589bb8dc8-q74pq" Apr 16 04:52:40.599960 kubelet[2764]: E0416 04:52:40.599673 2764 pod_workers.go:1301] 
"Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6d28c" podUID="81d6e248-dbc3-4b14-bcf7-e79ac5dd77dd" Apr 16 04:52:40.604794 kubelet[2764]: I0416 04:52:40.604655 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ba11ac1e-d038-41b4-b00d-97dcddabdb0e-cni-net-dir\") pod \"calico-node-lrppt\" (UID: \"ba11ac1e-d038-41b4-b00d-97dcddabdb0e\") " pod="calico-system/calico-node-lrppt" Apr 16 04:52:40.604794 kubelet[2764]: I0416 04:52:40.604715 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ba11ac1e-d038-41b4-b00d-97dcddabdb0e-flexvol-driver-host\") pod \"calico-node-lrppt\" (UID: \"ba11ac1e-d038-41b4-b00d-97dcddabdb0e\") " pod="calico-system/calico-node-lrppt" Apr 16 04:52:40.604794 kubelet[2764]: I0416 04:52:40.604736 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ba11ac1e-d038-41b4-b00d-97dcddabdb0e-node-certs\") pod \"calico-node-lrppt\" (UID: \"ba11ac1e-d038-41b4-b00d-97dcddabdb0e\") " pod="calico-system/calico-node-lrppt" Apr 16 04:52:40.604794 kubelet[2764]: I0416 04:52:40.604753 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/81d6e248-dbc3-4b14-bcf7-e79ac5dd77dd-kubelet-dir\") pod \"csi-node-driver-6d28c\" (UID: \"81d6e248-dbc3-4b14-bcf7-e79ac5dd77dd\") " pod="calico-system/csi-node-driver-6d28c" Apr 16 04:52:40.604794 kubelet[2764]: I0416 04:52:40.604769 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"socket-dir\" (UniqueName: \"kubernetes.io/host-path/81d6e248-dbc3-4b14-bcf7-e79ac5dd77dd-socket-dir\") pod \"csi-node-driver-6d28c\" (UID: \"81d6e248-dbc3-4b14-bcf7-e79ac5dd77dd\") " pod="calico-system/csi-node-driver-6d28c" Apr 16 04:52:40.605250 kubelet[2764]: I0416 04:52:40.604785 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/ba11ac1e-d038-41b4-b00d-97dcddabdb0e-sys-fs\") pod \"calico-node-lrppt\" (UID: \"ba11ac1e-d038-41b4-b00d-97dcddabdb0e\") " pod="calico-system/calico-node-lrppt" Apr 16 04:52:40.605250 kubelet[2764]: I0416 04:52:40.604801 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmmqp\" (UniqueName: \"kubernetes.io/projected/ba11ac1e-d038-41b4-b00d-97dcddabdb0e-kube-api-access-cmmqp\") pod \"calico-node-lrppt\" (UID: \"ba11ac1e-d038-41b4-b00d-97dcddabdb0e\") " pod="calico-system/calico-node-lrppt" Apr 16 04:52:40.605250 kubelet[2764]: I0416 04:52:40.604816 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/81d6e248-dbc3-4b14-bcf7-e79ac5dd77dd-registration-dir\") pod \"csi-node-driver-6d28c\" (UID: \"81d6e248-dbc3-4b14-bcf7-e79ac5dd77dd\") " pod="calico-system/csi-node-driver-6d28c" Apr 16 04:52:40.605250 kubelet[2764]: I0416 04:52:40.604846 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ba11ac1e-d038-41b4-b00d-97dcddabdb0e-cni-bin-dir\") pod \"calico-node-lrppt\" (UID: \"ba11ac1e-d038-41b4-b00d-97dcddabdb0e\") " pod="calico-system/calico-node-lrppt" Apr 16 04:52:40.605250 kubelet[2764]: I0416 04:52:40.604863 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/ba11ac1e-d038-41b4-b00d-97dcddabdb0e-lib-modules\") pod \"calico-node-lrppt\" (UID: \"ba11ac1e-d038-41b4-b00d-97dcddabdb0e\") " pod="calico-system/calico-node-lrppt" Apr 16 04:52:40.605421 kubelet[2764]: I0416 04:52:40.604880 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ba11ac1e-d038-41b4-b00d-97dcddabdb0e-var-run-calico\") pod \"calico-node-lrppt\" (UID: \"ba11ac1e-d038-41b4-b00d-97dcddabdb0e\") " pod="calico-system/calico-node-lrppt" Apr 16 04:52:40.605421 kubelet[2764]: I0416 04:52:40.604904 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/ba11ac1e-d038-41b4-b00d-97dcddabdb0e-bpffs\") pod \"calico-node-lrppt\" (UID: \"ba11ac1e-d038-41b4-b00d-97dcddabdb0e\") " pod="calico-system/calico-node-lrppt" Apr 16 04:52:40.605421 kubelet[2764]: I0416 04:52:40.604953 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ba11ac1e-d038-41b4-b00d-97dcddabdb0e-var-lib-calico\") pod \"calico-node-lrppt\" (UID: \"ba11ac1e-d038-41b4-b00d-97dcddabdb0e\") " pod="calico-system/calico-node-lrppt" Apr 16 04:52:40.605421 kubelet[2764]: I0416 04:52:40.604971 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ba11ac1e-d038-41b4-b00d-97dcddabdb0e-xtables-lock\") pod \"calico-node-lrppt\" (UID: \"ba11ac1e-d038-41b4-b00d-97dcddabdb0e\") " pod="calico-system/calico-node-lrppt" Apr 16 04:52:40.605421 kubelet[2764]: I0416 04:52:40.604988 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/81d6e248-dbc3-4b14-bcf7-e79ac5dd77dd-varrun\") pod 
\"csi-node-driver-6d28c\" (UID: \"81d6e248-dbc3-4b14-bcf7-e79ac5dd77dd\") " pod="calico-system/csi-node-driver-6d28c" Apr 16 04:52:40.605556 kubelet[2764]: I0416 04:52:40.605021 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/ba11ac1e-d038-41b4-b00d-97dcddabdb0e-nodeproc\") pod \"calico-node-lrppt\" (UID: \"ba11ac1e-d038-41b4-b00d-97dcddabdb0e\") " pod="calico-system/calico-node-lrppt" Apr 16 04:52:40.605556 kubelet[2764]: I0416 04:52:40.605034 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba11ac1e-d038-41b4-b00d-97dcddabdb0e-tigera-ca-bundle\") pod \"calico-node-lrppt\" (UID: \"ba11ac1e-d038-41b4-b00d-97dcddabdb0e\") " pod="calico-system/calico-node-lrppt" Apr 16 04:52:40.605556 kubelet[2764]: I0416 04:52:40.605054 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ba11ac1e-d038-41b4-b00d-97dcddabdb0e-cni-log-dir\") pod \"calico-node-lrppt\" (UID: \"ba11ac1e-d038-41b4-b00d-97dcddabdb0e\") " pod="calico-system/calico-node-lrppt" Apr 16 04:52:40.605556 kubelet[2764]: I0416 04:52:40.605065 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ba11ac1e-d038-41b4-b00d-97dcddabdb0e-policysync\") pod \"calico-node-lrppt\" (UID: \"ba11ac1e-d038-41b4-b00d-97dcddabdb0e\") " pod="calico-system/calico-node-lrppt" Apr 16 04:52:40.605556 kubelet[2764]: I0416 04:52:40.605075 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6z5x\" (UniqueName: \"kubernetes.io/projected/81d6e248-dbc3-4b14-bcf7-e79ac5dd77dd-kube-api-access-l6z5x\") pod \"csi-node-driver-6d28c\" (UID: 
\"81d6e248-dbc3-4b14-bcf7-e79ac5dd77dd\") " pod="calico-system/csi-node-driver-6d28c" Apr 16 04:52:40.684713 kubelet[2764]: E0416 04:52:40.682649 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:52:40.685596 containerd[1618]: time="2026-04-16T04:52:40.685438744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5589bb8dc8-q74pq,Uid:7d29b6df-3cb3-4afe-a650-71b74b142df1,Namespace:calico-system,Attempt:0,}" Apr 16 04:52:40.725219 kubelet[2764]: E0416 04:52:40.723665 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:52:40.725219 kubelet[2764]: W0416 04:52:40.725077 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:52:40.725219 kubelet[2764]: E0416 04:52:40.725166 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:52:40.727873 kubelet[2764]: E0416 04:52:40.727848 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:52:40.728157 kubelet[2764]: W0416 04:52:40.728053 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:52:40.728157 kubelet[2764]: E0416 04:52:40.728088 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:52:40.733460 kubelet[2764]: E0416 04:52:40.733447 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:52:40.733562 kubelet[2764]: W0416 04:52:40.733550 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:52:40.734121 kubelet[2764]: E0416 04:52:40.733759 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:52:40.735198 kubelet[2764]: E0416 04:52:40.735179 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:52:40.735198 kubelet[2764]: W0416 04:52:40.735196 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:52:40.735265 kubelet[2764]: E0416 04:52:40.735209 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:52:40.758586 containerd[1618]: time="2026-04-16T04:52:40.758530603Z" level=info msg="connecting to shim f7f81f82f66f29ae04126b08ef7ed3b92ca4e16f212ab49ba611902e7ad4b0f5" address="unix:///run/containerd/s/7d0c0f05c576f82a358c4bc002110a223731edcfe92b1a9ce961702b40918419" namespace=k8s.io protocol=ttrpc version=3 Apr 16 04:52:40.767158 containerd[1618]: time="2026-04-16T04:52:40.767007086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lrppt,Uid:ba11ac1e-d038-41b4-b00d-97dcddabdb0e,Namespace:calico-system,Attempt:0,}" Apr 16 04:52:40.817965 systemd[1]: Started cri-containerd-f7f81f82f66f29ae04126b08ef7ed3b92ca4e16f212ab49ba611902e7ad4b0f5.scope - libcontainer container f7f81f82f66f29ae04126b08ef7ed3b92ca4e16f212ab49ba611902e7ad4b0f5. Apr 16 04:52:40.826324 containerd[1618]: time="2026-04-16T04:52:40.826019413Z" level=info msg="connecting to shim 51e283e74b9536f87de448b9d81e5573c2855c4d6c802493a4d191c5b571c9cb" address="unix:///run/containerd/s/3d6359ba099da04d98b2a4921fef0b02b22d2b6d9f30fc67d923931d2f18d661" namespace=k8s.io protocol=ttrpc version=3 Apr 16 04:52:40.951455 systemd[1]: Started cri-containerd-51e283e74b9536f87de448b9d81e5573c2855c4d6c802493a4d191c5b571c9cb.scope - libcontainer container 51e283e74b9536f87de448b9d81e5573c2855c4d6c802493a4d191c5b571c9cb. 
Apr 16 04:52:41.016224 containerd[1618]: time="2026-04-16T04:52:41.016147457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5589bb8dc8-q74pq,Uid:7d29b6df-3cb3-4afe-a650-71b74b142df1,Namespace:calico-system,Attempt:0,} returns sandbox id \"f7f81f82f66f29ae04126b08ef7ed3b92ca4e16f212ab49ba611902e7ad4b0f5\"" Apr 16 04:52:41.017207 kubelet[2764]: E0416 04:52:41.017086 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:52:41.019547 containerd[1618]: time="2026-04-16T04:52:41.019462163Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 16 04:52:41.044021 containerd[1618]: time="2026-04-16T04:52:41.043253370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lrppt,Uid:ba11ac1e-d038-41b4-b00d-97dcddabdb0e,Namespace:calico-system,Attempt:0,} returns sandbox id \"51e283e74b9536f87de448b9d81e5573c2855c4d6c802493a4d191c5b571c9cb\"" Apr 16 04:52:42.183377 kubelet[2764]: E0416 04:52:42.183174 2764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6d28c" podUID="81d6e248-dbc3-4b14-bcf7-e79ac5dd77dd" Apr 16 04:52:42.820360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount771439237.mount: Deactivated successfully. Apr 16 04:52:43.071269 update_engine[1589]: I20260416 04:52:43.068498 1589 update_attempter.cc:509] Updating boot flags... 
Apr 16 04:52:44.184702 kubelet[2764]: E0416 04:52:44.184372 2764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6d28c" podUID="81d6e248-dbc3-4b14-bcf7-e79ac5dd77dd" Apr 16 04:52:44.662045 containerd[1618]: time="2026-04-16T04:52:44.661836792Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:52:44.662954 containerd[1618]: time="2026-04-16T04:52:44.662869042Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 16 04:52:44.664047 containerd[1618]: time="2026-04-16T04:52:44.663959264Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:52:44.667211 containerd[1618]: time="2026-04-16T04:52:44.667037744Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:52:44.667684 containerd[1618]: time="2026-04-16T04:52:44.667655689Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 3.648169354s" Apr 16 04:52:44.667719 containerd[1618]: time="2026-04-16T04:52:44.667690702Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference 
\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 16 04:52:44.668614 containerd[1618]: time="2026-04-16T04:52:44.668596109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 16 04:52:44.682049 containerd[1618]: time="2026-04-16T04:52:44.682010528Z" level=info msg="CreateContainer within sandbox \"f7f81f82f66f29ae04126b08ef7ed3b92ca4e16f212ab49ba611902e7ad4b0f5\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 16 04:52:44.688480 containerd[1618]: time="2026-04-16T04:52:44.688434846Z" level=info msg="Container 84c2f49c90e1c28dbf89800e84381f843fad8efbae650ccacbaf482f4c1e9f1d: CDI devices from CRI Config.CDIDevices: []" Apr 16 04:52:44.696665 containerd[1618]: time="2026-04-16T04:52:44.696470862Z" level=info msg="CreateContainer within sandbox \"f7f81f82f66f29ae04126b08ef7ed3b92ca4e16f212ab49ba611902e7ad4b0f5\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"84c2f49c90e1c28dbf89800e84381f843fad8efbae650ccacbaf482f4c1e9f1d\"" Apr 16 04:52:44.697195 containerd[1618]: time="2026-04-16T04:52:44.697153043Z" level=info msg="StartContainer for \"84c2f49c90e1c28dbf89800e84381f843fad8efbae650ccacbaf482f4c1e9f1d\"" Apr 16 04:52:44.700041 containerd[1618]: time="2026-04-16T04:52:44.699887025Z" level=info msg="connecting to shim 84c2f49c90e1c28dbf89800e84381f843fad8efbae650ccacbaf482f4c1e9f1d" address="unix:///run/containerd/s/7d0c0f05c576f82a358c4bc002110a223731edcfe92b1a9ce961702b40918419" protocol=ttrpc version=3 Apr 16 04:52:44.726263 systemd[1]: Started cri-containerd-84c2f49c90e1c28dbf89800e84381f843fad8efbae650ccacbaf482f4c1e9f1d.scope - libcontainer container 84c2f49c90e1c28dbf89800e84381f843fad8efbae650ccacbaf482f4c1e9f1d. 
Apr 16 04:52:44.826286 containerd[1618]: time="2026-04-16T04:52:44.826194830Z" level=info msg="StartContainer for \"84c2f49c90e1c28dbf89800e84381f843fad8efbae650ccacbaf482f4c1e9f1d\" returns successfully"
Apr 16 04:52:45.274125 kubelet[2764]: E0416 04:52:45.273954 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:52:45.290321 kubelet[2764]: I0416 04:52:45.290068 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5589bb8dc8-q74pq" podStartSLOduration=1.64057998 podStartE2EDuration="5.290051939s" podCreationTimestamp="2026-04-16 04:52:40 +0000 UTC" firstStartedPulling="2026-04-16 04:52:41.019043247 +0000 UTC m=+17.010777650" lastFinishedPulling="2026-04-16 04:52:44.668515205 +0000 UTC m=+20.660249609" observedRunningTime="2026-04-16 04:52:45.289775135 +0000 UTC m=+21.281509543" watchObservedRunningTime="2026-04-16 04:52:45.290051939 +0000 UTC m=+21.281786354"
Apr 16 04:52:45.351421 kubelet[2764]: E0416 04:52:45.351242 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 04:52:45.351421 kubelet[2764]: W0416 04:52:45.351294 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 04:52:45.351421 kubelet[2764]: E0416 04:52:45.351319 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[identical kubelet driver-call.go/plugins.go FlexVolume error entries, timestamps 04:52:45.351507 through 04:52:45.363593, omitted]
Apr 16 04:52:46.183444 kubelet[2764]: E0416 04:52:46.183309 2764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6d28c" podUID="81d6e248-dbc3-4b14-bcf7-e79ac5dd77dd"
Apr 16 04:52:46.275710 kubelet[2764]: I0416 04:52:46.275588 2764 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 16 04:52:46.276449 kubelet[2764]: E0416 04:52:46.276348 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:52:46.281850 kubelet[2764]: E0416 04:52:46.281774 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 04:52:46.281850 kubelet[2764]: W0416 04:52:46.281794 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 04:52:46.281850 kubelet[2764]: E0416 04:52:46.281810 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[identical kubelet driver-call.go/plugins.go FlexVolume error entries, timestamps 04:52:46.281967 through 04:52:46.290806, omitted]
Apr 16 04:52:46.334506 containerd[1618]: time="2026-04-16T04:52:46.334339568Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:52:46.341634 containerd[1618]: time="2026-04-16T04:52:46.339421515Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250"
Apr 16 04:52:46.343407 containerd[1618]: time="2026-04-16T04:52:46.342881622Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:52:46.362373 containerd[1618]: time="2026-04-16T04:52:46.362258941Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:52:46.362720 containerd[1618]: time="2026-04-16T04:52:46.362697043Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.694078401s"
Apr 16 04:52:46.362779 containerd[1618]: time="2026-04-16T04:52:46.362726033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\""
Apr 16 04:52:46.369736 containerd[1618]: time="2026-04-16T04:52:46.369502949Z" level=info msg="CreateContainer within sandbox \"51e283e74b9536f87de448b9d81e5573c2855c4d6c802493a4d191c5b571c9cb\" for container
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 16 04:52:46.391381 containerd[1618]: time="2026-04-16T04:52:46.386009111Z" level=info msg="Container 179051aa7e2d1de604650aebd90da59e7c84178c8d76e421992d018f866a3a12: CDI devices from CRI Config.CDIDevices: []" Apr 16 04:52:46.411954 kubelet[2764]: E0416 04:52:46.411899 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:52:46.412071 kubelet[2764]: W0416 04:52:46.412027 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:52:46.412071 kubelet[2764]: E0416 04:52:46.412047 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:52:46.412789 containerd[1618]: time="2026-04-16T04:52:46.412754130Z" level=info msg="CreateContainer within sandbox \"51e283e74b9536f87de448b9d81e5573c2855c4d6c802493a4d191c5b571c9cb\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"179051aa7e2d1de604650aebd90da59e7c84178c8d76e421992d018f866a3a12\"" Apr 16 04:52:46.413548 containerd[1618]: time="2026-04-16T04:52:46.413235586Z" level=info msg="StartContainer for \"179051aa7e2d1de604650aebd90da59e7c84178c8d76e421992d018f866a3a12\"" Apr 16 04:52:46.415541 containerd[1618]: time="2026-04-16T04:52:46.415481926Z" level=info msg="connecting to shim 179051aa7e2d1de604650aebd90da59e7c84178c8d76e421992d018f866a3a12" address="unix:///run/containerd/s/3d6359ba099da04d98b2a4921fef0b02b22d2b6d9f30fc67d923931d2f18d661" protocol=ttrpc version=3 Apr 16 04:52:46.481773 systemd[1]: Started cri-containerd-179051aa7e2d1de604650aebd90da59e7c84178c8d76e421992d018f866a3a12.scope - libcontainer container 179051aa7e2d1de604650aebd90da59e7c84178c8d76e421992d018f866a3a12. Apr 16 04:52:46.636350 containerd[1618]: time="2026-04-16T04:52:46.634546029Z" level=info msg="StartContainer for \"179051aa7e2d1de604650aebd90da59e7c84178c8d76e421992d018f866a3a12\" returns successfully" Apr 16 04:52:46.659668 systemd[1]: cri-containerd-179051aa7e2d1de604650aebd90da59e7c84178c8d76e421992d018f866a3a12.scope: Deactivated successfully. Apr 16 04:52:46.661744 containerd[1618]: time="2026-04-16T04:52:46.661687413Z" level=info msg="received container exit event container_id:\"179051aa7e2d1de604650aebd90da59e7c84178c8d76e421992d018f866a3a12\" id:\"179051aa7e2d1de604650aebd90da59e7c84178c8d76e421992d018f866a3a12\" pid:3433 exited_at:{seconds:1776315166 nanos:661271737}" Apr 16 04:52:46.702892 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-179051aa7e2d1de604650aebd90da59e7c84178c8d76e421992d018f866a3a12-rootfs.mount: Deactivated successfully. 
Apr 16 04:52:47.285276 containerd[1618]: time="2026-04-16T04:52:47.285113203Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 16 04:52:48.183108 kubelet[2764]: E0416 04:52:48.182849 2764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6d28c" podUID="81d6e248-dbc3-4b14-bcf7-e79ac5dd77dd" Apr 16 04:52:50.183950 kubelet[2764]: E0416 04:52:50.183688 2764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6d28c" podUID="81d6e248-dbc3-4b14-bcf7-e79ac5dd77dd" Apr 16 04:52:52.183330 kubelet[2764]: E0416 04:52:52.183066 2764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6d28c" podUID="81d6e248-dbc3-4b14-bcf7-e79ac5dd77dd" Apr 16 04:52:54.187000 kubelet[2764]: E0416 04:52:54.186818 2764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6d28c" podUID="81d6e248-dbc3-4b14-bcf7-e79ac5dd77dd" Apr 16 04:52:55.660947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1794063739.mount: Deactivated successfully. 
Apr 16 04:52:55.827022 containerd[1618]: time="2026-04-16T04:52:55.826888987Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:52:55.827484 containerd[1618]: time="2026-04-16T04:52:55.827444722Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 16 04:52:55.828344 containerd[1618]: time="2026-04-16T04:52:55.828288886Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:52:55.829852 containerd[1618]: time="2026-04-16T04:52:55.829806137Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:52:55.830320 containerd[1618]: time="2026-04-16T04:52:55.830278315Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 8.545073667s" Apr 16 04:52:55.830379 containerd[1618]: time="2026-04-16T04:52:55.830319167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 16 04:52:55.837848 containerd[1618]: time="2026-04-16T04:52:55.837626103Z" level=info msg="CreateContainer within sandbox \"51e283e74b9536f87de448b9d81e5573c2855c4d6c802493a4d191c5b571c9cb\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 16 04:52:55.846543 containerd[1618]: time="2026-04-16T04:52:55.846503738Z" level=info msg="Container 
c8f441055735282a45f44d899bd1b22326ce890e7eccdf34da3eb53664f5784b: CDI devices from CRI Config.CDIDevices: []" Apr 16 04:52:55.879871 containerd[1618]: time="2026-04-16T04:52:55.879757035Z" level=info msg="CreateContainer within sandbox \"51e283e74b9536f87de448b9d81e5573c2855c4d6c802493a4d191c5b571c9cb\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"c8f441055735282a45f44d899bd1b22326ce890e7eccdf34da3eb53664f5784b\"" Apr 16 04:52:55.880241 containerd[1618]: time="2026-04-16T04:52:55.880220541Z" level=info msg="StartContainer for \"c8f441055735282a45f44d899bd1b22326ce890e7eccdf34da3eb53664f5784b\"" Apr 16 04:52:55.881335 containerd[1618]: time="2026-04-16T04:52:55.881305010Z" level=info msg="connecting to shim c8f441055735282a45f44d899bd1b22326ce890e7eccdf34da3eb53664f5784b" address="unix:///run/containerd/s/3d6359ba099da04d98b2a4921fef0b02b22d2b6d9f30fc67d923931d2f18d661" protocol=ttrpc version=3 Apr 16 04:52:55.900072 systemd[1]: Started cri-containerd-c8f441055735282a45f44d899bd1b22326ce890e7eccdf34da3eb53664f5784b.scope - libcontainer container c8f441055735282a45f44d899bd1b22326ce890e7eccdf34da3eb53664f5784b. Apr 16 04:52:56.028194 containerd[1618]: time="2026-04-16T04:52:56.028080947Z" level=info msg="StartContainer for \"c8f441055735282a45f44d899bd1b22326ce890e7eccdf34da3eb53664f5784b\" returns successfully" Apr 16 04:52:56.063015 systemd[1]: cri-containerd-c8f441055735282a45f44d899bd1b22326ce890e7eccdf34da3eb53664f5784b.scope: Deactivated successfully. 
Apr 16 04:52:56.071363 containerd[1618]: time="2026-04-16T04:52:56.071305875Z" level=info msg="received container exit event container_id:\"c8f441055735282a45f44d899bd1b22326ce890e7eccdf34da3eb53664f5784b\" id:\"c8f441055735282a45f44d899bd1b22326ce890e7eccdf34da3eb53664f5784b\" pid:3491 exited_at:{seconds:1776315176 nanos:63766595}" Apr 16 04:52:56.184052 kubelet[2764]: E0416 04:52:56.183316 2764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6d28c" podUID="81d6e248-dbc3-4b14-bcf7-e79ac5dd77dd" Apr 16 04:52:56.311419 containerd[1618]: time="2026-04-16T04:52:56.310556536Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 16 04:52:56.661469 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8f441055735282a45f44d899bd1b22326ce890e7eccdf34da3eb53664f5784b-rootfs.mount: Deactivated successfully. 
Apr 16 04:52:58.187199 kubelet[2764]: E0416 04:52:58.183059 2764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6d28c" podUID="81d6e248-dbc3-4b14-bcf7-e79ac5dd77dd" Apr 16 04:52:59.865116 containerd[1618]: time="2026-04-16T04:52:59.864840502Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:52:59.866392 containerd[1618]: time="2026-04-16T04:52:59.865868194Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 16 04:52:59.867073 containerd[1618]: time="2026-04-16T04:52:59.867012871Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:52:59.922348 containerd[1618]: time="2026-04-16T04:52:59.922225999Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:52:59.950428 containerd[1618]: time="2026-04-16T04:52:59.949179648Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 3.63855252s" Apr 16 04:52:59.950428 containerd[1618]: time="2026-04-16T04:52:59.949228409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference 
\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 16 04:52:59.976094 containerd[1618]: time="2026-04-16T04:52:59.975117619Z" level=info msg="CreateContainer within sandbox \"51e283e74b9536f87de448b9d81e5573c2855c4d6c802493a4d191c5b571c9cb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 16 04:53:00.019629 containerd[1618]: time="2026-04-16T04:53:00.019557144Z" level=info msg="Container 0142c1129ea046bd23d15a968f64af5595c35b0f7d4a5a0f387bdfd2703c7251: CDI devices from CRI Config.CDIDevices: []" Apr 16 04:53:00.025638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1676566300.mount: Deactivated successfully. Apr 16 04:53:00.036003 containerd[1618]: time="2026-04-16T04:53:00.035875206Z" level=info msg="CreateContainer within sandbox \"51e283e74b9536f87de448b9d81e5573c2855c4d6c802493a4d191c5b571c9cb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0142c1129ea046bd23d15a968f64af5595c35b0f7d4a5a0f387bdfd2703c7251\"" Apr 16 04:53:00.036576 containerd[1618]: time="2026-04-16T04:53:00.036511846Z" level=info msg="StartContainer for \"0142c1129ea046bd23d15a968f64af5595c35b0f7d4a5a0f387bdfd2703c7251\"" Apr 16 04:53:00.037818 containerd[1618]: time="2026-04-16T04:53:00.037782962Z" level=info msg="connecting to shim 0142c1129ea046bd23d15a968f64af5595c35b0f7d4a5a0f387bdfd2703c7251" address="unix:///run/containerd/s/3d6359ba099da04d98b2a4921fef0b02b22d2b6d9f30fc67d923931d2f18d661" protocol=ttrpc version=3 Apr 16 04:53:00.097941 systemd[1]: Started cri-containerd-0142c1129ea046bd23d15a968f64af5595c35b0f7d4a5a0f387bdfd2703c7251.scope - libcontainer container 0142c1129ea046bd23d15a968f64af5595c35b0f7d4a5a0f387bdfd2703c7251. 
Apr 16 04:53:00.174584 containerd[1618]: time="2026-04-16T04:53:00.174325282Z" level=info msg="StartContainer for \"0142c1129ea046bd23d15a968f64af5595c35b0f7d4a5a0f387bdfd2703c7251\" returns successfully" Apr 16 04:53:00.182864 kubelet[2764]: E0416 04:53:00.182725 2764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6d28c" podUID="81d6e248-dbc3-4b14-bcf7-e79ac5dd77dd" Apr 16 04:53:00.842508 systemd[1]: cri-containerd-0142c1129ea046bd23d15a968f64af5595c35b0f7d4a5a0f387bdfd2703c7251.scope: Deactivated successfully. Apr 16 04:53:00.842976 systemd[1]: cri-containerd-0142c1129ea046bd23d15a968f64af5595c35b0f7d4a5a0f387bdfd2703c7251.scope: Consumed 655ms CPU time, 181.4M memory peak, 3.8M read from disk, 177M written to disk. Apr 16 04:53:00.849765 containerd[1618]: time="2026-04-16T04:53:00.849551062Z" level=info msg="received container exit event container_id:\"0142c1129ea046bd23d15a968f64af5595c35b0f7d4a5a0f387bdfd2703c7251\" id:\"0142c1129ea046bd23d15a968f64af5595c35b0f7d4a5a0f387bdfd2703c7251\" pid:3549 exited_at:{seconds:1776315180 nanos:849141944}" Apr 16 04:53:00.868728 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0142c1129ea046bd23d15a968f64af5595c35b0f7d4a5a0f387bdfd2703c7251-rootfs.mount: Deactivated successfully. Apr 16 04:53:00.916994 kubelet[2764]: I0416 04:53:00.916876 2764 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 16 04:53:01.000305 systemd[1]: Created slice kubepods-burstable-podf77a2220_8e6b_4453_a6fc_099782b9a146.slice - libcontainer container kubepods-burstable-podf77a2220_8e6b_4453_a6fc_099782b9a146.slice. 
Apr 16 04:53:01.009470 systemd[1]: Created slice kubepods-besteffort-pod142902a2_0442_4aed_9c46_d117595f20c4.slice - libcontainer container kubepods-besteffort-pod142902a2_0442_4aed_9c46_d117595f20c4.slice.
Apr 16 04:53:01.026503 systemd[1]: Created slice kubepods-besteffort-podedf90915_a4d4_442e_9905_e4fc01d7ae9f.slice - libcontainer container kubepods-besteffort-podedf90915_a4d4_442e_9905_e4fc01d7ae9f.slice.
Apr 16 04:53:01.035459 systemd[1]: Created slice kubepods-besteffort-pod2c44fb2e_260b_4b38_98a7_99b84889cce4.slice - libcontainer container kubepods-besteffort-pod2c44fb2e_260b_4b38_98a7_99b84889cce4.slice.
Apr 16 04:53:01.042022 kubelet[2764]: I0416 04:53:01.041634 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k8xr\" (UniqueName: \"kubernetes.io/projected/142902a2-0442-4aed-9c46-d117595f20c4-kube-api-access-9k8xr\") pod \"calico-apiserver-5655d84d6d-jt5hf\" (UID: \"142902a2-0442-4aed-9c46-d117595f20c4\") " pod="calico-system/calico-apiserver-5655d84d6d-jt5hf"
Apr 16 04:53:01.042022 kubelet[2764]: I0416 04:53:01.041968 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2vvc\" (UniqueName: \"kubernetes.io/projected/a223c7b0-6e14-4098-bc37-5a4aa3d8d80b-kube-api-access-j2vvc\") pod \"calico-apiserver-5655d84d6d-qrljt\" (UID: \"a223c7b0-6e14-4098-bc37-5a4aa3d8d80b\") " pod="calico-system/calico-apiserver-5655d84d6d-qrljt"
Apr 16 04:53:01.042427 kubelet[2764]: I0416 04:53:01.042033 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b6c34228-70fd-4d95-a6af-254442c5d5ae-whisker-backend-key-pair\") pod \"whisker-64df77cb7b-r94th\" (UID: \"b6c34228-70fd-4d95-a6af-254442c5d5ae\") " pod="calico-system/whisker-64df77cb7b-r94th"
Apr 16 04:53:01.042427 kubelet[2764]: I0416 04:53:01.042060 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a223c7b0-6e14-4098-bc37-5a4aa3d8d80b-calico-apiserver-certs\") pod \"calico-apiserver-5655d84d6d-qrljt\" (UID: \"a223c7b0-6e14-4098-bc37-5a4aa3d8d80b\") " pod="calico-system/calico-apiserver-5655d84d6d-qrljt"
Apr 16 04:53:01.042427 kubelet[2764]: I0416 04:53:01.042079 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt47j\" (UniqueName: \"kubernetes.io/projected/f77a2220-8e6b-4453-a6fc-099782b9a146-kube-api-access-vt47j\") pod \"coredns-674b8bbfcf-mlhrm\" (UID: \"f77a2220-8e6b-4453-a6fc-099782b9a146\") " pod="kube-system/coredns-674b8bbfcf-mlhrm"
Apr 16 04:53:01.042427 kubelet[2764]: I0416 04:53:01.042100 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h26b9\" (UniqueName: \"kubernetes.io/projected/b6c34228-70fd-4d95-a6af-254442c5d5ae-kube-api-access-h26b9\") pod \"whisker-64df77cb7b-r94th\" (UID: \"b6c34228-70fd-4d95-a6af-254442c5d5ae\") " pod="calico-system/whisker-64df77cb7b-r94th"
Apr 16 04:53:01.042427 kubelet[2764]: I0416 04:53:01.042120 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfwvl\" (UniqueName: \"kubernetes.io/projected/9118c537-83d9-45de-bef7-fb503241b41d-kube-api-access-vfwvl\") pod \"coredns-674b8bbfcf-mf89m\" (UID: \"9118c537-83d9-45de-bef7-fb503241b41d\") " pod="kube-system/coredns-674b8bbfcf-mf89m"
Apr 16 04:53:01.042517 kubelet[2764]: I0416 04:53:01.042144 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f77a2220-8e6b-4453-a6fc-099782b9a146-config-volume\") pod \"coredns-674b8bbfcf-mlhrm\" (UID: \"f77a2220-8e6b-4453-a6fc-099782b9a146\") " pod="kube-system/coredns-674b8bbfcf-mlhrm"
Apr 16 04:53:01.042517 kubelet[2764]: I0416 04:53:01.042182 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c44fb2e-260b-4b38-98a7-99b84889cce4-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-pd8mg\" (UID: \"2c44fb2e-260b-4b38-98a7-99b84889cce4\") " pod="calico-system/goldmane-5b85766d88-pd8mg"
Apr 16 04:53:01.042517 kubelet[2764]: I0416 04:53:01.042201 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/2c44fb2e-260b-4b38-98a7-99b84889cce4-goldmane-key-pair\") pod \"goldmane-5b85766d88-pd8mg\" (UID: \"2c44fb2e-260b-4b38-98a7-99b84889cce4\") " pod="calico-system/goldmane-5b85766d88-pd8mg"
Apr 16 04:53:01.042517 kubelet[2764]: I0416 04:53:01.042227 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/edf90915-a4d4-442e-9905-e4fc01d7ae9f-tigera-ca-bundle\") pod \"calico-kube-controllers-85fbbfb5c9-s7lgg\" (UID: \"edf90915-a4d4-442e-9905-e4fc01d7ae9f\") " pod="calico-system/calico-kube-controllers-85fbbfb5c9-s7lgg"
Apr 16 04:53:01.042517 kubelet[2764]: I0416 04:53:01.042255 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c44fb2e-260b-4b38-98a7-99b84889cce4-config\") pod \"goldmane-5b85766d88-pd8mg\" (UID: \"2c44fb2e-260b-4b38-98a7-99b84889cce4\") " pod="calico-system/goldmane-5b85766d88-pd8mg"
Apr 16 04:53:01.042599 kubelet[2764]: I0416 04:53:01.042270 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn5f2\" (UniqueName: \"kubernetes.io/projected/edf90915-a4d4-442e-9905-e4fc01d7ae9f-kube-api-access-hn5f2\") pod \"calico-kube-controllers-85fbbfb5c9-s7lgg\" (UID: \"edf90915-a4d4-442e-9905-e4fc01d7ae9f\") " pod="calico-system/calico-kube-controllers-85fbbfb5c9-s7lgg"
Apr 16 04:53:01.042599 kubelet[2764]: I0416 04:53:01.042296 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9118c537-83d9-45de-bef7-fb503241b41d-config-volume\") pod \"coredns-674b8bbfcf-mf89m\" (UID: \"9118c537-83d9-45de-bef7-fb503241b41d\") " pod="kube-system/coredns-674b8bbfcf-mf89m"
Apr 16 04:53:01.042599 kubelet[2764]: I0416 04:53:01.042312 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/142902a2-0442-4aed-9c46-d117595f20c4-calico-apiserver-certs\") pod \"calico-apiserver-5655d84d6d-jt5hf\" (UID: \"142902a2-0442-4aed-9c46-d117595f20c4\") " pod="calico-system/calico-apiserver-5655d84d6d-jt5hf"
Apr 16 04:53:01.042599 kubelet[2764]: I0416 04:53:01.042324 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/b6c34228-70fd-4d95-a6af-254442c5d5ae-nginx-config\") pod \"whisker-64df77cb7b-r94th\" (UID: \"b6c34228-70fd-4d95-a6af-254442c5d5ae\") " pod="calico-system/whisker-64df77cb7b-r94th"
Apr 16 04:53:01.042599 kubelet[2764]: I0416 04:53:01.042338 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6c34228-70fd-4d95-a6af-254442c5d5ae-whisker-ca-bundle\") pod \"whisker-64df77cb7b-r94th\" (UID: \"b6c34228-70fd-4d95-a6af-254442c5d5ae\") " pod="calico-system/whisker-64df77cb7b-r94th"
Apr 16 04:53:01.042716 kubelet[2764]: I0416 04:53:01.042349 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dv7m\" (UniqueName: \"kubernetes.io/projected/2c44fb2e-260b-4b38-98a7-99b84889cce4-kube-api-access-2dv7m\") pod \"goldmane-5b85766d88-pd8mg\" (UID: \"2c44fb2e-260b-4b38-98a7-99b84889cce4\") " pod="calico-system/goldmane-5b85766d88-pd8mg"
Apr 16 04:53:01.047964 systemd[1]: Created slice kubepods-besteffort-poda223c7b0_6e14_4098_bc37_5a4aa3d8d80b.slice - libcontainer container kubepods-besteffort-poda223c7b0_6e14_4098_bc37_5a4aa3d8d80b.slice.
Apr 16 04:53:01.056065 systemd[1]: Created slice kubepods-besteffort-podb6c34228_70fd_4d95_a6af_254442c5d5ae.slice - libcontainer container kubepods-besteffort-podb6c34228_70fd_4d95_a6af_254442c5d5ae.slice.
Apr 16 04:53:01.064291 systemd[1]: Created slice kubepods-burstable-pod9118c537_83d9_45de_bef7_fb503241b41d.slice - libcontainer container kubepods-burstable-pod9118c537_83d9_45de_bef7_fb503241b41d.slice.
Apr 16 04:53:01.306117 kubelet[2764]: E0416 04:53:01.305887 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:53:01.307121 containerd[1618]: time="2026-04-16T04:53:01.307081243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mlhrm,Uid:f77a2220-8e6b-4453-a6fc-099782b9a146,Namespace:kube-system,Attempt:0,}"
Apr 16 04:53:01.317617 containerd[1618]: time="2026-04-16T04:53:01.317386205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5655d84d6d-jt5hf,Uid:142902a2-0442-4aed-9c46-d117595f20c4,Namespace:calico-system,Attempt:0,}"
Apr 16 04:53:01.393683 containerd[1618]: time="2026-04-16T04:53:01.393542003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-pd8mg,Uid:2c44fb2e-260b-4b38-98a7-99b84889cce4,Namespace:calico-system,Attempt:0,}"
Apr 16 04:53:01.394039 containerd[1618]: time="2026-04-16T04:53:01.393964559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5655d84d6d-qrljt,Uid:a223c7b0-6e14-4098-bc37-5a4aa3d8d80b,Namespace:calico-system,Attempt:0,}"
Apr 16 04:53:01.394304 containerd[1618]: time="2026-04-16T04:53:01.394268536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64df77cb7b-r94th,Uid:b6c34228-70fd-4d95-a6af-254442c5d5ae,Namespace:calico-system,Attempt:0,}"
Apr 16 04:53:01.396190 kubelet[2764]: E0416 04:53:01.396036 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:53:01.399668 containerd[1618]: time="2026-04-16T04:53:01.398623985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mf89m,Uid:9118c537-83d9-45de-bef7-fb503241b41d,Namespace:kube-system,Attempt:0,}"
Apr 16 04:53:01.399668 containerd[1618]: time="2026-04-16T04:53:01.399450861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85fbbfb5c9-s7lgg,Uid:edf90915-a4d4-442e-9905-e4fc01d7ae9f,Namespace:calico-system,Attempt:0,}"
Apr 16 04:53:01.457647 containerd[1618]: time="2026-04-16T04:53:01.457050068Z" level=info msg="CreateContainer within sandbox \"51e283e74b9536f87de448b9d81e5573c2855c4d6c802493a4d191c5b571c9cb\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Apr 16 04:53:01.509574 containerd[1618]: time="2026-04-16T04:53:01.509448619Z" level=info msg="Container b21c3f2fdaaf4a653eb2a14127507693e1bc724f4cd64ba481c4a38e77fa47a0: CDI devices from CRI Config.CDIDevices: []"
Apr 16 04:53:01.565582 containerd[1618]: time="2026-04-16T04:53:01.540660835Z" level=info msg="CreateContainer within sandbox \"51e283e74b9536f87de448b9d81e5573c2855c4d6c802493a4d191c5b571c9cb\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b21c3f2fdaaf4a653eb2a14127507693e1bc724f4cd64ba481c4a38e77fa47a0\""
Apr 16 04:53:01.565582 containerd[1618]: time="2026-04-16T04:53:01.556087885Z" level=error msg="Failed to destroy network for sandbox \"3956dc99f775ceb1c18450129f4072bc12aacdbfccec1878b87f14ed13af1f5e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 04:53:01.568115 containerd[1618]: time="2026-04-16T04:53:01.566423071Z" level=info msg="StartContainer for \"b21c3f2fdaaf4a653eb2a14127507693e1bc724f4cd64ba481c4a38e77fa47a0\""
Apr 16 04:53:01.568330 containerd[1618]: time="2026-04-16T04:53:01.568093443Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5655d84d6d-qrljt,Uid:a223c7b0-6e14-4098-bc37-5a4aa3d8d80b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3956dc99f775ceb1c18450129f4072bc12aacdbfccec1878b87f14ed13af1f5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 04:53:01.575414 containerd[1618]: time="2026-04-16T04:53:01.573443826Z" level=info msg="connecting to shim b21c3f2fdaaf4a653eb2a14127507693e1bc724f4cd64ba481c4a38e77fa47a0" address="unix:///run/containerd/s/3d6359ba099da04d98b2a4921fef0b02b22d2b6d9f30fc67d923931d2f18d661" protocol=ttrpc version=3
Apr 16 04:53:01.576858 kubelet[2764]: E0416 04:53:01.576157 2764 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3956dc99f775ceb1c18450129f4072bc12aacdbfccec1878b87f14ed13af1f5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 04:53:01.576858 kubelet[2764]: E0416 04:53:01.576841 2764 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3956dc99f775ceb1c18450129f4072bc12aacdbfccec1878b87f14ed13af1f5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5655d84d6d-qrljt"
Apr 16 04:53:01.577375 kubelet[2764]: E0416 04:53:01.576905 2764 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3956dc99f775ceb1c18450129f4072bc12aacdbfccec1878b87f14ed13af1f5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5655d84d6d-qrljt"
Apr 16 04:53:01.577375 kubelet[2764]: E0416 04:53:01.577017 2764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5655d84d6d-qrljt_calico-system(a223c7b0-6e14-4098-bc37-5a4aa3d8d80b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5655d84d6d-qrljt_calico-system(a223c7b0-6e14-4098-bc37-5a4aa3d8d80b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3956dc99f775ceb1c18450129f4072bc12aacdbfccec1878b87f14ed13af1f5e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5655d84d6d-qrljt" podUID="a223c7b0-6e14-4098-bc37-5a4aa3d8d80b"
Apr 16 04:53:01.610329 containerd[1618]: time="2026-04-16T04:53:01.610097838Z" level=error msg="Failed to destroy network for sandbox \"0a1d9d9f04c2e39c5f9b86539f113a946560bc2f105cb94ea65fde6ebe5ed359\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 04:53:01.612083 containerd[1618]: time="2026-04-16T04:53:01.612049306Z" level=error msg="Failed to destroy network for sandbox \"db19064d0fcf2e3615ca4722b524b7b001da28ccaba6ff746d06fc6a080328fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 04:53:01.617872 containerd[1618]: time="2026-04-16T04:53:01.617781447Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mlhrm,Uid:f77a2220-8e6b-4453-a6fc-099782b9a146,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a1d9d9f04c2e39c5f9b86539f113a946560bc2f105cb94ea65fde6ebe5ed359\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 04:53:01.621553 containerd[1618]: time="2026-04-16T04:53:01.621347494Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5655d84d6d-jt5hf,Uid:142902a2-0442-4aed-9c46-d117595f20c4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"db19064d0fcf2e3615ca4722b524b7b001da28ccaba6ff746d06fc6a080328fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 04:53:01.621764 kubelet[2764]: E0416 04:53:01.621707 2764 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db19064d0fcf2e3615ca4722b524b7b001da28ccaba6ff746d06fc6a080328fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 04:53:01.621806 kubelet[2764]: E0416 04:53:01.621783 2764 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db19064d0fcf2e3615ca4722b524b7b001da28ccaba6ff746d06fc6a080328fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5655d84d6d-jt5hf"
Apr 16 04:53:01.621806 kubelet[2764]: E0416 04:53:01.621802 2764 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db19064d0fcf2e3615ca4722b524b7b001da28ccaba6ff746d06fc6a080328fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5655d84d6d-jt5hf"
Apr 16 04:53:01.621883 kubelet[2764]: E0416 04:53:01.621841 2764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5655d84d6d-jt5hf_calico-system(142902a2-0442-4aed-9c46-d117595f20c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5655d84d6d-jt5hf_calico-system(142902a2-0442-4aed-9c46-d117595f20c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"db19064d0fcf2e3615ca4722b524b7b001da28ccaba6ff746d06fc6a080328fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5655d84d6d-jt5hf" podUID="142902a2-0442-4aed-9c46-d117595f20c4"
Apr 16 04:53:01.624064 kubelet[2764]: E0416 04:53:01.622626 2764 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a1d9d9f04c2e39c5f9b86539f113a946560bc2f105cb94ea65fde6ebe5ed359\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 04:53:01.624064 kubelet[2764]: E0416 04:53:01.622659 2764 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a1d9d9f04c2e39c5f9b86539f113a946560bc2f105cb94ea65fde6ebe5ed359\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mlhrm"
Apr 16 04:53:01.624064 kubelet[2764]: E0416 04:53:01.622676 2764 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a1d9d9f04c2e39c5f9b86539f113a946560bc2f105cb94ea65fde6ebe5ed359\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mlhrm"
Apr 16 04:53:01.624616 kubelet[2764]: E0416 04:53:01.622710 2764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-mlhrm_kube-system(f77a2220-8e6b-4453-a6fc-099782b9a146)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-mlhrm_kube-system(f77a2220-8e6b-4453-a6fc-099782b9a146)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a1d9d9f04c2e39c5f9b86539f113a946560bc2f105cb94ea65fde6ebe5ed359\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-mlhrm" podUID="f77a2220-8e6b-4453-a6fc-099782b9a146"
Apr 16 04:53:01.643143 systemd[1]: Started cri-containerd-b21c3f2fdaaf4a653eb2a14127507693e1bc724f4cd64ba481c4a38e77fa47a0.scope - libcontainer container b21c3f2fdaaf4a653eb2a14127507693e1bc724f4cd64ba481c4a38e77fa47a0.
Apr 16 04:53:01.645366 containerd[1618]: time="2026-04-16T04:53:01.645301553Z" level=error msg="Failed to destroy network for sandbox \"06f8b6481a2c7c219ec4ec27db419068e0f0d8d0c8272f73567dcaccfda64e92\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 04:53:01.651470 containerd[1618]: time="2026-04-16T04:53:01.651146381Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mf89m,Uid:9118c537-83d9-45de-bef7-fb503241b41d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"06f8b6481a2c7c219ec4ec27db419068e0f0d8d0c8272f73567dcaccfda64e92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 04:53:01.653353 kubelet[2764]: E0416 04:53:01.651688 2764 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06f8b6481a2c7c219ec4ec27db419068e0f0d8d0c8272f73567dcaccfda64e92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 04:53:01.653353 kubelet[2764]: E0416 04:53:01.651759 2764 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06f8b6481a2c7c219ec4ec27db419068e0f0d8d0c8272f73567dcaccfda64e92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mf89m"
Apr 16 04:53:01.653353 kubelet[2764]: E0416 04:53:01.651787 2764 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06f8b6481a2c7c219ec4ec27db419068e0f0d8d0c8272f73567dcaccfda64e92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mf89m"
Apr 16 04:53:01.653696 kubelet[2764]: E0416 04:53:01.651844 2764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-mf89m_kube-system(9118c537-83d9-45de-bef7-fb503241b41d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-mf89m_kube-system(9118c537-83d9-45de-bef7-fb503241b41d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"06f8b6481a2c7c219ec4ec27db419068e0f0d8d0c8272f73567dcaccfda64e92\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-mf89m" podUID="9118c537-83d9-45de-bef7-fb503241b41d"
Apr 16 04:53:01.663343 containerd[1618]: time="2026-04-16T04:53:01.663245339Z" level=error msg="Failed to destroy network for sandbox \"1cc1aae0b101d74398eb6ff4ffe18937151ccbdba91ee267c5f5f729215e6577\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 04:53:01.669024 containerd[1618]: time="2026-04-16T04:53:01.668889498Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64df77cb7b-r94th,Uid:b6c34228-70fd-4d95-a6af-254442c5d5ae,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cc1aae0b101d74398eb6ff4ffe18937151ccbdba91ee267c5f5f729215e6577\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 04:53:01.669302 kubelet[2764]: E0416 04:53:01.669232 2764 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cc1aae0b101d74398eb6ff4ffe18937151ccbdba91ee267c5f5f729215e6577\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 04:53:01.669302 kubelet[2764]: E0416 04:53:01.669298 2764 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cc1aae0b101d74398eb6ff4ffe18937151ccbdba91ee267c5f5f729215e6577\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-64df77cb7b-r94th"
Apr 16 04:53:01.669397 kubelet[2764]: E0416 04:53:01.669318 2764 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cc1aae0b101d74398eb6ff4ffe18937151ccbdba91ee267c5f5f729215e6577\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-64df77cb7b-r94th"
Apr 16 04:53:01.669665 kubelet[2764]: E0416 04:53:01.669574 2764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-64df77cb7b-r94th_calico-system(b6c34228-70fd-4d95-a6af-254442c5d5ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-64df77cb7b-r94th_calico-system(b6c34228-70fd-4d95-a6af-254442c5d5ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1cc1aae0b101d74398eb6ff4ffe18937151ccbdba91ee267c5f5f729215e6577\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-64df77cb7b-r94th" podUID="b6c34228-70fd-4d95-a6af-254442c5d5ae"
Apr 16 04:53:01.678362 containerd[1618]: time="2026-04-16T04:53:01.678318133Z" level=error msg="Failed to destroy network for sandbox \"dfec5315ada7dbe2abe53aa59019ef2cc053cbb130568e525d28cc9f83fcf0de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 04:53:01.682482 containerd[1618]: time="2026-04-16T04:53:01.682321522Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-pd8mg,Uid:2c44fb2e-260b-4b38-98a7-99b84889cce4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfec5315ada7dbe2abe53aa59019ef2cc053cbb130568e525d28cc9f83fcf0de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 04:53:01.685408 kubelet[2764]: E0416 04:53:01.684218 2764 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfec5315ada7dbe2abe53aa59019ef2cc053cbb130568e525d28cc9f83fcf0de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 04:53:01.685408 kubelet[2764]: E0416 04:53:01.684337 2764 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfec5315ada7dbe2abe53aa59019ef2cc053cbb130568e525d28cc9f83fcf0de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-pd8mg"
Apr 16 04:53:01.685408 kubelet[2764]: E0416 04:53:01.684371 2764 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfec5315ada7dbe2abe53aa59019ef2cc053cbb130568e525d28cc9f83fcf0de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-pd8mg"
Apr 16 04:53:01.685630 kubelet[2764]: E0416 04:53:01.684442 2764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-pd8mg_calico-system(2c44fb2e-260b-4b38-98a7-99b84889cce4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-pd8mg_calico-system(2c44fb2e-260b-4b38-98a7-99b84889cce4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dfec5315ada7dbe2abe53aa59019ef2cc053cbb130568e525d28cc9f83fcf0de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-pd8mg" podUID="2c44fb2e-260b-4b38-98a7-99b84889cce4"
Apr 16 04:53:01.690822 containerd[1618]: time="2026-04-16T04:53:01.690669457Z" level=error msg="Failed to destroy network for sandbox \"9a17ef9a79950e529e9dbde270cc1488d8657a10c47d88e074a3bf7448b41d84\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 04:53:01.692245 containerd[1618]: time="2026-04-16T04:53:01.692157391Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85fbbfb5c9-s7lgg,Uid:edf90915-a4d4-442e-9905-e4fc01d7ae9f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a17ef9a79950e529e9dbde270cc1488d8657a10c47d88e074a3bf7448b41d84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 04:53:01.693180 kubelet[2764]: E0416 04:53:01.692605 2764 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a17ef9a79950e529e9dbde270cc1488d8657a10c47d88e074a3bf7448b41d84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 04:53:01.693180 kubelet[2764]: E0416 04:53:01.692669 2764 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a17ef9a79950e529e9dbde270cc1488d8657a10c47d88e074a3bf7448b41d84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85fbbfb5c9-s7lgg"
Apr 16 04:53:01.693180 kubelet[2764]: E0416 04:53:01.692688 2764 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a17ef9a79950e529e9dbde270cc1488d8657a10c47d88e074a3bf7448b41d84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85fbbfb5c9-s7lgg"
Apr 16 04:53:01.693309 kubelet[2764]: E0416 04:53:01.692726 2764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-85fbbfb5c9-s7lgg_calico-system(edf90915-a4d4-442e-9905-e4fc01d7ae9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-85fbbfb5c9-s7lgg_calico-system(edf90915-a4d4-442e-9905-e4fc01d7ae9f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a17ef9a79950e529e9dbde270cc1488d8657a10c47d88e074a3bf7448b41d84\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85fbbfb5c9-s7lgg" podUID="edf90915-a4d4-442e-9905-e4fc01d7ae9f"
Apr 16 04:53:01.778435 containerd[1618]: time="2026-04-16T04:53:01.777586156Z" level=info msg="StartContainer for \"b21c3f2fdaaf4a653eb2a14127507693e1bc724f4cd64ba481c4a38e77fa47a0\" returns successfully"
Apr 16 04:53:02.194208 systemd[1]: Created slice kubepods-besteffort-pod81d6e248_dbc3_4b14_bcf7_e79ac5dd77dd.slice - libcontainer container kubepods-besteffort-pod81d6e248_dbc3_4b14_bcf7_e79ac5dd77dd.slice.
Apr 16 04:53:02.199257 containerd[1618]: time="2026-04-16T04:53:02.199202667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6d28c,Uid:81d6e248-dbc3-4b14-bcf7-e79ac5dd77dd,Namespace:calico-system,Attempt:0,}"
Apr 16 04:53:02.511010 kubelet[2764]: I0416 04:53:02.509681 2764 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b6c34228-70fd-4d95-a6af-254442c5d5ae-whisker-backend-key-pair\") pod \"b6c34228-70fd-4d95-a6af-254442c5d5ae\" (UID: \"b6c34228-70fd-4d95-a6af-254442c5d5ae\") "
Apr 16 04:53:02.511010 kubelet[2764]: I0416 04:53:02.510498 2764 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6c34228-70fd-4d95-a6af-254442c5d5ae-whisker-ca-bundle\") pod \"b6c34228-70fd-4d95-a6af-254442c5d5ae\" (UID: \"b6c34228-70fd-4d95-a6af-254442c5d5ae\") "
Apr 16 04:53:02.511010 kubelet[2764]: I0416 04:53:02.510549 2764 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/b6c34228-70fd-4d95-a6af-254442c5d5ae-nginx-config\") pod \"b6c34228-70fd-4d95-a6af-254442c5d5ae\" (UID: \"b6c34228-70fd-4d95-a6af-254442c5d5ae\") "
Apr 16 04:53:02.511846 kubelet[2764]: I0416 04:53:02.511781 2764 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6c34228-70fd-4d95-a6af-254442c5d5ae-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "b6c34228-70fd-4d95-a6af-254442c5d5ae" (UID: "b6c34228-70fd-4d95-a6af-254442c5d5ae"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 16 04:53:02.512299 kubelet[2764]: I0416 04:53:02.512097 2764 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6c34228-70fd-4d95-a6af-254442c5d5ae-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "b6c34228-70fd-4d95-a6af-254442c5d5ae" (UID: "b6c34228-70fd-4d95-a6af-254442c5d5ae"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 16 04:53:02.528753 kubelet[2764]: I0416 04:53:02.528661 2764 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6c34228-70fd-4d95-a6af-254442c5d5ae-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "b6c34228-70fd-4d95-a6af-254442c5d5ae" (UID: "b6c34228-70fd-4d95-a6af-254442c5d5ae"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 16 04:53:02.529238 systemd[1]: var-lib-kubelet-pods-b6c34228\x2d70fd\x2d4d95\x2da6af\x2d254442c5d5ae-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Apr 16 04:53:02.549975 systemd-networkd[1497]: caliea47f063578: Link UP Apr 16 04:53:02.551727 systemd-networkd[1497]: caliea47f063578: Gained carrier Apr 16 04:53:02.568883 kubelet[2764]: I0416 04:53:02.568626 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-lrppt" podStartSLOduration=3.663366764 podStartE2EDuration="22.568606853s" podCreationTimestamp="2026-04-16 04:52:40 +0000 UTC" firstStartedPulling="2026-04-16 04:52:41.046530095 +0000 UTC m=+17.038264499" lastFinishedPulling="2026-04-16 04:52:59.951770184 +0000 UTC m=+35.943504588" observedRunningTime="2026-04-16 04:53:02.475490068 +0000 UTC m=+38.467224476" watchObservedRunningTime="2026-04-16 04:53:02.568606853 +0000 UTC m=+38.560341257" Apr 16 04:53:02.572209 containerd[1618]: 2026-04-16 04:53:02.244 [ERROR][3877] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 16 04:53:02.572209 containerd[1618]: 2026-04-16 04:53:02.276 [INFO][3877] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--6d28c-eth0 csi-node-driver- calico-system 81d6e248-dbc3-4b14-bcf7-e79ac5dd77dd 710 0 2026-04-16 04:52:40 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-6d28c eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliea47f063578 [] [] }} ContainerID="227f6656bde7c2318de3ff7cb08736b0b3e2ce69fe3dcc3d9280f933308aa90d" Namespace="calico-system" Pod="csi-node-driver-6d28c" WorkloadEndpoint="localhost-k8s-csi--node--driver--6d28c-" Apr 16 
04:53:02.572209 containerd[1618]: 2026-04-16 04:53:02.276 [INFO][3877] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="227f6656bde7c2318de3ff7cb08736b0b3e2ce69fe3dcc3d9280f933308aa90d" Namespace="calico-system" Pod="csi-node-driver-6d28c" WorkloadEndpoint="localhost-k8s-csi--node--driver--6d28c-eth0" Apr 16 04:53:02.572209 containerd[1618]: 2026-04-16 04:53:02.336 [INFO][3894] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="227f6656bde7c2318de3ff7cb08736b0b3e2ce69fe3dcc3d9280f933308aa90d" HandleID="k8s-pod-network.227f6656bde7c2318de3ff7cb08736b0b3e2ce69fe3dcc3d9280f933308aa90d" Workload="localhost-k8s-csi--node--driver--6d28c-eth0" Apr 16 04:53:02.572567 containerd[1618]: 2026-04-16 04:53:02.343 [INFO][3894] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="227f6656bde7c2318de3ff7cb08736b0b3e2ce69fe3dcc3d9280f933308aa90d" HandleID="k8s-pod-network.227f6656bde7c2318de3ff7cb08736b0b3e2ce69fe3dcc3d9280f933308aa90d" Workload="localhost-k8s-csi--node--driver--6d28c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000419bf0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-6d28c", "timestamp":"2026-04-16 04:53:02.33629436 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00013ab00)} Apr 16 04:53:02.572567 containerd[1618]: 2026-04-16 04:53:02.343 [INFO][3894] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 04:53:02.572567 containerd[1618]: 2026-04-16 04:53:02.343 [INFO][3894] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 04:53:02.572567 containerd[1618]: 2026-04-16 04:53:02.343 [INFO][3894] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 04:53:02.572567 containerd[1618]: 2026-04-16 04:53:02.415 [INFO][3894] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.227f6656bde7c2318de3ff7cb08736b0b3e2ce69fe3dcc3d9280f933308aa90d" host="localhost" Apr 16 04:53:02.572567 containerd[1618]: 2026-04-16 04:53:02.434 [INFO][3894] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 04:53:02.572567 containerd[1618]: 2026-04-16 04:53:02.450 [INFO][3894] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 04:53:02.572567 containerd[1618]: 2026-04-16 04:53:02.457 [INFO][3894] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 04:53:02.572567 containerd[1618]: 2026-04-16 04:53:02.463 [INFO][3894] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 04:53:02.572567 containerd[1618]: 2026-04-16 04:53:02.464 [INFO][3894] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.227f6656bde7c2318de3ff7cb08736b0b3e2ce69fe3dcc3d9280f933308aa90d" host="localhost" Apr 16 04:53:02.572762 containerd[1618]: 2026-04-16 04:53:02.471 [INFO][3894] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.227f6656bde7c2318de3ff7cb08736b0b3e2ce69fe3dcc3d9280f933308aa90d Apr 16 04:53:02.572762 containerd[1618]: 2026-04-16 04:53:02.484 [INFO][3894] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.227f6656bde7c2318de3ff7cb08736b0b3e2ce69fe3dcc3d9280f933308aa90d" host="localhost" Apr 16 04:53:02.572762 containerd[1618]: 2026-04-16 04:53:02.527 [INFO][3894] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.227f6656bde7c2318de3ff7cb08736b0b3e2ce69fe3dcc3d9280f933308aa90d" host="localhost" Apr 16 04:53:02.572762 containerd[1618]: 2026-04-16 04:53:02.527 [INFO][3894] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.227f6656bde7c2318de3ff7cb08736b0b3e2ce69fe3dcc3d9280f933308aa90d" host="localhost" Apr 16 04:53:02.572762 containerd[1618]: 2026-04-16 04:53:02.527 [INFO][3894] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 04:53:02.572762 containerd[1618]: 2026-04-16 04:53:02.527 [INFO][3894] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="227f6656bde7c2318de3ff7cb08736b0b3e2ce69fe3dcc3d9280f933308aa90d" HandleID="k8s-pod-network.227f6656bde7c2318de3ff7cb08736b0b3e2ce69fe3dcc3d9280f933308aa90d" Workload="localhost-k8s-csi--node--driver--6d28c-eth0" Apr 16 04:53:02.573012 containerd[1618]: 2026-04-16 04:53:02.534 [INFO][3877] cni-plugin/k8s.go 418: Populated endpoint ContainerID="227f6656bde7c2318de3ff7cb08736b0b3e2ce69fe3dcc3d9280f933308aa90d" Namespace="calico-system" Pod="csi-node-driver-6d28c" WorkloadEndpoint="localhost-k8s-csi--node--driver--6d28c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6d28c-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"81d6e248-dbc3-4b14-bcf7-e79ac5dd77dd", ResourceVersion:"710", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 4, 52, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-6d28c", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliea47f063578", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 04:53:02.573197 containerd[1618]: 2026-04-16 04:53:02.534 [INFO][3877] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="227f6656bde7c2318de3ff7cb08736b0b3e2ce69fe3dcc3d9280f933308aa90d" Namespace="calico-system" Pod="csi-node-driver-6d28c" WorkloadEndpoint="localhost-k8s-csi--node--driver--6d28c-eth0" Apr 16 04:53:02.573197 containerd[1618]: 2026-04-16 04:53:02.534 [INFO][3877] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliea47f063578 ContainerID="227f6656bde7c2318de3ff7cb08736b0b3e2ce69fe3dcc3d9280f933308aa90d" Namespace="calico-system" Pod="csi-node-driver-6d28c" WorkloadEndpoint="localhost-k8s-csi--node--driver--6d28c-eth0" Apr 16 04:53:02.573197 containerd[1618]: 2026-04-16 04:53:02.552 [INFO][3877] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="227f6656bde7c2318de3ff7cb08736b0b3e2ce69fe3dcc3d9280f933308aa90d" Namespace="calico-system" Pod="csi-node-driver-6d28c" WorkloadEndpoint="localhost-k8s-csi--node--driver--6d28c-eth0" Apr 16 04:53:02.573259 containerd[1618]: 2026-04-16 04:53:02.553 [INFO][3877] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="227f6656bde7c2318de3ff7cb08736b0b3e2ce69fe3dcc3d9280f933308aa90d" 
Namespace="calico-system" Pod="csi-node-driver-6d28c" WorkloadEndpoint="localhost-k8s-csi--node--driver--6d28c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6d28c-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"81d6e248-dbc3-4b14-bcf7-e79ac5dd77dd", ResourceVersion:"710", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 4, 52, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"227f6656bde7c2318de3ff7cb08736b0b3e2ce69fe3dcc3d9280f933308aa90d", Pod:"csi-node-driver-6d28c", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliea47f063578", MAC:"9a:67:e8:96:a4:00", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 04:53:02.573314 containerd[1618]: 2026-04-16 04:53:02.570 [INFO][3877] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="227f6656bde7c2318de3ff7cb08736b0b3e2ce69fe3dcc3d9280f933308aa90d" Namespace="calico-system" Pod="csi-node-driver-6d28c" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--6d28c-eth0" Apr 16 04:53:02.610970 kubelet[2764]: I0416 04:53:02.610826 2764 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h26b9\" (UniqueName: \"kubernetes.io/projected/b6c34228-70fd-4d95-a6af-254442c5d5ae-kube-api-access-h26b9\") pod \"b6c34228-70fd-4d95-a6af-254442c5d5ae\" (UID: \"b6c34228-70fd-4d95-a6af-254442c5d5ae\") " Apr 16 04:53:02.611792 kubelet[2764]: I0416 04:53:02.611745 2764 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b6c34228-70fd-4d95-a6af-254442c5d5ae-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Apr 16 04:53:02.611792 kubelet[2764]: I0416 04:53:02.611772 2764 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6c34228-70fd-4d95-a6af-254442c5d5ae-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Apr 16 04:53:02.611792 kubelet[2764]: I0416 04:53:02.611779 2764 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/b6c34228-70fd-4d95-a6af-254442c5d5ae-nginx-config\") on node \"localhost\" DevicePath \"\"" Apr 16 04:53:02.618834 systemd[1]: var-lib-kubelet-pods-b6c34228\x2d70fd\x2d4d95\x2da6af\x2d254442c5d5ae-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh26b9.mount: Deactivated successfully. Apr 16 04:53:02.620195 kubelet[2764]: I0416 04:53:02.619698 2764 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6c34228-70fd-4d95-a6af-254442c5d5ae-kube-api-access-h26b9" (OuterVolumeSpecName: "kube-api-access-h26b9") pod "b6c34228-70fd-4d95-a6af-254442c5d5ae" (UID: "b6c34228-70fd-4d95-a6af-254442c5d5ae"). InnerVolumeSpecName "kube-api-access-h26b9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 04:53:02.713111 kubelet[2764]: I0416 04:53:02.712895 2764 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h26b9\" (UniqueName: \"kubernetes.io/projected/b6c34228-70fd-4d95-a6af-254442c5d5ae-kube-api-access-h26b9\") on node \"localhost\" DevicePath \"\"" Apr 16 04:53:02.722644 containerd[1618]: time="2026-04-16T04:53:02.722495436Z" level=info msg="connecting to shim 227f6656bde7c2318de3ff7cb08736b0b3e2ce69fe3dcc3d9280f933308aa90d" address="unix:///run/containerd/s/a888a1b0142eb18ebf056ae03a9bd0277d5b9dd13844deefb5291000f620d23d" namespace=k8s.io protocol=ttrpc version=3 Apr 16 04:53:02.830862 systemd[1]: Started cri-containerd-227f6656bde7c2318de3ff7cb08736b0b3e2ce69fe3dcc3d9280f933308aa90d.scope - libcontainer container 227f6656bde7c2318de3ff7cb08736b0b3e2ce69fe3dcc3d9280f933308aa90d. Apr 16 04:53:02.856990 systemd-resolved[1423]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 04:53:02.885328 containerd[1618]: time="2026-04-16T04:53:02.885110493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6d28c,Uid:81d6e248-dbc3-4b14-bcf7-e79ac5dd77dd,Namespace:calico-system,Attempt:0,} returns sandbox id \"227f6656bde7c2318de3ff7cb08736b0b3e2ce69fe3dcc3d9280f933308aa90d\"" Apr 16 04:53:02.886973 containerd[1618]: time="2026-04-16T04:53:02.886945052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 16 04:53:03.465411 systemd[1]: Removed slice kubepods-besteffort-podb6c34228_70fd_4d95_a6af_254442c5d5ae.slice - libcontainer container kubepods-besteffort-podb6c34228_70fd_4d95_a6af_254442c5d5ae.slice. 
Apr 16 04:53:03.729549 kubelet[2764]: I0416 04:53:03.728431 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/810c25b9-1a00-4307-80a9-872a58e66780-nginx-config\") pod \"whisker-6dcf7c684f-4bhsj\" (UID: \"810c25b9-1a00-4307-80a9-872a58e66780\") " pod="calico-system/whisker-6dcf7c684f-4bhsj" Apr 16 04:53:03.729549 kubelet[2764]: I0416 04:53:03.728557 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/810c25b9-1a00-4307-80a9-872a58e66780-whisker-backend-key-pair\") pod \"whisker-6dcf7c684f-4bhsj\" (UID: \"810c25b9-1a00-4307-80a9-872a58e66780\") " pod="calico-system/whisker-6dcf7c684f-4bhsj" Apr 16 04:53:03.729549 kubelet[2764]: I0416 04:53:03.728588 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/810c25b9-1a00-4307-80a9-872a58e66780-whisker-ca-bundle\") pod \"whisker-6dcf7c684f-4bhsj\" (UID: \"810c25b9-1a00-4307-80a9-872a58e66780\") " pod="calico-system/whisker-6dcf7c684f-4bhsj" Apr 16 04:53:03.729549 kubelet[2764]: I0416 04:53:03.728600 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8ck7\" (UniqueName: \"kubernetes.io/projected/810c25b9-1a00-4307-80a9-872a58e66780-kube-api-access-z8ck7\") pod \"whisker-6dcf7c684f-4bhsj\" (UID: \"810c25b9-1a00-4307-80a9-872a58e66780\") " pod="calico-system/whisker-6dcf7c684f-4bhsj" Apr 16 04:53:03.786671 systemd[1]: Created slice kubepods-besteffort-pod810c25b9_1a00_4307_80a9_872a58e66780.slice - libcontainer container kubepods-besteffort-pod810c25b9_1a00_4307_80a9_872a58e66780.slice. Apr 16 04:53:04.099428 systemd[1]: Started sshd@7-10.0.0.15:22-10.0.0.1:50364.service - OpenSSH per-connection server daemon (10.0.0.1:50364). 
Apr 16 04:53:04.150517 systemd-networkd[1497]: caliea47f063578: Gained IPv6LL Apr 16 04:53:04.227079 kubelet[2764]: I0416 04:53:04.225830 2764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6c34228-70fd-4d95-a6af-254442c5d5ae" path="/var/lib/kubelet/pods/b6c34228-70fd-4d95-a6af-254442c5d5ae/volumes" Apr 16 04:53:04.318984 containerd[1618]: time="2026-04-16T04:53:04.318475252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dcf7c684f-4bhsj,Uid:810c25b9-1a00-4307-80a9-872a58e66780,Namespace:calico-system,Attempt:0,}" Apr 16 04:53:04.396724 sshd[4032]: Accepted publickey for core from 10.0.0.1 port 50364 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s Apr 16 04:53:04.403077 sshd-session[4032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:53:04.411108 systemd-logind[1586]: New session 8 of user core. Apr 16 04:53:04.418591 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 16 04:53:04.788449 sshd[4116]: Connection closed by 10.0.0.1 port 50364 Apr 16 04:53:04.788264 sshd-session[4032]: pam_unix(sshd:session): session closed for user core Apr 16 04:53:04.798455 systemd[1]: sshd@7-10.0.0.15:22-10.0.0.1:50364.service: Deactivated successfully. Apr 16 04:53:04.819032 systemd[1]: session-8.scope: Deactivated successfully. Apr 16 04:53:04.826941 systemd-logind[1586]: Session 8 logged out. Waiting for processes to exit. Apr 16 04:53:04.830558 systemd-logind[1586]: Removed session 8. 
Apr 16 04:53:05.103055 systemd-networkd[1497]: cali16d1fca7a59: Link UP Apr 16 04:53:05.116743 systemd-networkd[1497]: cali16d1fca7a59: Gained carrier Apr 16 04:53:05.231029 containerd[1618]: 2026-04-16 04:53:04.649 [ERROR][4104] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 16 04:53:05.231029 containerd[1618]: 2026-04-16 04:53:04.736 [INFO][4104] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6dcf7c684f--4bhsj-eth0 whisker-6dcf7c684f- calico-system 810c25b9-1a00-4307-80a9-872a58e66780 926 0 2026-04-16 04:53:03 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6dcf7c684f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6dcf7c684f-4bhsj eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali16d1fca7a59 [] [] }} ContainerID="b3ee3b32cd52b6813161fec60e67034c3ea295536a2ac8bdb34ff89341bac292" Namespace="calico-system" Pod="whisker-6dcf7c684f-4bhsj" WorkloadEndpoint="localhost-k8s-whisker--6dcf7c684f--4bhsj-" Apr 16 04:53:05.231029 containerd[1618]: 2026-04-16 04:53:04.736 [INFO][4104] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b3ee3b32cd52b6813161fec60e67034c3ea295536a2ac8bdb34ff89341bac292" Namespace="calico-system" Pod="whisker-6dcf7c684f-4bhsj" WorkloadEndpoint="localhost-k8s-whisker--6dcf7c684f--4bhsj-eth0" Apr 16 04:53:05.231029 containerd[1618]: 2026-04-16 04:53:04.843 [INFO][4157] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b3ee3b32cd52b6813161fec60e67034c3ea295536a2ac8bdb34ff89341bac292" HandleID="k8s-pod-network.b3ee3b32cd52b6813161fec60e67034c3ea295536a2ac8bdb34ff89341bac292" 
Workload="localhost-k8s-whisker--6dcf7c684f--4bhsj-eth0" Apr 16 04:53:05.231848 containerd[1618]: 2026-04-16 04:53:04.875 [INFO][4157] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b3ee3b32cd52b6813161fec60e67034c3ea295536a2ac8bdb34ff89341bac292" HandleID="k8s-pod-network.b3ee3b32cd52b6813161fec60e67034c3ea295536a2ac8bdb34ff89341bac292" Workload="localhost-k8s-whisker--6dcf7c684f--4bhsj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000beb10), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6dcf7c684f-4bhsj", "timestamp":"2026-04-16 04:53:04.843700282 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0006ae160)} Apr 16 04:53:05.231848 containerd[1618]: 2026-04-16 04:53:04.876 [INFO][4157] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 04:53:05.231848 containerd[1618]: 2026-04-16 04:53:04.876 [INFO][4157] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 04:53:05.231848 containerd[1618]: 2026-04-16 04:53:04.876 [INFO][4157] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 04:53:05.231848 containerd[1618]: 2026-04-16 04:53:04.893 [INFO][4157] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b3ee3b32cd52b6813161fec60e67034c3ea295536a2ac8bdb34ff89341bac292" host="localhost" Apr 16 04:53:05.231848 containerd[1618]: 2026-04-16 04:53:04.929 [INFO][4157] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 04:53:05.231848 containerd[1618]: 2026-04-16 04:53:04.946 [INFO][4157] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 04:53:05.231848 containerd[1618]: 2026-04-16 04:53:04.951 [INFO][4157] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 04:53:05.231848 containerd[1618]: 2026-04-16 04:53:04.958 [INFO][4157] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 04:53:05.231848 containerd[1618]: 2026-04-16 04:53:04.959 [INFO][4157] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b3ee3b32cd52b6813161fec60e67034c3ea295536a2ac8bdb34ff89341bac292" host="localhost" Apr 16 04:53:05.232358 containerd[1618]: 2026-04-16 04:53:04.966 [INFO][4157] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b3ee3b32cd52b6813161fec60e67034c3ea295536a2ac8bdb34ff89341bac292 Apr 16 04:53:05.232358 containerd[1618]: 2026-04-16 04:53:04.995 [INFO][4157] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b3ee3b32cd52b6813161fec60e67034c3ea295536a2ac8bdb34ff89341bac292" host="localhost" Apr 16 04:53:05.232358 containerd[1618]: 2026-04-16 04:53:05.045 [INFO][4157] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.b3ee3b32cd52b6813161fec60e67034c3ea295536a2ac8bdb34ff89341bac292" host="localhost" Apr 16 04:53:05.232358 containerd[1618]: 2026-04-16 04:53:05.047 [INFO][4157] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.b3ee3b32cd52b6813161fec60e67034c3ea295536a2ac8bdb34ff89341bac292" host="localhost" Apr 16 04:53:05.232358 containerd[1618]: 2026-04-16 04:53:05.050 [INFO][4157] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 04:53:05.232358 containerd[1618]: 2026-04-16 04:53:05.052 [INFO][4157] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="b3ee3b32cd52b6813161fec60e67034c3ea295536a2ac8bdb34ff89341bac292" HandleID="k8s-pod-network.b3ee3b32cd52b6813161fec60e67034c3ea295536a2ac8bdb34ff89341bac292" Workload="localhost-k8s-whisker--6dcf7c684f--4bhsj-eth0" Apr 16 04:53:05.232615 containerd[1618]: 2026-04-16 04:53:05.073 [INFO][4104] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b3ee3b32cd52b6813161fec60e67034c3ea295536a2ac8bdb34ff89341bac292" Namespace="calico-system" Pod="whisker-6dcf7c684f-4bhsj" WorkloadEndpoint="localhost-k8s-whisker--6dcf7c684f--4bhsj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6dcf7c684f--4bhsj-eth0", GenerateName:"whisker-6dcf7c684f-", Namespace:"calico-system", SelfLink:"", UID:"810c25b9-1a00-4307-80a9-872a58e66780", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 4, 53, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6dcf7c684f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6dcf7c684f-4bhsj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali16d1fca7a59", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 04:53:05.232615 containerd[1618]: 2026-04-16 04:53:05.075 [INFO][4104] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="b3ee3b32cd52b6813161fec60e67034c3ea295536a2ac8bdb34ff89341bac292" Namespace="calico-system" Pod="whisker-6dcf7c684f-4bhsj" WorkloadEndpoint="localhost-k8s-whisker--6dcf7c684f--4bhsj-eth0" Apr 16 04:53:05.232760 containerd[1618]: 2026-04-16 04:53:05.075 [INFO][4104] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali16d1fca7a59 ContainerID="b3ee3b32cd52b6813161fec60e67034c3ea295536a2ac8bdb34ff89341bac292" Namespace="calico-system" Pod="whisker-6dcf7c684f-4bhsj" WorkloadEndpoint="localhost-k8s-whisker--6dcf7c684f--4bhsj-eth0" Apr 16 04:53:05.232760 containerd[1618]: 2026-04-16 04:53:05.119 [INFO][4104] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b3ee3b32cd52b6813161fec60e67034c3ea295536a2ac8bdb34ff89341bac292" Namespace="calico-system" Pod="whisker-6dcf7c684f-4bhsj" WorkloadEndpoint="localhost-k8s-whisker--6dcf7c684f--4bhsj-eth0" Apr 16 04:53:05.232800 containerd[1618]: 2026-04-16 04:53:05.122 [INFO][4104] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b3ee3b32cd52b6813161fec60e67034c3ea295536a2ac8bdb34ff89341bac292" Namespace="calico-system" Pod="whisker-6dcf7c684f-4bhsj" 
WorkloadEndpoint="localhost-k8s-whisker--6dcf7c684f--4bhsj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6dcf7c684f--4bhsj-eth0", GenerateName:"whisker-6dcf7c684f-", Namespace:"calico-system", SelfLink:"", UID:"810c25b9-1a00-4307-80a9-872a58e66780", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 4, 53, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6dcf7c684f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b3ee3b32cd52b6813161fec60e67034c3ea295536a2ac8bdb34ff89341bac292", Pod:"whisker-6dcf7c684f-4bhsj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali16d1fca7a59", MAC:"06:cd:0d:b3:aa:f8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 04:53:05.232965 containerd[1618]: 2026-04-16 04:53:05.221 [INFO][4104] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b3ee3b32cd52b6813161fec60e67034c3ea295536a2ac8bdb34ff89341bac292" Namespace="calico-system" Pod="whisker-6dcf7c684f-4bhsj" WorkloadEndpoint="localhost-k8s-whisker--6dcf7c684f--4bhsj-eth0" Apr 16 04:53:05.318099 containerd[1618]: time="2026-04-16T04:53:05.317156424Z" level=info msg="connecting to shim 
b3ee3b32cd52b6813161fec60e67034c3ea295536a2ac8bdb34ff89341bac292" address="unix:///run/containerd/s/39cb236f350e7f960323eb89314735252da299b041bb8a16b1633eb24b367358" namespace=k8s.io protocol=ttrpc version=3 Apr 16 04:53:05.364187 systemd[1]: Started cri-containerd-b3ee3b32cd52b6813161fec60e67034c3ea295536a2ac8bdb34ff89341bac292.scope - libcontainer container b3ee3b32cd52b6813161fec60e67034c3ea295536a2ac8bdb34ff89341bac292. Apr 16 04:53:05.395053 kubelet[2764]: I0416 04:53:05.393517 2764 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 04:53:05.397818 kubelet[2764]: E0416 04:53:05.397382 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:53:05.450033 containerd[1618]: time="2026-04-16T04:53:05.449994686Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:53:05.457113 containerd[1618]: time="2026-04-16T04:53:05.456982402Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 16 04:53:05.469614 systemd-resolved[1423]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 04:53:05.480579 containerd[1618]: time="2026-04-16T04:53:05.473090524Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:53:05.505947 kubelet[2764]: E0416 04:53:05.505844 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:53:05.621567 containerd[1618]: time="2026-04-16T04:53:05.618261952Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:53:05.719449 containerd[1618]: time="2026-04-16T04:53:05.715252375Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 2.82827516s" Apr 16 04:53:05.719449 containerd[1618]: time="2026-04-16T04:53:05.715328887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 16 04:53:05.751792 containerd[1618]: time="2026-04-16T04:53:05.751631058Z" level=info msg="CreateContainer within sandbox \"227f6656bde7c2318de3ff7cb08736b0b3e2ce69fe3dcc3d9280f933308aa90d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 16 04:53:05.825260 containerd[1618]: time="2026-04-16T04:53:05.825194895Z" level=info msg="Container 13c6d73e1d5ede2714e313ef3ec578a7abea84264e4ce66a24942dd59a65ec3c: CDI devices from CRI Config.CDIDevices: []" Apr 16 04:53:05.847317 containerd[1618]: time="2026-04-16T04:53:05.847256607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dcf7c684f-4bhsj,Uid:810c25b9-1a00-4307-80a9-872a58e66780,Namespace:calico-system,Attempt:0,} returns sandbox id \"b3ee3b32cd52b6813161fec60e67034c3ea295536a2ac8bdb34ff89341bac292\"" Apr 16 04:53:05.856575 containerd[1618]: time="2026-04-16T04:53:05.852892306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 16 04:53:05.899829 containerd[1618]: time="2026-04-16T04:53:05.897602805Z" level=info msg="CreateContainer within sandbox \"227f6656bde7c2318de3ff7cb08736b0b3e2ce69fe3dcc3d9280f933308aa90d\" for 
&ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"13c6d73e1d5ede2714e313ef3ec578a7abea84264e4ce66a24942dd59a65ec3c\"" Apr 16 04:53:05.905467 containerd[1618]: time="2026-04-16T04:53:05.905255811Z" level=info msg="StartContainer for \"13c6d73e1d5ede2714e313ef3ec578a7abea84264e4ce66a24942dd59a65ec3c\"" Apr 16 04:53:05.922183 containerd[1618]: time="2026-04-16T04:53:05.922064696Z" level=info msg="connecting to shim 13c6d73e1d5ede2714e313ef3ec578a7abea84264e4ce66a24942dd59a65ec3c" address="unix:///run/containerd/s/a888a1b0142eb18ebf056ae03a9bd0277d5b9dd13844deefb5291000f620d23d" protocol=ttrpc version=3 Apr 16 04:53:06.046795 systemd[1]: Started cri-containerd-13c6d73e1d5ede2714e313ef3ec578a7abea84264e4ce66a24942dd59a65ec3c.scope - libcontainer container 13c6d73e1d5ede2714e313ef3ec578a7abea84264e4ce66a24942dd59a65ec3c. Apr 16 04:53:06.520774 containerd[1618]: time="2026-04-16T04:53:06.520476564Z" level=info msg="StartContainer for \"13c6d73e1d5ede2714e313ef3ec578a7abea84264e4ce66a24942dd59a65ec3c\" returns successfully" Apr 16 04:53:06.613776 systemd-networkd[1497]: cali16d1fca7a59: Gained IPv6LL Apr 16 04:53:09.565973 containerd[1618]: time="2026-04-16T04:53:09.565673575Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:53:09.568026 containerd[1618]: time="2026-04-16T04:53:09.567413999Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 16 04:53:09.576563 containerd[1618]: time="2026-04-16T04:53:09.576406339Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:53:09.584647 containerd[1618]: time="2026-04-16T04:53:09.584358949Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:53:09.585955 containerd[1618]: time="2026-04-16T04:53:09.585809108Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 3.728910141s" Apr 16 04:53:09.585955 containerd[1618]: time="2026-04-16T04:53:09.585871975Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 16 04:53:09.608513 containerd[1618]: time="2026-04-16T04:53:09.608272956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 16 04:53:09.660078 containerd[1618]: time="2026-04-16T04:53:09.660025824Z" level=info msg="CreateContainer within sandbox \"b3ee3b32cd52b6813161fec60e67034c3ea295536a2ac8bdb34ff89341bac292\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 16 04:53:09.685045 containerd[1618]: time="2026-04-16T04:53:09.684337759Z" level=info msg="Container 017f902dd7562edbe813068123705be286d20c1c4b524eba286c0ba61b2b7f11: CDI devices from CRI Config.CDIDevices: []" Apr 16 04:53:09.916149 containerd[1618]: time="2026-04-16T04:53:09.912531498Z" level=info msg="CreateContainer within sandbox \"b3ee3b32cd52b6813161fec60e67034c3ea295536a2ac8bdb34ff89341bac292\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"017f902dd7562edbe813068123705be286d20c1c4b524eba286c0ba61b2b7f11\"" Apr 16 04:53:09.916525 systemd[1]: Started sshd@8-10.0.0.15:22-10.0.0.1:49424.service - OpenSSH per-connection server daemon (10.0.0.1:49424). 
Apr 16 04:53:09.991252 containerd[1618]: time="2026-04-16T04:53:09.991152893Z" level=info msg="StartContainer for \"017f902dd7562edbe813068123705be286d20c1c4b524eba286c0ba61b2b7f11\"" Apr 16 04:53:11.635522 containerd[1618]: time="2026-04-16T04:53:11.632445356Z" level=info msg="connecting to shim 017f902dd7562edbe813068123705be286d20c1c4b524eba286c0ba61b2b7f11" address="unix:///run/containerd/s/39cb236f350e7f960323eb89314735252da299b041bb8a16b1633eb24b367358" protocol=ttrpc version=3 Apr 16 04:53:11.839320 systemd[1]: Started cri-containerd-017f902dd7562edbe813068123705be286d20c1c4b524eba286c0ba61b2b7f11.scope - libcontainer container 017f902dd7562edbe813068123705be286d20c1c4b524eba286c0ba61b2b7f11. Apr 16 04:53:12.906459 sshd[4348]: Accepted publickey for core from 10.0.0.1 port 49424 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s Apr 16 04:53:13.040864 sshd-session[4348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:53:13.157511 systemd-logind[1586]: New session 9 of user core. Apr 16 04:53:13.169269 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 16 04:53:13.265314 containerd[1618]: time="2026-04-16T04:53:13.265168050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-pd8mg,Uid:2c44fb2e-260b-4b38-98a7-99b84889cce4,Namespace:calico-system,Attempt:0,}" Apr 16 04:53:13.702458 systemd-networkd[1497]: vxlan.calico: Link UP Apr 16 04:53:13.702467 systemd-networkd[1497]: vxlan.calico: Gained carrier Apr 16 04:53:13.821411 sshd[4386]: Connection closed by 10.0.0.1 port 49424 Apr 16 04:53:13.875289 sshd-session[4348]: pam_unix(sshd:session): session closed for user core Apr 16 04:53:13.886845 systemd-logind[1586]: Session 9 logged out. Waiting for processes to exit. Apr 16 04:53:13.896295 systemd[1]: sshd@8-10.0.0.15:22-10.0.0.1:49424.service: Deactivated successfully. Apr 16 04:53:13.909089 systemd[1]: session-9.scope: Deactivated successfully. 
Apr 16 04:53:13.922084 containerd[1618]: time="2026-04-16T04:53:13.921888892Z" level=info msg="StartContainer for \"017f902dd7562edbe813068123705be286d20c1c4b524eba286c0ba61b2b7f11\" returns successfully" Apr 16 04:53:13.924656 systemd-logind[1586]: Removed session 9. Apr 16 04:53:14.135083 systemd-networkd[1497]: cali243d081be7a: Link UP Apr 16 04:53:14.135532 systemd-networkd[1497]: cali243d081be7a: Gained carrier Apr 16 04:53:14.153933 containerd[1618]: 2026-04-16 04:53:13.929 [INFO][4399] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--5b85766d88--pd8mg-eth0 goldmane-5b85766d88- calico-system 2c44fb2e-260b-4b38-98a7-99b84889cce4 859 0 2026-04-16 04:52:39 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-5b85766d88-pd8mg eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali243d081be7a [] [] }} ContainerID="8e6584e9dc6018a5ade3c1f23382481332f1a4dd06b7dff5349f2abe8bfffbc5" Namespace="calico-system" Pod="goldmane-5b85766d88-pd8mg" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--pd8mg-" Apr 16 04:53:14.153933 containerd[1618]: 2026-04-16 04:53:13.930 [INFO][4399] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8e6584e9dc6018a5ade3c1f23382481332f1a4dd06b7dff5349f2abe8bfffbc5" Namespace="calico-system" Pod="goldmane-5b85766d88-pd8mg" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--pd8mg-eth0" Apr 16 04:53:14.153933 containerd[1618]: 2026-04-16 04:53:13.989 [INFO][4446] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8e6584e9dc6018a5ade3c1f23382481332f1a4dd06b7dff5349f2abe8bfffbc5" HandleID="k8s-pod-network.8e6584e9dc6018a5ade3c1f23382481332f1a4dd06b7dff5349f2abe8bfffbc5" 
Workload="localhost-k8s-goldmane--5b85766d88--pd8mg-eth0" Apr 16 04:53:14.154145 containerd[1618]: 2026-04-16 04:53:14.018 [INFO][4446] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8e6584e9dc6018a5ade3c1f23382481332f1a4dd06b7dff5349f2abe8bfffbc5" HandleID="k8s-pod-network.8e6584e9dc6018a5ade3c1f23382481332f1a4dd06b7dff5349f2abe8bfffbc5" Workload="localhost-k8s-goldmane--5b85766d88--pd8mg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003f1580), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-5b85766d88-pd8mg", "timestamp":"2026-04-16 04:53:13.989593236 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002dedc0)} Apr 16 04:53:14.154145 containerd[1618]: 2026-04-16 04:53:14.021 [INFO][4446] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 04:53:14.154145 containerd[1618]: 2026-04-16 04:53:14.022 [INFO][4446] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 04:53:14.154145 containerd[1618]: 2026-04-16 04:53:14.022 [INFO][4446] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 04:53:14.154145 containerd[1618]: 2026-04-16 04:53:14.028 [INFO][4446] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8e6584e9dc6018a5ade3c1f23382481332f1a4dd06b7dff5349f2abe8bfffbc5" host="localhost" Apr 16 04:53:14.154145 containerd[1618]: 2026-04-16 04:53:14.053 [INFO][4446] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 04:53:14.154145 containerd[1618]: 2026-04-16 04:53:14.066 [INFO][4446] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 04:53:14.154145 containerd[1618]: 2026-04-16 04:53:14.081 [INFO][4446] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 04:53:14.154145 containerd[1618]: 2026-04-16 04:53:14.092 [INFO][4446] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 04:53:14.154145 containerd[1618]: 2026-04-16 04:53:14.093 [INFO][4446] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8e6584e9dc6018a5ade3c1f23382481332f1a4dd06b7dff5349f2abe8bfffbc5" host="localhost" Apr 16 04:53:14.155023 containerd[1618]: 2026-04-16 04:53:14.096 [INFO][4446] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8e6584e9dc6018a5ade3c1f23382481332f1a4dd06b7dff5349f2abe8bfffbc5 Apr 16 04:53:14.155023 containerd[1618]: 2026-04-16 04:53:14.106 [INFO][4446] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8e6584e9dc6018a5ade3c1f23382481332f1a4dd06b7dff5349f2abe8bfffbc5" host="localhost" Apr 16 04:53:14.155023 containerd[1618]: 2026-04-16 04:53:14.118 [INFO][4446] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.8e6584e9dc6018a5ade3c1f23382481332f1a4dd06b7dff5349f2abe8bfffbc5" host="localhost" Apr 16 04:53:14.155023 containerd[1618]: 2026-04-16 04:53:14.119 [INFO][4446] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.8e6584e9dc6018a5ade3c1f23382481332f1a4dd06b7dff5349f2abe8bfffbc5" host="localhost" Apr 16 04:53:14.155023 containerd[1618]: 2026-04-16 04:53:14.119 [INFO][4446] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 04:53:14.155023 containerd[1618]: 2026-04-16 04:53:14.119 [INFO][4446] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="8e6584e9dc6018a5ade3c1f23382481332f1a4dd06b7dff5349f2abe8bfffbc5" HandleID="k8s-pod-network.8e6584e9dc6018a5ade3c1f23382481332f1a4dd06b7dff5349f2abe8bfffbc5" Workload="localhost-k8s-goldmane--5b85766d88--pd8mg-eth0" Apr 16 04:53:14.155228 containerd[1618]: 2026-04-16 04:53:14.130 [INFO][4399] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8e6584e9dc6018a5ade3c1f23382481332f1a4dd06b7dff5349f2abe8bfffbc5" Namespace="calico-system" Pod="goldmane-5b85766d88-pd8mg" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--pd8mg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--pd8mg-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"2c44fb2e-260b-4b38-98a7-99b84889cce4", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 4, 52, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-5b85766d88-pd8mg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali243d081be7a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 04:53:14.155228 containerd[1618]: 2026-04-16 04:53:14.132 [INFO][4399] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="8e6584e9dc6018a5ade3c1f23382481332f1a4dd06b7dff5349f2abe8bfffbc5" Namespace="calico-system" Pod="goldmane-5b85766d88-pd8mg" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--pd8mg-eth0" Apr 16 04:53:14.155346 containerd[1618]: 2026-04-16 04:53:14.132 [INFO][4399] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali243d081be7a ContainerID="8e6584e9dc6018a5ade3c1f23382481332f1a4dd06b7dff5349f2abe8bfffbc5" Namespace="calico-system" Pod="goldmane-5b85766d88-pd8mg" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--pd8mg-eth0" Apr 16 04:53:14.155346 containerd[1618]: 2026-04-16 04:53:14.135 [INFO][4399] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8e6584e9dc6018a5ade3c1f23382481332f1a4dd06b7dff5349f2abe8bfffbc5" Namespace="calico-system" Pod="goldmane-5b85766d88-pd8mg" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--pd8mg-eth0" Apr 16 04:53:14.155401 containerd[1618]: 2026-04-16 04:53:14.135 [INFO][4399] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8e6584e9dc6018a5ade3c1f23382481332f1a4dd06b7dff5349f2abe8bfffbc5" Namespace="calico-system" Pod="goldmane-5b85766d88-pd8mg" 
WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--pd8mg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--pd8mg-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"2c44fb2e-260b-4b38-98a7-99b84889cce4", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 4, 52, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8e6584e9dc6018a5ade3c1f23382481332f1a4dd06b7dff5349f2abe8bfffbc5", Pod:"goldmane-5b85766d88-pd8mg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali243d081be7a", MAC:"1a:03:9d:04:4d:79", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 04:53:14.155479 containerd[1618]: 2026-04-16 04:53:14.148 [INFO][4399] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8e6584e9dc6018a5ade3c1f23382481332f1a4dd06b7dff5349f2abe8bfffbc5" Namespace="calico-system" Pod="goldmane-5b85766d88-pd8mg" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--pd8mg-eth0" Apr 16 04:53:14.239323 containerd[1618]: time="2026-04-16T04:53:14.239215354Z" level=info msg="connecting to shim 
8e6584e9dc6018a5ade3c1f23382481332f1a4dd06b7dff5349f2abe8bfffbc5" address="unix:///run/containerd/s/6209150428d13aad34bf814c8e982c7029a54f2d50d0065f21ec0f3b22449794" namespace=k8s.io protocol=ttrpc version=3 Apr 16 04:53:14.330079 systemd[1]: Started cri-containerd-8e6584e9dc6018a5ade3c1f23382481332f1a4dd06b7dff5349f2abe8bfffbc5.scope - libcontainer container 8e6584e9dc6018a5ade3c1f23382481332f1a4dd06b7dff5349f2abe8bfffbc5. Apr 16 04:53:14.360700 systemd-resolved[1423]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 04:53:14.510937 containerd[1618]: time="2026-04-16T04:53:14.510701972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-pd8mg,Uid:2c44fb2e-260b-4b38-98a7-99b84889cce4,Namespace:calico-system,Attempt:0,} returns sandbox id \"8e6584e9dc6018a5ade3c1f23382481332f1a4dd06b7dff5349f2abe8bfffbc5\"" Apr 16 04:53:14.845022 containerd[1618]: time="2026-04-16T04:53:14.844860926Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 16 04:53:14.847651 containerd[1618]: time="2026-04-16T04:53:14.847533923Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:53:14.853839 containerd[1618]: time="2026-04-16T04:53:14.853615086Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:53:14.858464 containerd[1618]: time="2026-04-16T04:53:14.857056027Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:53:14.869028 containerd[1618]: time="2026-04-16T04:53:14.868947408Z" level=info msg="Pulled 
image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 5.260611s" Apr 16 04:53:14.869028 containerd[1618]: time="2026-04-16T04:53:14.868987365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 16 04:53:14.870101 containerd[1618]: time="2026-04-16T04:53:14.870048701Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 16 04:53:14.879894 containerd[1618]: time="2026-04-16T04:53:14.879819190Z" level=info msg="CreateContainer within sandbox \"227f6656bde7c2318de3ff7cb08736b0b3e2ce69fe3dcc3d9280f933308aa90d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 16 04:53:14.911545 containerd[1618]: time="2026-04-16T04:53:14.911460289Z" level=info msg="Container c05a4591efb34afcce1799da61a3901660cec642171e04e9e511582869084682: CDI devices from CRI Config.CDIDevices: []" Apr 16 04:53:14.939561 containerd[1618]: time="2026-04-16T04:53:14.938694072Z" level=info msg="CreateContainer within sandbox \"227f6656bde7c2318de3ff7cb08736b0b3e2ce69fe3dcc3d9280f933308aa90d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c05a4591efb34afcce1799da61a3901660cec642171e04e9e511582869084682\"" Apr 16 04:53:14.941074 containerd[1618]: time="2026-04-16T04:53:14.941017701Z" level=info msg="StartContainer for \"c05a4591efb34afcce1799da61a3901660cec642171e04e9e511582869084682\"" Apr 16 04:53:14.944845 containerd[1618]: time="2026-04-16T04:53:14.944783225Z" level=info msg="connecting to shim 
c05a4591efb34afcce1799da61a3901660cec642171e04e9e511582869084682" address="unix:///run/containerd/s/a888a1b0142eb18ebf056ae03a9bd0277d5b9dd13844deefb5291000f620d23d" protocol=ttrpc version=3 Apr 16 04:53:14.965646 systemd-networkd[1497]: vxlan.calico: Gained IPv6LL Apr 16 04:53:14.983268 systemd[1]: Started cri-containerd-c05a4591efb34afcce1799da61a3901660cec642171e04e9e511582869084682.scope - libcontainer container c05a4591efb34afcce1799da61a3901660cec642171e04e9e511582869084682. Apr 16 04:53:15.169551 containerd[1618]: time="2026-04-16T04:53:15.168971017Z" level=info msg="StartContainer for \"c05a4591efb34afcce1799da61a3901660cec642171e04e9e511582869084682\" returns successfully" Apr 16 04:53:15.186780 kubelet[2764]: E0416 04:53:15.186377 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:53:15.188521 containerd[1618]: time="2026-04-16T04:53:15.186636443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5655d84d6d-qrljt,Uid:a223c7b0-6e14-4098-bc37-5a4aa3d8d80b,Namespace:calico-system,Attempt:0,}" Apr 16 04:53:15.188521 containerd[1618]: time="2026-04-16T04:53:15.187097353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85fbbfb5c9-s7lgg,Uid:edf90915-a4d4-442e-9905-e4fc01d7ae9f,Namespace:calico-system,Attempt:0,}" Apr 16 04:53:15.188521 containerd[1618]: time="2026-04-16T04:53:15.187433918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5655d84d6d-jt5hf,Uid:142902a2-0442-4aed-9c46-d117595f20c4,Namespace:calico-system,Attempt:0,}" Apr 16 04:53:15.188521 containerd[1618]: time="2026-04-16T04:53:15.188332430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mf89m,Uid:9118c537-83d9-45de-bef7-fb503241b41d,Namespace:kube-system,Attempt:0,}" Apr 16 04:53:15.552526 kubelet[2764]: I0416 04:53:15.552452 2764 
csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 16 04:53:15.554673 kubelet[2764]: I0416 04:53:15.554517 2764 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 16 04:53:15.798372 systemd-networkd[1497]: cali243d081be7a: Gained IPv6LL Apr 16 04:53:15.860446 systemd-networkd[1497]: calia52945d34d5: Link UP Apr 16 04:53:15.866801 systemd-networkd[1497]: calia52945d34d5: Gained carrier Apr 16 04:53:15.906987 kubelet[2764]: I0416 04:53:15.904864 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-6d28c" podStartSLOduration=23.921550188 podStartE2EDuration="35.904842581s" podCreationTimestamp="2026-04-16 04:52:40 +0000 UTC" firstStartedPulling="2026-04-16 04:53:02.886656592 +0000 UTC m=+38.878390996" lastFinishedPulling="2026-04-16 04:53:14.869948969 +0000 UTC m=+50.861683389" observedRunningTime="2026-04-16 04:53:15.890490824 +0000 UTC m=+51.882225236" watchObservedRunningTime="2026-04-16 04:53:15.904842581 +0000 UTC m=+51.896576992" Apr 16 04:53:16.044871 containerd[1618]: 2026-04-16 04:53:15.315 [INFO][4628] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5655d84d6d--jt5hf-eth0 calico-apiserver-5655d84d6d- calico-system 142902a2-0442-4aed-9c46-d117595f20c4 855 0 2026-04-16 04:52:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5655d84d6d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5655d84d6d-jt5hf eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calia52945d34d5 [] [] }} 
ContainerID="f4e4f3cfe9b3929c13a1ef1d7e2de21082c8d29385ae011b47dae1b9a7ef8bb1" Namespace="calico-system" Pod="calico-apiserver-5655d84d6d-jt5hf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5655d84d6d--jt5hf-" Apr 16 04:53:16.044871 containerd[1618]: 2026-04-16 04:53:15.316 [INFO][4628] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f4e4f3cfe9b3929c13a1ef1d7e2de21082c8d29385ae011b47dae1b9a7ef8bb1" Namespace="calico-system" Pod="calico-apiserver-5655d84d6d-jt5hf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5655d84d6d--jt5hf-eth0" Apr 16 04:53:16.044871 containerd[1618]: 2026-04-16 04:53:15.619 [INFO][4672] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f4e4f3cfe9b3929c13a1ef1d7e2de21082c8d29385ae011b47dae1b9a7ef8bb1" HandleID="k8s-pod-network.f4e4f3cfe9b3929c13a1ef1d7e2de21082c8d29385ae011b47dae1b9a7ef8bb1" Workload="localhost-k8s-calico--apiserver--5655d84d6d--jt5hf-eth0" Apr 16 04:53:16.046253 containerd[1618]: 2026-04-16 04:53:15.648 [INFO][4672] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f4e4f3cfe9b3929c13a1ef1d7e2de21082c8d29385ae011b47dae1b9a7ef8bb1" HandleID="k8s-pod-network.f4e4f3cfe9b3929c13a1ef1d7e2de21082c8d29385ae011b47dae1b9a7ef8bb1" Workload="localhost-k8s-calico--apiserver--5655d84d6d--jt5hf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003f8140), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-5655d84d6d-jt5hf", "timestamp":"2026-04-16 04:53:15.619813287 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000199080)} Apr 16 04:53:16.046253 containerd[1618]: 2026-04-16 04:53:15.649 [INFO][4672] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 16 04:53:16.046253 containerd[1618]: 2026-04-16 04:53:15.649 [INFO][4672] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 04:53:16.046253 containerd[1618]: 2026-04-16 04:53:15.649 [INFO][4672] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 04:53:16.046253 containerd[1618]: 2026-04-16 04:53:15.654 [INFO][4672] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f4e4f3cfe9b3929c13a1ef1d7e2de21082c8d29385ae011b47dae1b9a7ef8bb1" host="localhost" Apr 16 04:53:16.046253 containerd[1618]: 2026-04-16 04:53:15.684 [INFO][4672] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 04:53:16.046253 containerd[1618]: 2026-04-16 04:53:15.742 [INFO][4672] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 04:53:16.046253 containerd[1618]: 2026-04-16 04:53:15.746 [INFO][4672] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 04:53:16.046253 containerd[1618]: 2026-04-16 04:53:15.750 [INFO][4672] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 04:53:16.046253 containerd[1618]: 2026-04-16 04:53:15.750 [INFO][4672] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f4e4f3cfe9b3929c13a1ef1d7e2de21082c8d29385ae011b47dae1b9a7ef8bb1" host="localhost" Apr 16 04:53:16.046869 containerd[1618]: 2026-04-16 04:53:15.752 [INFO][4672] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f4e4f3cfe9b3929c13a1ef1d7e2de21082c8d29385ae011b47dae1b9a7ef8bb1 Apr 16 04:53:16.046869 containerd[1618]: 2026-04-16 04:53:15.776 [INFO][4672] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f4e4f3cfe9b3929c13a1ef1d7e2de21082c8d29385ae011b47dae1b9a7ef8bb1" host="localhost" Apr 16 04:53:16.046869 containerd[1618]: 2026-04-16 04:53:15.803 [INFO][4672] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.f4e4f3cfe9b3929c13a1ef1d7e2de21082c8d29385ae011b47dae1b9a7ef8bb1" host="localhost" Apr 16 04:53:16.046869 containerd[1618]: 2026-04-16 04:53:15.806 [INFO][4672] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.f4e4f3cfe9b3929c13a1ef1d7e2de21082c8d29385ae011b47dae1b9a7ef8bb1" host="localhost" Apr 16 04:53:16.046869 containerd[1618]: 2026-04-16 04:53:15.808 [INFO][4672] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 04:53:16.046869 containerd[1618]: 2026-04-16 04:53:15.808 [INFO][4672] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="f4e4f3cfe9b3929c13a1ef1d7e2de21082c8d29385ae011b47dae1b9a7ef8bb1" HandleID="k8s-pod-network.f4e4f3cfe9b3929c13a1ef1d7e2de21082c8d29385ae011b47dae1b9a7ef8bb1" Workload="localhost-k8s-calico--apiserver--5655d84d6d--jt5hf-eth0" Apr 16 04:53:16.046995 containerd[1618]: 2026-04-16 04:53:15.826 [INFO][4628] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f4e4f3cfe9b3929c13a1ef1d7e2de21082c8d29385ae011b47dae1b9a7ef8bb1" Namespace="calico-system" Pod="calico-apiserver-5655d84d6d-jt5hf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5655d84d6d--jt5hf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5655d84d6d--jt5hf-eth0", GenerateName:"calico-apiserver-5655d84d6d-", Namespace:"calico-system", SelfLink:"", UID:"142902a2-0442-4aed-9c46-d117595f20c4", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 4, 52, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5655d84d6d", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5655d84d6d-jt5hf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia52945d34d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 04:53:16.047056 containerd[1618]: 2026-04-16 04:53:15.835 [INFO][4628] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="f4e4f3cfe9b3929c13a1ef1d7e2de21082c8d29385ae011b47dae1b9a7ef8bb1" Namespace="calico-system" Pod="calico-apiserver-5655d84d6d-jt5hf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5655d84d6d--jt5hf-eth0" Apr 16 04:53:16.047056 containerd[1618]: 2026-04-16 04:53:15.839 [INFO][4628] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia52945d34d5 ContainerID="f4e4f3cfe9b3929c13a1ef1d7e2de21082c8d29385ae011b47dae1b9a7ef8bb1" Namespace="calico-system" Pod="calico-apiserver-5655d84d6d-jt5hf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5655d84d6d--jt5hf-eth0" Apr 16 04:53:16.047056 containerd[1618]: 2026-04-16 04:53:15.884 [INFO][4628] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f4e4f3cfe9b3929c13a1ef1d7e2de21082c8d29385ae011b47dae1b9a7ef8bb1" Namespace="calico-system" Pod="calico-apiserver-5655d84d6d-jt5hf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5655d84d6d--jt5hf-eth0" Apr 16 04:53:16.047106 containerd[1618]: 2026-04-16 04:53:15.918 
[INFO][4628] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f4e4f3cfe9b3929c13a1ef1d7e2de21082c8d29385ae011b47dae1b9a7ef8bb1" Namespace="calico-system" Pod="calico-apiserver-5655d84d6d-jt5hf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5655d84d6d--jt5hf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5655d84d6d--jt5hf-eth0", GenerateName:"calico-apiserver-5655d84d6d-", Namespace:"calico-system", SelfLink:"", UID:"142902a2-0442-4aed-9c46-d117595f20c4", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 4, 52, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5655d84d6d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f4e4f3cfe9b3929c13a1ef1d7e2de21082c8d29385ae011b47dae1b9a7ef8bb1", Pod:"calico-apiserver-5655d84d6d-jt5hf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia52945d34d5", MAC:"c6:72:57:58:8a:98", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 04:53:16.047159 containerd[1618]: 2026-04-16 04:53:16.034 [INFO][4628] cni-plugin/k8s.go 532: Wrote updated 
endpoint to datastore ContainerID="f4e4f3cfe9b3929c13a1ef1d7e2de21082c8d29385ae011b47dae1b9a7ef8bb1" Namespace="calico-system" Pod="calico-apiserver-5655d84d6d-jt5hf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5655d84d6d--jt5hf-eth0" Apr 16 04:53:16.100842 systemd-networkd[1497]: cali357763ba1b9: Link UP Apr 16 04:53:16.102323 systemd-networkd[1497]: cali357763ba1b9: Gained carrier Apr 16 04:53:16.113159 containerd[1618]: time="2026-04-16T04:53:16.111981642Z" level=info msg="connecting to shim f4e4f3cfe9b3929c13a1ef1d7e2de21082c8d29385ae011b47dae1b9a7ef8bb1" address="unix:///run/containerd/s/8f0aecdad3f78f5e7f9fbd3f1ce694e088abd19c8d3a74d3406010452657f431" namespace=k8s.io protocol=ttrpc version=3 Apr 16 04:53:16.147816 containerd[1618]: 2026-04-16 04:53:15.298 [INFO][4606] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--85fbbfb5c9--s7lgg-eth0 calico-kube-controllers-85fbbfb5c9- calico-system edf90915-a4d4-442e-9905-e4fc01d7ae9f 863 0 2026-04-16 04:52:40 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:85fbbfb5c9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-85fbbfb5c9-s7lgg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali357763ba1b9 [] [] }} ContainerID="8b5bbf44cb54679559d5ade3092b95b64bf6b0b82088e61dcf593ee30b39de21" Namespace="calico-system" Pod="calico-kube-controllers-85fbbfb5c9-s7lgg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85fbbfb5c9--s7lgg-" Apr 16 04:53:16.147816 containerd[1618]: 2026-04-16 04:53:15.298 [INFO][4606] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8b5bbf44cb54679559d5ade3092b95b64bf6b0b82088e61dcf593ee30b39de21" Namespace="calico-system" 
Pod="calico-kube-controllers-85fbbfb5c9-s7lgg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85fbbfb5c9--s7lgg-eth0" Apr 16 04:53:16.147816 containerd[1618]: 2026-04-16 04:53:15.638 [INFO][4666] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8b5bbf44cb54679559d5ade3092b95b64bf6b0b82088e61dcf593ee30b39de21" HandleID="k8s-pod-network.8b5bbf44cb54679559d5ade3092b95b64bf6b0b82088e61dcf593ee30b39de21" Workload="localhost-k8s-calico--kube--controllers--85fbbfb5c9--s7lgg-eth0" Apr 16 04:53:16.149084 containerd[1618]: 2026-04-16 04:53:15.659 [INFO][4666] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8b5bbf44cb54679559d5ade3092b95b64bf6b0b82088e61dcf593ee30b39de21" HandleID="k8s-pod-network.8b5bbf44cb54679559d5ade3092b95b64bf6b0b82088e61dcf593ee30b39de21" Workload="localhost-k8s-calico--kube--controllers--85fbbfb5c9--s7lgg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00041be10), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-85fbbfb5c9-s7lgg", "timestamp":"2026-04-16 04:53:15.63837337 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00031c580)} Apr 16 04:53:16.149084 containerd[1618]: 2026-04-16 04:53:15.664 [INFO][4666] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 04:53:16.149084 containerd[1618]: 2026-04-16 04:53:15.807 [INFO][4666] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 04:53:16.149084 containerd[1618]: 2026-04-16 04:53:15.807 [INFO][4666] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 04:53:16.149084 containerd[1618]: 2026-04-16 04:53:15.818 [INFO][4666] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8b5bbf44cb54679559d5ade3092b95b64bf6b0b82088e61dcf593ee30b39de21" host="localhost" Apr 16 04:53:16.149084 containerd[1618]: 2026-04-16 04:53:15.845 [INFO][4666] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 04:53:16.149084 containerd[1618]: 2026-04-16 04:53:15.907 [INFO][4666] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 04:53:16.149084 containerd[1618]: 2026-04-16 04:53:15.917 [INFO][4666] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 04:53:16.149084 containerd[1618]: 2026-04-16 04:53:16.011 [INFO][4666] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 04:53:16.153363 containerd[1618]: 2026-04-16 04:53:16.012 [INFO][4666] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8b5bbf44cb54679559d5ade3092b95b64bf6b0b82088e61dcf593ee30b39de21" host="localhost" Apr 16 04:53:16.153363 containerd[1618]: 2026-04-16 04:53:16.032 [INFO][4666] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8b5bbf44cb54679559d5ade3092b95b64bf6b0b82088e61dcf593ee30b39de21 Apr 16 04:53:16.153363 containerd[1618]: 2026-04-16 04:53:16.060 [INFO][4666] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8b5bbf44cb54679559d5ade3092b95b64bf6b0b82088e61dcf593ee30b39de21" host="localhost" Apr 16 04:53:16.153363 containerd[1618]: 2026-04-16 04:53:16.073 [INFO][4666] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.8b5bbf44cb54679559d5ade3092b95b64bf6b0b82088e61dcf593ee30b39de21" host="localhost" Apr 16 04:53:16.153363 containerd[1618]: 2026-04-16 04:53:16.073 [INFO][4666] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.8b5bbf44cb54679559d5ade3092b95b64bf6b0b82088e61dcf593ee30b39de21" host="localhost" Apr 16 04:53:16.153363 containerd[1618]: 2026-04-16 04:53:16.074 [INFO][4666] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 04:53:16.153363 containerd[1618]: 2026-04-16 04:53:16.074 [INFO][4666] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="8b5bbf44cb54679559d5ade3092b95b64bf6b0b82088e61dcf593ee30b39de21" HandleID="k8s-pod-network.8b5bbf44cb54679559d5ade3092b95b64bf6b0b82088e61dcf593ee30b39de21" Workload="localhost-k8s-calico--kube--controllers--85fbbfb5c9--s7lgg-eth0" Apr 16 04:53:16.153529 containerd[1618]: 2026-04-16 04:53:16.075 [INFO][4606] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8b5bbf44cb54679559d5ade3092b95b64bf6b0b82088e61dcf593ee30b39de21" Namespace="calico-system" Pod="calico-kube-controllers-85fbbfb5c9-s7lgg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85fbbfb5c9--s7lgg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--85fbbfb5c9--s7lgg-eth0", GenerateName:"calico-kube-controllers-85fbbfb5c9-", Namespace:"calico-system", SelfLink:"", UID:"edf90915-a4d4-442e-9905-e4fc01d7ae9f", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 4, 52, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85fbbfb5c9", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-85fbbfb5c9-s7lgg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali357763ba1b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 04:53:16.153619 containerd[1618]: 2026-04-16 04:53:16.076 [INFO][4606] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="8b5bbf44cb54679559d5ade3092b95b64bf6b0b82088e61dcf593ee30b39de21" Namespace="calico-system" Pod="calico-kube-controllers-85fbbfb5c9-s7lgg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85fbbfb5c9--s7lgg-eth0" Apr 16 04:53:16.153619 containerd[1618]: 2026-04-16 04:53:16.076 [INFO][4606] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali357763ba1b9 ContainerID="8b5bbf44cb54679559d5ade3092b95b64bf6b0b82088e61dcf593ee30b39de21" Namespace="calico-system" Pod="calico-kube-controllers-85fbbfb5c9-s7lgg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85fbbfb5c9--s7lgg-eth0" Apr 16 04:53:16.153619 containerd[1618]: 2026-04-16 04:53:16.098 [INFO][4606] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8b5bbf44cb54679559d5ade3092b95b64bf6b0b82088e61dcf593ee30b39de21" Namespace="calico-system" Pod="calico-kube-controllers-85fbbfb5c9-s7lgg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85fbbfb5c9--s7lgg-eth0" Apr 16 04:53:16.153841 containerd[1618]: 
2026-04-16 04:53:16.100 [INFO][4606] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8b5bbf44cb54679559d5ade3092b95b64bf6b0b82088e61dcf593ee30b39de21" Namespace="calico-system" Pod="calico-kube-controllers-85fbbfb5c9-s7lgg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85fbbfb5c9--s7lgg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--85fbbfb5c9--s7lgg-eth0", GenerateName:"calico-kube-controllers-85fbbfb5c9-", Namespace:"calico-system", SelfLink:"", UID:"edf90915-a4d4-442e-9905-e4fc01d7ae9f", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 4, 52, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85fbbfb5c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8b5bbf44cb54679559d5ade3092b95b64bf6b0b82088e61dcf593ee30b39de21", Pod:"calico-kube-controllers-85fbbfb5c9-s7lgg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali357763ba1b9", MAC:"1a:19:5d:03:5d:78", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 04:53:16.153964 containerd[1618]: 
2026-04-16 04:53:16.132 [INFO][4606] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8b5bbf44cb54679559d5ade3092b95b64bf6b0b82088e61dcf593ee30b39de21" Namespace="calico-system" Pod="calico-kube-controllers-85fbbfb5c9-s7lgg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85fbbfb5c9--s7lgg-eth0" Apr 16 04:53:16.233672 systemd[1]: Started cri-containerd-f4e4f3cfe9b3929c13a1ef1d7e2de21082c8d29385ae011b47dae1b9a7ef8bb1.scope - libcontainer container f4e4f3cfe9b3929c13a1ef1d7e2de21082c8d29385ae011b47dae1b9a7ef8bb1. Apr 16 04:53:16.267389 systemd-networkd[1497]: cali9e14a1616dd: Link UP Apr 16 04:53:16.273101 systemd-networkd[1497]: cali9e14a1616dd: Gained carrier Apr 16 04:53:16.340168 containerd[1618]: time="2026-04-16T04:53:16.339247103Z" level=info msg="connecting to shim 8b5bbf44cb54679559d5ade3092b95b64bf6b0b82088e61dcf593ee30b39de21" address="unix:///run/containerd/s/979c5972c3b2f299f515111569b79c2d4af9ccecf0e6ceed4a3d5173869ebf04" namespace=k8s.io protocol=ttrpc version=3 Apr 16 04:53:16.377392 containerd[1618]: 2026-04-16 04:53:15.506 [INFO][4633] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--mf89m-eth0 coredns-674b8bbfcf- kube-system 9118c537-83d9-45de-bef7-fb503241b41d 862 0 2026-04-16 04:52:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-mf89m eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9e14a1616dd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="da18ad9044868f077216c5ded98d729b50c64e186f3ae837ad7e1a8d4e99a7c0" Namespace="kube-system" Pod="coredns-674b8bbfcf-mf89m" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mf89m-" Apr 16 04:53:16.377392 containerd[1618]: 2026-04-16 04:53:15.508 [INFO][4633] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="da18ad9044868f077216c5ded98d729b50c64e186f3ae837ad7e1a8d4e99a7c0" Namespace="kube-system" Pod="coredns-674b8bbfcf-mf89m" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mf89m-eth0" Apr 16 04:53:16.377392 containerd[1618]: 2026-04-16 04:53:15.682 [INFO][4680] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="da18ad9044868f077216c5ded98d729b50c64e186f3ae837ad7e1a8d4e99a7c0" HandleID="k8s-pod-network.da18ad9044868f077216c5ded98d729b50c64e186f3ae837ad7e1a8d4e99a7c0" Workload="localhost-k8s-coredns--674b8bbfcf--mf89m-eth0" Apr 16 04:53:16.378772 containerd[1618]: 2026-04-16 04:53:15.745 [INFO][4680] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="da18ad9044868f077216c5ded98d729b50c64e186f3ae837ad7e1a8d4e99a7c0" HandleID="k8s-pod-network.da18ad9044868f077216c5ded98d729b50c64e186f3ae837ad7e1a8d4e99a7c0" Workload="localhost-k8s-coredns--674b8bbfcf--mf89m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e210), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-mf89m", "timestamp":"2026-04-16 04:53:15.682408029 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0005d6000)} Apr 16 04:53:16.378772 containerd[1618]: 2026-04-16 04:53:15.746 [INFO][4680] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 04:53:16.378772 containerd[1618]: 2026-04-16 04:53:16.074 [INFO][4680] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 04:53:16.378772 containerd[1618]: 2026-04-16 04:53:16.074 [INFO][4680] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 04:53:16.378772 containerd[1618]: 2026-04-16 04:53:16.100 [INFO][4680] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.da18ad9044868f077216c5ded98d729b50c64e186f3ae837ad7e1a8d4e99a7c0" host="localhost" Apr 16 04:53:16.378772 containerd[1618]: 2026-04-16 04:53:16.145 [INFO][4680] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 04:53:16.378772 containerd[1618]: 2026-04-16 04:53:16.157 [INFO][4680] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 04:53:16.378772 containerd[1618]: 2026-04-16 04:53:16.163 [INFO][4680] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 04:53:16.378772 containerd[1618]: 2026-04-16 04:53:16.175 [INFO][4680] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 04:53:16.378772 containerd[1618]: 2026-04-16 04:53:16.175 [INFO][4680] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.da18ad9044868f077216c5ded98d729b50c64e186f3ae837ad7e1a8d4e99a7c0" host="localhost" Apr 16 04:53:16.379241 containerd[1618]: 2026-04-16 04:53:16.182 [INFO][4680] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.da18ad9044868f077216c5ded98d729b50c64e186f3ae837ad7e1a8d4e99a7c0 Apr 16 04:53:16.379241 containerd[1618]: 2026-04-16 04:53:16.219 [INFO][4680] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.da18ad9044868f077216c5ded98d729b50c64e186f3ae837ad7e1a8d4e99a7c0" host="localhost" Apr 16 04:53:16.379241 containerd[1618]: 2026-04-16 04:53:16.240 [INFO][4680] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.da18ad9044868f077216c5ded98d729b50c64e186f3ae837ad7e1a8d4e99a7c0" host="localhost" Apr 16 04:53:16.379241 containerd[1618]: 2026-04-16 04:53:16.240 [INFO][4680] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.da18ad9044868f077216c5ded98d729b50c64e186f3ae837ad7e1a8d4e99a7c0" host="localhost" Apr 16 04:53:16.379241 containerd[1618]: 2026-04-16 04:53:16.241 [INFO][4680] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 04:53:16.379241 containerd[1618]: 2026-04-16 04:53:16.241 [INFO][4680] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="da18ad9044868f077216c5ded98d729b50c64e186f3ae837ad7e1a8d4e99a7c0" HandleID="k8s-pod-network.da18ad9044868f077216c5ded98d729b50c64e186f3ae837ad7e1a8d4e99a7c0" Workload="localhost-k8s-coredns--674b8bbfcf--mf89m-eth0" Apr 16 04:53:16.379476 containerd[1618]: 2026-04-16 04:53:16.254 [INFO][4633] cni-plugin/k8s.go 418: Populated endpoint ContainerID="da18ad9044868f077216c5ded98d729b50c64e186f3ae837ad7e1a8d4e99a7c0" Namespace="kube-system" Pod="coredns-674b8bbfcf-mf89m" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mf89m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--mf89m-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9118c537-83d9-45de-bef7-fb503241b41d", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 4, 52, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-mf89m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9e14a1616dd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 04:53:16.383393 containerd[1618]: 2026-04-16 04:53:16.255 [INFO][4633] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="da18ad9044868f077216c5ded98d729b50c64e186f3ae837ad7e1a8d4e99a7c0" Namespace="kube-system" Pod="coredns-674b8bbfcf-mf89m" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mf89m-eth0" Apr 16 04:53:16.383393 containerd[1618]: 2026-04-16 04:53:16.255 [INFO][4633] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9e14a1616dd ContainerID="da18ad9044868f077216c5ded98d729b50c64e186f3ae837ad7e1a8d4e99a7c0" Namespace="kube-system" Pod="coredns-674b8bbfcf-mf89m" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mf89m-eth0" Apr 16 04:53:16.383393 containerd[1618]: 2026-04-16 04:53:16.272 [INFO][4633] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="da18ad9044868f077216c5ded98d729b50c64e186f3ae837ad7e1a8d4e99a7c0" Namespace="kube-system" Pod="coredns-674b8bbfcf-mf89m" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mf89m-eth0" Apr 16 04:53:16.387135 containerd[1618]: 2026-04-16 04:53:16.288 [INFO][4633] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="da18ad9044868f077216c5ded98d729b50c64e186f3ae837ad7e1a8d4e99a7c0" Namespace="kube-system" Pod="coredns-674b8bbfcf-mf89m" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mf89m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--mf89m-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9118c537-83d9-45de-bef7-fb503241b41d", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 4, 52, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"da18ad9044868f077216c5ded98d729b50c64e186f3ae837ad7e1a8d4e99a7c0", Pod:"coredns-674b8bbfcf-mf89m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9e14a1616dd", MAC:"16:60:b6:b9:e0:30", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 04:53:16.387135 containerd[1618]: 2026-04-16 04:53:16.361 [INFO][4633] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="da18ad9044868f077216c5ded98d729b50c64e186f3ae837ad7e1a8d4e99a7c0" Namespace="kube-system" Pod="coredns-674b8bbfcf-mf89m" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mf89m-eth0" Apr 16 04:53:16.521441 systemd-networkd[1497]: cali948f885ee06: Link UP Apr 16 04:53:16.521983 systemd-resolved[1423]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 04:53:16.522548 systemd-networkd[1497]: cali948f885ee06: Gained carrier Apr 16 04:53:16.542890 systemd[1]: Started cri-containerd-8b5bbf44cb54679559d5ade3092b95b64bf6b0b82088e61dcf593ee30b39de21.scope - libcontainer container 8b5bbf44cb54679559d5ade3092b95b64bf6b0b82088e61dcf593ee30b39de21. 
Apr 16 04:53:16.547233 containerd[1618]: time="2026-04-16T04:53:16.547158587Z" level=info msg="connecting to shim da18ad9044868f077216c5ded98d729b50c64e186f3ae837ad7e1a8d4e99a7c0" address="unix:///run/containerd/s/74bcb2580cbdfab6ce6f1e45651543650bda8f4fa8bed7c3d22e51d106e16765" namespace=k8s.io protocol=ttrpc version=3 Apr 16 04:53:16.576923 containerd[1618]: 2026-04-16 04:53:15.388 [INFO][4620] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5655d84d6d--qrljt-eth0 calico-apiserver-5655d84d6d- calico-system a223c7b0-6e14-4098-bc37-5a4aa3d8d80b 860 0 2026-04-16 04:52:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5655d84d6d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5655d84d6d-qrljt eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali948f885ee06 [] [] }} ContainerID="1ce9066d3c65b39b42818be18fc314e93a9d646114d338172c004a58f6e57f18" Namespace="calico-system" Pod="calico-apiserver-5655d84d6d-qrljt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5655d84d6d--qrljt-" Apr 16 04:53:16.576923 containerd[1618]: 2026-04-16 04:53:15.392 [INFO][4620] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1ce9066d3c65b39b42818be18fc314e93a9d646114d338172c004a58f6e57f18" Namespace="calico-system" Pod="calico-apiserver-5655d84d6d-qrljt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5655d84d6d--qrljt-eth0" Apr 16 04:53:16.576923 containerd[1618]: 2026-04-16 04:53:15.678 [INFO][4682] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1ce9066d3c65b39b42818be18fc314e93a9d646114d338172c004a58f6e57f18" HandleID="k8s-pod-network.1ce9066d3c65b39b42818be18fc314e93a9d646114d338172c004a58f6e57f18" 
Workload="localhost-k8s-calico--apiserver--5655d84d6d--qrljt-eth0" Apr 16 04:53:16.576923 containerd[1618]: 2026-04-16 04:53:15.748 [INFO][4682] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1ce9066d3c65b39b42818be18fc314e93a9d646114d338172c004a58f6e57f18" HandleID="k8s-pod-network.1ce9066d3c65b39b42818be18fc314e93a9d646114d338172c004a58f6e57f18" Workload="localhost-k8s-calico--apiserver--5655d84d6d--qrljt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a0b40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-5655d84d6d-qrljt", "timestamp":"2026-04-16 04:53:15.678860505 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003de9a0)} Apr 16 04:53:16.576923 containerd[1618]: 2026-04-16 04:53:15.748 [INFO][4682] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 04:53:16.576923 containerd[1618]: 2026-04-16 04:53:16.241 [INFO][4682] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 04:53:16.576923 containerd[1618]: 2026-04-16 04:53:16.242 [INFO][4682] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 04:53:16.576923 containerd[1618]: 2026-04-16 04:53:16.256 [INFO][4682] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1ce9066d3c65b39b42818be18fc314e93a9d646114d338172c004a58f6e57f18" host="localhost" Apr 16 04:53:16.576923 containerd[1618]: 2026-04-16 04:53:16.290 [INFO][4682] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 04:53:16.576923 containerd[1618]: 2026-04-16 04:53:16.345 [INFO][4682] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 04:53:16.576923 containerd[1618]: 2026-04-16 04:53:16.352 [INFO][4682] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 04:53:16.576923 containerd[1618]: 2026-04-16 04:53:16.376 [INFO][4682] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 04:53:16.576923 containerd[1618]: 2026-04-16 04:53:16.376 [INFO][4682] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1ce9066d3c65b39b42818be18fc314e93a9d646114d338172c004a58f6e57f18" host="localhost" Apr 16 04:53:16.576923 containerd[1618]: 2026-04-16 04:53:16.386 [INFO][4682] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1ce9066d3c65b39b42818be18fc314e93a9d646114d338172c004a58f6e57f18 Apr 16 04:53:16.576923 containerd[1618]: 2026-04-16 04:53:16.408 [INFO][4682] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1ce9066d3c65b39b42818be18fc314e93a9d646114d338172c004a58f6e57f18" host="localhost" Apr 16 04:53:16.576923 containerd[1618]: 2026-04-16 04:53:16.503 [INFO][4682] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.1ce9066d3c65b39b42818be18fc314e93a9d646114d338172c004a58f6e57f18" host="localhost" Apr 16 04:53:16.576923 containerd[1618]: 2026-04-16 04:53:16.505 [INFO][4682] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.1ce9066d3c65b39b42818be18fc314e93a9d646114d338172c004a58f6e57f18" host="localhost" Apr 16 04:53:16.576923 containerd[1618]: 2026-04-16 04:53:16.505 [INFO][4682] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 04:53:16.576923 containerd[1618]: 2026-04-16 04:53:16.505 [INFO][4682] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="1ce9066d3c65b39b42818be18fc314e93a9d646114d338172c004a58f6e57f18" HandleID="k8s-pod-network.1ce9066d3c65b39b42818be18fc314e93a9d646114d338172c004a58f6e57f18" Workload="localhost-k8s-calico--apiserver--5655d84d6d--qrljt-eth0" Apr 16 04:53:16.578308 containerd[1618]: 2026-04-16 04:53:16.515 [INFO][4620] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1ce9066d3c65b39b42818be18fc314e93a9d646114d338172c004a58f6e57f18" Namespace="calico-system" Pod="calico-apiserver-5655d84d6d-qrljt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5655d84d6d--qrljt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5655d84d6d--qrljt-eth0", GenerateName:"calico-apiserver-5655d84d6d-", Namespace:"calico-system", SelfLink:"", UID:"a223c7b0-6e14-4098-bc37-5a4aa3d8d80b", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 4, 52, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5655d84d6d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5655d84d6d-qrljt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali948f885ee06", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 04:53:16.578308 containerd[1618]: 2026-04-16 04:53:16.516 [INFO][4620] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="1ce9066d3c65b39b42818be18fc314e93a9d646114d338172c004a58f6e57f18" Namespace="calico-system" Pod="calico-apiserver-5655d84d6d-qrljt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5655d84d6d--qrljt-eth0" Apr 16 04:53:16.578308 containerd[1618]: 2026-04-16 04:53:16.516 [INFO][4620] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali948f885ee06 ContainerID="1ce9066d3c65b39b42818be18fc314e93a9d646114d338172c004a58f6e57f18" Namespace="calico-system" Pod="calico-apiserver-5655d84d6d-qrljt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5655d84d6d--qrljt-eth0" Apr 16 04:53:16.578308 containerd[1618]: 2026-04-16 04:53:16.523 [INFO][4620] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1ce9066d3c65b39b42818be18fc314e93a9d646114d338172c004a58f6e57f18" Namespace="calico-system" Pod="calico-apiserver-5655d84d6d-qrljt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5655d84d6d--qrljt-eth0" Apr 16 04:53:16.578308 containerd[1618]: 2026-04-16 04:53:16.524 [INFO][4620] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="1ce9066d3c65b39b42818be18fc314e93a9d646114d338172c004a58f6e57f18" Namespace="calico-system" Pod="calico-apiserver-5655d84d6d-qrljt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5655d84d6d--qrljt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5655d84d6d--qrljt-eth0", GenerateName:"calico-apiserver-5655d84d6d-", Namespace:"calico-system", SelfLink:"", UID:"a223c7b0-6e14-4098-bc37-5a4aa3d8d80b", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 4, 52, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5655d84d6d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1ce9066d3c65b39b42818be18fc314e93a9d646114d338172c004a58f6e57f18", Pod:"calico-apiserver-5655d84d6d-qrljt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali948f885ee06", MAC:"de:98:7d:e6:bc:7d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 04:53:16.578308 containerd[1618]: 2026-04-16 04:53:16.562 [INFO][4620] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="1ce9066d3c65b39b42818be18fc314e93a9d646114d338172c004a58f6e57f18" Namespace="calico-system" Pod="calico-apiserver-5655d84d6d-qrljt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5655d84d6d--qrljt-eth0" Apr 16 04:53:16.636859 systemd-resolved[1423]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 04:53:16.653290 systemd[1]: Started cri-containerd-da18ad9044868f077216c5ded98d729b50c64e186f3ae837ad7e1a8d4e99a7c0.scope - libcontainer container da18ad9044868f077216c5ded98d729b50c64e186f3ae837ad7e1a8d4e99a7c0. Apr 16 04:53:16.660562 containerd[1618]: time="2026-04-16T04:53:16.660519929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5655d84d6d-jt5hf,Uid:142902a2-0442-4aed-9c46-d117595f20c4,Namespace:calico-system,Attempt:0,} returns sandbox id \"f4e4f3cfe9b3929c13a1ef1d7e2de21082c8d29385ae011b47dae1b9a7ef8bb1\"" Apr 16 04:53:16.695156 systemd-resolved[1423]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 04:53:16.720316 containerd[1618]: time="2026-04-16T04:53:16.720252448Z" level=info msg="connecting to shim 1ce9066d3c65b39b42818be18fc314e93a9d646114d338172c004a58f6e57f18" address="unix:///run/containerd/s/a928cf8ef1760ea58fa286d988960627243465f4565754e2b2754c39ed3f5c03" namespace=k8s.io protocol=ttrpc version=3 Apr 16 04:53:16.743873 containerd[1618]: time="2026-04-16T04:53:16.743801362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85fbbfb5c9-s7lgg,Uid:edf90915-a4d4-442e-9905-e4fc01d7ae9f,Namespace:calico-system,Attempt:0,} returns sandbox id \"8b5bbf44cb54679559d5ade3092b95b64bf6b0b82088e61dcf593ee30b39de21\"" Apr 16 04:53:16.779945 containerd[1618]: time="2026-04-16T04:53:16.779726784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mf89m,Uid:9118c537-83d9-45de-bef7-fb503241b41d,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"da18ad9044868f077216c5ded98d729b50c64e186f3ae837ad7e1a8d4e99a7c0\"" Apr 16 04:53:16.788132 kubelet[2764]: E0416 04:53:16.788084 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:53:16.807769 systemd[1]: Started cri-containerd-1ce9066d3c65b39b42818be18fc314e93a9d646114d338172c004a58f6e57f18.scope - libcontainer container 1ce9066d3c65b39b42818be18fc314e93a9d646114d338172c004a58f6e57f18. Apr 16 04:53:16.827831 containerd[1618]: time="2026-04-16T04:53:16.826723708Z" level=info msg="CreateContainer within sandbox \"da18ad9044868f077216c5ded98d729b50c64e186f3ae837ad7e1a8d4e99a7c0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 16 04:53:16.854087 containerd[1618]: time="2026-04-16T04:53:16.853750928Z" level=info msg="Container 06d7b267a3d327fbec341d9926ce346a53ddf2a79314b8a78f7727696e1ae601: CDI devices from CRI Config.CDIDevices: []" Apr 16 04:53:16.863222 containerd[1618]: time="2026-04-16T04:53:16.863077267Z" level=info msg="CreateContainer within sandbox \"da18ad9044868f077216c5ded98d729b50c64e186f3ae837ad7e1a8d4e99a7c0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"06d7b267a3d327fbec341d9926ce346a53ddf2a79314b8a78f7727696e1ae601\"" Apr 16 04:53:16.864135 containerd[1618]: time="2026-04-16T04:53:16.864095699Z" level=info msg="StartContainer for \"06d7b267a3d327fbec341d9926ce346a53ddf2a79314b8a78f7727696e1ae601\"" Apr 16 04:53:16.865520 containerd[1618]: time="2026-04-16T04:53:16.865349277Z" level=info msg="connecting to shim 06d7b267a3d327fbec341d9926ce346a53ddf2a79314b8a78f7727696e1ae601" address="unix:///run/containerd/s/74bcb2580cbdfab6ce6f1e45651543650bda8f4fa8bed7c3d22e51d106e16765" protocol=ttrpc version=3 Apr 16 04:53:16.865151 systemd-resolved[1423]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 04:53:16.909419 systemd[1]: 
Started cri-containerd-06d7b267a3d327fbec341d9926ce346a53ddf2a79314b8a78f7727696e1ae601.scope - libcontainer container 06d7b267a3d327fbec341d9926ce346a53ddf2a79314b8a78f7727696e1ae601. Apr 16 04:53:16.941339 containerd[1618]: time="2026-04-16T04:53:16.941260575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5655d84d6d-qrljt,Uid:a223c7b0-6e14-4098-bc37-5a4aa3d8d80b,Namespace:calico-system,Attempt:0,} returns sandbox id \"1ce9066d3c65b39b42818be18fc314e93a9d646114d338172c004a58f6e57f18\"" Apr 16 04:53:17.014352 systemd-networkd[1497]: calia52945d34d5: Gained IPv6LL Apr 16 04:53:17.021446 containerd[1618]: time="2026-04-16T04:53:17.021396572Z" level=info msg="StartContainer for \"06d7b267a3d327fbec341d9926ce346a53ddf2a79314b8a78f7727696e1ae601\" returns successfully" Apr 16 04:53:17.184431 kubelet[2764]: E0416 04:53:17.183829 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:53:17.185320 containerd[1618]: time="2026-04-16T04:53:17.185178299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mlhrm,Uid:f77a2220-8e6b-4453-a6fc-099782b9a146,Namespace:kube-system,Attempt:0,}" Apr 16 04:53:17.332003 systemd-networkd[1497]: cali357763ba1b9: Gained IPv6LL Apr 16 04:53:17.594332 systemd-networkd[1497]: cali2f5ea6ebbcf: Link UP Apr 16 04:53:17.596476 systemd-networkd[1497]: cali2f5ea6ebbcf: Gained carrier Apr 16 04:53:17.620165 containerd[1618]: 2026-04-16 04:53:17.332 [INFO][4980] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--mlhrm-eth0 coredns-674b8bbfcf- kube-system f77a2220-8e6b-4453-a6fc-099782b9a146 852 0 2026-04-16 04:52:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-mlhrm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2f5ea6ebbcf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fb2c00b5d7f103f178591eed658ba727a7ced3c790085ed0303afc8a7d8ad681" Namespace="kube-system" Pod="coredns-674b8bbfcf-mlhrm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mlhrm-" Apr 16 04:53:17.620165 containerd[1618]: 2026-04-16 04:53:17.332 [INFO][4980] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fb2c00b5d7f103f178591eed658ba727a7ced3c790085ed0303afc8a7d8ad681" Namespace="kube-system" Pod="coredns-674b8bbfcf-mlhrm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mlhrm-eth0" Apr 16 04:53:17.620165 containerd[1618]: 2026-04-16 04:53:17.399 [INFO][5000] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fb2c00b5d7f103f178591eed658ba727a7ced3c790085ed0303afc8a7d8ad681" HandleID="k8s-pod-network.fb2c00b5d7f103f178591eed658ba727a7ced3c790085ed0303afc8a7d8ad681" Workload="localhost-k8s-coredns--674b8bbfcf--mlhrm-eth0" Apr 16 04:53:17.620165 containerd[1618]: 2026-04-16 04:53:17.416 [INFO][5000] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="fb2c00b5d7f103f178591eed658ba727a7ced3c790085ed0303afc8a7d8ad681" HandleID="k8s-pod-network.fb2c00b5d7f103f178591eed658ba727a7ced3c790085ed0303afc8a7d8ad681" Workload="localhost-k8s-coredns--674b8bbfcf--mlhrm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f8f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-mlhrm", "timestamp":"2026-04-16 04:53:17.399335121 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004f2420)} 
Apr 16 04:53:17.620165 containerd[1618]: 2026-04-16 04:53:17.417 [INFO][5000] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 04:53:17.620165 containerd[1618]: 2026-04-16 04:53:17.417 [INFO][5000] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 04:53:17.620165 containerd[1618]: 2026-04-16 04:53:17.417 [INFO][5000] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 04:53:17.620165 containerd[1618]: 2026-04-16 04:53:17.476 [INFO][5000] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.fb2c00b5d7f103f178591eed658ba727a7ced3c790085ed0303afc8a7d8ad681" host="localhost" Apr 16 04:53:17.620165 containerd[1618]: 2026-04-16 04:53:17.502 [INFO][5000] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 04:53:17.620165 containerd[1618]: 2026-04-16 04:53:17.528 [INFO][5000] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 04:53:17.620165 containerd[1618]: 2026-04-16 04:53:17.537 [INFO][5000] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 04:53:17.620165 containerd[1618]: 2026-04-16 04:53:17.547 [INFO][5000] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 04:53:17.620165 containerd[1618]: 2026-04-16 04:53:17.548 [INFO][5000] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fb2c00b5d7f103f178591eed658ba727a7ced3c790085ed0303afc8a7d8ad681" host="localhost" Apr 16 04:53:17.620165 containerd[1618]: 2026-04-16 04:53:17.553 [INFO][5000] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.fb2c00b5d7f103f178591eed658ba727a7ced3c790085ed0303afc8a7d8ad681 Apr 16 04:53:17.620165 containerd[1618]: 2026-04-16 04:53:17.563 [INFO][5000] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.fb2c00b5d7f103f178591eed658ba727a7ced3c790085ed0303afc8a7d8ad681" host="localhost" Apr 16 04:53:17.620165 containerd[1618]: 2026-04-16 04:53:17.583 [INFO][5000] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.fb2c00b5d7f103f178591eed658ba727a7ced3c790085ed0303afc8a7d8ad681" host="localhost" Apr 16 04:53:17.620165 containerd[1618]: 2026-04-16 04:53:17.585 [INFO][5000] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.fb2c00b5d7f103f178591eed658ba727a7ced3c790085ed0303afc8a7d8ad681" host="localhost" Apr 16 04:53:17.620165 containerd[1618]: 2026-04-16 04:53:17.585 [INFO][5000] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 04:53:17.620165 containerd[1618]: 2026-04-16 04:53:17.585 [INFO][5000] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="fb2c00b5d7f103f178591eed658ba727a7ced3c790085ed0303afc8a7d8ad681" HandleID="k8s-pod-network.fb2c00b5d7f103f178591eed658ba727a7ced3c790085ed0303afc8a7d8ad681" Workload="localhost-k8s-coredns--674b8bbfcf--mlhrm-eth0" Apr 16 04:53:17.623305 containerd[1618]: 2026-04-16 04:53:17.588 [INFO][4980] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fb2c00b5d7f103f178591eed658ba727a7ced3c790085ed0303afc8a7d8ad681" Namespace="kube-system" Pod="coredns-674b8bbfcf-mlhrm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mlhrm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--mlhrm-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f77a2220-8e6b-4453-a6fc-099782b9a146", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 4, 52, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-mlhrm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2f5ea6ebbcf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 04:53:17.623305 containerd[1618]: 2026-04-16 04:53:17.588 [INFO][4980] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="fb2c00b5d7f103f178591eed658ba727a7ced3c790085ed0303afc8a7d8ad681" Namespace="kube-system" Pod="coredns-674b8bbfcf-mlhrm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mlhrm-eth0" Apr 16 04:53:17.623305 containerd[1618]: 2026-04-16 04:53:17.589 [INFO][4980] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2f5ea6ebbcf ContainerID="fb2c00b5d7f103f178591eed658ba727a7ced3c790085ed0303afc8a7d8ad681" Namespace="kube-system" Pod="coredns-674b8bbfcf-mlhrm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mlhrm-eth0" Apr 16 
04:53:17.623305 containerd[1618]: 2026-04-16 04:53:17.598 [INFO][4980] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fb2c00b5d7f103f178591eed658ba727a7ced3c790085ed0303afc8a7d8ad681" Namespace="kube-system" Pod="coredns-674b8bbfcf-mlhrm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mlhrm-eth0" Apr 16 04:53:17.623305 containerd[1618]: 2026-04-16 04:53:17.600 [INFO][4980] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fb2c00b5d7f103f178591eed658ba727a7ced3c790085ed0303afc8a7d8ad681" Namespace="kube-system" Pod="coredns-674b8bbfcf-mlhrm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mlhrm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--mlhrm-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f77a2220-8e6b-4453-a6fc-099782b9a146", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 4, 52, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fb2c00b5d7f103f178591eed658ba727a7ced3c790085ed0303afc8a7d8ad681", Pod:"coredns-674b8bbfcf-mlhrm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2f5ea6ebbcf", 
MAC:"ae:5b:4b:e1:7c:05", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 04:53:17.623305 containerd[1618]: 2026-04-16 04:53:17.617 [INFO][4980] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fb2c00b5d7f103f178591eed658ba727a7ced3c790085ed0303afc8a7d8ad681" Namespace="kube-system" Pod="coredns-674b8bbfcf-mlhrm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mlhrm-eth0" Apr 16 04:53:17.667398 containerd[1618]: time="2026-04-16T04:53:17.667293869Z" level=info msg="connecting to shim fb2c00b5d7f103f178591eed658ba727a7ced3c790085ed0303afc8a7d8ad681" address="unix:///run/containerd/s/4c9073facef6431a92f2791e1beadade1e2d274e984591e8ec220b1f921bc4be" namespace=k8s.io protocol=ttrpc version=3 Apr 16 04:53:17.715136 systemd[1]: Started cri-containerd-fb2c00b5d7f103f178591eed658ba727a7ced3c790085ed0303afc8a7d8ad681.scope - libcontainer container fb2c00b5d7f103f178591eed658ba727a7ced3c790085ed0303afc8a7d8ad681. 
Apr 16 04:53:17.744016 systemd-resolved[1423]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 04:53:17.801539 containerd[1618]: time="2026-04-16T04:53:17.801493201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mlhrm,Uid:f77a2220-8e6b-4453-a6fc-099782b9a146,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb2c00b5d7f103f178591eed658ba727a7ced3c790085ed0303afc8a7d8ad681\"" Apr 16 04:53:17.802765 kubelet[2764]: E0416 04:53:17.802717 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:53:17.812604 containerd[1618]: time="2026-04-16T04:53:17.812405466Z" level=info msg="CreateContainer within sandbox \"fb2c00b5d7f103f178591eed658ba727a7ced3c790085ed0303afc8a7d8ad681\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 16 04:53:17.840588 containerd[1618]: time="2026-04-16T04:53:17.840531592Z" level=info msg="Container 1f91d072ae19d706fa219a91b40ccbc7690e92ab1f4842afa4ceaa14eac5aebc: CDI devices from CRI Config.CDIDevices: []" Apr 16 04:53:17.852322 containerd[1618]: time="2026-04-16T04:53:17.851604485Z" level=info msg="CreateContainer within sandbox \"fb2c00b5d7f103f178591eed658ba727a7ced3c790085ed0303afc8a7d8ad681\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1f91d072ae19d706fa219a91b40ccbc7690e92ab1f4842afa4ceaa14eac5aebc\"" Apr 16 04:53:17.852848 containerd[1618]: time="2026-04-16T04:53:17.852569831Z" level=info msg="StartContainer for \"1f91d072ae19d706fa219a91b40ccbc7690e92ab1f4842afa4ceaa14eac5aebc\"" Apr 16 04:53:17.853421 containerd[1618]: time="2026-04-16T04:53:17.853396141Z" level=info msg="connecting to shim 1f91d072ae19d706fa219a91b40ccbc7690e92ab1f4842afa4ceaa14eac5aebc" address="unix:///run/containerd/s/4c9073facef6431a92f2791e1beadade1e2d274e984591e8ec220b1f921bc4be" protocol=ttrpc version=3 
Apr 16 04:53:17.871091 kubelet[2764]: E0416 04:53:17.870902 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:53:17.885854 systemd[1]: Started cri-containerd-1f91d072ae19d706fa219a91b40ccbc7690e92ab1f4842afa4ceaa14eac5aebc.scope - libcontainer container 1f91d072ae19d706fa219a91b40ccbc7690e92ab1f4842afa4ceaa14eac5aebc. Apr 16 04:53:17.962758 containerd[1618]: time="2026-04-16T04:53:17.962715239Z" level=info msg="StartContainer for \"1f91d072ae19d706fa219a91b40ccbc7690e92ab1f4842afa4ceaa14eac5aebc\" returns successfully" Apr 16 04:53:17.974181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount164662353.mount: Deactivated successfully. Apr 16 04:53:17.974896 systemd-networkd[1497]: cali9e14a1616dd: Gained IPv6LL Apr 16 04:53:18.037800 containerd[1618]: time="2026-04-16T04:53:18.036847073Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:53:18.044512 containerd[1618]: time="2026-04-16T04:53:18.041465539Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 16 04:53:18.048831 containerd[1618]: time="2026-04-16T04:53:18.048667322Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:53:18.072490 containerd[1618]: time="2026-04-16T04:53:18.071639679Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:53:18.073117 containerd[1618]: time="2026-04-16T04:53:18.073068289Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 3.202983027s" Apr 16 04:53:18.073176 containerd[1618]: time="2026-04-16T04:53:18.073115333Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 16 04:53:18.077764 containerd[1618]: time="2026-04-16T04:53:18.077384534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 16 04:53:18.091083 containerd[1618]: time="2026-04-16T04:53:18.090686829Z" level=info msg="CreateContainer within sandbox \"b3ee3b32cd52b6813161fec60e67034c3ea295536a2ac8bdb34ff89341bac292\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 16 04:53:18.122057 containerd[1618]: time="2026-04-16T04:53:18.119896810Z" level=info msg="Container cd72dd1c5992972c86a3e044805db39a4888b754e57238949673724265c6a90f: CDI devices from CRI Config.CDIDevices: []" Apr 16 04:53:18.202310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1676752614.mount: Deactivated successfully. 
Apr 16 04:53:18.241836 containerd[1618]: time="2026-04-16T04:53:18.241703464Z" level=info msg="CreateContainer within sandbox \"b3ee3b32cd52b6813161fec60e67034c3ea295536a2ac8bdb34ff89341bac292\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"cd72dd1c5992972c86a3e044805db39a4888b754e57238949673724265c6a90f\"" Apr 16 04:53:18.254973 containerd[1618]: time="2026-04-16T04:53:18.254859375Z" level=info msg="StartContainer for \"cd72dd1c5992972c86a3e044805db39a4888b754e57238949673724265c6a90f\"" Apr 16 04:53:18.263766 containerd[1618]: time="2026-04-16T04:53:18.263662692Z" level=info msg="connecting to shim cd72dd1c5992972c86a3e044805db39a4888b754e57238949673724265c6a90f" address="unix:///run/containerd/s/39cb236f350e7f960323eb89314735252da299b041bb8a16b1633eb24b367358" protocol=ttrpc version=3 Apr 16 04:53:18.316112 systemd[1]: Started cri-containerd-cd72dd1c5992972c86a3e044805db39a4888b754e57238949673724265c6a90f.scope - libcontainer container cd72dd1c5992972c86a3e044805db39a4888b754e57238949673724265c6a90f. Apr 16 04:53:18.413254 containerd[1618]: time="2026-04-16T04:53:18.412829117Z" level=info msg="StartContainer for \"cd72dd1c5992972c86a3e044805db39a4888b754e57238949673724265c6a90f\" returns successfully" Apr 16 04:53:18.419151 systemd-networkd[1497]: cali948f885ee06: Gained IPv6LL Apr 16 04:53:18.834462 systemd[1]: Started sshd@9-10.0.0.15:22-10.0.0.1:34728.service - OpenSSH per-connection server daemon (10.0.0.1:34728). 
Apr 16 04:53:18.877076 kubelet[2764]: E0416 04:53:18.877030 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:53:18.901593 kubelet[2764]: E0416 04:53:18.901443 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:53:18.911456 kubelet[2764]: I0416 04:53:18.911306 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-mf89m" podStartSLOduration=48.911288866 podStartE2EDuration="48.911288866s" podCreationTimestamp="2026-04-16 04:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 04:53:17.916384288 +0000 UTC m=+53.908118693" watchObservedRunningTime="2026-04-16 04:53:18.911288866 +0000 UTC m=+54.903023278" Apr 16 04:53:18.997691 systemd-networkd[1497]: cali2f5ea6ebbcf: Gained IPv6LL Apr 16 04:53:19.071088 kubelet[2764]: I0416 04:53:19.067970 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6dcf7c684f-4bhsj" podStartSLOduration=3.843760263 podStartE2EDuration="16.067897784s" podCreationTimestamp="2026-04-16 04:53:03 +0000 UTC" firstStartedPulling="2026-04-16 04:53:05.852087376 +0000 UTC m=+41.843821780" lastFinishedPulling="2026-04-16 04:53:18.076224894 +0000 UTC m=+54.067959301" observedRunningTime="2026-04-16 04:53:19.065806499 +0000 UTC m=+55.057540911" watchObservedRunningTime="2026-04-16 04:53:19.067897784 +0000 UTC m=+55.059632198" Apr 16 04:53:19.071088 kubelet[2764]: I0416 04:53:19.069354 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-mlhrm" podStartSLOduration=49.069289502 podStartE2EDuration="49.069289502s" 
podCreationTimestamp="2026-04-16 04:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 04:53:18.913162905 +0000 UTC m=+54.904897315" watchObservedRunningTime="2026-04-16 04:53:19.069289502 +0000 UTC m=+55.061023921" Apr 16 04:53:19.216631 sshd[5160]: Accepted publickey for core from 10.0.0.1 port 34728 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s Apr 16 04:53:19.225302 sshd-session[5160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:53:19.251687 systemd-logind[1586]: New session 10 of user core. Apr 16 04:53:19.260325 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 16 04:53:19.369359 sshd[5168]: Connection closed by 10.0.0.1 port 34728 Apr 16 04:53:19.369958 sshd-session[5160]: pam_unix(sshd:session): session closed for user core Apr 16 04:53:19.384325 systemd[1]: sshd@9-10.0.0.15:22-10.0.0.1:34728.service: Deactivated successfully. Apr 16 04:53:19.386348 systemd[1]: session-10.scope: Deactivated successfully. Apr 16 04:53:19.387238 systemd-logind[1586]: Session 10 logged out. Waiting for processes to exit. Apr 16 04:53:19.390778 systemd[1]: Started sshd@10-10.0.0.15:22-10.0.0.1:34730.service - OpenSSH per-connection server daemon (10.0.0.1:34730). Apr 16 04:53:19.391274 systemd-logind[1586]: Removed session 10. Apr 16 04:53:19.484085 sshd[5186]: Accepted publickey for core from 10.0.0.1 port 34730 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s Apr 16 04:53:19.485418 sshd-session[5186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:53:19.497663 systemd-logind[1586]: New session 11 of user core. Apr 16 04:53:19.511561 systemd[1]: Started session-11.scope - Session 11 of User core. 
Apr 16 04:53:19.794142 sshd[5189]: Connection closed by 10.0.0.1 port 34730
Apr 16 04:53:19.794903 sshd-session[5186]: pam_unix(sshd:session): session closed for user core
Apr 16 04:53:19.926690 systemd[1]: sshd@10-10.0.0.15:22-10.0.0.1:34730.service: Deactivated successfully.
Apr 16 04:53:19.978666 systemd[1]: session-11.scope: Deactivated successfully.
Apr 16 04:53:19.981650 kubelet[2764]: E0416 04:53:19.978683 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:53:19.989510 kubelet[2764]: E0416 04:53:19.985552 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:53:19.985834 systemd-logind[1586]: Session 11 logged out. Waiting for processes to exit.
Apr 16 04:53:20.035778 systemd[1]: Started sshd@11-10.0.0.15:22-10.0.0.1:34734.service - OpenSSH per-connection server daemon (10.0.0.1:34734).
Apr 16 04:53:20.043376 systemd-logind[1586]: Removed session 11.
Apr 16 04:53:20.243554 sshd[5209]: Accepted publickey for core from 10.0.0.1 port 34734 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s
Apr 16 04:53:20.247458 sshd-session[5209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:53:20.275650 systemd-logind[1586]: New session 12 of user core.
Apr 16 04:53:20.280032 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 16 04:53:20.682327 sshd[5213]: Connection closed by 10.0.0.1 port 34734
Apr 16 04:53:20.684330 sshd-session[5209]: pam_unix(sshd:session): session closed for user core
Apr 16 04:53:20.691846 systemd[1]: sshd@11-10.0.0.15:22-10.0.0.1:34734.service: Deactivated successfully.
Apr 16 04:53:20.692438 systemd-logind[1586]: Session 12 logged out. Waiting for processes to exit.
Apr 16 04:53:20.694747 systemd[1]: session-12.scope: Deactivated successfully.
Apr 16 04:53:20.702611 systemd-logind[1586]: Removed session 12.
Apr 16 04:53:20.987123 kubelet[2764]: E0416 04:53:20.986885 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:53:20.992159 kubelet[2764]: E0416 04:53:20.991585 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:53:21.521175 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount197691648.mount: Deactivated successfully.
Apr 16 04:53:22.233610 containerd[1618]: time="2026-04-16T04:53:22.233488176Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:53:22.234268 containerd[1618]: time="2026-04-16T04:53:22.234225125Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386"
Apr 16 04:53:22.235219 containerd[1618]: time="2026-04-16T04:53:22.235162375Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:53:22.237425 containerd[1618]: time="2026-04-16T04:53:22.237380419Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:53:22.237887 containerd[1618]: time="2026-04-16T04:53:22.237865905Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 4.160456647s"
Apr 16 04:53:22.237950 containerd[1618]: time="2026-04-16T04:53:22.237893246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\""
Apr 16 04:53:22.239151 containerd[1618]: time="2026-04-16T04:53:22.239057851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\""
Apr 16 04:53:22.254110 containerd[1618]: time="2026-04-16T04:53:22.254036069Z" level=info msg="CreateContainer within sandbox \"8e6584e9dc6018a5ade3c1f23382481332f1a4dd06b7dff5349f2abe8bfffbc5\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Apr 16 04:53:22.268051 containerd[1618]: time="2026-04-16T04:53:22.267971523Z" level=info msg="Container 34ef6440d10eb6ec8d71ee4221a6b22f76487a12b5598394b17fcdce7db9815c: CDI devices from CRI Config.CDIDevices: []"
Apr 16 04:53:22.300744 containerd[1618]: time="2026-04-16T04:53:22.300639205Z" level=info msg="CreateContainer within sandbox \"8e6584e9dc6018a5ade3c1f23382481332f1a4dd06b7dff5349f2abe8bfffbc5\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"34ef6440d10eb6ec8d71ee4221a6b22f76487a12b5598394b17fcdce7db9815c\""
Apr 16 04:53:22.301452 containerd[1618]: time="2026-04-16T04:53:22.301421461Z" level=info msg="StartContainer for \"34ef6440d10eb6ec8d71ee4221a6b22f76487a12b5598394b17fcdce7db9815c\""
Apr 16 04:53:22.304309 containerd[1618]: time="2026-04-16T04:53:22.303408034Z" level=info msg="connecting to shim 34ef6440d10eb6ec8d71ee4221a6b22f76487a12b5598394b17fcdce7db9815c" address="unix:///run/containerd/s/6209150428d13aad34bf814c8e982c7029a54f2d50d0065f21ec0f3b22449794" protocol=ttrpc version=3
Apr 16 04:53:22.368178 systemd[1]: Started cri-containerd-34ef6440d10eb6ec8d71ee4221a6b22f76487a12b5598394b17fcdce7db9815c.scope - libcontainer container 34ef6440d10eb6ec8d71ee4221a6b22f76487a12b5598394b17fcdce7db9815c.
Apr 16 04:53:22.511459 containerd[1618]: time="2026-04-16T04:53:22.511071682Z" level=info msg="StartContainer for \"34ef6440d10eb6ec8d71ee4221a6b22f76487a12b5598394b17fcdce7db9815c\" returns successfully"
Apr 16 04:53:23.108985 kubelet[2764]: I0416 04:53:23.108776 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-pd8mg" podStartSLOduration=36.388674787 podStartE2EDuration="44.108744419s" podCreationTimestamp="2026-04-16 04:52:39 +0000 UTC" firstStartedPulling="2026-04-16 04:53:14.518775255 +0000 UTC m=+50.510509659" lastFinishedPulling="2026-04-16 04:53:22.238844884 +0000 UTC m=+58.230579291" observedRunningTime="2026-04-16 04:53:23.101738529 +0000 UTC m=+59.093472942" watchObservedRunningTime="2026-04-16 04:53:23.108744419 +0000 UTC m=+59.100478831"
Apr 16 04:53:25.727671 systemd[1]: Started sshd@12-10.0.0.15:22-10.0.0.1:41266.service - OpenSSH per-connection server daemon (10.0.0.1:41266).
Apr 16 04:53:25.985697 sshd[5370]: Accepted publickey for core from 10.0.0.1 port 41266 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s
Apr 16 04:53:25.996066 sshd-session[5370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:53:26.012485 systemd-logind[1586]: New session 13 of user core.
Apr 16 04:53:26.018469 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 16 04:53:26.235289 containerd[1618]: time="2026-04-16T04:53:26.235105816Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:53:26.236753 containerd[1618]: time="2026-04-16T04:53:26.236563951Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780"
Apr 16 04:53:26.239625 containerd[1618]: time="2026-04-16T04:53:26.238378286Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:53:26.244934 containerd[1618]: time="2026-04-16T04:53:26.244773473Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 4.005685394s"
Apr 16 04:53:26.244934 containerd[1618]: time="2026-04-16T04:53:26.244848209Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\""
Apr 16 04:53:26.245856 containerd[1618]: time="2026-04-16T04:53:26.244996576Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:53:26.246756 containerd[1618]: time="2026-04-16T04:53:26.246601910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\""
Apr 16 04:53:26.257239 containerd[1618]: time="2026-04-16T04:53:26.257196585Z" level=info msg="CreateContainer within sandbox \"f4e4f3cfe9b3929c13a1ef1d7e2de21082c8d29385ae011b47dae1b9a7ef8bb1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Apr 16 04:53:26.334737 containerd[1618]: time="2026-04-16T04:53:26.333678982Z" level=info msg="Container 96093c991d068c67b2fbec28e04e382e03194b65ed21d039e09f03f5ef30f078: CDI devices from CRI Config.CDIDevices: []"
Apr 16 04:53:26.355603 containerd[1618]: time="2026-04-16T04:53:26.355462319Z" level=info msg="CreateContainer within sandbox \"f4e4f3cfe9b3929c13a1ef1d7e2de21082c8d29385ae011b47dae1b9a7ef8bb1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"96093c991d068c67b2fbec28e04e382e03194b65ed21d039e09f03f5ef30f078\""
Apr 16 04:53:26.360016 containerd[1618]: time="2026-04-16T04:53:26.359842430Z" level=info msg="StartContainer for \"96093c991d068c67b2fbec28e04e382e03194b65ed21d039e09f03f5ef30f078\""
Apr 16 04:53:26.364784 containerd[1618]: time="2026-04-16T04:53:26.364537235Z" level=info msg="connecting to shim 96093c991d068c67b2fbec28e04e382e03194b65ed21d039e09f03f5ef30f078" address="unix:///run/containerd/s/8f0aecdad3f78f5e7f9fbd3f1ce694e088abd19c8d3a74d3406010452657f431" protocol=ttrpc version=3
Apr 16 04:53:26.416256 sshd[5373]: Connection closed by 10.0.0.1 port 41266
Apr 16 04:53:26.419962 sshd-session[5370]: pam_unix(sshd:session): session closed for user core
Apr 16 04:53:26.496119 systemd[1]: sshd@12-10.0.0.15:22-10.0.0.1:41266.service: Deactivated successfully.
Apr 16 04:53:26.500274 systemd[1]: session-13.scope: Deactivated successfully.
Apr 16 04:53:26.503039 systemd-logind[1586]: Session 13 logged out. Waiting for processes to exit.
Apr 16 04:53:26.529551 systemd[1]: Started cri-containerd-96093c991d068c67b2fbec28e04e382e03194b65ed21d039e09f03f5ef30f078.scope - libcontainer container 96093c991d068c67b2fbec28e04e382e03194b65ed21d039e09f03f5ef30f078.
Apr 16 04:53:26.531553 systemd-logind[1586]: Removed session 13.
Apr 16 04:53:26.702965 containerd[1618]: time="2026-04-16T04:53:26.701699920Z" level=info msg="StartContainer for \"96093c991d068c67b2fbec28e04e382e03194b65ed21d039e09f03f5ef30f078\" returns successfully"
Apr 16 04:53:27.073210 kubelet[2764]: I0416 04:53:27.073104 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-5655d84d6d-jt5hf" podStartSLOduration=38.491386589 podStartE2EDuration="48.073090045s" podCreationTimestamp="2026-04-16 04:52:39 +0000 UTC" firstStartedPulling="2026-04-16 04:53:16.664369642 +0000 UTC m=+52.656104057" lastFinishedPulling="2026-04-16 04:53:26.246073094 +0000 UTC m=+62.237807513" observedRunningTime="2026-04-16 04:53:27.072312155 +0000 UTC m=+63.064046564" watchObservedRunningTime="2026-04-16 04:53:27.073090045 +0000 UTC m=+63.064824460"
Apr 16 04:53:28.055399 kubelet[2764]: I0416 04:53:28.055353 2764 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 16 04:53:29.059226 kubelet[2764]: I0416 04:53:29.059164 2764 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 16 04:53:31.196056 containerd[1618]: time="2026-04-16T04:53:31.193687241Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:53:31.196788 containerd[1618]: time="2026-04-16T04:53:31.196756878Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348"
Apr 16 04:53:31.199349 containerd[1618]: time="2026-04-16T04:53:31.199146255Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:53:31.207992 containerd[1618]: time="2026-04-16T04:53:31.207780672Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:53:31.213276 containerd[1618]: time="2026-04-16T04:53:31.213181861Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 4.9663852s"
Apr 16 04:53:31.213276 containerd[1618]: time="2026-04-16T04:53:31.213275917Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\""
Apr 16 04:53:31.215318 containerd[1618]: time="2026-04-16T04:53:31.215291876Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\""
Apr 16 04:53:31.256981 containerd[1618]: time="2026-04-16T04:53:31.255498673Z" level=info msg="CreateContainer within sandbox \"8b5bbf44cb54679559d5ade3092b95b64bf6b0b82088e61dcf593ee30b39de21\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Apr 16 04:53:31.288521 containerd[1618]: time="2026-04-16T04:53:31.288407828Z" level=info msg="Container 4971643f15b8a5643375473aaf51e684ffaa47c92a2b2ec59ba56fbddc952cef: CDI devices from CRI Config.CDIDevices: []"
Apr 16 04:53:31.318031 containerd[1618]: time="2026-04-16T04:53:31.317877504Z" level=info msg="CreateContainer within sandbox \"8b5bbf44cb54679559d5ade3092b95b64bf6b0b82088e61dcf593ee30b39de21\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"4971643f15b8a5643375473aaf51e684ffaa47c92a2b2ec59ba56fbddc952cef\""
Apr 16 04:53:31.324957 containerd[1618]: time="2026-04-16T04:53:31.324811700Z" level=info msg="StartContainer for \"4971643f15b8a5643375473aaf51e684ffaa47c92a2b2ec59ba56fbddc952cef\""
Apr 16 04:53:31.335880 containerd[1618]: time="2026-04-16T04:53:31.329345971Z" level=info msg="connecting to shim 4971643f15b8a5643375473aaf51e684ffaa47c92a2b2ec59ba56fbddc952cef" address="unix:///run/containerd/s/979c5972c3b2f299f515111569b79c2d4af9ccecf0e6ceed4a3d5173869ebf04" protocol=ttrpc version=3
Apr 16 04:53:31.520326 systemd[1]: Started cri-containerd-4971643f15b8a5643375473aaf51e684ffaa47c92a2b2ec59ba56fbddc952cef.scope - libcontainer container 4971643f15b8a5643375473aaf51e684ffaa47c92a2b2ec59ba56fbddc952cef.
Apr 16 04:53:31.523153 systemd[1]: Started sshd@13-10.0.0.15:22-10.0.0.1:41272.service - OpenSSH per-connection server daemon (10.0.0.1:41272).
Apr 16 04:53:31.629414 containerd[1618]: time="2026-04-16T04:53:31.628734038Z" level=info msg="StartContainer for \"4971643f15b8a5643375473aaf51e684ffaa47c92a2b2ec59ba56fbddc952cef\" returns successfully"
Apr 16 04:53:31.636606 sshd[5457]: Accepted publickey for core from 10.0.0.1 port 41272 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s
Apr 16 04:53:31.639131 sshd-session[5457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:53:31.647367 systemd-logind[1586]: New session 14 of user core.
Apr 16 04:53:31.653982 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 16 04:53:31.687298 containerd[1618]: time="2026-04-16T04:53:31.686855783Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:53:31.699449 containerd[1618]: time="2026-04-16T04:53:31.699211935Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77"
Apr 16 04:53:31.702035 containerd[1618]: time="2026-04-16T04:53:31.701975593Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 486.655789ms"
Apr 16 04:53:31.702134 containerd[1618]: time="2026-04-16T04:53:31.702042321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\""
Apr 16 04:53:31.720087 containerd[1618]: time="2026-04-16T04:53:31.719248794Z" level=info msg="CreateContainer within sandbox \"1ce9066d3c65b39b42818be18fc314e93a9d646114d338172c004a58f6e57f18\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Apr 16 04:53:31.745667 containerd[1618]: time="2026-04-16T04:53:31.745146071Z" level=info msg="Container b75fdc8f6bda119854e524032324b52c707f741322b76f9cbc7c6d36f58f340a: CDI devices from CRI Config.CDIDevices: []"
Apr 16 04:53:31.756565 containerd[1618]: time="2026-04-16T04:53:31.756506976Z" level=info msg="CreateContainer within sandbox \"1ce9066d3c65b39b42818be18fc314e93a9d646114d338172c004a58f6e57f18\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b75fdc8f6bda119854e524032324b52c707f741322b76f9cbc7c6d36f58f340a\""
Apr 16 04:53:31.758441 containerd[1618]: time="2026-04-16T04:53:31.758317564Z" level=info msg="StartContainer for \"b75fdc8f6bda119854e524032324b52c707f741322b76f9cbc7c6d36f58f340a\""
Apr 16 04:53:31.768488 containerd[1618]: time="2026-04-16T04:53:31.768455004Z" level=info msg="connecting to shim b75fdc8f6bda119854e524032324b52c707f741322b76f9cbc7c6d36f58f340a" address="unix:///run/containerd/s/a928cf8ef1760ea58fa286d988960627243465f4565754e2b2754c39ed3f5c03" protocol=ttrpc version=3
Apr 16 04:53:31.805956 systemd[1]: Started cri-containerd-b75fdc8f6bda119854e524032324b52c707f741322b76f9cbc7c6d36f58f340a.scope - libcontainer container b75fdc8f6bda119854e524032324b52c707f741322b76f9cbc7c6d36f58f340a.
Apr 16 04:53:31.859434 sshd[5487]: Connection closed by 10.0.0.1 port 41272
Apr 16 04:53:31.861241 sshd-session[5457]: pam_unix(sshd:session): session closed for user core
Apr 16 04:53:31.873562 systemd[1]: sshd@13-10.0.0.15:22-10.0.0.1:41272.service: Deactivated successfully.
Apr 16 04:53:31.875593 systemd[1]: session-14.scope: Deactivated successfully.
Apr 16 04:53:31.885806 systemd-logind[1586]: Session 14 logged out. Waiting for processes to exit.
Apr 16 04:53:31.888324 systemd-logind[1586]: Removed session 14.
Apr 16 04:53:32.047868 containerd[1618]: time="2026-04-16T04:53:32.047820304Z" level=info msg="StartContainer for \"b75fdc8f6bda119854e524032324b52c707f741322b76f9cbc7c6d36f58f340a\" returns successfully"
Apr 16 04:53:32.151319 kubelet[2764]: I0416 04:53:32.150700 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-85fbbfb5c9-s7lgg" podStartSLOduration=37.684968211 podStartE2EDuration="52.150680786s" podCreationTimestamp="2026-04-16 04:52:40 +0000 UTC" firstStartedPulling="2026-04-16 04:53:16.748614615 +0000 UTC m=+52.740349020" lastFinishedPulling="2026-04-16 04:53:31.21432719 +0000 UTC m=+67.206061595" observedRunningTime="2026-04-16 04:53:32.14829345 +0000 UTC m=+68.140027860" watchObservedRunningTime="2026-04-16 04:53:32.150680786 +0000 UTC m=+68.142415201"
Apr 16 04:53:32.249697 kubelet[2764]: I0416 04:53:32.249645 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-5655d84d6d-qrljt" podStartSLOduration=38.489574608 podStartE2EDuration="53.24962788s" podCreationTimestamp="2026-04-16 04:52:39 +0000 UTC" firstStartedPulling="2026-04-16 04:53:16.943081173 +0000 UTC m=+52.934815580" lastFinishedPulling="2026-04-16 04:53:31.703134447 +0000 UTC m=+67.694868852" observedRunningTime="2026-04-16 04:53:32.173253695 +0000 UTC m=+68.164988100" watchObservedRunningTime="2026-04-16 04:53:32.24962788 +0000 UTC m=+68.241362295"
Apr 16 04:53:35.186076 kubelet[2764]: E0416 04:53:35.185864 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:53:36.921748 systemd[1]: Started sshd@14-10.0.0.15:22-10.0.0.1:40416.service - OpenSSH per-connection server daemon (10.0.0.1:40416).
Apr 16 04:53:37.135480 sshd[5630]: Accepted publickey for core from 10.0.0.1 port 40416 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s
Apr 16 04:53:37.140030 sshd-session[5630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:53:37.151471 systemd-logind[1586]: New session 15 of user core.
Apr 16 04:53:37.161695 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 16 04:53:37.536187 sshd[5633]: Connection closed by 10.0.0.1 port 40416
Apr 16 04:53:37.538170 sshd-session[5630]: pam_unix(sshd:session): session closed for user core
Apr 16 04:53:37.551841 systemd[1]: sshd@14-10.0.0.15:22-10.0.0.1:40416.service: Deactivated successfully.
Apr 16 04:53:37.555765 systemd[1]: session-15.scope: Deactivated successfully.
Apr 16 04:53:37.558527 systemd-logind[1586]: Session 15 logged out. Waiting for processes to exit.
Apr 16 04:53:37.563397 systemd-logind[1586]: Removed session 15.
Apr 16 04:53:42.715820 systemd[1]: Started sshd@15-10.0.0.15:22-10.0.0.1:40420.service - OpenSSH per-connection server daemon (10.0.0.1:40420).
Apr 16 04:53:43.227178 sshd[5654]: Accepted publickey for core from 10.0.0.1 port 40420 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s
Apr 16 04:53:43.240200 sshd-session[5654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:53:43.346415 systemd-logind[1586]: New session 16 of user core.
Apr 16 04:53:43.427053 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 16 04:53:44.545737 sshd[5657]: Connection closed by 10.0.0.1 port 40420
Apr 16 04:53:44.546670 sshd-session[5654]: pam_unix(sshd:session): session closed for user core
Apr 16 04:53:44.559185 systemd[1]: sshd@15-10.0.0.15:22-10.0.0.1:40420.service: Deactivated successfully.
Apr 16 04:53:44.563819 systemd[1]: session-16.scope: Deactivated successfully.
Apr 16 04:53:44.570738 systemd-logind[1586]: Session 16 logged out. Waiting for processes to exit.
Apr 16 04:53:44.577961 systemd[1]: Started sshd@16-10.0.0.15:22-10.0.0.1:40422.service - OpenSSH per-connection server daemon (10.0.0.1:40422).
Apr 16 04:53:44.579490 systemd-logind[1586]: Removed session 16.
Apr 16 04:53:44.727320 sshd[5674]: Accepted publickey for core from 10.0.0.1 port 40422 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s
Apr 16 04:53:44.760378 sshd-session[5674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:53:44.783620 systemd-logind[1586]: New session 17 of user core.
Apr 16 04:53:44.804331 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 16 04:53:47.747842 sshd[5677]: Connection closed by 10.0.0.1 port 40422
Apr 16 04:53:47.752176 sshd-session[5674]: pam_unix(sshd:session): session closed for user core
Apr 16 04:53:47.803865 systemd[1]: sshd@16-10.0.0.15:22-10.0.0.1:40422.service: Deactivated successfully.
Apr 16 04:53:47.828618 systemd[1]: session-17.scope: Deactivated successfully.
Apr 16 04:53:47.836356 systemd[1]: session-17.scope: Consumed 1.384s CPU time, 57.8M memory peak.
Apr 16 04:53:47.863244 systemd-logind[1586]: Session 17 logged out. Waiting for processes to exit.
Apr 16 04:53:47.970382 systemd[1]: Started sshd@17-10.0.0.15:22-10.0.0.1:37704.service - OpenSSH per-connection server daemon (10.0.0.1:37704).
Apr 16 04:53:48.000114 systemd-logind[1586]: Removed session 17.
Apr 16 04:53:48.896501 sshd[5690]: Accepted publickey for core from 10.0.0.1 port 37704 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s
Apr 16 04:53:48.905298 sshd-session[5690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:53:49.007034 systemd-logind[1586]: New session 18 of user core.
Apr 16 04:53:49.032240 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 16 04:53:53.283257 kubelet[2764]: E0416 04:53:53.282426 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:53:55.170586 sshd[5693]: Connection closed by 10.0.0.1 port 37704
Apr 16 04:53:55.169140 sshd-session[5690]: pam_unix(sshd:session): session closed for user core
Apr 16 04:53:55.226969 systemd[1]: Started sshd@18-10.0.0.15:22-10.0.0.1:60296.service - OpenSSH per-connection server daemon (10.0.0.1:60296).
Apr 16 04:53:55.233731 systemd[1]: sshd@17-10.0.0.15:22-10.0.0.1:37704.service: Deactivated successfully.
Apr 16 04:53:55.258593 systemd[1]: session-18.scope: Deactivated successfully.
Apr 16 04:53:55.269054 systemd[1]: session-18.scope: Consumed 3.007s CPU time, 43.1M memory peak.
Apr 16 04:53:55.288955 systemd-logind[1586]: Session 18 logged out. Waiting for processes to exit.
Apr 16 04:53:55.388599 systemd-logind[1586]: Removed session 18.
Apr 16 04:53:56.603574 kubelet[2764]: E0416 04:53:56.602675 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:53:56.981458 sshd[5733]: Accepted publickey for core from 10.0.0.1 port 60296 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s
Apr 16 04:53:57.093370 sshd-session[5733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:53:57.198822 systemd-logind[1586]: New session 19 of user core.
Apr 16 04:53:57.211903 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 16 04:53:58.298694 kubelet[2764]: E0416 04:53:58.298081 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:54:01.210682 sshd[5751]: Connection closed by 10.0.0.1 port 60296
Apr 16 04:54:01.230609 sshd-session[5733]: pam_unix(sshd:session): session closed for user core
Apr 16 04:54:01.527329 systemd[1]: Started sshd@19-10.0.0.15:22-10.0.0.1:60302.service - OpenSSH per-connection server daemon (10.0.0.1:60302).
Apr 16 04:54:01.587308 systemd[1]: sshd@18-10.0.0.15:22-10.0.0.1:60296.service: Deactivated successfully.
Apr 16 04:54:01.649647 systemd[1]: session-19.scope: Deactivated successfully.
Apr 16 04:54:01.651652 systemd[1]: session-19.scope: Consumed 2.199s CPU time, 29.4M memory peak.
Apr 16 04:54:01.672584 systemd-logind[1586]: Session 19 logged out. Waiting for processes to exit.
Apr 16 04:54:01.780858 systemd-logind[1586]: Removed session 19.
Apr 16 04:54:02.369120 sshd[5794]: Accepted publickey for core from 10.0.0.1 port 60302 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s
Apr 16 04:54:02.396893 sshd-session[5794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:54:02.467021 systemd-logind[1586]: New session 20 of user core.
Apr 16 04:54:02.477660 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 16 04:54:05.095754 sshd[5815]: Connection closed by 10.0.0.1 port 60302
Apr 16 04:54:05.098221 sshd-session[5794]: pam_unix(sshd:session): session closed for user core
Apr 16 04:54:05.384414 systemd[1]: sshd@19-10.0.0.15:22-10.0.0.1:60302.service: Deactivated successfully.
Apr 16 04:54:05.592653 systemd[1]: session-20.scope: Deactivated successfully.
Apr 16 04:54:05.612112 systemd[1]: session-20.scope: Consumed 1.110s CPU time, 16.8M memory peak.
Apr 16 04:54:05.726542 systemd-logind[1586]: Session 20 logged out. Waiting for processes to exit.
Apr 16 04:54:05.743377 systemd-logind[1586]: Removed session 20.
Apr 16 04:54:10.150245 systemd[1]: Started sshd@20-10.0.0.15:22-10.0.0.1:53896.service - OpenSSH per-connection server daemon (10.0.0.1:53896).
Apr 16 04:54:10.844613 sshd[5871]: Accepted publickey for core from 10.0.0.1 port 53896 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s
Apr 16 04:54:10.894335 sshd-session[5871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:54:11.082754 systemd-logind[1586]: New session 21 of user core.
Apr 16 04:54:11.125616 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 16 04:54:12.107652 sshd[5874]: Connection closed by 10.0.0.1 port 53896
Apr 16 04:54:12.126268 sshd-session[5871]: pam_unix(sshd:session): session closed for user core
Apr 16 04:54:12.216818 systemd[1]: sshd@20-10.0.0.15:22-10.0.0.1:53896.service: Deactivated successfully.
Apr 16 04:54:12.229622 systemd[1]: session-21.scope: Deactivated successfully.
Apr 16 04:54:12.246030 systemd-logind[1586]: Session 21 logged out. Waiting for processes to exit.
Apr 16 04:54:12.247995 systemd-logind[1586]: Removed session 21.
Apr 16 04:54:14.210479 kubelet[2764]: E0416 04:54:14.207661 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:54:17.323840 systemd[1]: Started sshd@21-10.0.0.15:22-10.0.0.1:51420.service - OpenSSH per-connection server daemon (10.0.0.1:51420).
Apr 16 04:54:18.409161 sshd[5888]: Accepted publickey for core from 10.0.0.1 port 51420 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s
Apr 16 04:54:18.451923 sshd-session[5888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:54:18.533104 systemd-logind[1586]: New session 22 of user core.
Apr 16 04:54:18.541351 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 16 04:54:19.708017 sshd[5892]: Connection closed by 10.0.0.1 port 51420
Apr 16 04:54:19.712493 sshd-session[5888]: pam_unix(sshd:session): session closed for user core
Apr 16 04:54:19.744366 systemd[1]: sshd@21-10.0.0.15:22-10.0.0.1:51420.service: Deactivated successfully.
Apr 16 04:54:19.747738 systemd[1]: session-22.scope: Deactivated successfully.
Apr 16 04:54:19.756554 systemd-logind[1586]: Session 22 logged out. Waiting for processes to exit.
Apr 16 04:54:19.763150 systemd-logind[1586]: Removed session 22.