Mar 6 02:20:57.279145 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Mar 5 23:16:40 -00 2026
Mar 6 02:20:57.279178 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5bef16c10382b6f77f9493af2297475832ff2f09f1ada4155425ad9b32dd6e53
Mar 6 02:20:57.279194 kernel: BIOS-provided physical RAM map:
Mar 6 02:20:57.279202 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 6 02:20:57.279210 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 6 02:20:57.279218 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 6 02:20:57.279227 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 6 02:20:57.279544 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 6 02:20:57.279559 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 6 02:20:57.279569 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 6 02:20:57.279578 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 6 02:20:57.279592 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 6 02:20:57.279602 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 6 02:20:57.279612 kernel: NX (Execute Disable) protection: active
Mar 6 02:20:57.279624 kernel: APIC: Static calls initialized
Mar 6 02:20:57.279634 kernel: SMBIOS 2.8 present.
Mar 6 02:20:57.279649 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 6 02:20:57.279658 kernel: DMI: Memory slots populated: 1/1
Mar 6 02:20:57.279668 kernel: Hypervisor detected: KVM
Mar 6 02:20:57.279678 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 6 02:20:57.279688 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 6 02:20:57.279698 kernel: kvm-clock: using sched offset of 51845240293 cycles
Mar 6 02:20:57.279709 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 6 02:20:57.279719 kernel: tsc: Detected 2445.426 MHz processor
Mar 6 02:20:57.279729 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 6 02:20:57.279740 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 6 02:20:57.279756 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 6 02:20:57.279931 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 6 02:20:57.279948 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 6 02:20:57.279958 kernel: Using GB pages for direct mapping
Mar 6 02:20:57.279967 kernel: ACPI: Early table checksum verification disabled
Mar 6 02:20:57.279976 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 6 02:20:57.279985 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 02:20:57.279993 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 02:20:57.280002 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 02:20:57.280015 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 6 02:20:57.280025 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 02:20:57.280034 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 02:20:57.280043 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 02:20:57.280052 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 02:20:57.280066 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 6 02:20:57.280078 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 6 02:20:57.280088 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 6 02:20:57.280099 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 6 02:20:57.280110 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 6 02:20:57.280119 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 6 02:20:57.280129 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 6 02:20:57.280138 kernel: No NUMA configuration found
Mar 6 02:20:57.280147 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 6 02:20:57.280160 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Mar 6 02:20:57.280170 kernel: Zone ranges:
Mar 6 02:20:57.280179 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 6 02:20:57.280188 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 6 02:20:57.280197 kernel: Normal empty
Mar 6 02:20:57.280207 kernel: Device empty
Mar 6 02:20:57.280216 kernel: Movable zone start for each node
Mar 6 02:20:57.280225 kernel: Early memory node ranges
Mar 6 02:20:57.280536 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 6 02:20:57.280549 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 6 02:20:57.280564 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 6 02:20:57.280577 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 6 02:20:57.280586 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 6 02:20:57.280596 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 6 02:20:57.280605 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 6 02:20:57.280614 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 6 02:20:57.280624 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 6 02:20:57.280633 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 6 02:20:57.280643 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 6 02:20:57.280656 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 6 02:20:57.280665 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 6 02:20:57.280674 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 6 02:20:57.280684 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 6 02:20:57.280693 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 6 02:20:57.280702 kernel: TSC deadline timer available
Mar 6 02:20:57.280711 kernel: CPU topo: Max. logical packages: 1
Mar 6 02:20:57.280723 kernel: CPU topo: Max. logical dies: 1
Mar 6 02:20:57.280733 kernel: CPU topo: Max. dies per package: 1
Mar 6 02:20:57.280746 kernel: CPU topo: Max. threads per core: 1
Mar 6 02:20:57.280756 kernel: CPU topo: Num. cores per package: 4
Mar 6 02:20:57.280919 kernel: CPU topo: Num. threads per package: 4
Mar 6 02:20:57.280931 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Mar 6 02:20:57.280940 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 6 02:20:57.280949 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 6 02:20:57.280958 kernel: kvm-guest: setup PV sched yield
Mar 6 02:20:57.280967 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 6 02:20:57.280977 kernel: Booting paravirtualized kernel on KVM
Mar 6 02:20:57.280987 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 6 02:20:57.281000 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 6 02:20:57.281009 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Mar 6 02:20:57.281018 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Mar 6 02:20:57.281028 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 6 02:20:57.281040 kernel: kvm-guest: PV spinlocks enabled
Mar 6 02:20:57.281050 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 6 02:20:57.281060 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5bef16c10382b6f77f9493af2297475832ff2f09f1ada4155425ad9b32dd6e53
Mar 6 02:20:57.281070 kernel: random: crng init done
Mar 6 02:20:57.281083 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 6 02:20:57.281092 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 6 02:20:57.281101 kernel: Fallback order for Node 0: 0
Mar 6 02:20:57.281111 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Mar 6 02:20:57.281120 kernel: Policy zone: DMA32
Mar 6 02:20:57.281129 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 6 02:20:57.281138 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 6 02:20:57.281148 kernel: ftrace: allocating 40099 entries in 157 pages
Mar 6 02:20:57.281157 kernel: ftrace: allocated 157 pages with 5 groups
Mar 6 02:20:57.281171 kernel: Dynamic Preempt: voluntary
Mar 6 02:20:57.281182 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 6 02:20:57.281194 kernel: rcu: RCU event tracing is enabled.
Mar 6 02:20:57.281204 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 6 02:20:57.281214 kernel: Trampoline variant of Tasks RCU enabled.
Mar 6 02:20:57.281223 kernel: Rude variant of Tasks RCU enabled.
Mar 6 02:20:57.281510 kernel: Tracing variant of Tasks RCU enabled.
Mar 6 02:20:57.281524 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 6 02:20:57.281534 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 6 02:20:57.281548 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 6 02:20:57.281557 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 6 02:20:57.281567 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 6 02:20:57.281577 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 6 02:20:57.281586 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 6 02:20:57.281605 kernel: Console: colour VGA+ 80x25
Mar 6 02:20:57.281617 kernel: printk: legacy console [ttyS0] enabled
Mar 6 02:20:57.281627 kernel: ACPI: Core revision 20240827
Mar 6 02:20:57.281637 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 6 02:20:57.281646 kernel: APIC: Switch to symmetric I/O mode setup
Mar 6 02:20:57.281657 kernel: x2apic enabled
Mar 6 02:20:57.281667 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 6 02:20:57.281681 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 6 02:20:57.281691 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 6 02:20:57.281702 kernel: kvm-guest: setup PV IPIs
Mar 6 02:20:57.281712 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 6 02:20:57.281722 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Mar 6 02:20:57.281735 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 6 02:20:57.281745 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 6 02:20:57.281755 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 6 02:20:57.281924 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 6 02:20:57.281937 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 6 02:20:57.281948 kernel: Spectre V2 : Mitigation: Retpolines
Mar 6 02:20:57.281959 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 6 02:20:57.281971 kernel: Speculative Store Bypass: Vulnerable
Mar 6 02:20:57.281986 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 6 02:20:57.281996 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 6 02:20:57.282006 kernel: active return thunk: srso_alias_return_thunk
Mar 6 02:20:57.282016 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 6 02:20:57.282026 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 6 02:20:57.282037 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 6 02:20:57.282049 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 6 02:20:57.282060 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 6 02:20:57.282070 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 6 02:20:57.282083 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 6 02:20:57.282093 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 6 02:20:57.282103 kernel: Freeing SMP alternatives memory: 32K
Mar 6 02:20:57.282113 kernel: pid_max: default: 32768 minimum: 301
Mar 6 02:20:57.282123 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 6 02:20:57.282132 kernel: landlock: Up and running.
Mar 6 02:20:57.282142 kernel: SELinux: Initializing.
Mar 6 02:20:57.282151 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 6 02:20:57.282161 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 6 02:20:57.282175 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 6 02:20:57.282185 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 6 02:20:57.282196 kernel: signal: max sigframe size: 1776
Mar 6 02:20:57.282207 kernel: rcu: Hierarchical SRCU implementation.
Mar 6 02:20:57.282218 kernel: rcu: Max phase no-delay instances is 400.
Mar 6 02:20:57.282229 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 6 02:20:57.282541 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 6 02:20:57.282553 kernel: smp: Bringing up secondary CPUs ...
Mar 6 02:20:57.282565 kernel: smpboot: x86: Booting SMP configuration:
Mar 6 02:20:57.282580 kernel: .... node #0, CPUs: #1 #2 #3
Mar 6 02:20:57.282590 kernel: smp: Brought up 1 node, 4 CPUs
Mar 6 02:20:57.282600 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 6 02:20:57.282611 kernel: Memory: 2420716K/2571752K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46196K init, 2564K bss, 145096K reserved, 0K cma-reserved)
Mar 6 02:20:57.282621 kernel: devtmpfs: initialized
Mar 6 02:20:57.282631 kernel: x86/mm: Memory block size: 128MB
Mar 6 02:20:57.282642 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 6 02:20:57.282654 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 6 02:20:57.282668 kernel: pinctrl core: initialized pinctrl subsystem
Mar 6 02:20:57.282677 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 6 02:20:57.282687 kernel: audit: initializing netlink subsys (disabled)
Mar 6 02:20:57.282697 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 6 02:20:57.282707 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 6 02:20:57.282716 kernel: audit: type=2000 audit(1772763633.483:1): state=initialized audit_enabled=0 res=1
Mar 6 02:20:57.282726 kernel: cpuidle: using governor menu
Mar 6 02:20:57.282736 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 6 02:20:57.282746 kernel: dca service started, version 1.12.1
Mar 6 02:20:57.282756 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Mar 6 02:20:57.282932 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 6 02:20:57.282947 kernel: PCI: Using configuration type 1 for base access
Mar 6 02:20:57.282958 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 6 02:20:57.282968 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 6 02:20:57.282978 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 6 02:20:57.282987 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 6 02:20:57.283745 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 6 02:20:57.283936 kernel: ACPI: Added _OSI(Module Device)
Mar 6 02:20:57.283949 kernel: ACPI: Added _OSI(Processor Device)
Mar 6 02:20:57.283968 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 6 02:20:57.283978 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 6 02:20:57.283988 kernel: ACPI: Interpreter enabled
Mar 6 02:20:57.283998 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 6 02:20:57.284007 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 6 02:20:57.284017 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 6 02:20:57.284027 kernel: PCI: Using E820 reservations for host bridge windows
Mar 6 02:20:57.284036 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 6 02:20:57.284046 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 6 02:20:57.284700 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 6 02:20:57.285044 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 6 02:20:57.285207 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 6 02:20:57.285222 kernel: PCI host bridge to bus 0000:00
Mar 6 02:20:57.285715 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 6 02:20:57.286042 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 6 02:20:57.286205 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 6 02:20:57.288047 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 6 02:20:57.288207 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 6 02:20:57.288662 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 6 02:20:57.289129 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 6 02:20:57.289624 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Mar 6 02:20:57.289979 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Mar 6 02:20:57.290159 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Mar 6 02:20:57.290636 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Mar 6 02:20:57.290988 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Mar 6 02:20:57.291148 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 6 02:20:57.291619 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 22460 usecs
Mar 6 02:20:57.291971 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Mar 6 02:20:57.292139 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Mar 6 02:20:57.292741 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Mar 6 02:20:57.293055 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 6 02:20:57.293223 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Mar 6 02:20:57.293668 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Mar 6 02:20:57.293988 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Mar 6 02:20:57.294141 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 6 02:20:57.294621 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Mar 6 02:20:57.294944 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Mar 6 02:20:57.295097 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Mar 6 02:20:57.296721 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 6 02:20:57.297736 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Mar 6 02:20:57.298081 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Mar 6 02:20:57.298549 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 6 02:20:57.298726 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 18554 usecs
Mar 6 02:20:57.299063 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Mar 6 02:20:57.299223 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Mar 6 02:20:57.299683 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Mar 6 02:20:57.300042 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Mar 6 02:20:57.300203 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Mar 6 02:20:57.300218 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 6 02:20:57.300542 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 6 02:20:57.300557 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 6 02:20:57.300567 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 6 02:20:57.300576 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 6 02:20:57.300586 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 6 02:20:57.300596 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 6 02:20:57.300605 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 6 02:20:57.300615 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 6 02:20:57.300625 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 6 02:20:57.300640 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 6 02:20:57.300651 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 6 02:20:57.300664 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 6 02:20:57.300673 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 6 02:20:57.300683 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 6 02:20:57.300693 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 6 02:20:57.300702 kernel: iommu: Default domain type: Translated
Mar 6 02:20:57.300712 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 6 02:20:57.300722 kernel: PCI: Using ACPI for IRQ routing
Mar 6 02:20:57.300736 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 6 02:20:57.300746 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 6 02:20:57.300756 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 6 02:20:57.301093 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 6 02:20:57.301564 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 6 02:20:57.301726 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 6 02:20:57.301741 kernel: vgaarb: loaded
Mar 6 02:20:57.301752 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 6 02:20:57.301933 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 6 02:20:57.301946 kernel: clocksource: Switched to clocksource kvm-clock
Mar 6 02:20:57.301957 kernel: VFS: Disk quotas dquot_6.6.0
Mar 6 02:20:57.301967 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 6 02:20:57.301977 kernel: pnp: PnP ACPI init
Mar 6 02:20:57.302144 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 6 02:20:57.302161 kernel: pnp: PnP ACPI: found 6 devices
Mar 6 02:20:57.302172 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 6 02:20:57.302186 kernel: NET: Registered PF_INET protocol family
Mar 6 02:20:57.302196 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 6 02:20:57.302206 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 6 02:20:57.302215 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 6 02:20:57.302225 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 6 02:20:57.302537 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 6 02:20:57.302553 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 6 02:20:57.302564 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 6 02:20:57.302573 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 6 02:20:57.302587 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 6 02:20:57.302596 kernel: NET: Registered PF_XDP protocol family
Mar 6 02:20:57.302743 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 6 02:20:57.303055 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 6 02:20:57.303197 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 6 02:20:57.303622 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 6 02:20:57.303933 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 6 02:20:57.304080 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 6 02:20:57.304099 kernel: PCI: CLS 0 bytes, default 64
Mar 6 02:20:57.304110 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Mar 6 02:20:57.304120 kernel: Initialise system trusted keyrings
Mar 6 02:20:57.304130 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 6 02:20:57.304142 kernel: Key type asymmetric registered
Mar 6 02:20:57.304155 kernel: Asymmetric key parser 'x509' registered
Mar 6 02:20:57.304165 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 6 02:20:57.304175 kernel: io scheduler mq-deadline registered
Mar 6 02:20:57.304185 kernel: io scheduler kyber registered
Mar 6 02:20:57.304194 kernel: io scheduler bfq registered
Mar 6 02:20:57.304208 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 6 02:20:57.304220 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 6 02:20:57.304229 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 6 02:20:57.304964 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 6 02:20:57.304975 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 6 02:20:57.304985 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 6 02:20:57.304995 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 6 02:20:57.305005 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 6 02:20:57.305015 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 6 02:20:57.305703 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 6 02:20:57.305721 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Mar 6 02:20:57.306178 kernel: rtc_cmos 00:04: registered as rtc0
Mar 6 02:20:57.306627 kernel: rtc_cmos 00:04: setting system clock to 2026-03-06T02:20:53 UTC (1772763653)
Mar 6 02:20:57.306954 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 6 02:20:57.306971 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 6 02:20:57.306984 kernel: NET: Registered PF_INET6 protocol family
Mar 6 02:20:57.307000 kernel: Segment Routing with IPv6
Mar 6 02:20:57.307009 kernel: In-situ OAM (IOAM) with IPv6
Mar 6 02:20:57.307019 kernel: NET: Registered PF_PACKET protocol family
Mar 6 02:20:57.307029 kernel: Key type dns_resolver registered
Mar 6 02:20:57.307039 kernel: IPI shorthand broadcast: enabled
Mar 6 02:20:57.307048 kernel: sched_clock: Marking stable (15393122807, 2546995048)->(21235820239, -3295702384)
Mar 6 02:20:57.307058 kernel: registered taskstats version 1
Mar 6 02:20:57.307068 kernel: Loading compiled-in X.509 certificates
Mar 6 02:20:57.307077 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: 30893fe9fd219d26109af079e6493e1c8b1c00af'
Mar 6 02:20:57.307090 kernel: Demotion targets for Node 0: null
Mar 6 02:20:57.307100 kernel: Key type .fscrypt registered
Mar 6 02:20:57.307110 kernel: Key type fscrypt-provisioning registered
Mar 6 02:20:57.307120 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 6 02:20:57.307130 kernel: ima: Allocated hash algorithm: sha1
Mar 6 02:20:57.307142 kernel: ima: No architecture policies found
Mar 6 02:20:57.307154 kernel: clk: Disabling unused clocks
Mar 6 02:20:57.307164 kernel: Warning: unable to open an initial console.
Mar 6 02:20:57.307174 kernel: Freeing unused kernel image (initmem) memory: 46196K
Mar 6 02:20:57.307188 kernel: Write protecting the kernel read-only data: 40960k
Mar 6 02:20:57.307198 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Mar 6 02:20:57.307207 kernel: Run /init as init process
Mar 6 02:20:57.307217 kernel: with arguments:
Mar 6 02:20:57.307227 kernel: /init
Mar 6 02:20:57.307538 kernel: with environment:
Mar 6 02:20:57.307549 kernel: HOME=/
Mar 6 02:20:57.307558 kernel: TERM=linux
Mar 6 02:20:57.307569 systemd[1]: Successfully made /usr/ read-only.
Mar 6 02:20:57.307587 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 6 02:20:57.307598 systemd[1]: Detected virtualization kvm.
Mar 6 02:20:57.307608 systemd[1]: Detected architecture x86-64.
Mar 6 02:20:57.307618 systemd[1]: Running in initrd.
Mar 6 02:20:57.307628 systemd[1]: No hostname configured, using default hostname.
Mar 6 02:20:57.307638 systemd[1]: Hostname set to .
Mar 6 02:20:57.307651 systemd[1]: Initializing machine ID from VM UUID.
Mar 6 02:20:57.307677 systemd[1]: Queued start job for default target initrd.target.
Mar 6 02:20:57.307690 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 6 02:20:57.307701 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 6 02:20:57.307713 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 6 02:20:57.307724 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 6 02:20:57.307735 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 6 02:20:57.307750 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 6 02:20:57.307762 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 6 02:20:57.307944 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 6 02:20:57.307957 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 6 02:20:57.307968 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 6 02:20:57.307978 systemd[1]: Reached target paths.target - Path Units.
Mar 6 02:20:57.307989 systemd[1]: Reached target slices.target - Slice Units.
Mar 6 02:20:57.308004 systemd[1]: Reached target swap.target - Swaps.
Mar 6 02:20:57.308015 systemd[1]: Reached target timers.target - Timer Units.
Mar 6 02:20:57.308025 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 6 02:20:57.308036 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 6 02:20:57.308047 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 6 02:20:57.308057 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Mar 6 02:20:57.308619 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 6 02:20:57.308650 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 6 02:20:57.308661 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 6 02:20:57.308682 systemd[1]: Reached target sockets.target - Socket Units. Mar 6 02:20:57.308692 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 6 02:20:57.308705 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 6 02:20:57.308719 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 6 02:20:57.308730 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Mar 6 02:20:57.308741 systemd[1]: Starting systemd-fsck-usr.service... Mar 6 02:20:57.308752 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 6 02:20:57.308762 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 6 02:20:57.308936 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 6 02:20:57.309153 systemd-journald[201]: Collecting audit messages is disabled. Mar 6 02:20:57.309189 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 6 02:20:57.309203 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 6 02:20:57.309215 systemd-journald[201]: Journal started Mar 6 02:20:57.310172 systemd-journald[201]: Runtime Journal (/run/log/journal/55312763872d4b3d9455fad5f31cfbec) is 6M, max 48.3M, 42.2M free. Mar 6 02:20:57.366994 systemd[1]: Started systemd-journald.service - Journal Service. Mar 6 02:20:57.382111 systemd[1]: Finished systemd-fsck-usr.service. 
Mar 6 02:20:58.728103 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1673577648 wd_nsec: 1673577572 Mar 6 02:20:58.753044 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 6 02:20:58.800563 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 6 02:20:58.910699 systemd-modules-load[205]: Inserted module 'overlay' Mar 6 02:20:58.980761 systemd-tmpfiles[213]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Mar 6 02:20:58.981117 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 6 02:20:59.069956 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 6 02:20:59.087678 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 6 02:20:59.290130 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 6 02:20:59.299205 kernel: Bridge firewalling registered Mar 6 02:20:59.300156 systemd-modules-load[205]: Inserted module 'br_netfilter' Mar 6 02:20:59.307655 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 6 02:21:01.224223 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 6 02:21:01.327976 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 6 02:21:01.427174 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 6 02:21:01.498998 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 6 02:21:01.691062 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 6 02:21:01.696648 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Mar 6 02:21:01.809980 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 6 02:21:01.821040 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 6 02:21:01.995178 dracut-cmdline[240]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5bef16c10382b6f77f9493af2297475832ff2f09f1ada4155425ad9b32dd6e53 Mar 6 02:21:02.039124 systemd-resolved[245]: Positive Trust Anchors: Mar 6 02:21:02.039135 systemd-resolved[245]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 6 02:21:02.039177 systemd-resolved[245]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 6 02:21:02.053682 systemd-resolved[245]: Defaulting to hostname 'linux'. Mar 6 02:21:02.062966 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 6 02:21:02.102058 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 6 02:21:03.154012 kernel: SCSI subsystem initialized Mar 6 02:21:03.207683 kernel: Loading iSCSI transport class v2.0-870. 
Mar 6 02:21:03.299979 kernel: iscsi: registered transport (tcp) Mar 6 02:21:03.375139 kernel: iscsi: registered transport (qla4xxx) Mar 6 02:21:03.375213 kernel: QLogic iSCSI HBA Driver Mar 6 02:21:03.457924 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 6 02:21:03.518042 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 6 02:21:03.530615 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 6 02:21:03.639369 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 6 02:21:03.647711 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 6 02:21:03.741588 kernel: raid6: avx2x4 gen() 28390 MB/s Mar 6 02:21:03.758834 kernel: raid6: avx2x2 gen() 28949 MB/s Mar 6 02:21:03.778440 kernel: raid6: avx2x1 gen() 18424 MB/s Mar 6 02:21:03.778593 kernel: raid6: using algorithm avx2x2 gen() 28949 MB/s Mar 6 02:21:03.799294 kernel: raid6: .... xor() 22092 MB/s, rmw enabled Mar 6 02:21:03.799519 kernel: raid6: using avx2x2 recovery algorithm Mar 6 02:21:03.837480 kernel: xor: automatically using best checksumming function avx Mar 6 02:21:04.285667 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 6 02:21:04.320064 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 6 02:21:04.327666 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 6 02:21:04.378009 systemd-udevd[453]: Using default interface naming scheme 'v255'. Mar 6 02:21:04.386993 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 6 02:21:04.389026 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 6 02:21:04.476152 dracut-pre-trigger[457]: rd.md=0: removing MD RAID activation Mar 6 02:21:04.523508 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Mar 6 02:21:04.529517 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 6 02:21:04.660089 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 6 02:21:04.671341 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 6 02:21:04.719092 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 6 02:21:04.729410 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 6 02:21:04.731505 kernel: cryptd: max_cpu_qlen set to 1000 Mar 6 02:21:04.739325 kernel: libata version 3.00 loaded. Mar 6 02:21:04.762851 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 6 02:21:04.762908 kernel: GPT:9289727 != 19775487 Mar 6 02:21:04.762926 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 6 02:21:04.764767 kernel: GPT:9289727 != 19775487 Mar 6 02:21:04.765974 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 6 02:21:04.770984 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 6 02:21:04.778333 kernel: ahci 0000:00:1f.2: version 3.0 Mar 6 02:21:04.785757 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 6 02:21:04.785911 kernel: AES CTR mode by8 optimization enabled Mar 6 02:21:04.785926 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Mar 6 02:21:04.786292 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Mar 6 02:21:04.789684 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 6 02:21:04.789888 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 6 02:21:04.795032 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 6 02:21:04.804577 kernel: scsi host0: ahci Mar 6 02:21:04.795100 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 6 02:21:04.804836 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 6 02:21:04.818888 kernel: scsi host1: ahci Mar 6 02:21:04.829607 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 6 02:21:04.840074 kernel: scsi host2: ahci Mar 6 02:21:04.845468 kernel: scsi host3: ahci Mar 6 02:21:04.848138 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 6 02:21:04.853314 kernel: scsi host4: ahci Mar 6 02:21:04.857425 kernel: scsi host5: ahci Mar 6 02:21:04.857764 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1 Mar 6 02:21:04.857780 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1 Mar 6 02:21:04.862034 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1 Mar 6 02:21:04.862111 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1 Mar 6 02:21:04.863195 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 6 02:21:04.878228 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1 Mar 6 02:21:04.878306 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1 Mar 6 02:21:04.882636 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 6 02:21:04.905751 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 6 02:21:05.020440 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 6 02:21:05.029155 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 6 02:21:05.032733 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 6 02:21:05.041162 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Mar 6 02:21:05.076363 disk-uuid[617]: Primary Header is updated. Mar 6 02:21:05.076363 disk-uuid[617]: Secondary Entries is updated. Mar 6 02:21:05.076363 disk-uuid[617]: Secondary Header is updated. Mar 6 02:21:05.085127 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 6 02:21:05.179288 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 6 02:21:05.179358 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 6 02:21:05.184434 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 6 02:21:05.188366 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 6 02:21:05.192310 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 6 02:21:05.200311 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 6 02:21:05.200362 kernel: ata3.00: LPM support broken, forcing max_power Mar 6 02:21:05.203582 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 6 02:21:05.205726 kernel: ata3.00: applying bridge limits Mar 6 02:21:05.209636 kernel: ata3.00: LPM support broken, forcing max_power Mar 6 02:21:05.209692 kernel: ata3.00: configured for UDMA/100 Mar 6 02:21:05.232625 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 6 02:21:05.314942 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 6 02:21:05.315450 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 6 02:21:05.331750 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 6 02:21:05.729865 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 6 02:21:05.735515 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 6 02:21:05.743778 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 6 02:21:05.759458 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 6 02:21:05.770024 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 6 02:21:05.809575 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Mar 6 02:21:06.095310 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 6 02:21:06.095630 disk-uuid[618]: The operation has completed successfully. Mar 6 02:21:06.133511 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 6 02:21:06.137110 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 6 02:21:06.195008 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 6 02:21:06.228545 sh[648]: Success Mar 6 02:21:06.257343 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 6 02:21:06.257415 kernel: device-mapper: uevent: version 1.0.3 Mar 6 02:21:06.261290 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Mar 6 02:21:06.274285 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Mar 6 02:21:06.316681 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 6 02:21:06.318864 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 6 02:21:06.345356 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 6 02:21:06.358423 kernel: BTRFS: device fsid 1235dd15-5252-4928-9c6c-372370c6bfca devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (660) Mar 6 02:21:06.363337 kernel: BTRFS info (device dm-0): first mount of filesystem 1235dd15-5252-4928-9c6c-372370c6bfca Mar 6 02:21:06.363383 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 6 02:21:06.378208 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Mar 6 02:21:06.378315 kernel: BTRFS info (device dm-0 state E): enabling free space tree Mar 6 02:21:06.380188 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 6 02:21:06.384642 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. 
Mar 6 02:21:06.391399 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 6 02:21:06.392680 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 6 02:21:06.404935 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 6 02:21:06.455322 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (691) Mar 6 02:21:06.455391 kernel: BTRFS info (device vda6): first mount of filesystem 993ea71e-e97d-4f5e-b5c7-fdac31a53b6b Mar 6 02:21:06.461414 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 6 02:21:06.468503 kernel: BTRFS info (device vda6): turning on async discard Mar 6 02:21:06.468548 kernel: BTRFS info (device vda6): enabling free space tree Mar 6 02:21:06.478371 kernel: BTRFS info (device vda6): last unmount of filesystem 993ea71e-e97d-4f5e-b5c7-fdac31a53b6b Mar 6 02:21:06.479914 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 6 02:21:06.484390 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 6 02:21:06.772690 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 6 02:21:06.784603 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Mar 6 02:21:06.830820 ignition[746]: Ignition 2.22.0 Mar 6 02:21:06.830850 ignition[746]: Stage: fetch-offline Mar 6 02:21:06.830954 ignition[746]: no configs at "/usr/lib/ignition/base.d" Mar 6 02:21:06.830966 ignition[746]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 6 02:21:06.831197 ignition[746]: parsed url from cmdline: "" Mar 6 02:21:06.831201 ignition[746]: no config URL provided Mar 6 02:21:06.831222 ignition[746]: reading system config file "/usr/lib/ignition/user.ign" Mar 6 02:21:06.844578 systemd-networkd[834]: lo: Link UP Mar 6 02:21:06.831232 ignition[746]: no config at "/usr/lib/ignition/user.ign" Mar 6 02:21:06.844583 systemd-networkd[834]: lo: Gained carrier Mar 6 02:21:06.831320 ignition[746]: op(1): [started] loading QEMU firmware config module Mar 6 02:21:06.846974 systemd-networkd[834]: Enumeration completed Mar 6 02:21:06.831326 ignition[746]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 6 02:21:06.847418 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 6 02:21:06.868473 ignition[746]: op(1): [finished] loading QEMU firmware config module Mar 6 02:21:06.850359 systemd-networkd[834]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 6 02:21:06.850363 systemd-networkd[834]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 6 02:21:06.867440 systemd-networkd[834]: eth0: Link UP Mar 6 02:21:06.867587 systemd-networkd[834]: eth0: Gained carrier Mar 6 02:21:06.867597 systemd-networkd[834]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 6 02:21:06.870503 systemd[1]: Reached target network.target - Network. 
Mar 6 02:21:06.905618 systemd-networkd[834]: eth0: DHCPv4 address 10.0.0.33/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 6 02:21:07.208892 ignition[746]: parsing config with SHA512: 1f3eb7d9b007dfadaa41b747e2ccf1c1289d107c096cc85254aa6f14eeea14dee3ab69d6bd7b94309064122a95a6ff1f09283479bf692732529de041a460d974 Mar 6 02:21:07.229201 unknown[746]: fetched base config from "system" Mar 6 02:21:07.229225 unknown[746]: fetched user config from "qemu" Mar 6 02:21:07.230100 ignition[746]: fetch-offline: fetch-offline passed Mar 6 02:21:07.230445 ignition[746]: Ignition finished successfully Mar 6 02:21:07.243441 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 6 02:21:07.249521 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 6 02:21:07.264030 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 6 02:21:07.381590 ignition[843]: Ignition 2.22.0 Mar 6 02:21:07.381621 ignition[843]: Stage: kargs Mar 6 02:21:07.386008 ignition[843]: no configs at "/usr/lib/ignition/base.d" Mar 6 02:21:07.386063 ignition[843]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 6 02:21:07.400598 ignition[843]: kargs: kargs passed Mar 6 02:21:07.400742 ignition[843]: Ignition finished successfully Mar 6 02:21:07.408927 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 6 02:21:07.413476 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Mar 6 02:21:07.473690 ignition[851]: Ignition 2.22.0 Mar 6 02:21:07.473726 ignition[851]: Stage: disks Mar 6 02:21:07.473993 ignition[851]: no configs at "/usr/lib/ignition/base.d" Mar 6 02:21:07.474014 ignition[851]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 6 02:21:07.477351 ignition[851]: disks: disks passed Mar 6 02:21:07.477432 ignition[851]: Ignition finished successfully Mar 6 02:21:07.488837 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 6 02:21:07.494108 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 6 02:21:07.500050 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 6 02:21:07.500292 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 6 02:21:07.508858 systemd[1]: Reached target sysinit.target - System Initialization. Mar 6 02:21:07.513911 systemd[1]: Reached target basic.target - Basic System. Mar 6 02:21:07.521059 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 6 02:21:07.569688 systemd-fsck[861]: ROOT: clean, 15/553520 files, 52789/553472 blocks Mar 6 02:21:07.575715 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 6 02:21:07.579609 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 6 02:21:07.989326 kernel: EXT4-fs (vda9): mounted filesystem 16ab7223-a8af-43d2-ad40-7e1bf0ff2a89 r/w with ordered data mode. Quota mode: none. Mar 6 02:21:07.990568 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 6 02:21:07.993563 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 6 02:21:08.001462 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 6 02:21:08.002439 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 6 02:21:08.005850 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Mar 6 02:21:08.005913 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 6 02:21:08.005945 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 6 02:21:08.033147 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 6 02:21:08.039331 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 6 02:21:08.055310 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (869) Mar 6 02:21:08.055342 kernel: BTRFS info (device vda6): first mount of filesystem 993ea71e-e97d-4f5e-b5c7-fdac31a53b6b Mar 6 02:21:08.055360 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 6 02:21:08.062215 kernel: BTRFS info (device vda6): turning on async discard Mar 6 02:21:08.062294 kernel: BTRFS info (device vda6): enabling free space tree Mar 6 02:21:08.063865 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 6 02:21:08.102938 initrd-setup-root[893]: cut: /sysroot/etc/passwd: No such file or directory Mar 6 02:21:08.107994 initrd-setup-root[900]: cut: /sysroot/etc/group: No such file or directory Mar 6 02:21:08.115498 initrd-setup-root[907]: cut: /sysroot/etc/shadow: No such file or directory Mar 6 02:21:08.123078 initrd-setup-root[914]: cut: /sysroot/etc/gshadow: No such file or directory Mar 6 02:21:08.298187 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 6 02:21:08.308886 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 6 02:21:08.316955 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 6 02:21:08.389519 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 6 02:21:08.396006 kernel: BTRFS info (device vda6): last unmount of filesystem 993ea71e-e97d-4f5e-b5c7-fdac31a53b6b Mar 6 02:21:08.423654 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Mar 6 02:21:08.503413 ignition[984]: INFO : Ignition 2.22.0 Mar 6 02:21:08.503413 ignition[984]: INFO : Stage: mount Mar 6 02:21:08.509620 ignition[984]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 6 02:21:08.509620 ignition[984]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 6 02:21:08.509620 ignition[984]: INFO : mount: mount passed Mar 6 02:21:08.509620 ignition[984]: INFO : Ignition finished successfully Mar 6 02:21:08.508978 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 6 02:21:08.514840 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 6 02:21:08.546649 systemd-networkd[834]: eth0: Gained IPv6LL Mar 6 02:21:08.996405 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 6 02:21:09.036588 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (995) Mar 6 02:21:09.053042 kernel: BTRFS info (device vda6): first mount of filesystem 993ea71e-e97d-4f5e-b5c7-fdac31a53b6b Mar 6 02:21:09.053129 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 6 02:21:09.062440 kernel: BTRFS info (device vda6): turning on async discard Mar 6 02:21:09.062539 kernel: BTRFS info (device vda6): enabling free space tree Mar 6 02:21:09.067451 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 6 02:21:09.179014 ignition[1012]: INFO : Ignition 2.22.0 Mar 6 02:21:09.179014 ignition[1012]: INFO : Stage: files Mar 6 02:21:09.188606 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 6 02:21:09.188606 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 6 02:21:09.188606 ignition[1012]: DEBUG : files: compiled without relabeling support, skipping Mar 6 02:21:09.199641 ignition[1012]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 6 02:21:09.199641 ignition[1012]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 6 02:21:09.199641 ignition[1012]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 6 02:21:09.199641 ignition[1012]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 6 02:21:09.199641 ignition[1012]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 6 02:21:09.199641 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 6 02:21:09.199641 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 6 02:21:09.194837 unknown[1012]: wrote ssh authorized keys file for user: core Mar 6 02:21:09.282635 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 6 02:21:09.825847 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 6 02:21:09.833105 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 6 02:21:09.833105 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 6 
02:21:09.833105 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 6 02:21:09.833105 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 6 02:21:09.833105 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 6 02:21:09.833105 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 6 02:21:09.833105 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 6 02:21:09.833105 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 6 02:21:09.893468 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 6 02:21:09.893468 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 6 02:21:09.893468 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 6 02:21:09.893468 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 6 02:21:09.893468 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 6 02:21:09.893468 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Mar 6 02:21:10.162988 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 6 02:21:12.028848 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 6 02:21:12.028848 ignition[1012]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 6 02:21:12.040122 ignition[1012]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 6 02:21:12.047921 ignition[1012]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 6 02:21:12.047921 ignition[1012]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 6 02:21:12.047921 ignition[1012]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Mar 6 02:21:12.047921 ignition[1012]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 6 02:21:12.047921 ignition[1012]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 6 02:21:12.047921 ignition[1012]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Mar 6 02:21:12.047921 ignition[1012]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Mar 6 02:21:12.093583 ignition[1012]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 6 02:21:12.099194 ignition[1012]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 6 02:21:12.103188 ignition[1012]: INFO : files: op(f): [finished] setting preset to disabled 
for "coreos-metadata.service" Mar 6 02:21:12.103188 ignition[1012]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Mar 6 02:21:12.103188 ignition[1012]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Mar 6 02:21:12.103188 ignition[1012]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 6 02:21:12.103188 ignition[1012]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 6 02:21:12.103188 ignition[1012]: INFO : files: files passed Mar 6 02:21:12.103188 ignition[1012]: INFO : Ignition finished successfully Mar 6 02:21:12.107583 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 6 02:21:12.120042 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 6 02:21:12.129668 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 6 02:21:12.161313 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 6 02:21:12.161444 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 6 02:21:12.169076 initrd-setup-root-after-ignition[1040]: grep: /sysroot/oem/oem-release: No such file or directory Mar 6 02:21:12.174910 initrd-setup-root-after-ignition[1043]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 6 02:21:12.178875 initrd-setup-root-after-ignition[1043]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 6 02:21:12.182693 initrd-setup-root-after-ignition[1047]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 6 02:21:12.187867 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 6 02:21:12.192669 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
Mar 6 02:21:12.197583 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 6 02:21:12.282021 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 6 02:21:12.284860 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 6 02:21:12.292023 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 6 02:21:12.297169 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 6 02:21:12.297444 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 6 02:21:12.299396 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 6 02:21:12.377393 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 6 02:21:12.385083 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 6 02:21:12.430953 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 6 02:21:12.431297 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 6 02:21:12.456616 systemd[1]: Stopped target timers.target - Timer Units.
Mar 6 02:21:12.469910 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 6 02:21:12.470104 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 6 02:21:12.505559 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 6 02:21:12.534901 systemd[1]: Stopped target basic.target - Basic System.
Mar 6 02:21:12.552788 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 6 02:21:12.563302 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 6 02:21:12.567618 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 6 02:21:12.574974 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Mar 6 02:21:12.580780 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 6 02:21:12.587164 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 6 02:21:12.592553 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 6 02:21:12.593880 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 6 02:21:12.603439 systemd[1]: Stopped target swap.target - Swaps.
Mar 6 02:21:12.608374 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 6 02:21:12.608564 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 6 02:21:12.619435 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 6 02:21:12.619701 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 6 02:21:12.625994 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 6 02:21:12.633424 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 6 02:21:12.636945 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 6 02:21:12.637112 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 6 02:21:12.658585 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 6 02:21:12.658873 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 6 02:21:12.662054 systemd[1]: Stopped target paths.target - Path Units.
Mar 6 02:21:12.668597 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 6 02:21:12.672565 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 6 02:21:12.680583 systemd[1]: Stopped target slices.target - Slice Units.
Mar 6 02:21:12.683923 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 6 02:21:12.690133 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 6 02:21:12.690356 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 6 02:21:12.698058 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 6 02:21:12.698206 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 6 02:21:12.703846 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 6 02:21:12.704095 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 6 02:21:12.706712 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 6 02:21:12.706923 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 6 02:21:12.720512 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 6 02:21:12.731514 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 6 02:21:12.756945 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 6 02:21:12.761036 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 6 02:21:12.767068 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 6 02:21:12.767344 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 6 02:21:12.779960 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 6 02:21:12.780075 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 6 02:21:12.801935 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 6 02:21:12.971772 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 6 02:21:12.972015 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
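Every entry in this teardown sequence has the same fixed shape — timestamp, source[pid], message — so unit state changes can be tallied mechanically. A small sketch of such a tally; the regex and the sample lines are assumptions based only on the entry format visible in this log:

```python
import re

# Assumed entry shape: "Mar 6 02:21:12.690356 systemd[1]: Closed iscsid.socket - ..."
# i.e. month, day, time, then source[pid], a colon, and the message.
ENTRY = re.compile(
    r"^(?P<ts>\w+ +\d+ [\d:.]+) (?P<src>[\w-]+)\[(?P<pid>\d+)\]: (?P<msg>.*)$"
)

def unit_events(lines):
    """Yield (verb, unit) pairs for systemd state-change messages."""
    for line in lines:
        m = ENTRY.match(line)
        if not m or m.group("src") != "systemd":
            continue  # skip kernel, ignition, journald, and malformed lines
        verb = re.match(r"(Stopped target|Stopped|Closed|Finished) (\S+)", m.group("msg"))
        if verb:
            yield verb.group(1), verb.group(2)

sample = [
    "Mar 6 02:21:12.690356 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.",
    "Mar 6 02:21:12.706923 systemd[1]: Stopped ignition-files.service - Ignition (files).",
]
print(list(unit_events(sample)))
# → [('Closed', 'iscsid.socket'), ('Stopped', 'ignition-files.service')]
```

This is only a filter over the visible format; a robust tool would read the structured journal (e.g. `journalctl -o json`) rather than parse rendered text.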
Mar 6 02:21:13.020013 ignition[1067]: INFO : Ignition 2.22.0
Mar 6 02:21:13.020013 ignition[1067]: INFO : Stage: umount
Mar 6 02:21:13.026087 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 6 02:21:13.026087 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 6 02:21:13.026087 ignition[1067]: INFO : umount: umount passed
Mar 6 02:21:13.026087 ignition[1067]: INFO : Ignition finished successfully
Mar 6 02:21:13.048534 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 6 02:21:13.048914 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 6 02:21:13.058084 systemd[1]: Stopped target network.target - Network.
Mar 6 02:21:13.064824 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 6 02:21:13.064917 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 6 02:21:13.069439 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 6 02:21:13.069523 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 6 02:21:13.071902 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 6 02:21:13.071978 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 6 02:21:13.076483 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 6 02:21:13.076554 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 6 02:21:13.081106 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 6 02:21:13.081181 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 6 02:21:13.086159 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 6 02:21:13.095542 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 6 02:21:13.104299 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 6 02:21:13.104545 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 6 02:21:13.112621 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 6 02:21:13.113031 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 6 02:21:13.113100 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 6 02:21:13.121740 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 6 02:21:13.128777 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 6 02:21:13.128987 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 6 02:21:13.135714 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 6 02:21:13.137443 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Mar 6 02:21:13.140697 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 6 02:21:13.140789 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 6 02:21:13.170315 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 6 02:21:13.172638 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 6 02:21:13.172755 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 6 02:21:13.174149 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 6 02:21:13.174215 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 6 02:21:13.198176 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 6 02:21:13.198331 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 6 02:21:13.198540 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 6 02:21:13.210336 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 6 02:21:13.216941 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 6 02:21:13.228556 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 6 02:21:13.232579 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 6 02:21:13.232661 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 6 02:21:13.235436 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 6 02:21:13.235497 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 6 02:21:13.252614 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 6 02:21:13.252720 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 6 02:21:13.263122 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 6 02:21:13.263201 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 6 02:21:13.271631 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 6 02:21:13.271712 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 6 02:21:13.281938 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 6 02:21:13.290856 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Mar 6 02:21:13.290944 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Mar 6 02:21:13.311572 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 6 02:21:13.316004 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 6 02:21:13.361758 kernel: hrtimer: interrupt took 7224097 ns
Mar 6 02:21:13.360058 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 6 02:21:13.360359 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 6 02:21:13.373361 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 6 02:21:13.373484 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 6 02:21:13.382485 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 6 02:21:13.386549 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 02:21:13.408709 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 6 02:21:13.408918 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 6 02:21:13.439639 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 6 02:21:13.439954 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 6 02:21:13.461325 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 6 02:21:13.466639 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 6 02:21:13.524620 systemd[1]: Switching root.
Mar 6 02:21:13.591347 systemd-journald[201]: Received SIGTERM from PID 1 (systemd).
Mar 6 02:21:13.591699 systemd-journald[201]: Journal stopped
Mar 6 02:21:15.723519 kernel: SELinux: policy capability network_peer_controls=1
Mar 6 02:21:15.723612 kernel: SELinux: policy capability open_perms=1
Mar 6 02:21:15.723633 kernel: SELinux: policy capability extended_socket_class=1
Mar 6 02:21:15.723681 kernel: SELinux: policy capability always_check_network=0
Mar 6 02:21:15.723700 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 6 02:21:15.723717 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 6 02:21:15.723736 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 6 02:21:15.723759 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 6 02:21:15.723778 kernel: SELinux: policy capability userspace_initial_context=0
Mar 6 02:21:15.723844 kernel: audit: type=1403 audit(1772763673.858:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 6 02:21:15.723881 systemd[1]: Successfully loaded SELinux policy in 90.196ms.
Mar 6 02:21:15.723910 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.394ms.
Mar 6 02:21:15.723936 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 6 02:21:15.723956 systemd[1]: Detected virtualization kvm.
Mar 6 02:21:15.723975 systemd[1]: Detected architecture x86-64.
Mar 6 02:21:15.723994 systemd[1]: Detected first boot.
Mar 6 02:21:15.724015 systemd[1]: Initializing machine ID from VM UUID.
Mar 6 02:21:15.724034 zram_generator::config[1113]: No configuration found.
Mar 6 02:21:15.724053 kernel: Guest personality initialized and is inactive
Mar 6 02:21:15.724067 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Mar 6 02:21:15.724086 kernel: Initialized host personality
Mar 6 02:21:15.724096 kernel: NET: Registered PF_VSOCK protocol family
Mar 6 02:21:15.724109 systemd[1]: Populated /etc with preset unit settings.
Mar 6 02:21:15.724121 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 6 02:21:15.724131 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 6 02:21:15.724142 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 6 02:21:15.724152 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 6 02:21:15.724163 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 6 02:21:15.724174 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 6 02:21:15.724187 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 6 02:21:15.724197 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 6 02:21:15.724212 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 6 02:21:15.724223 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 6 02:21:15.724275 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 6 02:21:15.724289 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 6 02:21:15.724300 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 6 02:21:15.724311 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 6 02:21:15.724322 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 6 02:21:15.724336 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 6 02:21:15.724347 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 6 02:21:15.724358 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 6 02:21:15.724370 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 6 02:21:15.724390 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 6 02:21:15.724413 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 6 02:21:15.724428 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 6 02:21:15.724443 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 6 02:21:15.724454 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 6 02:21:15.724465 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 6 02:21:15.724476 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 6 02:21:15.724486 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 6 02:21:15.724497 systemd[1]: Reached target slices.target - Slice Units.
Mar 6 02:21:15.724508 systemd[1]: Reached target swap.target - Swaps.
Mar 6 02:21:15.724518 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 6 02:21:15.724529 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 6 02:21:15.724542 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 6 02:21:15.724553 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 6 02:21:15.724564 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 6 02:21:15.724574 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 6 02:21:15.724585 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 6 02:21:15.724595 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 6 02:21:15.724606 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 6 02:21:15.724617 systemd[1]: Mounting media.mount - External Media Directory...
Mar 6 02:21:15.724627 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 6 02:21:15.724640 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 6 02:21:15.724652 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 6 02:21:15.724663 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 6 02:21:15.724674 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 6 02:21:15.724684 systemd[1]: Reached target machines.target - Containers.
Mar 6 02:21:15.724695 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 6 02:21:15.724706 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 6 02:21:15.724717 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 6 02:21:15.724727 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 6 02:21:15.724740 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 6 02:21:15.724751 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 6 02:21:15.724761 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 6 02:21:15.724772 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 6 02:21:15.724782 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 6 02:21:15.724829 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 6 02:21:15.724841 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 6 02:21:15.724852 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 6 02:21:15.724884 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 6 02:21:15.724895 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 6 02:21:15.724906 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 6 02:21:15.724917 kernel: fuse: init (API version 7.41)
Mar 6 02:21:15.724927 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 6 02:21:15.724939 kernel: ACPI: bus type drm_connector registered
Mar 6 02:21:15.724949 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 6 02:21:15.724960 kernel: loop: module loaded
Mar 6 02:21:15.724971 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 6 02:21:15.724984 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 6 02:21:15.724995 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 6 02:21:15.725005 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 6 02:21:15.725016 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 6 02:21:15.725027 systemd[1]: Stopped verity-setup.service.
Mar 6 02:21:15.725038 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 6 02:21:15.725051 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 6 02:21:15.725061 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 6 02:21:15.725072 systemd[1]: Mounted media.mount - External Media Directory.
Mar 6 02:21:15.725082 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 6 02:21:15.725097 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 6 02:21:15.725132 systemd-journald[1198]: Collecting audit messages is disabled.
Mar 6 02:21:15.725155 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 6 02:21:15.725166 systemd-journald[1198]: Journal started
Mar 6 02:21:15.725185 systemd-journald[1198]: Runtime Journal (/run/log/journal/55312763872d4b3d9455fad5f31cfbec) is 6M, max 48.3M, 42.2M free.
Mar 6 02:21:15.016009 systemd[1]: Queued start job for default target multi-user.target.
Mar 6 02:21:15.039127 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 6 02:21:15.040001 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 6 02:21:15.041598 systemd[1]: systemd-journald.service: Consumed 1.284s CPU time.
Mar 6 02:21:15.733326 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 6 02:21:15.737421 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 6 02:21:15.743083 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 6 02:21:15.760574 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 6 02:21:15.760963 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 6 02:21:15.765615 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 6 02:21:15.765963 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 6 02:21:15.770486 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 6 02:21:15.770834 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 6 02:21:15.775176 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 6 02:21:15.775715 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 6 02:21:15.779511 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 6 02:21:15.779869 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 6 02:21:15.783029 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 6 02:21:15.783456 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 6 02:21:15.786953 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 6 02:21:15.790337 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 6 02:21:15.794555 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 6 02:21:15.798182 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 6 02:21:15.815707 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 6 02:21:15.823750 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 6 02:21:15.843160 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 6 02:21:15.850999 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 6 02:21:15.851104 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 6 02:21:15.860022 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 6 02:21:15.866475 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 6 02:21:15.869899 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 6 02:21:16.082964 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 6 02:21:16.094598 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 6 02:21:16.098436 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 6 02:21:16.111569 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 6 02:21:16.116302 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 6 02:21:16.117944 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 6 02:21:16.128329 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 6 02:21:16.131450 systemd-journald[1198]: Time spent on flushing to /var/log/journal/55312763872d4b3d9455fad5f31cfbec is 237.697ms for 976 entries.
Mar 6 02:21:16.131450 systemd-journald[1198]: System Journal (/var/log/journal/55312763872d4b3d9455fad5f31cfbec) is 8M, max 195.6M, 187.6M free.
Mar 6 02:21:16.407285 systemd-journald[1198]: Received client request to flush runtime journal.
Mar 6 02:21:16.145040 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 6 02:21:16.165012 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 6 02:21:16.169845 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 6 02:21:16.174537 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 6 02:21:16.188730 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 6 02:21:16.192673 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 6 02:21:16.199637 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 6 02:21:16.412021 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 6 02:21:16.422327 kernel: loop0: detected capacity change from 0 to 228704
Mar 6 02:21:16.429792 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 6 02:21:16.431094 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 6 02:21:16.441628 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 6 02:21:16.479616 systemd-tmpfiles[1233]: ACLs are not supported, ignoring.
Mar 6 02:21:16.479635 systemd-tmpfiles[1233]: ACLs are not supported, ignoring.
Mar 6 02:21:16.483333 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 6 02:21:16.486097 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 6 02:21:16.493181 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 6 02:21:16.513306 kernel: loop1: detected capacity change from 0 to 110984
Mar 6 02:21:16.546119 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 6 02:21:16.563640 kernel: loop2: detected capacity change from 0 to 128560
Mar 6 02:21:16.562612 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 6 02:21:16.756356 kernel: loop3: detected capacity change from 0 to 228704
Mar 6 02:21:16.760495 systemd-tmpfiles[1256]: ACLs are not supported, ignoring.
Mar 6 02:21:16.760539 systemd-tmpfiles[1256]: ACLs are not supported, ignoring.
Mar 6 02:21:16.765193 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 6 02:21:16.777307 kernel: loop4: detected capacity change from 0 to 110984
Mar 6 02:21:16.799280 kernel: loop5: detected capacity change from 0 to 128560
Mar 6 02:21:16.821444 (sd-merge)[1259]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 6 02:21:16.822643 (sd-merge)[1259]: Merged extensions into '/usr'.
Mar 6 02:21:16.828721 systemd[1]: Reload requested from client PID 1232 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 6 02:21:16.828749 systemd[1]: Reloading...
Mar 6 02:21:17.169282 zram_generator::config[1282]: No configuration found.
Mar 6 02:21:17.561621 ldconfig[1227]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 6 02:21:17.839330 systemd[1]: Reloading finished in 1009 ms.
Mar 6 02:21:17.868059 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 6 02:21:17.871663 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 6 02:21:17.894183 systemd[1]: Starting ensure-sysext.service...
Mar 6 02:21:17.897669 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 6 02:21:17.918536 systemd[1]: Reload requested from client PID 1324 ('systemctl') (unit ensure-sysext.service)...
Mar 6 02:21:17.918578 systemd[1]: Reloading...
Mar 6 02:21:17.939986 systemd-tmpfiles[1325]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Mar 6 02:21:17.940043 systemd-tmpfiles[1325]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Mar 6 02:21:17.940992 systemd-tmpfiles[1325]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 6 02:21:17.942051 systemd-tmpfiles[1325]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 6 02:21:17.944421 systemd-tmpfiles[1325]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 6 02:21:17.945022 systemd-tmpfiles[1325]: ACLs are not supported, ignoring.
Mar 6 02:21:17.945452 systemd-tmpfiles[1325]: ACLs are not supported, ignoring.
Mar 6 02:21:17.956999 systemd-tmpfiles[1325]: Detected autofs mount point /boot during canonicalization of boot.
Mar 6 02:21:17.957126 systemd-tmpfiles[1325]: Skipping /boot
Mar 6 02:21:17.971438 systemd-tmpfiles[1325]: Detected autofs mount point /boot during canonicalization of boot.
Mar 6 02:21:17.972282 systemd-tmpfiles[1325]: Skipping /boot
Mar 6 02:21:18.131301 zram_generator::config[1352]: No configuration found.
Mar 6 02:21:18.338906 systemd[1]: Reloading finished in 419 ms.
Mar 6 02:21:18.366515 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
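The "Duplicate line for path" messages above mean two tmpfiles.d fragments declare the same path; systemd-tmpfiles keeps the first definition it parses (/etc/ overrides /run/, which overrides /usr/lib/) and ignores later ones. A sketch of the tmpfiles.d(5) syntax involved — the entries below are illustrative, not the actual Flatcar fragments:

```
# /usr/lib/tmpfiles.d/example-a.conf (hypothetical)
# Type  Path             Mode  User  Group  Age  Argument
d       /var/lib/nfs/sm  0700  root  root   -    -

# /usr/lib/tmpfiles.d/example-b.conf (hypothetical)
# Declaring /var/lib/nfs/sm again produces the warning seen in the
# log; the earlier line wins and this one is ignored.
d       /var/lib/nfs/sm  0755  root  root   -    -
```

The warnings are therefore harmless noise unless the two definitions intentionally differ, in which case the losing fragment should be dropped or masked.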
Mar 6 02:21:18.398459 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 6 02:21:18.409942 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 6 02:21:18.414360 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 6 02:21:18.418613 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 6 02:21:18.437924 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 6 02:21:18.455560 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 6 02:21:18.460941 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 6 02:21:18.468627 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 02:21:18.468981 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 6 02:21:18.473505 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 6 02:21:18.478541 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 6 02:21:18.482923 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 6 02:21:18.485987 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 6 02:21:18.486104 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 6 02:21:18.488013 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Mar 6 02:21:18.491490 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 02:21:18.492897 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 6 02:21:18.498094 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 6 02:21:18.498378 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 6 02:21:18.503629 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 6 02:21:18.504099 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 6 02:21:18.509298 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 6 02:21:18.509680 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 6 02:21:18.523116 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 02:21:18.523401 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 6 02:21:18.526526 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 6 02:21:18.528911 systemd-udevd[1395]: Using default interface naming scheme 'v255'. Mar 6 02:21:18.532658 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 6 02:21:18.535077 augenrules[1425]: No rules Mar 6 02:21:18.544341 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 6 02:21:18.548765 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 6 02:21:18.548917 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Mar 6 02:21:18.552488 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 6 02:21:18.555575 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 02:21:18.557950 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 6 02:21:18.562324 systemd[1]: audit-rules.service: Deactivated successfully. Mar 6 02:21:18.562751 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 6 02:21:18.567865 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 6 02:21:18.571767 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 6 02:21:18.572081 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 6 02:21:18.577642 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 6 02:21:18.577957 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 6 02:21:18.581787 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 6 02:21:18.582215 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 6 02:21:18.585625 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 6 02:21:18.589974 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 6 02:21:18.593912 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 6 02:21:18.629651 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 02:21:18.633093 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 6 02:21:18.637477 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 6 02:21:18.639738 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Mar 6 02:21:18.651680 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 6 02:21:18.935529 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 6 02:21:18.947930 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 6 02:21:18.950949 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 6 02:21:18.951030 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 6 02:21:18.959460 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 6 02:21:18.962399 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 6 02:21:18.962431 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 02:21:18.968864 systemd[1]: Finished ensure-sysext.service. Mar 6 02:21:18.971746 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 6 02:21:18.972026 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 6 02:21:18.975478 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 6 02:21:18.975737 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 6 02:21:18.979459 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 6 02:21:18.979857 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 6 02:21:18.981831 augenrules[1472]: /sbin/augenrules: No change Mar 6 02:21:18.984159 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Mar 6 02:21:18.984445 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 6 02:21:19.001942 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 6 02:21:19.002025 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 6 02:21:19.005467 augenrules[1501]: No rules Mar 6 02:21:19.014621 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 6 02:21:19.018595 systemd[1]: audit-rules.service: Deactivated successfully. Mar 6 02:21:19.019108 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 6 02:21:19.026563 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 6 02:21:19.078300 systemd-resolved[1394]: Positive Trust Anchors: Mar 6 02:21:19.078329 systemd-resolved[1394]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 6 02:21:19.078377 systemd-resolved[1394]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 6 02:21:19.091017 systemd-resolved[1394]: Defaulting to hostname 'linux'. Mar 6 02:21:19.097529 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 6 02:21:19.101407 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 6 02:21:19.178718 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
Mar 6 02:21:19.182523 systemd[1]: Reached target sysinit.target - System Initialization. Mar 6 02:21:19.186201 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 6 02:21:19.190345 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 6 02:21:19.190771 systemd-networkd[1479]: lo: Link UP Mar 6 02:21:19.190798 systemd-networkd[1479]: lo: Gained carrier Mar 6 02:21:19.193012 systemd-networkd[1479]: Enumeration completed Mar 6 02:21:19.193729 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Mar 6 02:21:19.193852 systemd-networkd[1479]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 6 02:21:19.193859 systemd-networkd[1479]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 6 02:21:19.195000 systemd-networkd[1479]: eth0: Link UP Mar 6 02:21:19.195203 systemd-networkd[1479]: eth0: Gained carrier Mar 6 02:21:19.195217 systemd-networkd[1479]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 6 02:21:19.196704 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 6 02:21:19.200119 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 6 02:21:19.200149 systemd[1]: Reached target paths.target - Path Units. Mar 6 02:21:19.202571 systemd[1]: Reached target time-set.target - System Time Set. Mar 6 02:21:19.205883 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 6 02:21:19.210123 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 6 02:21:19.215641 systemd[1]: Reached target timers.target - Timer Units. 
Mar 6 02:21:19.220931 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 6 02:21:19.221396 systemd-networkd[1479]: eth0: DHCPv4 address 10.0.0.33/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 6 02:21:19.223317 kernel: mousedev: PS/2 mouse device common for all mice Mar 6 02:21:19.225302 systemd-timesyncd[1506]: Network configuration changed, trying to establish connection. Mar 6 02:21:19.785006 systemd-resolved[1394]: Clock change detected. Flushing caches. Mar 6 02:21:19.785177 systemd-timesyncd[1506]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 6 02:21:19.785275 systemd-timesyncd[1506]: Initial clock synchronization to Fri 2026-03-06 02:21:19.784973 UTC. Mar 6 02:21:19.786842 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 6 02:21:19.792390 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 6 02:21:19.793114 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Mar 6 02:21:19.796943 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 6 02:21:19.802245 kernel: ACPI: button: Power Button [PWRF] Mar 6 02:21:19.806130 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 6 02:21:19.827851 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 6 02:21:19.832830 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 6 02:21:19.837886 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 6 02:21:19.841710 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 6 02:21:19.860659 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 6 02:21:19.873566 systemd[1]: Reached target network.target - Network. 
Mar 6 02:21:19.878355 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 6 02:21:19.879479 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 6 02:21:20.123582 systemd[1]: Reached target sockets.target - Socket Units. Mar 6 02:21:20.127384 systemd[1]: Reached target basic.target - Basic System. Mar 6 02:21:20.130264 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 6 02:21:20.130379 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 6 02:21:20.133291 systemd[1]: Starting containerd.service - containerd container runtime... Mar 6 02:21:20.137805 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 6 02:21:20.142764 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 6 02:21:20.152115 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 6 02:21:20.156441 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 6 02:21:20.159704 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 6 02:21:20.164477 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Mar 6 02:21:20.166820 jq[1539]: false Mar 6 02:21:20.168441 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 6 02:21:20.170205 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 6 02:21:20.187728 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 6 02:21:20.193363 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 6 02:21:20.202011 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Mar 6 02:21:20.220414 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 6 02:21:20.227298 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 6 02:21:20.228171 extend-filesystems[1540]: Found /dev/vda6 Mar 6 02:21:20.243992 extend-filesystems[1540]: Found /dev/vda9 Mar 6 02:21:20.243992 extend-filesystems[1540]: Checking size of /dev/vda9 Mar 6 02:21:20.234578 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 6 02:21:20.240420 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 6 02:21:20.246371 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 6 02:21:20.251396 systemd[1]: Starting update-engine.service - Update Engine... Mar 6 02:21:20.258217 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 6 02:21:20.264233 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 6 02:21:20.264706 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 6 02:21:20.264952 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 6 02:21:20.265333 systemd[1]: motdgen.service: Deactivated successfully. Mar 6 02:21:20.265569 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 6 02:21:20.268853 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 6 02:21:20.272842 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Refreshing passwd entry cache Mar 6 02:21:20.272872 oslogin_cache_refresh[1541]: Refreshing passwd entry cache Mar 6 02:21:20.276395 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Mar 6 02:21:20.283668 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 6 02:21:20.314292 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Failure getting users, quitting Mar 6 02:21:20.314292 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Mar 6 02:21:20.314292 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Refreshing group entry cache Mar 6 02:21:20.313788 oslogin_cache_refresh[1541]: Failure getting users, quitting Mar 6 02:21:20.313810 oslogin_cache_refresh[1541]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Mar 6 02:21:20.313862 oslogin_cache_refresh[1541]: Refreshing group entry cache Mar 6 02:21:20.317418 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 6 02:21:20.327683 extend-filesystems[1540]: Resized partition /dev/vda9 Mar 6 02:21:20.335167 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Failure getting groups, quitting Mar 6 02:21:20.335167 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Mar 6 02:21:20.330372 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Mar 6 02:21:20.328561 oslogin_cache_refresh[1541]: Failure getting groups, quitting Mar 6 02:21:20.330743 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Mar 6 02:21:20.328576 oslogin_cache_refresh[1541]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Mar 6 02:21:20.350297 extend-filesystems[1581]: resize2fs 1.47.3 (8-Jul-2025) Mar 6 02:21:20.377227 jq[1563]: true Mar 6 02:21:20.464416 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 6 02:21:20.488564 (ntainerd)[1583]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 6 02:21:20.494264 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 6 02:21:20.505774 update_engine[1562]: I20260306 02:21:20.497259 1562 main.cc:92] Flatcar Update Engine starting Mar 6 02:21:20.506038 tar[1568]: linux-amd64/LICENSE Mar 6 02:21:20.506038 tar[1568]: linux-amd64/helm Mar 6 02:21:20.523479 dbus-daemon[1536]: [system] SELinux support is enabled Mar 6 02:21:20.712744 update_engine[1562]: I20260306 02:21:20.527724 1562 update_check_scheduler.cc:74] Next update check in 8m41s Mar 6 02:21:20.524356 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 6 02:21:20.533218 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 6 02:21:20.712885 jq[1585]: true Mar 6 02:21:20.533237 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 6 02:21:20.538144 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 6 02:21:20.538175 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 6 02:21:20.543282 systemd[1]: Started update-engine.service - Update Engine. Mar 6 02:21:20.696477 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Mar 6 02:21:20.722038 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 6 02:21:20.754657 extend-filesystems[1581]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 6 02:21:20.754657 extend-filesystems[1581]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 6 02:21:20.754657 extend-filesystems[1581]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 6 02:21:20.774980 extend-filesystems[1540]: Resized filesystem in /dev/vda9 Mar 6 02:21:20.773842 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 6 02:21:20.777858 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 6 02:21:20.857517 sshd_keygen[1586]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 6 02:21:20.858224 systemd-logind[1555]: Watching system buttons on /dev/input/event2 (Power Button) Mar 6 02:21:20.858285 systemd-logind[1555]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 6 02:21:20.859609 systemd-logind[1555]: New seat seat0. 
Mar 6 02:21:20.875735 bash[1612]: Updated "/home/core/.ssh/authorized_keys" Mar 6 02:21:20.899990 kernel: kvm_amd: TSC scaling supported Mar 6 02:21:20.900178 kernel: kvm_amd: Nested Virtualization enabled Mar 6 02:21:20.900200 kernel: kvm_amd: Nested Paging enabled Mar 6 02:21:20.900212 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 6 02:21:20.900224 kernel: kvm_amd: PMU virtualization is disabled Mar 6 02:21:21.115824 systemd-networkd[1479]: eth0: Gained IPv6LL Mar 6 02:21:21.259397 kernel: EDAC MC: Ver: 3.0.0 Mar 6 02:21:21.263338 locksmithd[1591]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 6 02:21:21.938172 containerd[1583]: time="2026-03-06T02:21:21Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 6 02:21:21.939861 containerd[1583]: time="2026-03-06T02:21:21.939780613Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Mar 6 02:21:21.961897 containerd[1583]: time="2026-03-06T02:21:21.961829409Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="61.686µs" Mar 6 02:21:21.961897 containerd[1583]: time="2026-03-06T02:21:21.961889682Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 6 02:21:21.962013 containerd[1583]: time="2026-03-06T02:21:21.961942500Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 6 02:21:21.962317 containerd[1583]: time="2026-03-06T02:21:21.962269751Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 6 02:21:21.962344 containerd[1583]: time="2026-03-06T02:21:21.962320856Z" level=info msg="loading plugin" id=io.containerd.content.v1.content 
type=io.containerd.content.v1 Mar 6 02:21:21.962428 containerd[1583]: time="2026-03-06T02:21:21.962399082Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 6 02:21:21.962580 containerd[1583]: time="2026-03-06T02:21:21.962529125Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 6 02:21:21.962580 containerd[1583]: time="2026-03-06T02:21:21.962574320Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 6 02:21:21.963019 containerd[1583]: time="2026-03-06T02:21:21.962970329Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 6 02:21:21.963088 containerd[1583]: time="2026-03-06T02:21:21.963019932Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 6 02:21:21.963167 containerd[1583]: time="2026-03-06T02:21:21.963130579Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 6 02:21:21.963167 containerd[1583]: time="2026-03-06T02:21:21.963160975Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 6 02:21:21.963404 containerd[1583]: time="2026-03-06T02:21:21.963360889Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 6 02:21:21.963959 containerd[1583]: time="2026-03-06T02:21:21.963912278Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 6 02:21:21.964083 containerd[1583]: 
time="2026-03-06T02:21:21.964002827Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 6 02:21:21.964083 containerd[1583]: time="2026-03-06T02:21:21.964043012Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 6 02:21:21.964198 containerd[1583]: time="2026-03-06T02:21:21.964164358Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 6 02:21:21.964566 containerd[1583]: time="2026-03-06T02:21:21.964530472Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 6 02:21:21.964698 containerd[1583]: time="2026-03-06T02:21:21.964657128Z" level=info msg="metadata content store policy set" policy=shared Mar 6 02:21:22.134452 containerd[1583]: time="2026-03-06T02:21:22.133655124Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 6 02:21:22.135374 containerd[1583]: time="2026-03-06T02:21:22.135282232Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 6 02:21:22.135558 containerd[1583]: time="2026-03-06T02:21:22.135490942Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 6 02:21:22.135649 containerd[1583]: time="2026-03-06T02:21:22.135585198Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 6 02:21:22.135747 containerd[1583]: time="2026-03-06T02:21:22.135675797Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 6 02:21:22.135775 containerd[1583]: time="2026-03-06T02:21:22.135755406Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 
6 02:21:22.135898 containerd[1583]: time="2026-03-06T02:21:22.135839663Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 6 02:21:22.136001 containerd[1583]: time="2026-03-06T02:21:22.135946182Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 6 02:21:22.136027 containerd[1583]: time="2026-03-06T02:21:22.135997938Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 6 02:21:22.136182 containerd[1583]: time="2026-03-06T02:21:22.136119726Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 6 02:21:22.137323 containerd[1583]: time="2026-03-06T02:21:22.137129847Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 6 02:21:22.137323 containerd[1583]: time="2026-03-06T02:21:22.137203934Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 6 02:21:22.138179 containerd[1583]: time="2026-03-06T02:21:22.138157605Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 6 02:21:22.138338 containerd[1583]: time="2026-03-06T02:21:22.138322924Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 6 02:21:22.138464 containerd[1583]: time="2026-03-06T02:21:22.138449079Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 6 02:21:22.138567 containerd[1583]: time="2026-03-06T02:21:22.138553314Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 6 02:21:22.138692 containerd[1583]: time="2026-03-06T02:21:22.138669621Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 6 02:21:22.138745 containerd[1583]: 
time="2026-03-06T02:21:22.138734672Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 6 02:21:22.138800 containerd[1583]: time="2026-03-06T02:21:22.138788242Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 6 02:21:22.138892 containerd[1583]: time="2026-03-06T02:21:22.138877800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 6 02:21:22.139018 containerd[1583]: time="2026-03-06T02:21:22.139003154Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 6 02:21:22.139127 containerd[1583]: time="2026-03-06T02:21:22.139113209Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 6 02:21:22.139241 containerd[1583]: time="2026-03-06T02:21:22.139228094Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 6 02:21:22.139692 containerd[1583]: time="2026-03-06T02:21:22.139653688Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 6 02:21:22.139804 containerd[1583]: time="2026-03-06T02:21:22.139790323Z" level=info msg="Start snapshots syncer" Mar 6 02:21:22.139924 containerd[1583]: time="2026-03-06T02:21:22.139909055Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 6 02:21:22.141759 containerd[1583]: time="2026-03-06T02:21:22.141684449Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 6 02:21:22.143086 containerd[1583]: time="2026-03-06T02:21:22.142225830Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 6 02:21:22.143086 containerd[1583]: time="2026-03-06T02:21:22.142401358Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 6 02:21:22.143086 containerd[1583]: time="2026-03-06T02:21:22.142562058Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 6 02:21:22.143086 containerd[1583]: time="2026-03-06T02:21:22.142583377Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 6 02:21:22.143086 containerd[1583]: time="2026-03-06T02:21:22.142593537Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 6 02:21:22.143086 containerd[1583]: time="2026-03-06T02:21:22.142602653Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 6 02:21:22.143086 containerd[1583]: time="2026-03-06T02:21:22.142732516Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 6 02:21:22.143086 containerd[1583]: time="2026-03-06T02:21:22.142761881Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 6 02:21:22.143086 containerd[1583]: time="2026-03-06T02:21:22.142812185Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 6 02:21:22.143086 containerd[1583]: time="2026-03-06T02:21:22.142923744Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 6 02:21:22.143086 containerd[1583]: time="2026-03-06T02:21:22.142935956Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 6 02:21:22.143086 containerd[1583]: time="2026-03-06T02:21:22.142945644Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 6 02:21:22.143931 containerd[1583]: time="2026-03-06T02:21:22.143910175Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 6 02:21:22.144024 containerd[1583]: time="2026-03-06T02:21:22.144009199Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 6 02:21:22.144110 containerd[1583]: time="2026-03-06T02:21:22.144096663Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 6 02:21:22.144158 containerd[1583]: time="2026-03-06T02:21:22.144146286Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 6 02:21:22.144261 containerd[1583]: time="2026-03-06T02:21:22.144247564Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 6 02:21:22.144304 containerd[1583]: time="2026-03-06T02:21:22.144294232Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 6 02:21:22.144382 containerd[1583]: time="2026-03-06T02:21:22.144367208Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 6 02:21:22.144477 containerd[1583]: time="2026-03-06T02:21:22.144465582Z" level=info msg="runtime interface created" Mar 6 02:21:22.144524 containerd[1583]: time="2026-03-06T02:21:22.144514543Z" level=info msg="created NRI interface" Mar 6 02:21:22.144563 containerd[1583]: time="2026-03-06T02:21:22.144552795Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 6 02:21:22.144664 containerd[1583]: time="2026-03-06T02:21:22.144615191Z" level=info msg="Connect containerd service" Mar 6 02:21:22.144763 containerd[1583]: time="2026-03-06T02:21:22.144750263Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 6 02:21:22.148151 containerd[1583]: 
time="2026-03-06T02:21:22.148127198Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 6 02:21:22.442527 systemd[1]: Started systemd-logind.service - User Login Management. Mar 6 02:21:22.447416 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 6 02:21:22.450739 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 6 02:21:22.454922 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 6 02:21:22.458892 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 6 02:21:22.464147 tar[1568]: linux-amd64/README.md Mar 6 02:21:22.478190 systemd[1]: Reached target network-online.target - Network is Online. Mar 6 02:21:22.484603 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 6 02:21:22.490098 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 6 02:21:22.495232 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 02:21:22.500973 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 6 02:21:22.507333 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 6 02:21:22.513347 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 6 02:21:22.528476 systemd[1]: issuegen.service: Deactivated successfully. Mar 6 02:21:22.528811 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 6 02:21:22.542818 containerd[1583]: time="2026-03-06T02:21:22.540884406Z" level=info msg="Start subscribing containerd event" Mar 6 02:21:22.887435 containerd[1583]: time="2026-03-06T02:21:22.883681202Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Mar 6 02:21:22.887435 containerd[1583]: time="2026-03-06T02:21:22.883972946Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 6 02:21:22.889371 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 6 02:21:22.893850 containerd[1583]: time="2026-03-06T02:21:22.893500329Z" level=info msg="Start recovering state" Mar 6 02:21:22.896162 containerd[1583]: time="2026-03-06T02:21:22.895584490Z" level=info msg="Start event monitor" Mar 6 02:21:22.896162 containerd[1583]: time="2026-03-06T02:21:22.895686520Z" level=info msg="Start cni network conf syncer for default" Mar 6 02:21:22.896162 containerd[1583]: time="2026-03-06T02:21:22.895819419Z" level=info msg="Start streaming server" Mar 6 02:21:22.896162 containerd[1583]: time="2026-03-06T02:21:22.895978205Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 6 02:21:22.896461 containerd[1583]: time="2026-03-06T02:21:22.896430209Z" level=info msg="runtime interface starting up..." Mar 6 02:21:22.896587 containerd[1583]: time="2026-03-06T02:21:22.896541957Z" level=info msg="starting plugins..." Mar 6 02:21:22.897549 containerd[1583]: time="2026-03-06T02:21:22.897142759Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 6 02:21:22.897862 containerd[1583]: time="2026-03-06T02:21:22.897834310Z" level=info msg="containerd successfully booted in 0.961723s" Mar 6 02:21:22.897863 systemd[1]: Started containerd.service - containerd container runtime. Mar 6 02:21:22.921607 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 6 02:21:22.945536 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 6 02:21:22.950332 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 6 02:21:22.950668 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 6 02:21:22.960255 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Mar 6 02:21:22.964299 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 6 02:21:22.966460 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 6 02:21:22.969940 systemd[1]: Reached target getty.target - Login Prompts. Mar 6 02:21:25.956705 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 02:21:25.962158 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 6 02:21:25.966238 systemd[1]: Startup finished in 16.168s (kernel) + 19.163s (initrd) + 11.639s (userspace) = 46.971s. Mar 6 02:21:26.063677 (kubelet)[1688]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 6 02:21:28.222840 kubelet[1688]: E0306 02:21:28.222357 1688 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 6 02:21:28.226700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 6 02:21:28.227003 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 6 02:21:28.227911 systemd[1]: kubelet.service: Consumed 4.823s CPU time, 269.1M memory peak. Mar 6 02:21:29.847806 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 6 02:21:29.849497 systemd[1]: Started sshd@0-10.0.0.33:22-10.0.0.1:35602.service - OpenSSH per-connection server daemon (10.0.0.1:35602). 
Mar 6 02:21:29.970517 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 35602 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM Mar 6 02:21:29.973180 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:21:29.984736 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 6 02:21:29.986394 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 6 02:21:29.995924 systemd-logind[1555]: New session 1 of user core. Mar 6 02:21:30.177794 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 6 02:21:30.183197 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 6 02:21:30.221962 (systemd)[1706]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 6 02:21:30.228176 systemd-logind[1555]: New session c1 of user core. Mar 6 02:21:30.558971 systemd[1706]: Queued start job for default target default.target. Mar 6 02:21:30.585791 systemd[1706]: Created slice app.slice - User Application Slice. Mar 6 02:21:30.585838 systemd[1706]: Reached target paths.target - Paths. Mar 6 02:21:30.585899 systemd[1706]: Reached target timers.target - Timers. Mar 6 02:21:30.589007 systemd[1706]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 6 02:21:30.641022 systemd[1706]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 6 02:21:30.641329 systemd[1706]: Reached target sockets.target - Sockets. Mar 6 02:21:30.641407 systemd[1706]: Reached target basic.target - Basic System. Mar 6 02:21:30.641492 systemd[1706]: Reached target default.target - Main User Target. Mar 6 02:21:30.641539 systemd[1706]: Startup finished in 399ms. Mar 6 02:21:30.641903 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 6 02:21:30.658461 systemd[1]: Started session-1.scope - Session 1 of User core. 
Mar 6 02:21:30.678107 systemd[1]: Started sshd@1-10.0.0.33:22-10.0.0.1:33886.service - OpenSSH per-connection server daemon (10.0.0.1:33886). Mar 6 02:21:30.756341 sshd[1717]: Accepted publickey for core from 10.0.0.1 port 33886 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM Mar 6 02:21:30.758257 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:21:30.764412 systemd-logind[1555]: New session 2 of user core. Mar 6 02:21:30.774273 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 6 02:21:30.793582 sshd[1720]: Connection closed by 10.0.0.1 port 33886 Mar 6 02:21:30.794329 sshd-session[1717]: pam_unix(sshd:session): session closed for user core Mar 6 02:21:30.823373 systemd[1]: sshd@1-10.0.0.33:22-10.0.0.1:33886.service: Deactivated successfully. Mar 6 02:21:30.825545 systemd[1]: session-2.scope: Deactivated successfully. Mar 6 02:21:30.826892 systemd-logind[1555]: Session 2 logged out. Waiting for processes to exit. Mar 6 02:21:30.829953 systemd[1]: Started sshd@2-10.0.0.33:22-10.0.0.1:33890.service - OpenSSH per-connection server daemon (10.0.0.1:33890). Mar 6 02:21:30.831602 systemd-logind[1555]: Removed session 2. Mar 6 02:21:30.919083 sshd[1726]: Accepted publickey for core from 10.0.0.1 port 33890 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM Mar 6 02:21:30.921589 sshd-session[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:21:30.928425 systemd-logind[1555]: New session 3 of user core. Mar 6 02:21:30.938345 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 6 02:21:30.950950 sshd[1729]: Connection closed by 10.0.0.1 port 33890 Mar 6 02:21:30.951485 sshd-session[1726]: pam_unix(sshd:session): session closed for user core Mar 6 02:21:30.964339 systemd[1]: sshd@2-10.0.0.33:22-10.0.0.1:33890.service: Deactivated successfully. Mar 6 02:21:30.966380 systemd[1]: session-3.scope: Deactivated successfully. 
Mar 6 02:21:30.967373 systemd-logind[1555]: Session 3 logged out. Waiting for processes to exit. Mar 6 02:21:30.970043 systemd[1]: Started sshd@3-10.0.0.33:22-10.0.0.1:33906.service - OpenSSH per-connection server daemon (10.0.0.1:33906). Mar 6 02:21:30.971923 systemd-logind[1555]: Removed session 3. Mar 6 02:21:31.039034 sshd[1735]: Accepted publickey for core from 10.0.0.1 port 33906 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM Mar 6 02:21:31.041251 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:21:31.048292 systemd-logind[1555]: New session 4 of user core. Mar 6 02:21:31.065423 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 6 02:21:31.084764 sshd[1738]: Connection closed by 10.0.0.1 port 33906 Mar 6 02:21:31.086271 sshd-session[1735]: pam_unix(sshd:session): session closed for user core Mar 6 02:21:31.109355 systemd[1]: sshd@3-10.0.0.33:22-10.0.0.1:33906.service: Deactivated successfully. Mar 6 02:21:31.112210 systemd[1]: session-4.scope: Deactivated successfully. Mar 6 02:21:31.113482 systemd-logind[1555]: Session 4 logged out. Waiting for processes to exit. Mar 6 02:21:31.115909 systemd[1]: Started sshd@4-10.0.0.33:22-10.0.0.1:33922.service - OpenSSH per-connection server daemon (10.0.0.1:33922). Mar 6 02:21:31.117618 systemd-logind[1555]: Removed session 4. Mar 6 02:21:31.178516 sshd[1744]: Accepted publickey for core from 10.0.0.1 port 33922 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM Mar 6 02:21:31.180726 sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:21:31.187869 systemd-logind[1555]: New session 5 of user core. Mar 6 02:21:31.197402 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 6 02:21:31.221290 sudo[1748]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 6 02:21:31.221670 sudo[1748]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 02:21:31.249340 sudo[1748]: pam_unix(sudo:session): session closed for user root Mar 6 02:21:31.255016 sshd[1747]: Connection closed by 10.0.0.1 port 33922 Mar 6 02:21:31.255600 sshd-session[1744]: pam_unix(sshd:session): session closed for user core Mar 6 02:21:31.277496 systemd[1]: sshd@4-10.0.0.33:22-10.0.0.1:33922.service: Deactivated successfully. Mar 6 02:21:31.280168 systemd[1]: session-5.scope: Deactivated successfully. Mar 6 02:21:31.281630 systemd-logind[1555]: Session 5 logged out. Waiting for processes to exit. Mar 6 02:21:31.285532 systemd[1]: Started sshd@5-10.0.0.33:22-10.0.0.1:33924.service - OpenSSH per-connection server daemon (10.0.0.1:33924). Mar 6 02:21:31.286547 systemd-logind[1555]: Removed session 5. Mar 6 02:21:31.364470 sshd[1754]: Accepted publickey for core from 10.0.0.1 port 33924 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM Mar 6 02:21:31.366481 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:21:31.384565 systemd-logind[1555]: New session 6 of user core. Mar 6 02:21:31.414745 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 6 02:21:31.480249 sudo[1759]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 6 02:21:31.480824 sudo[1759]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 02:21:31.573010 sudo[1759]: pam_unix(sudo:session): session closed for user root Mar 6 02:21:31.582616 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 6 02:21:31.583100 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 02:21:31.597159 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 6 02:21:31.676764 augenrules[1781]: No rules Mar 6 02:21:31.678979 systemd[1]: audit-rules.service: Deactivated successfully. Mar 6 02:21:31.679445 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 6 02:21:31.681021 sudo[1758]: pam_unix(sudo:session): session closed for user root Mar 6 02:21:31.683412 sshd[1757]: Connection closed by 10.0.0.1 port 33924 Mar 6 02:21:31.684020 sshd-session[1754]: pam_unix(sshd:session): session closed for user core Mar 6 02:21:31.698983 systemd[1]: sshd@5-10.0.0.33:22-10.0.0.1:33924.service: Deactivated successfully. Mar 6 02:21:31.705918 systemd[1]: session-6.scope: Deactivated successfully. Mar 6 02:21:31.707154 systemd-logind[1555]: Session 6 logged out. Waiting for processes to exit. Mar 6 02:21:31.709994 systemd[1]: Started sshd@6-10.0.0.33:22-10.0.0.1:33928.service - OpenSSH per-connection server daemon (10.0.0.1:33928). Mar 6 02:21:31.712010 systemd-logind[1555]: Removed session 6. Mar 6 02:21:31.788315 sshd[1790]: Accepted publickey for core from 10.0.0.1 port 33928 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM Mar 6 02:21:31.790708 sshd-session[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:21:31.796862 systemd-logind[1555]: New session 7 of user core. 
Mar 6 02:21:31.816452 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 6 02:21:31.830914 sudo[1794]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 6 02:21:31.831423 sudo[1794]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 02:21:32.364192 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 6 02:21:32.377483 (dockerd)[1814]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 6 02:21:32.730019 dockerd[1814]: time="2026-03-06T02:21:32.729797105Z" level=info msg="Starting up" Mar 6 02:21:32.731000 dockerd[1814]: time="2026-03-06T02:21:32.730947690Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 6 02:21:32.756858 dockerd[1814]: time="2026-03-06T02:21:32.756794876Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Mar 6 02:21:32.815451 dockerd[1814]: time="2026-03-06T02:21:32.815312084Z" level=info msg="Loading containers: start." Mar 6 02:21:32.827127 kernel: Initializing XFRM netlink socket Mar 6 02:21:33.593313 systemd-networkd[1479]: docker0: Link UP Mar 6 02:21:33.598762 dockerd[1814]: time="2026-03-06T02:21:33.598453391Z" level=info msg="Loading containers: done." 
Mar 6 02:21:33.849691 dockerd[1814]: time="2026-03-06T02:21:33.849404228Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 6 02:21:33.849691 dockerd[1814]: time="2026-03-06T02:21:33.849520154Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Mar 6 02:21:33.849691 dockerd[1814]: time="2026-03-06T02:21:33.849610914Z" level=info msg="Initializing buildkit" Mar 6 02:21:33.890255 dockerd[1814]: time="2026-03-06T02:21:33.890203921Z" level=info msg="Completed buildkit initialization" Mar 6 02:21:33.896769 dockerd[1814]: time="2026-03-06T02:21:33.896723672Z" level=info msg="Daemon has completed initialization" Mar 6 02:21:33.898013 dockerd[1814]: time="2026-03-06T02:21:33.896910639Z" level=info msg="API listen on /run/docker.sock" Mar 6 02:21:33.897018 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 6 02:21:35.559286 containerd[1583]: time="2026-03-06T02:21:35.558924508Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\"" Mar 6 02:21:36.584554 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1774994485.mount: Deactivated successfully. Mar 6 02:21:38.267563 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 6 02:21:38.271787 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 02:21:39.712498 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 6 02:21:39.736477 (kubelet)[2100]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 6 02:21:39.787689 containerd[1583]: time="2026-03-06T02:21:39.787594450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:21:39.789966 containerd[1583]: time="2026-03-06T02:21:39.789920452Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116186" Mar 6 02:21:39.791119 containerd[1583]: time="2026-03-06T02:21:39.790997088Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:21:39.794537 containerd[1583]: time="2026-03-06T02:21:39.794456541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:21:39.795656 containerd[1583]: time="2026-03-06T02:21:39.795574339Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 4.236564621s" Mar 6 02:21:39.795656 containerd[1583]: time="2026-03-06T02:21:39.795627608Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\"" Mar 6 02:21:39.798279 containerd[1583]: time="2026-03-06T02:21:39.798208166Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\"" Mar 6 02:21:39.865451 
kubelet[2100]: E0306 02:21:39.865370 2100 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 6 02:21:39.871479 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 6 02:21:39.871763 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 6 02:21:39.872392 systemd[1]: kubelet.service: Consumed 1.311s CPU time, 111.5M memory peak. Mar 6 02:21:42.992880 containerd[1583]: time="2026-03-06T02:21:42.992563491Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:21:42.993828 containerd[1583]: time="2026-03-06T02:21:42.993650861Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021810" Mar 6 02:21:42.995591 containerd[1583]: time="2026-03-06T02:21:42.995509569Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:21:43.001263 containerd[1583]: time="2026-03-06T02:21:43.001176621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:21:43.004969 containerd[1583]: time="2026-03-06T02:21:43.004857163Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest 
\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 3.206606326s" Mar 6 02:21:43.004969 containerd[1583]: time="2026-03-06T02:21:43.004912907Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\"" Mar 6 02:21:43.010488 containerd[1583]: time="2026-03-06T02:21:43.010296016Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\"" Mar 6 02:21:45.121463 containerd[1583]: time="2026-03-06T02:21:45.121243813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:21:45.122523 containerd[1583]: time="2026-03-06T02:21:45.122078901Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162746" Mar 6 02:21:45.123742 containerd[1583]: time="2026-03-06T02:21:45.123632261Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:21:45.127724 containerd[1583]: time="2026-03-06T02:21:45.127661273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:21:45.129256 containerd[1583]: time="2026-03-06T02:21:45.129176452Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 2.118833818s" Mar 6 02:21:45.129256 containerd[1583]: 
time="2026-03-06T02:21:45.129217578Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\"" Mar 6 02:21:45.130561 containerd[1583]: time="2026-03-06T02:21:45.130521302Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\"" Mar 6 02:21:46.875025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3865123653.mount: Deactivated successfully. Mar 6 02:21:47.663265 containerd[1583]: time="2026-03-06T02:21:47.662970460Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:21:47.664173 containerd[1583]: time="2026-03-06T02:21:47.663876636Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828647" Mar 6 02:21:47.665767 containerd[1583]: time="2026-03-06T02:21:47.665662538Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:21:47.668790 containerd[1583]: time="2026-03-06T02:21:47.668737893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:21:47.669358 containerd[1583]: time="2026-03-06T02:21:47.669265294Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 2.538715799s" Mar 6 02:21:47.669358 containerd[1583]: time="2026-03-06T02:21:47.669310278Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" 
returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\"" Mar 6 02:21:47.670675 containerd[1583]: time="2026-03-06T02:21:47.670632607Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Mar 6 02:21:48.353355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1181908566.mount: Deactivated successfully. Mar 6 02:21:49.637406 containerd[1583]: time="2026-03-06T02:21:49.637095355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:21:49.638638 containerd[1583]: time="2026-03-06T02:21:49.638499081Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Mar 6 02:21:49.639964 containerd[1583]: time="2026-03-06T02:21:49.639910958Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:21:49.643360 containerd[1583]: time="2026-03-06T02:21:49.643309048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:21:49.644772 containerd[1583]: time="2026-03-06T02:21:49.644663160Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.973971203s" Mar 6 02:21:49.644772 containerd[1583]: time="2026-03-06T02:21:49.644740775Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference 
\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Mar 6 02:21:49.646898 containerd[1583]: time="2026-03-06T02:21:49.646603779Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 6 02:21:50.015235 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 6 02:21:50.017305 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 02:21:50.111611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount882458808.mount: Deactivated successfully. Mar 6 02:21:50.182352 containerd[1583]: time="2026-03-06T02:21:50.182228411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 02:21:50.204837 containerd[1583]: time="2026-03-06T02:21:50.204748269Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 6 02:21:50.218728 containerd[1583]: time="2026-03-06T02:21:50.218549204Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 02:21:50.223107 containerd[1583]: time="2026-03-06T02:21:50.222147884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 02:21:50.223284 containerd[1583]: time="2026-03-06T02:21:50.223258418Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 576.618442ms" Mar 6 02:21:50.223448 containerd[1583]: time="2026-03-06T02:21:50.223343276Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 6 02:21:50.224477 containerd[1583]: time="2026-03-06T02:21:50.224418308Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Mar 6 02:21:50.279777 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 02:21:50.299842 (kubelet)[2189]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 6 02:21:50.366351 kubelet[2189]: E0306 02:21:50.366281 2189 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 6 02:21:50.370246 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 6 02:21:50.370452 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 6 02:21:50.370899 systemd[1]: kubelet.service: Consumed 279ms CPU time, 112.4M memory peak. Mar 6 02:21:50.665269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount131537312.mount: Deactivated successfully. 
Mar 6 02:21:51.616505 containerd[1583]: time="2026-03-06T02:21:51.616350644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:21:51.617433 containerd[1583]: time="2026-03-06T02:21:51.617374701Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718840" Mar 6 02:21:51.619013 containerd[1583]: time="2026-03-06T02:21:51.618951171Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:21:51.622176 containerd[1583]: time="2026-03-06T02:21:51.622014084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:21:51.623741 containerd[1583]: time="2026-03-06T02:21:51.623636155Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.399171591s" Mar 6 02:21:51.623741 containerd[1583]: time="2026-03-06T02:21:51.623721214Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Mar 6 02:21:54.448381 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 02:21:54.448552 systemd[1]: kubelet.service: Consumed 279ms CPU time, 112.4M memory peak. Mar 6 02:21:54.451037 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 02:21:54.486471 systemd[1]: Reload requested from client PID 2291 ('systemctl') (unit session-7.scope)... 
Mar 6 02:21:54.486506 systemd[1]: Reloading... Mar 6 02:21:54.577158 zram_generator::config[2332]: No configuration found. Mar 6 02:21:54.817939 systemd[1]: Reloading finished in 331 ms. Mar 6 02:21:54.904014 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 6 02:21:54.904240 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 6 02:21:54.904675 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 02:21:54.904815 systemd[1]: kubelet.service: Consumed 180ms CPU time, 98.2M memory peak. Mar 6 02:21:54.907345 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 02:21:55.094793 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 02:21:55.117786 (kubelet)[2381]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 6 02:21:55.174593 kubelet[2381]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 6 02:21:55.174593 kubelet[2381]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 6 02:21:55.174593 kubelet[2381]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 6 02:21:55.175213 kubelet[2381]: I0306 02:21:55.174818 2381 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 6 02:21:55.536499 kubelet[2381]: I0306 02:21:55.536423 2381 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 6 02:21:55.536499 kubelet[2381]: I0306 02:21:55.536480 2381 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 6 02:21:55.536915 kubelet[2381]: I0306 02:21:55.536799 2381 server.go:956] "Client rotation is on, will bootstrap in background" Mar 6 02:21:55.571604 kubelet[2381]: E0306 02:21:55.571517 2381 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.33:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 6 02:21:55.574348 kubelet[2381]: I0306 02:21:55.574282 2381 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 6 02:21:55.588592 kubelet[2381]: I0306 02:21:55.588545 2381 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 6 02:21:55.595755 kubelet[2381]: I0306 02:21:55.595558 2381 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 6 02:21:55.596079 kubelet[2381]: I0306 02:21:55.595995 2381 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 6 02:21:55.596435 kubelet[2381]: I0306 02:21:55.596033 2381 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 6 02:21:55.596621 kubelet[2381]: I0306 02:21:55.596451 2381 topology_manager.go:138] "Creating topology manager with none policy" Mar 6 02:21:55.596621 
kubelet[2381]: I0306 02:21:55.596461 2381 container_manager_linux.go:303] "Creating device plugin manager" Mar 6 02:21:55.596811 kubelet[2381]: I0306 02:21:55.596743 2381 state_mem.go:36] "Initialized new in-memory state store" Mar 6 02:21:55.601315 kubelet[2381]: I0306 02:21:55.601235 2381 kubelet.go:480] "Attempting to sync node with API server" Mar 6 02:21:55.601315 kubelet[2381]: I0306 02:21:55.601282 2381 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 6 02:21:55.601399 kubelet[2381]: I0306 02:21:55.601377 2381 kubelet.go:386] "Adding apiserver pod source" Mar 6 02:21:55.604311 kubelet[2381]: I0306 02:21:55.602808 2381 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 6 02:21:55.606118 kubelet[2381]: E0306 02:21:55.605971 2381 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 6 02:21:55.606414 kubelet[2381]: I0306 02:21:55.606276 2381 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 6 02:21:55.606779 kubelet[2381]: E0306 02:21:55.606709 2381 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 6 02:21:55.607359 kubelet[2381]: I0306 02:21:55.607303 2381 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 6 02:21:55.608184 kubelet[2381]: W0306 02:21:55.608143 2381 
probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 6 02:21:55.614306 kubelet[2381]: I0306 02:21:55.614254 2381 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 6 02:21:55.615140 kubelet[2381]: I0306 02:21:55.615094 2381 server.go:1289] "Started kubelet" Mar 6 02:21:55.616473 kubelet[2381]: I0306 02:21:55.616341 2381 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 6 02:21:55.617381 kubelet[2381]: I0306 02:21:55.616220 2381 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 6 02:21:55.618737 kubelet[2381]: I0306 02:21:55.618464 2381 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 6 02:21:55.620958 kubelet[2381]: I0306 02:21:55.620889 2381 server.go:317] "Adding debug handlers to kubelet server" Mar 6 02:21:55.620999 kubelet[2381]: I0306 02:21:55.620979 2381 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 6 02:21:55.621576 kubelet[2381]: I0306 02:21:55.621537 2381 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 6 02:21:55.623247 kubelet[2381]: I0306 02:21:55.622481 2381 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 6 02:21:55.623247 kubelet[2381]: E0306 02:21:55.622696 2381 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 02:21:55.623247 kubelet[2381]: I0306 02:21:55.623041 2381 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 6 02:21:55.623451 kubelet[2381]: I0306 02:21:55.623255 2381 reconciler.go:26] "Reconciler: start to sync state" Mar 6 02:21:55.624416 kubelet[2381]: E0306 02:21:55.624143 2381 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.0.0.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 6 02:21:55.624416 kubelet[2381]: E0306 02:21:55.624239 2381 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.33:6443: connect: connection refused" interval="200ms" Mar 6 02:21:55.624862 kubelet[2381]: E0306 02:21:55.623006 2381 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.33:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.33:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189a1f34f3e44cdb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-06 02:21:55.614297307 +0000 UTC m=+0.491183963,LastTimestamp:2026-03-06 02:21:55.614297307 +0000 UTC m=+0.491183963,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 6 02:21:55.627122 kubelet[2381]: I0306 02:21:55.626784 2381 factory.go:223] Registration of the systemd container factory successfully Mar 6 02:21:55.627190 kubelet[2381]: I0306 02:21:55.627156 2381 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 6 02:21:55.627973 kubelet[2381]: E0306 02:21:55.627953 2381 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 6 02:21:55.628827 kubelet[2381]: I0306 02:21:55.628776 2381 factory.go:223] Registration of the containerd container factory successfully Mar 6 02:21:55.645114 kubelet[2381]: I0306 02:21:55.645028 2381 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 6 02:21:55.645114 kubelet[2381]: I0306 02:21:55.645092 2381 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 6 02:21:55.645114 kubelet[2381]: I0306 02:21:55.645108 2381 state_mem.go:36] "Initialized new in-memory state store" Mar 6 02:21:55.650137 kubelet[2381]: I0306 02:21:55.650017 2381 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 6 02:21:55.652342 kubelet[2381]: I0306 02:21:55.652304 2381 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 6 02:21:55.652437 kubelet[2381]: I0306 02:21:55.652411 2381 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 6 02:21:55.652508 kubelet[2381]: I0306 02:21:55.652461 2381 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 6 02:21:55.652508 kubelet[2381]: I0306 02:21:55.652507 2381 kubelet.go:2436] "Starting kubelet main sync loop" Mar 6 02:21:55.652604 kubelet[2381]: E0306 02:21:55.652559 2381 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 6 02:21:55.653451 kubelet[2381]: E0306 02:21:55.653374 2381 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 6 02:21:55.723945 kubelet[2381]: E0306 02:21:55.723867 2381 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 02:21:55.733318 kubelet[2381]: I0306 02:21:55.733188 2381 policy_none.go:49] "None policy: Start" Mar 6 02:21:55.733318 kubelet[2381]: I0306 02:21:55.733309 2381 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 6 02:21:55.733506 kubelet[2381]: I0306 02:21:55.733368 2381 state_mem.go:35] "Initializing new in-memory state store" Mar 6 02:21:55.742716 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 6 02:21:55.753761 kubelet[2381]: E0306 02:21:55.753645 2381 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 6 02:21:55.758895 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 6 02:21:55.764188 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 6 02:21:55.779115 kubelet[2381]: E0306 02:21:55.778980 2381 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 6 02:21:55.779791 kubelet[2381]: I0306 02:21:55.779634 2381 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 6 02:21:55.779791 kubelet[2381]: I0306 02:21:55.779719 2381 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 6 02:21:55.780930 kubelet[2381]: I0306 02:21:55.780806 2381 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 6 02:21:55.782658 kubelet[2381]: E0306 02:21:55.782558 2381 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 6 02:21:55.782842 kubelet[2381]: E0306 02:21:55.782776 2381 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 6 02:21:55.826396 kubelet[2381]: E0306 02:21:55.826181 2381 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.33:6443: connect: connection refused" interval="400ms" Mar 6 02:21:55.885378 kubelet[2381]: I0306 02:21:55.885231 2381 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 6 02:21:55.885963 kubelet[2381]: E0306 02:21:55.885834 2381 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.33:6443/api/v1/nodes\": dial tcp 10.0.0.33:6443: connect: connection refused" node="localhost" Mar 6 02:21:55.971340 systemd[1]: Created slice kubepods-burstable-podbcf27a85543a3288f59a047d9cce5028.slice - libcontainer container kubepods-burstable-podbcf27a85543a3288f59a047d9cce5028.slice. 
Mar 6 02:21:55.986805 kubelet[2381]: E0306 02:21:55.986725 2381 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 02:21:55.990498 systemd[1]: Created slice kubepods-burstable-pod8747e1f8a49a618fbc1324a8fe2d3754.slice - libcontainer container kubepods-burstable-pod8747e1f8a49a618fbc1324a8fe2d3754.slice. Mar 6 02:21:56.008322 kubelet[2381]: E0306 02:21:56.008187 2381 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 02:21:56.011340 systemd[1]: Created slice kubepods-burstable-pode944e4cb17af904786c3a2e01e298498.slice - libcontainer container kubepods-burstable-pode944e4cb17af904786c3a2e01e298498.slice. Mar 6 02:21:56.013870 kubelet[2381]: E0306 02:21:56.013812 2381 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 02:21:56.024766 kubelet[2381]: I0306 02:21:56.024703 2381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bcf27a85543a3288f59a047d9cce5028-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"bcf27a85543a3288f59a047d9cce5028\") " pod="kube-system/kube-apiserver-localhost" Mar 6 02:21:56.024766 kubelet[2381]: I0306 02:21:56.024749 2381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bcf27a85543a3288f59a047d9cce5028-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"bcf27a85543a3288f59a047d9cce5028\") " pod="kube-system/kube-apiserver-localhost" Mar 6 02:21:56.024766 kubelet[2381]: I0306 02:21:56.024767 2381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/bcf27a85543a3288f59a047d9cce5028-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"bcf27a85543a3288f59a047d9cce5028\") " pod="kube-system/kube-apiserver-localhost" Mar 6 02:21:56.024918 kubelet[2381]: I0306 02:21:56.024783 2381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 02:21:56.024918 kubelet[2381]: I0306 02:21:56.024795 2381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 02:21:56.024918 kubelet[2381]: I0306 02:21:56.024808 2381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 02:21:56.024918 kubelet[2381]: I0306 02:21:56.024831 2381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 02:21:56.024918 kubelet[2381]: I0306 02:21:56.024844 2381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 02:21:56.025170 kubelet[2381]: I0306 02:21:56.024856 2381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 6 02:21:56.091434 kubelet[2381]: I0306 02:21:56.090823 2381 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 6 02:21:56.091434 kubelet[2381]: E0306 02:21:56.091284 2381 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.33:6443/api/v1/nodes\": dial tcp 10.0.0.33:6443: connect: connection refused" node="localhost" Mar 6 02:21:56.227220 kubelet[2381]: E0306 02:21:56.227140 2381 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.33:6443: connect: connection refused" interval="800ms" Mar 6 02:21:56.287751 kubelet[2381]: E0306 02:21:56.287629 2381 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:21:56.289210 containerd[1583]: time="2026-03-06T02:21:56.289016677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:bcf27a85543a3288f59a047d9cce5028,Namespace:kube-system,Attempt:0,}" Mar 6 02:21:56.310104 kubelet[2381]: E0306 02:21:56.309710 2381 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:21:56.310390 containerd[1583]: time="2026-03-06T02:21:56.310321597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,}" Mar 6 02:21:56.314817 kubelet[2381]: E0306 02:21:56.314761 2381 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:21:56.315378 containerd[1583]: time="2026-03-06T02:21:56.315298013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,}" Mar 6 02:21:56.339001 containerd[1583]: time="2026-03-06T02:21:56.338924606Z" level=info msg="connecting to shim 4318d8e3446510debb6951f3ad5a7d9c5191d025d267d038d2e7276858d94fb7" address="unix:///run/containerd/s/dbde8de7eefa2564367f37cf9d07f8544fad1bef1478f7be8eba464222882cef" namespace=k8s.io protocol=ttrpc version=3 Mar 6 02:21:56.347405 containerd[1583]: time="2026-03-06T02:21:56.347223585Z" level=info msg="connecting to shim aacf3beb76c4d628bbedaed25aa7559ad12395d8a76baa618fe4c9fc5689bcfc" address="unix:///run/containerd/s/188320e8503cd49bfc9403f4aab4fd24fc34de25e86c5024a98e265acc14fc0d" namespace=k8s.io protocol=ttrpc version=3 Mar 6 02:21:56.360374 containerd[1583]: time="2026-03-06T02:21:56.360327487Z" level=info msg="connecting to shim d68e827caa88923ee99460713e73f94e26893f5a4fe4e709c89283ee3e236b6c" address="unix:///run/containerd/s/857293049d0701078c747d03beb9b76a6ea40ab57b00912311e859a491af22a1" namespace=k8s.io protocol=ttrpc version=3 Mar 6 02:21:56.395285 systemd[1]: Started cri-containerd-4318d8e3446510debb6951f3ad5a7d9c5191d025d267d038d2e7276858d94fb7.scope - libcontainer container 4318d8e3446510debb6951f3ad5a7d9c5191d025d267d038d2e7276858d94fb7. 
Mar 6 02:21:56.408346 systemd[1]: Started cri-containerd-aacf3beb76c4d628bbedaed25aa7559ad12395d8a76baa618fe4c9fc5689bcfc.scope - libcontainer container aacf3beb76c4d628bbedaed25aa7559ad12395d8a76baa618fe4c9fc5689bcfc. Mar 6 02:21:56.410892 systemd[1]: Started cri-containerd-d68e827caa88923ee99460713e73f94e26893f5a4fe4e709c89283ee3e236b6c.scope - libcontainer container d68e827caa88923ee99460713e73f94e26893f5a4fe4e709c89283ee3e236b6c. Mar 6 02:21:56.469720 containerd[1583]: time="2026-03-06T02:21:56.469592782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,} returns sandbox id \"d68e827caa88923ee99460713e73f94e26893f5a4fe4e709c89283ee3e236b6c\"" Mar 6 02:21:56.471833 kubelet[2381]: E0306 02:21:56.471780 2381 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:21:56.480264 containerd[1583]: time="2026-03-06T02:21:56.480191046Z" level=info msg="CreateContainer within sandbox \"d68e827caa88923ee99460713e73f94e26893f5a4fe4e709c89283ee3e236b6c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 6 02:21:56.490960 containerd[1583]: time="2026-03-06T02:21:56.490924533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,} returns sandbox id \"aacf3beb76c4d628bbedaed25aa7559ad12395d8a76baa618fe4c9fc5689bcfc\"" Mar 6 02:21:56.492023 kubelet[2381]: E0306 02:21:56.491933 2381 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:21:56.492917 kubelet[2381]: I0306 02:21:56.492874 2381 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 6 02:21:56.493223 
containerd[1583]: time="2026-03-06T02:21:56.493137867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:bcf27a85543a3288f59a047d9cce5028,Namespace:kube-system,Attempt:0,} returns sandbox id \"4318d8e3446510debb6951f3ad5a7d9c5191d025d267d038d2e7276858d94fb7\"" Mar 6 02:21:56.493528 kubelet[2381]: E0306 02:21:56.493497 2381 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.33:6443/api/v1/nodes\": dial tcp 10.0.0.33:6443: connect: connection refused" node="localhost" Mar 6 02:21:56.493856 kubelet[2381]: E0306 02:21:56.493838 2381 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:21:56.497306 containerd[1583]: time="2026-03-06T02:21:56.497245401Z" level=info msg="CreateContainer within sandbox \"aacf3beb76c4d628bbedaed25aa7559ad12395d8a76baa618fe4c9fc5689bcfc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 6 02:21:56.499890 containerd[1583]: time="2026-03-06T02:21:56.499637186Z" level=info msg="Container acc6bb8af14a8eb67992a5a129c1e756403ab0974e21bfcd02f855a7523dddf0: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:21:56.501243 containerd[1583]: time="2026-03-06T02:21:56.501190438Z" level=info msg="CreateContainer within sandbox \"4318d8e3446510debb6951f3ad5a7d9c5191d025d267d038d2e7276858d94fb7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 6 02:21:56.507330 containerd[1583]: time="2026-03-06T02:21:56.507250803Z" level=info msg="Container 12cf2809e2dbe4ae597d61a2080d36ac66a7cb28d3a09a51e999d2ac39a72b80: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:21:56.516473 containerd[1583]: time="2026-03-06T02:21:56.516367188Z" level=info msg="Container 8b62e83534762981fdbb45b98c7288827b943ce30a0af6c20fc830ecba50f9f4: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:21:56.526233 
containerd[1583]: time="2026-03-06T02:21:56.526111570Z" level=info msg="CreateContainer within sandbox \"d68e827caa88923ee99460713e73f94e26893f5a4fe4e709c89283ee3e236b6c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"acc6bb8af14a8eb67992a5a129c1e756403ab0974e21bfcd02f855a7523dddf0\"" Mar 6 02:21:56.527373 containerd[1583]: time="2026-03-06T02:21:56.527291320Z" level=info msg="CreateContainer within sandbox \"aacf3beb76c4d628bbedaed25aa7559ad12395d8a76baa618fe4c9fc5689bcfc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"12cf2809e2dbe4ae597d61a2080d36ac66a7cb28d3a09a51e999d2ac39a72b80\"" Mar 6 02:21:56.527769 containerd[1583]: time="2026-03-06T02:21:56.527746503Z" level=info msg="StartContainer for \"acc6bb8af14a8eb67992a5a129c1e756403ab0974e21bfcd02f855a7523dddf0\"" Mar 6 02:21:56.528270 containerd[1583]: time="2026-03-06T02:21:56.528167792Z" level=info msg="StartContainer for \"12cf2809e2dbe4ae597d61a2080d36ac66a7cb28d3a09a51e999d2ac39a72b80\"" Mar 6 02:21:56.529553 containerd[1583]: time="2026-03-06T02:21:56.529470883Z" level=info msg="connecting to shim acc6bb8af14a8eb67992a5a129c1e756403ab0974e21bfcd02f855a7523dddf0" address="unix:///run/containerd/s/857293049d0701078c747d03beb9b76a6ea40ab57b00912311e859a491af22a1" protocol=ttrpc version=3 Mar 6 02:21:56.530619 containerd[1583]: time="2026-03-06T02:21:56.530539894Z" level=info msg="connecting to shim 12cf2809e2dbe4ae597d61a2080d36ac66a7cb28d3a09a51e999d2ac39a72b80" address="unix:///run/containerd/s/188320e8503cd49bfc9403f4aab4fd24fc34de25e86c5024a98e265acc14fc0d" protocol=ttrpc version=3 Mar 6 02:21:56.531853 containerd[1583]: time="2026-03-06T02:21:56.531773816Z" level=info msg="CreateContainer within sandbox \"4318d8e3446510debb6951f3ad5a7d9c5191d025d267d038d2e7276858d94fb7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8b62e83534762981fdbb45b98c7288827b943ce30a0af6c20fc830ecba50f9f4\"" Mar 6 
02:21:56.533728 containerd[1583]: time="2026-03-06T02:21:56.533654696Z" level=info msg="StartContainer for \"8b62e83534762981fdbb45b98c7288827b943ce30a0af6c20fc830ecba50f9f4\"" Mar 6 02:21:56.534873 containerd[1583]: time="2026-03-06T02:21:56.534851546Z" level=info msg="connecting to shim 8b62e83534762981fdbb45b98c7288827b943ce30a0af6c20fc830ecba50f9f4" address="unix:///run/containerd/s/dbde8de7eefa2564367f37cf9d07f8544fad1bef1478f7be8eba464222882cef" protocol=ttrpc version=3 Mar 6 02:21:56.557242 systemd[1]: Started cri-containerd-12cf2809e2dbe4ae597d61a2080d36ac66a7cb28d3a09a51e999d2ac39a72b80.scope - libcontainer container 12cf2809e2dbe4ae597d61a2080d36ac66a7cb28d3a09a51e999d2ac39a72b80. Mar 6 02:21:56.567239 systemd[1]: Started cri-containerd-8b62e83534762981fdbb45b98c7288827b943ce30a0af6c20fc830ecba50f9f4.scope - libcontainer container 8b62e83534762981fdbb45b98c7288827b943ce30a0af6c20fc830ecba50f9f4. Mar 6 02:21:56.568517 systemd[1]: Started cri-containerd-acc6bb8af14a8eb67992a5a129c1e756403ab0974e21bfcd02f855a7523dddf0.scope - libcontainer container acc6bb8af14a8eb67992a5a129c1e756403ab0974e21bfcd02f855a7523dddf0. 
Mar 6 02:21:56.649214 containerd[1583]: time="2026-03-06T02:21:56.647354992Z" level=info msg="StartContainer for \"12cf2809e2dbe4ae597d61a2080d36ac66a7cb28d3a09a51e999d2ac39a72b80\" returns successfully" Mar 6 02:21:56.655288 containerd[1583]: time="2026-03-06T02:21:56.655221299Z" level=info msg="StartContainer for \"8b62e83534762981fdbb45b98c7288827b943ce30a0af6c20fc830ecba50f9f4\" returns successfully" Mar 6 02:21:56.668107 kubelet[2381]: E0306 02:21:56.667643 2381 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 02:21:56.668107 kubelet[2381]: E0306 02:21:56.667803 2381 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:21:56.672988 kubelet[2381]: E0306 02:21:56.672554 2381 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 02:21:56.672988 kubelet[2381]: E0306 02:21:56.672878 2381 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:21:56.680691 containerd[1583]: time="2026-03-06T02:21:56.680569571Z" level=info msg="StartContainer for \"acc6bb8af14a8eb67992a5a129c1e756403ab0974e21bfcd02f855a7523dddf0\" returns successfully" Mar 6 02:21:57.296544 kubelet[2381]: I0306 02:21:57.296493 2381 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 6 02:21:57.676153 kubelet[2381]: E0306 02:21:57.675180 2381 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 02:21:57.676153 kubelet[2381]: E0306 02:21:57.675785 2381 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:21:57.676153 kubelet[2381]: E0306 02:21:57.676085 2381 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 02:21:57.676338 kubelet[2381]: E0306 02:21:57.676260 2381 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:21:58.173351 kubelet[2381]: E0306 02:21:58.173257 2381 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 6 02:21:58.255772 kubelet[2381]: I0306 02:21:58.255693 2381 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 6 02:21:58.255772 kubelet[2381]: E0306 02:21:58.255740 2381 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 6 02:21:58.324144 kubelet[2381]: I0306 02:21:58.324030 2381 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 6 02:21:58.331917 kubelet[2381]: E0306 02:21:58.331825 2381 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 6 02:21:58.331917 kubelet[2381]: I0306 02:21:58.331882 2381 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 6 02:21:58.333614 kubelet[2381]: E0306 02:21:58.333578 2381 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 6 
02:21:58.333614 kubelet[2381]: I0306 02:21:58.333615 2381 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 6 02:21:58.335696 kubelet[2381]: E0306 02:21:58.335588 2381 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 6 02:21:58.396222 kubelet[2381]: I0306 02:21:58.396179 2381 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 6 02:21:58.398975 kubelet[2381]: E0306 02:21:58.398931 2381 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 6 02:21:58.399345 kubelet[2381]: E0306 02:21:58.399249 2381 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:21:58.607637 kubelet[2381]: I0306 02:21:58.607572 2381 apiserver.go:52] "Watching apiserver" Mar 6 02:21:58.624250 kubelet[2381]: I0306 02:21:58.624198 2381 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 6 02:21:58.675423 kubelet[2381]: I0306 02:21:58.675351 2381 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 6 02:21:58.675423 kubelet[2381]: I0306 02:21:58.675413 2381 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 6 02:21:58.678249 kubelet[2381]: E0306 02:21:58.678184 2381 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 6 02:21:58.678344 
kubelet[2381]: E0306 02:21:58.678299 2381 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 6 02:21:58.678344 kubelet[2381]: E0306 02:21:58.678335 2381 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:21:58.678459 kubelet[2381]: E0306 02:21:58.678438 2381 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:21:59.677903 kubelet[2381]: I0306 02:21:59.677722 2381 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 6 02:21:59.685166 kubelet[2381]: E0306 02:21:59.685113 2381 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:22:00.893475 kubelet[2381]: E0306 02:22:00.893004 2381 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:22:03.770464 kubelet[2381]: E0306 02:22:03.768728 2381 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.916s" Mar 6 02:22:03.795936 kubelet[2381]: E0306 02:22:03.795876 2381 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:22:03.955145 systemd[1]: Reload requested from client PID 2666 ('systemctl') (unit session-7.scope)... Mar 6 02:22:03.955237 systemd[1]: Reloading... 
Mar 6 02:22:04.179156 zram_generator::config[2712]: No configuration found. Mar 6 02:22:04.540455 systemd[1]: Reloading finished in 584 ms. Mar 6 02:22:04.579230 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 02:22:04.599882 systemd[1]: kubelet.service: Deactivated successfully. Mar 6 02:22:04.600430 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 02:22:04.600763 systemd[1]: kubelet.service: Consumed 2.820s CPU time, 133.1M memory peak. Mar 6 02:22:04.614928 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 02:22:05.083877 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 02:22:05.097498 (kubelet)[2754]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 6 02:22:05.219424 kubelet[2754]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 6 02:22:05.219424 kubelet[2754]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 6 02:22:05.219424 kubelet[2754]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 6 02:22:05.219960 kubelet[2754]: I0306 02:22:05.219595 2754 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 6 02:22:05.228581 kubelet[2754]: I0306 02:22:05.228512 2754 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 6 02:22:05.228581 kubelet[2754]: I0306 02:22:05.228548 2754 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 6 02:22:05.229157 kubelet[2754]: I0306 02:22:05.229037 2754 server.go:956] "Client rotation is on, will bootstrap in background" Mar 6 02:22:05.231232 kubelet[2754]: I0306 02:22:05.231169 2754 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 6 02:22:05.240069 kubelet[2754]: I0306 02:22:05.239983 2754 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 6 02:22:05.247907 kubelet[2754]: I0306 02:22:05.247121 2754 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 6 02:22:05.281890 kubelet[2754]: I0306 02:22:05.281508 2754 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 6 02:22:05.286501 kubelet[2754]: I0306 02:22:05.284825 2754 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 6 02:22:05.292729 kubelet[2754]: I0306 02:22:05.287649 2754 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 6 02:22:05.294245 kubelet[2754]: I0306 02:22:05.292918 2754 topology_manager.go:138] "Creating topology manager with none policy" Mar 6 02:22:05.294245 
kubelet[2754]: I0306 02:22:05.292986 2754 container_manager_linux.go:303] "Creating device plugin manager" Mar 6 02:22:05.294245 kubelet[2754]: I0306 02:22:05.293387 2754 state_mem.go:36] "Initialized new in-memory state store" Mar 6 02:22:05.294245 kubelet[2754]: I0306 02:22:05.293935 2754 kubelet.go:480] "Attempting to sync node with API server" Mar 6 02:22:05.294245 kubelet[2754]: I0306 02:22:05.294020 2754 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 6 02:22:05.294245 kubelet[2754]: I0306 02:22:05.294109 2754 kubelet.go:386] "Adding apiserver pod source" Mar 6 02:22:05.294245 kubelet[2754]: I0306 02:22:05.294150 2754 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 6 02:22:05.331717 kubelet[2754]: I0306 02:22:05.330790 2754 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 6 02:22:05.335114 kubelet[2754]: I0306 02:22:05.334336 2754 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 6 02:22:05.346893 kubelet[2754]: I0306 02:22:05.346819 2754 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 6 02:22:05.346975 kubelet[2754]: I0306 02:22:05.346907 2754 server.go:1289] "Started kubelet" Mar 6 02:22:05.347472 kubelet[2754]: I0306 02:22:05.347291 2754 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 6 02:22:05.348818 kubelet[2754]: I0306 02:22:05.348362 2754 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 6 02:22:05.351098 kubelet[2754]: I0306 02:22:05.349735 2754 server.go:317] "Adding debug handlers to kubelet server" Mar 6 02:22:05.351098 kubelet[2754]: I0306 02:22:05.350159 2754 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 6 02:22:05.354372 kubelet[2754]: I0306 
02:22:05.353833 2754 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 6 02:22:05.355401 kubelet[2754]: I0306 02:22:05.354378 2754 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 6 02:22:05.355401 kubelet[2754]: I0306 02:22:05.354979 2754 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 6 02:22:05.356382 kubelet[2754]: E0306 02:22:05.356255 2754 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 6 02:22:05.359138 kubelet[2754]: I0306 02:22:05.358014 2754 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 6 02:22:05.359138 kubelet[2754]: I0306 02:22:05.358434 2754 reconciler.go:26] "Reconciler: start to sync state" Mar 6 02:22:05.361122 kubelet[2754]: I0306 02:22:05.361089 2754 factory.go:223] Registration of the systemd container factory successfully Mar 6 02:22:05.361263 kubelet[2754]: I0306 02:22:05.361221 2754 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 6 02:22:05.364228 kubelet[2754]: I0306 02:22:05.364206 2754 factory.go:223] Registration of the containerd container factory successfully Mar 6 02:22:05.386374 kubelet[2754]: I0306 02:22:05.386102 2754 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 6 02:22:05.392222 kubelet[2754]: I0306 02:22:05.391943 2754 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Mar 6 02:22:05.393258 kubelet[2754]: I0306 02:22:05.393177 2754 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 6 02:22:05.393433 kubelet[2754]: I0306 02:22:05.393291 2754 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 6 02:22:05.393433 kubelet[2754]: I0306 02:22:05.393385 2754 kubelet.go:2436] "Starting kubelet main sync loop" Mar 6 02:22:05.394418 kubelet[2754]: E0306 02:22:05.393710 2754 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 6 02:22:05.494677 kubelet[2754]: E0306 02:22:05.494460 2754 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 6 02:22:05.568118 kubelet[2754]: I0306 02:22:05.567775 2754 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 6 02:22:05.568118 kubelet[2754]: I0306 02:22:05.567821 2754 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 6 02:22:05.568118 kubelet[2754]: I0306 02:22:05.567887 2754 state_mem.go:36] "Initialized new in-memory state store" Mar 6 02:22:05.568910 kubelet[2754]: I0306 02:22:05.568345 2754 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 6 02:22:05.568910 kubelet[2754]: I0306 02:22:05.568357 2754 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 6 02:22:05.568910 kubelet[2754]: I0306 02:22:05.568379 2754 policy_none.go:49] "None policy: Start" Mar 6 02:22:05.568910 kubelet[2754]: I0306 02:22:05.568389 2754 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 6 02:22:05.568910 kubelet[2754]: I0306 02:22:05.568421 2754 state_mem.go:35] "Initializing new in-memory state store" Mar 6 02:22:05.568910 kubelet[2754]: I0306 02:22:05.568527 2754 state_mem.go:75] "Updated machine memory state" Mar 6 02:22:05.629387 kubelet[2754]: E0306 02:22:05.628255 2754 
manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 6 02:22:05.631472 kubelet[2754]: I0306 02:22:05.631419 2754 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 6 02:22:05.631554 kubelet[2754]: I0306 02:22:05.631467 2754 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 6 02:22:05.632599 kubelet[2754]: I0306 02:22:05.632581 2754 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 6 02:22:05.638087 kubelet[2754]: E0306 02:22:05.636861 2754 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 6 02:22:05.704142 kubelet[2754]: I0306 02:22:05.703988 2754 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 6 02:22:05.704288 kubelet[2754]: I0306 02:22:05.704271 2754 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 6 02:22:05.704588 kubelet[2754]: I0306 02:22:05.704475 2754 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 6 02:22:05.715849 kubelet[2754]: E0306 02:22:05.715697 2754 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 6 02:22:05.772968 kubelet[2754]: I0306 02:22:05.771763 2754 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 6 02:22:05.836976 kubelet[2754]: I0306 02:22:05.836658 2754 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 6 02:22:05.836976 kubelet[2754]: I0306 02:22:05.836820 2754 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 6 02:22:05.865711 kubelet[2754]: I0306 02:22:05.865565 2754 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bcf27a85543a3288f59a047d9cce5028-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"bcf27a85543a3288f59a047d9cce5028\") " pod="kube-system/kube-apiserver-localhost" Mar 6 02:22:05.865868 kubelet[2754]: I0306 02:22:05.865722 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bcf27a85543a3288f59a047d9cce5028-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"bcf27a85543a3288f59a047d9cce5028\") " pod="kube-system/kube-apiserver-localhost" Mar 6 02:22:05.865868 kubelet[2754]: I0306 02:22:05.865762 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 02:22:05.865868 kubelet[2754]: I0306 02:22:05.865851 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 02:22:05.866003 kubelet[2754]: I0306 02:22:05.865878 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 02:22:05.866003 kubelet[2754]: I0306 02:22:05.865905 2754 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 6 02:22:05.866003 kubelet[2754]: I0306 02:22:05.865928 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bcf27a85543a3288f59a047d9cce5028-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"bcf27a85543a3288f59a047d9cce5028\") " pod="kube-system/kube-apiserver-localhost" Mar 6 02:22:05.866003 kubelet[2754]: I0306 02:22:05.865987 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 02:22:05.866214 kubelet[2754]: I0306 02:22:05.866014 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 02:22:06.044723 update_engine[1562]: I20260306 02:22:06.039898 1562 update_attempter.cc:509] Updating boot flags... 
Mar 6 02:22:06.060119 kubelet[2754]: E0306 02:22:06.059499 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:22:06.296484 kubelet[2754]: I0306 02:22:06.295500 2754 apiserver.go:52] "Watching apiserver" Mar 6 02:22:06.318317 kubelet[2754]: E0306 02:22:06.317424 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:22:06.318317 kubelet[2754]: E0306 02:22:06.317567 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:22:06.387662 kubelet[2754]: I0306 02:22:06.360833 2754 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 6 02:22:06.537300 kubelet[2754]: E0306 02:22:06.537231 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:22:06.537500 kubelet[2754]: E0306 02:22:06.537486 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:22:06.537796 kubelet[2754]: E0306 02:22:06.537783 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:22:06.565760 kubelet[2754]: I0306 02:22:06.565561 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.565538482 podStartE2EDuration="1.565538482s" podCreationTimestamp="2026-03-06 02:22:05 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 02:22:06.556212542 +0000 UTC m=+1.427824828" watchObservedRunningTime="2026-03-06 02:22:06.565538482 +0000 UTC m=+1.437150767" Mar 6 02:22:06.575147 kubelet[2754]: I0306 02:22:06.574988 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.574975583 podStartE2EDuration="1.574975583s" podCreationTimestamp="2026-03-06 02:22:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 02:22:06.574888682 +0000 UTC m=+1.446500968" watchObservedRunningTime="2026-03-06 02:22:06.574975583 +0000 UTC m=+1.446587869" Mar 6 02:22:06.575656 kubelet[2754]: I0306 02:22:06.575566 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=7.5755598 podStartE2EDuration="7.5755598s" podCreationTimestamp="2026-03-06 02:21:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 02:22:06.565846366 +0000 UTC m=+1.437458653" watchObservedRunningTime="2026-03-06 02:22:06.5755598 +0000 UTC m=+1.447172086" Mar 6 02:22:07.570795 kubelet[2754]: E0306 02:22:07.570484 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:22:07.570795 kubelet[2754]: E0306 02:22:07.570673 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:22:07.649919 kubelet[2754]: E0306 02:22:07.649783 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 02:22:08.630725 kubelet[2754]: E0306 02:22:08.584665 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 02:22:08.630725 kubelet[2754]: E0306 02:22:08.587041 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 02:22:08.765891 kubelet[2754]: I0306 02:22:08.765815 2754 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 6 02:22:08.766998 containerd[1583]: time="2026-03-06T02:22:08.766923379Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 6 02:22:08.767707 kubelet[2754]: I0306 02:22:08.767639 2754 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 6 02:22:09.575450 kubelet[2754]: E0306 02:22:09.575232 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 02:22:09.781464 systemd[1]: Created slice kubepods-besteffort-pod17e31f39_8739_418e_abe4_4d9325aac9ea.slice - libcontainer container kubepods-besteffort-pod17e31f39_8739_418e_abe4_4d9325aac9ea.slice.
Mar 6 02:22:09.903442 systemd[1]: Created slice kubepods-besteffort-pod7fda148f_5173_40d4_a4b3_d5c64700d155.slice - libcontainer container kubepods-besteffort-pod7fda148f_5173_40d4_a4b3_d5c64700d155.slice.
Mar 6 02:22:09.918375 kubelet[2754]: I0306 02:22:09.918273 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/17e31f39-8739-418e-abe4-4d9325aac9ea-kube-proxy\") pod \"kube-proxy-c9rhg\" (UID: \"17e31f39-8739-418e-abe4-4d9325aac9ea\") " pod="kube-system/kube-proxy-c9rhg"
Mar 6 02:22:09.918375 kubelet[2754]: I0306 02:22:09.918329 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/17e31f39-8739-418e-abe4-4d9325aac9ea-xtables-lock\") pod \"kube-proxy-c9rhg\" (UID: \"17e31f39-8739-418e-abe4-4d9325aac9ea\") " pod="kube-system/kube-proxy-c9rhg"
Mar 6 02:22:09.919000 kubelet[2754]: I0306 02:22:09.918547 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/17e31f39-8739-418e-abe4-4d9325aac9ea-lib-modules\") pod \"kube-proxy-c9rhg\" (UID: \"17e31f39-8739-418e-abe4-4d9325aac9ea\") " pod="kube-system/kube-proxy-c9rhg"
Mar 6 02:22:09.919000 kubelet[2754]: I0306 02:22:09.918703 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpdvc\" (UniqueName: \"kubernetes.io/projected/17e31f39-8739-418e-abe4-4d9325aac9ea-kube-api-access-jpdvc\") pod \"kube-proxy-c9rhg\" (UID: \"17e31f39-8739-418e-abe4-4d9325aac9ea\") " pod="kube-system/kube-proxy-c9rhg"
Mar 6 02:22:10.019977 kubelet[2754]: I0306 02:22:10.019303 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plqwd\" (UniqueName: \"kubernetes.io/projected/7fda148f-5173-40d4-a4b3-d5c64700d155-kube-api-access-plqwd\") pod \"tigera-operator-6bf85f8dd-gt6wr\" (UID: \"7fda148f-5173-40d4-a4b3-d5c64700d155\") " pod="tigera-operator/tigera-operator-6bf85f8dd-gt6wr"
Mar 6 02:22:10.019977 kubelet[2754]: I0306 02:22:10.019365 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7fda148f-5173-40d4-a4b3-d5c64700d155-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-gt6wr\" (UID: \"7fda148f-5173-40d4-a4b3-d5c64700d155\") " pod="tigera-operator/tigera-operator-6bf85f8dd-gt6wr"
Mar 6 02:22:10.092865 kubelet[2754]: E0306 02:22:10.092734 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 02:22:10.094087 containerd[1583]: time="2026-03-06T02:22:10.093955464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c9rhg,Uid:17e31f39-8739-418e-abe4-4d9325aac9ea,Namespace:kube-system,Attempt:0,}"
Mar 6 02:22:10.144116 containerd[1583]: time="2026-03-06T02:22:10.143400089Z" level=info msg="connecting to shim 3bf7a4fcbd292e0f03bee5a2076507924dc3fa8b135d6bcc6fd103cd9e58351d" address="unix:///run/containerd/s/5065bf5e35794bc457895d049707de5fd9d3699123b12335ba0a69b07a0599e6" namespace=k8s.io protocol=ttrpc version=3
Mar 6 02:22:10.196377 systemd[1]: Started cri-containerd-3bf7a4fcbd292e0f03bee5a2076507924dc3fa8b135d6bcc6fd103cd9e58351d.scope - libcontainer container 3bf7a4fcbd292e0f03bee5a2076507924dc3fa8b135d6bcc6fd103cd9e58351d.
Mar 6 02:22:10.215544 containerd[1583]: time="2026-03-06T02:22:10.215490436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-gt6wr,Uid:7fda148f-5173-40d4-a4b3-d5c64700d155,Namespace:tigera-operator,Attempt:0,}"
Mar 6 02:22:10.248394 containerd[1583]: time="2026-03-06T02:22:10.248249582Z" level=info msg="connecting to shim 3fd3162eff5412ba2c94aee6753f4f9839504ea3380535959f5ec4f26d0f2d91" address="unix:///run/containerd/s/26d4f430fd49eadabdf7d071fa59a6903f0201242a8baba87c65f089310f491f" namespace=k8s.io protocol=ttrpc version=3
Mar 6 02:22:10.261202 containerd[1583]: time="2026-03-06T02:22:10.261129586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c9rhg,Uid:17e31f39-8739-418e-abe4-4d9325aac9ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"3bf7a4fcbd292e0f03bee5a2076507924dc3fa8b135d6bcc6fd103cd9e58351d\""
Mar 6 02:22:10.264825 kubelet[2754]: E0306 02:22:10.264755 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 02:22:10.274339 containerd[1583]: time="2026-03-06T02:22:10.274032296Z" level=info msg="CreateContainer within sandbox \"3bf7a4fcbd292e0f03bee5a2076507924dc3fa8b135d6bcc6fd103cd9e58351d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 6 02:22:10.286298 systemd[1]: Started cri-containerd-3fd3162eff5412ba2c94aee6753f4f9839504ea3380535959f5ec4f26d0f2d91.scope - libcontainer container 3fd3162eff5412ba2c94aee6753f4f9839504ea3380535959f5ec4f26d0f2d91.
Mar 6 02:22:10.311157 containerd[1583]: time="2026-03-06T02:22:10.310999729Z" level=info msg="Container 8a105018765c6616a9236de8665a08248cfcdbdfc5d55e47bd274bf4b0455975: CDI devices from CRI Config.CDIDevices: []"
Mar 6 02:22:10.376443 containerd[1583]: time="2026-03-06T02:22:10.374927130Z" level=info msg="CreateContainer within sandbox \"3bf7a4fcbd292e0f03bee5a2076507924dc3fa8b135d6bcc6fd103cd9e58351d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8a105018765c6616a9236de8665a08248cfcdbdfc5d55e47bd274bf4b0455975\""
Mar 6 02:22:10.386133 containerd[1583]: time="2026-03-06T02:22:10.385676378Z" level=info msg="StartContainer for \"8a105018765c6616a9236de8665a08248cfcdbdfc5d55e47bd274bf4b0455975\""
Mar 6 02:22:10.387730 containerd[1583]: time="2026-03-06T02:22:10.387679349Z" level=info msg="connecting to shim 8a105018765c6616a9236de8665a08248cfcdbdfc5d55e47bd274bf4b0455975" address="unix:///run/containerd/s/5065bf5e35794bc457895d049707de5fd9d3699123b12335ba0a69b07a0599e6" protocol=ttrpc version=3
Mar 6 02:22:10.505411 containerd[1583]: time="2026-03-06T02:22:10.505210029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-gt6wr,Uid:7fda148f-5173-40d4-a4b3-d5c64700d155,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3fd3162eff5412ba2c94aee6753f4f9839504ea3380535959f5ec4f26d0f2d91\""
Mar 6 02:22:10.510116 containerd[1583]: time="2026-03-06T02:22:10.509022789Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Mar 6 02:22:10.510642 systemd[1]: Started cri-containerd-8a105018765c6616a9236de8665a08248cfcdbdfc5d55e47bd274bf4b0455975.scope - libcontainer container 8a105018765c6616a9236de8665a08248cfcdbdfc5d55e47bd274bf4b0455975.
Mar 6 02:22:10.618154 containerd[1583]: time="2026-03-06T02:22:10.618089199Z" level=info msg="StartContainer for \"8a105018765c6616a9236de8665a08248cfcdbdfc5d55e47bd274bf4b0455975\" returns successfully"
Mar 6 02:22:10.840647 kubelet[2754]: E0306 02:22:10.840158 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 02:22:11.588127 kubelet[2754]: E0306 02:22:11.588013 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 02:22:11.588831 kubelet[2754]: E0306 02:22:11.588196 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 02:22:11.838027 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2891264124.mount: Deactivated successfully.
Mar 6 02:22:12.590094 kubelet[2754]: E0306 02:22:12.589973 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 02:22:12.590532 kubelet[2754]: E0306 02:22:12.590298 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 02:22:12.729291 containerd[1583]: time="2026-03-06T02:22:12.729232465Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:22:12.730215 containerd[1583]: time="2026-03-06T02:22:12.730169792Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Mar 6 02:22:12.731841 containerd[1583]: time="2026-03-06T02:22:12.731785462Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:22:12.734441 containerd[1583]: time="2026-03-06T02:22:12.734315023Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:22:12.735142 containerd[1583]: time="2026-03-06T02:22:12.734987249Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.225882828s"
Mar 6 02:22:12.735142 containerd[1583]: time="2026-03-06T02:22:12.735084821Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Mar 6 02:22:12.740519 containerd[1583]: time="2026-03-06T02:22:12.740476715Z" level=info msg="CreateContainer within sandbox \"3fd3162eff5412ba2c94aee6753f4f9839504ea3380535959f5ec4f26d0f2d91\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Mar 6 02:22:12.757210 containerd[1583]: time="2026-03-06T02:22:12.757167156Z" level=info msg="Container 21f0741749926c07f65d547428e0b84d5cab5b13eee003d8dbdf900df851ed33: CDI devices from CRI Config.CDIDevices: []"
Mar 6 02:22:12.765457 containerd[1583]: time="2026-03-06T02:22:12.765392416Z" level=info msg="CreateContainer within sandbox \"3fd3162eff5412ba2c94aee6753f4f9839504ea3380535959f5ec4f26d0f2d91\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"21f0741749926c07f65d547428e0b84d5cab5b13eee003d8dbdf900df851ed33\""
Mar 6 02:22:12.766157 containerd[1583]: time="2026-03-06T02:22:12.766121304Z" level=info msg="StartContainer for \"21f0741749926c07f65d547428e0b84d5cab5b13eee003d8dbdf900df851ed33\""
Mar 6 02:22:12.767136 containerd[1583]: time="2026-03-06T02:22:12.767004699Z" level=info msg="connecting to shim 21f0741749926c07f65d547428e0b84d5cab5b13eee003d8dbdf900df851ed33" address="unix:///run/containerd/s/26d4f430fd49eadabdf7d071fa59a6903f0201242a8baba87c65f089310f491f" protocol=ttrpc version=3
Mar 6 02:22:12.848398 systemd[1]: Started cri-containerd-21f0741749926c07f65d547428e0b84d5cab5b13eee003d8dbdf900df851ed33.scope - libcontainer container 21f0741749926c07f65d547428e0b84d5cab5b13eee003d8dbdf900df851ed33.
Mar 6 02:22:12.897570 containerd[1583]: time="2026-03-06T02:22:12.897505561Z" level=info msg="StartContainer for \"21f0741749926c07f65d547428e0b84d5cab5b13eee003d8dbdf900df851ed33\" returns successfully"
Mar 6 02:22:13.607484 kubelet[2754]: I0306 02:22:13.607302 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-c9rhg" podStartSLOduration=4.607282005 podStartE2EDuration="4.607282005s" podCreationTimestamp="2026-03-06 02:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 02:22:11.599007644 +0000 UTC m=+6.470620141" watchObservedRunningTime="2026-03-06 02:22:13.607282005 +0000 UTC m=+8.478894290"
Mar 6 02:22:19.995804 sudo[1794]: pam_unix(sudo:session): session closed for user root
Mar 6 02:22:19.998514 sshd[1793]: Connection closed by 10.0.0.1 port 33928
Mar 6 02:22:20.002413 sshd-session[1790]: pam_unix(sshd:session): session closed for user core
Mar 6 02:22:20.015547 systemd[1]: sshd@6-10.0.0.33:22-10.0.0.1:33928.service: Deactivated successfully.
Mar 6 02:22:20.019231 systemd[1]: session-7.scope: Deactivated successfully.
Mar 6 02:22:20.020291 systemd[1]: session-7.scope: Consumed 8.714s CPU time, 231.7M memory peak.
Mar 6 02:22:20.024228 systemd-logind[1555]: Session 7 logged out. Waiting for processes to exit.
Mar 6 02:22:20.027550 systemd-logind[1555]: Removed session 7.
Mar 6 02:22:22.616241 kubelet[2754]: I0306 02:22:22.615977 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-gt6wr" podStartSLOduration=11.387872601 podStartE2EDuration="13.615955258s" podCreationTimestamp="2026-03-06 02:22:09 +0000 UTC" firstStartedPulling="2026-03-06 02:22:10.507937687 +0000 UTC m=+5.379549972" lastFinishedPulling="2026-03-06 02:22:12.736020343 +0000 UTC m=+7.607632629" observedRunningTime="2026-03-06 02:22:13.607464253 +0000 UTC m=+8.479076539" watchObservedRunningTime="2026-03-06 02:22:22.615955258 +0000 UTC m=+17.487567574"
Mar 6 02:22:22.657358 systemd[1]: Created slice kubepods-besteffort-pod6c4d03b2_6da3_4d4f_98b6_af0dc634b961.slice - libcontainer container kubepods-besteffort-pod6c4d03b2_6da3_4d4f_98b6_af0dc634b961.slice.
Mar 6 02:22:22.770377 kubelet[2754]: I0306 02:22:22.770276 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6c4d03b2-6da3-4d4f-98b6-af0dc634b961-typha-certs\") pod \"calico-typha-66b55d79cd-mzj6b\" (UID: \"6c4d03b2-6da3-4d4f-98b6-af0dc634b961\") " pod="calico-system/calico-typha-66b55d79cd-mzj6b"
Mar 6 02:22:22.770525 kubelet[2754]: I0306 02:22:22.770381 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c4d03b2-6da3-4d4f-98b6-af0dc634b961-tigera-ca-bundle\") pod \"calico-typha-66b55d79cd-mzj6b\" (UID: \"6c4d03b2-6da3-4d4f-98b6-af0dc634b961\") " pod="calico-system/calico-typha-66b55d79cd-mzj6b"
Mar 6 02:22:22.770525 kubelet[2754]: I0306 02:22:22.770421 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lmlq\" (UniqueName: \"kubernetes.io/projected/6c4d03b2-6da3-4d4f-98b6-af0dc634b961-kube-api-access-7lmlq\") pod \"calico-typha-66b55d79cd-mzj6b\" (UID: \"6c4d03b2-6da3-4d4f-98b6-af0dc634b961\") " pod="calico-system/calico-typha-66b55d79cd-mzj6b"
Mar 6 02:22:22.794426 systemd[1]: Created slice kubepods-besteffort-pod2b0e8b51_8ba0_4f72_8cde_3874382357dc.slice - libcontainer container kubepods-besteffort-pod2b0e8b51_8ba0_4f72_8cde_3874382357dc.slice.
Mar 6 02:22:22.872180 kubelet[2754]: I0306 02:22:22.871828 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/2b0e8b51-8ba0-4f72-8cde-3874382357dc-bpffs\") pod \"calico-node-nxsxm\" (UID: \"2b0e8b51-8ba0-4f72-8cde-3874382357dc\") " pod="calico-system/calico-node-nxsxm"
Mar 6 02:22:22.872180 kubelet[2754]: I0306 02:22:22.871898 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2b0e8b51-8ba0-4f72-8cde-3874382357dc-lib-modules\") pod \"calico-node-nxsxm\" (UID: \"2b0e8b51-8ba0-4f72-8cde-3874382357dc\") " pod="calico-system/calico-node-nxsxm"
Mar 6 02:22:22.872180 kubelet[2754]: I0306 02:22:22.871919 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2b0e8b51-8ba0-4f72-8cde-3874382357dc-policysync\") pod \"calico-node-nxsxm\" (UID: \"2b0e8b51-8ba0-4f72-8cde-3874382357dc\") " pod="calico-system/calico-node-nxsxm"
Mar 6 02:22:22.872180 kubelet[2754]: I0306 02:22:22.871944 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2b0e8b51-8ba0-4f72-8cde-3874382357dc-var-lib-calico\") pod \"calico-node-nxsxm\" (UID: \"2b0e8b51-8ba0-4f72-8cde-3874382357dc\") " pod="calico-system/calico-node-nxsxm"
Mar 6 02:22:22.872180 kubelet[2754]: I0306 02:22:22.871976 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2b0e8b51-8ba0-4f72-8cde-3874382357dc-cni-bin-dir\") pod \"calico-node-nxsxm\" (UID: \"2b0e8b51-8ba0-4f72-8cde-3874382357dc\") " pod="calico-system/calico-node-nxsxm"
Mar 6 02:22:22.872503 kubelet[2754]: I0306 02:22:22.871996 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2b0e8b51-8ba0-4f72-8cde-3874382357dc-cni-net-dir\") pod \"calico-node-nxsxm\" (UID: \"2b0e8b51-8ba0-4f72-8cde-3874382357dc\") " pod="calico-system/calico-node-nxsxm"
Mar 6 02:22:22.872503 kubelet[2754]: I0306 02:22:22.872019 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2b0e8b51-8ba0-4f72-8cde-3874382357dc-flexvol-driver-host\") pod \"calico-node-nxsxm\" (UID: \"2b0e8b51-8ba0-4f72-8cde-3874382357dc\") " pod="calico-system/calico-node-nxsxm"
Mar 6 02:22:22.872503 kubelet[2754]: I0306 02:22:22.872122 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2b0e8b51-8ba0-4f72-8cde-3874382357dc-var-run-calico\") pod \"calico-node-nxsxm\" (UID: \"2b0e8b51-8ba0-4f72-8cde-3874382357dc\") " pod="calico-system/calico-node-nxsxm"
Mar 6 02:22:22.872503 kubelet[2754]: I0306 02:22:22.872149 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/2b0e8b51-8ba0-4f72-8cde-3874382357dc-nodeproc\") pod \"calico-node-nxsxm\" (UID: \"2b0e8b51-8ba0-4f72-8cde-3874382357dc\") " pod="calico-system/calico-node-nxsxm"
Mar 6 02:22:22.873823 kubelet[2754]: I0306 02:22:22.873308 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2b0e8b51-8ba0-4f72-8cde-3874382357dc-cni-log-dir\") pod \"calico-node-nxsxm\" (UID: \"2b0e8b51-8ba0-4f72-8cde-3874382357dc\") " pod="calico-system/calico-node-nxsxm"
Mar 6 02:22:22.873823 kubelet[2754]: I0306 02:22:22.873355 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2b0e8b51-8ba0-4f72-8cde-3874382357dc-node-certs\") pod \"calico-node-nxsxm\" (UID: \"2b0e8b51-8ba0-4f72-8cde-3874382357dc\") " pod="calico-system/calico-node-nxsxm"
Mar 6 02:22:22.873823 kubelet[2754]: I0306 02:22:22.873426 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b0e8b51-8ba0-4f72-8cde-3874382357dc-tigera-ca-bundle\") pod \"calico-node-nxsxm\" (UID: \"2b0e8b51-8ba0-4f72-8cde-3874382357dc\") " pod="calico-system/calico-node-nxsxm"
Mar 6 02:22:22.873823 kubelet[2754]: I0306 02:22:22.873442 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzrjp\" (UniqueName: \"kubernetes.io/projected/2b0e8b51-8ba0-4f72-8cde-3874382357dc-kube-api-access-qzrjp\") pod \"calico-node-nxsxm\" (UID: \"2b0e8b51-8ba0-4f72-8cde-3874382357dc\") " pod="calico-system/calico-node-nxsxm"
Mar 6 02:22:22.873823 kubelet[2754]: I0306 02:22:22.873468 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/2b0e8b51-8ba0-4f72-8cde-3874382357dc-sys-fs\") pod \"calico-node-nxsxm\" (UID: \"2b0e8b51-8ba0-4f72-8cde-3874382357dc\") " pod="calico-system/calico-node-nxsxm"
Mar 6 02:22:22.874022 kubelet[2754]: I0306 02:22:22.873482 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2b0e8b51-8ba0-4f72-8cde-3874382357dc-xtables-lock\") pod \"calico-node-nxsxm\" (UID: \"2b0e8b51-8ba0-4f72-8cde-3874382357dc\") " pod="calico-system/calico-node-nxsxm"
Mar 6 02:22:22.900924 kubelet[2754]: E0306 02:22:22.899099 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cnt7s" podUID="26067065-4369-4983-afda-5a2ac20a4352"
Mar 6 02:22:22.968428 kubelet[2754]: E0306 02:22:22.968263 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 02:22:22.969491 containerd[1583]: time="2026-03-06T02:22:22.969397619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66b55d79cd-mzj6b,Uid:6c4d03b2-6da3-4d4f-98b6-af0dc634b961,Namespace:calico-system,Attempt:0,}"
Mar 6 02:22:22.976639 kubelet[2754]: I0306 02:22:22.975885 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/26067065-4369-4983-afda-5a2ac20a4352-varrun\") pod \"csi-node-driver-cnt7s\" (UID: \"26067065-4369-4983-afda-5a2ac20a4352\") " pod="calico-system/csi-node-driver-cnt7s"
Mar 6 02:22:22.976639 kubelet[2754]: I0306 02:22:22.975930 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds2k5\" (UniqueName: \"kubernetes.io/projected/26067065-4369-4983-afda-5a2ac20a4352-kube-api-access-ds2k5\") pod \"csi-node-driver-cnt7s\" (UID: \"26067065-4369-4983-afda-5a2ac20a4352\") " pod="calico-system/csi-node-driver-cnt7s"
Mar 6 02:22:22.976639 kubelet[2754]: I0306 02:22:22.976025 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/26067065-4369-4983-afda-5a2ac20a4352-registration-dir\") pod \"csi-node-driver-cnt7s\" (UID: \"26067065-4369-4983-afda-5a2ac20a4352\") " pod="calico-system/csi-node-driver-cnt7s"
Mar 6 02:22:22.976639 kubelet[2754]: I0306 02:22:22.976114 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/26067065-4369-4983-afda-5a2ac20a4352-socket-dir\") pod \"csi-node-driver-cnt7s\" (UID: \"26067065-4369-4983-afda-5a2ac20a4352\") " pod="calico-system/csi-node-driver-cnt7s"
Mar 6 02:22:22.976639 kubelet[2754]: I0306 02:22:22.976235 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/26067065-4369-4983-afda-5a2ac20a4352-kubelet-dir\") pod \"csi-node-driver-cnt7s\" (UID: \"26067065-4369-4983-afda-5a2ac20a4352\") " pod="calico-system/csi-node-driver-cnt7s"
Mar 6 02:22:22.988364 kubelet[2754]: E0306 02:22:22.988231 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 6 02:22:22.988364 kubelet[2754]: W0306 02:22:22.988264 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 6 02:22:22.988364 kubelet[2754]: E0306 02:22:22.988317 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 6 02:22:22.995756 kubelet[2754]: E0306 02:22:22.995703 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 6 02:22:22.996012 kubelet[2754]: W0306 02:22:22.995997 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 6 02:22:22.996533 kubelet[2754]: E0306 02:22:22.996259 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 6 02:22:23.041506 kubelet[2754]: E0306 02:22:23.040783 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 6 02:22:23.041506 kubelet[2754]: W0306 02:22:23.040806 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 6 02:22:23.041506 kubelet[2754]: E0306 02:22:23.040824 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 6 02:22:23.053949 containerd[1583]: time="2026-03-06T02:22:23.053880252Z" level=info msg="connecting to shim 96bc15866336e4a3be74803c5aac75e9b85e8e33c4e2b002e3c1c49b74448010" address="unix:///run/containerd/s/8ca656122e4dc0755d7e8e1c79dad485e55c763caf55f29a0a89f9889e150278" namespace=k8s.io protocol=ttrpc version=3
Mar 6 02:22:23.078783 kubelet[2754]: E0306 02:22:23.078657 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 6 02:22:23.078783 kubelet[2754]: W0306 02:22:23.078689 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 6 02:22:23.078783 kubelet[2754]: E0306 02:22:23.078714 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 6 02:22:23.079186 kubelet[2754]: E0306 02:22:23.079041 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 6 02:22:23.079186 kubelet[2754]: W0306 02:22:23.079090 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 6 02:22:23.079186 kubelet[2754]: E0306 02:22:23.079105 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 6 02:22:23.079527 kubelet[2754]: E0306 02:22:23.079497 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 6 02:22:23.079527 kubelet[2754]: W0306 02:22:23.079509 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 6 02:22:23.079527 kubelet[2754]: E0306 02:22:23.079517 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 6 02:22:23.080011 kubelet[2754]: E0306 02:22:23.079938 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 6 02:22:23.080011 kubelet[2754]: W0306 02:22:23.079984 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 6 02:22:23.080011 kubelet[2754]: E0306 02:22:23.079995 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 6 02:22:23.080324 kubelet[2754]: E0306 02:22:23.080264 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 6 02:22:23.080324 kubelet[2754]: W0306 02:22:23.080277 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 6 02:22:23.080324 kubelet[2754]: E0306 02:22:23.080285 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 6 02:22:23.080671 kubelet[2754]: E0306 02:22:23.080621 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 6 02:22:23.080671 kubelet[2754]: W0306 02:22:23.080648 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 6 02:22:23.080671 kubelet[2754]: E0306 02:22:23.080657 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 6 02:22:23.081004 kubelet[2754]: E0306 02:22:23.080828 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 6 02:22:23.081004 kubelet[2754]: W0306 02:22:23.080838 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 6 02:22:23.081004 kubelet[2754]: E0306 02:22:23.080845 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 6 02:22:23.081173 kubelet[2754]: E0306 02:22:23.081116 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 6 02:22:23.081173 kubelet[2754]: W0306 02:22:23.081125 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 6 02:22:23.081173 kubelet[2754]: E0306 02:22:23.081132 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 6 02:22:23.081834 kubelet[2754]: E0306 02:22:23.081746 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 6 02:22:23.081834 kubelet[2754]: W0306 02:22:23.081781 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 6 02:22:23.081834 kubelet[2754]: E0306 02:22:23.081793 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 6 02:22:23.084263 kubelet[2754]: E0306 02:22:23.083178 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 6 02:22:23.084263 kubelet[2754]: W0306 02:22:23.083304 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 6 02:22:23.084263 kubelet[2754]: E0306 02:22:23.083402 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 6 02:22:23.084263 kubelet[2754]: E0306 02:22:23.083980 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 6 02:22:23.084263 kubelet[2754]: W0306 02:22:23.083989 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 6 02:22:23.084263 kubelet[2754]: E0306 02:22:23.083999 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 6 02:22:23.085432 kubelet[2754]: E0306 02:22:23.084495 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 6 02:22:23.085432 kubelet[2754]: W0306 02:22:23.084541 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 6 02:22:23.085432 kubelet[2754]: E0306 02:22:23.084551 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 6 02:22:23.085432 kubelet[2754]: E0306 02:22:23.085171 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 6 02:22:23.085432 kubelet[2754]: W0306 02:22:23.085182 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 6 02:22:23.085432 kubelet[2754]: E0306 02:22:23.085192 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 6 02:22:23.086524 kubelet[2754]: E0306 02:22:23.086243 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 6 02:22:23.086524 kubelet[2754]: W0306 02:22:23.086350 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 6 02:22:23.086524 kubelet[2754]: E0306 02:22:23.086361 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 6 02:22:23.086864 kubelet[2754]: E0306 02:22:23.086829 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 6 02:22:23.087189 kubelet[2754]: W0306 02:22:23.087116 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 6 02:22:23.087189 kubelet[2754]: E0306 02:22:23.087140 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 6 02:22:23.088212 kubelet[2754]: E0306 02:22:23.088129 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 6 02:22:23.088543 kubelet[2754]: W0306 02:22:23.088522 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 6 02:22:23.088543 kubelet[2754]: E0306 02:22:23.088540 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Mar 6 02:22:23.089251 kubelet[2754]: E0306 02:22:23.089227 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 02:22:23.089834 kubelet[2754]: W0306 02:22:23.089698 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 02:22:23.089834 kubelet[2754]: E0306 02:22:23.089721 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 02:22:23.089386 systemd[1]: Started cri-containerd-96bc15866336e4a3be74803c5aac75e9b85e8e33c4e2b002e3c1c49b74448010.scope - libcontainer container 96bc15866336e4a3be74803c5aac75e9b85e8e33c4e2b002e3c1c49b74448010. Mar 6 02:22:23.093311 kubelet[2754]: E0306 02:22:23.093279 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 02:22:23.093311 kubelet[2754]: W0306 02:22:23.093303 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 02:22:23.093311 kubelet[2754]: E0306 02:22:23.093316 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 02:22:23.094353 kubelet[2754]: E0306 02:22:23.094332 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 02:22:23.094439 kubelet[2754]: W0306 02:22:23.094428 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 02:22:23.094497 kubelet[2754]: E0306 02:22:23.094487 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 02:22:23.094841 kubelet[2754]: E0306 02:22:23.094828 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 02:22:23.094895 kubelet[2754]: W0306 02:22:23.094886 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 02:22:23.094949 kubelet[2754]: E0306 02:22:23.094938 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 02:22:23.096346 kubelet[2754]: E0306 02:22:23.096334 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 02:22:23.096412 kubelet[2754]: W0306 02:22:23.096402 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 02:22:23.096455 kubelet[2754]: E0306 02:22:23.096446 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 02:22:23.096961 kubelet[2754]: E0306 02:22:23.096948 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 02:22:23.097025 kubelet[2754]: W0306 02:22:23.097014 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 02:22:23.097204 kubelet[2754]: E0306 02:22:23.097184 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 02:22:23.097875 kubelet[2754]: E0306 02:22:23.097820 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 02:22:23.097875 kubelet[2754]: W0306 02:22:23.097838 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 02:22:23.097875 kubelet[2754]: E0306 02:22:23.097855 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 02:22:23.098482 kubelet[2754]: E0306 02:22:23.098430 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 02:22:23.098482 kubelet[2754]: W0306 02:22:23.098448 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 02:22:23.098482 kubelet[2754]: E0306 02:22:23.098462 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 02:22:23.099182 kubelet[2754]: E0306 02:22:23.099145 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 02:22:23.099249 kubelet[2754]: W0306 02:22:23.099158 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 02:22:23.099249 kubelet[2754]: E0306 02:22:23.099221 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 02:22:23.099886 kubelet[2754]: E0306 02:22:23.099802 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 02:22:23.099886 kubelet[2754]: W0306 02:22:23.099839 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 02:22:23.099886 kubelet[2754]: E0306 02:22:23.099849 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 02:22:23.101711 containerd[1583]: time="2026-03-06T02:22:23.101031941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nxsxm,Uid:2b0e8b51-8ba0-4f72-8cde-3874382357dc,Namespace:calico-system,Attempt:0,}" Mar 6 02:22:23.137729 containerd[1583]: time="2026-03-06T02:22:23.137536976Z" level=info msg="connecting to shim 13407487b2cbd3f83227eeea8c087639d4d85e5eff06d1e664b062ea04a7c65f" address="unix:///run/containerd/s/af7e64030c5d5bf953d9de6587ebd505b505e3aa31976784141541e3f085205e" namespace=k8s.io protocol=ttrpc version=3 Mar 6 02:22:23.180473 systemd[1]: Started cri-containerd-13407487b2cbd3f83227eeea8c087639d4d85e5eff06d1e664b062ea04a7c65f.scope - libcontainer container 13407487b2cbd3f83227eeea8c087639d4d85e5eff06d1e664b062ea04a7c65f. Mar 6 02:22:23.219617 containerd[1583]: time="2026-03-06T02:22:23.219394003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66b55d79cd-mzj6b,Uid:6c4d03b2-6da3-4d4f-98b6-af0dc634b961,Namespace:calico-system,Attempt:0,} returns sandbox id \"96bc15866336e4a3be74803c5aac75e9b85e8e33c4e2b002e3c1c49b74448010\"" Mar 6 02:22:23.220549 kubelet[2754]: E0306 02:22:23.220500 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:22:23.223134 containerd[1583]: time="2026-03-06T02:22:23.223027288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 6 02:22:23.261869 containerd[1583]: time="2026-03-06T02:22:23.261749379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nxsxm,Uid:2b0e8b51-8ba0-4f72-8cde-3874382357dc,Namespace:calico-system,Attempt:0,} returns sandbox id \"13407487b2cbd3f83227eeea8c087639d4d85e5eff06d1e664b062ea04a7c65f\"" Mar 6 02:22:24.298423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2050711585.mount: Deactivated successfully. 
Mar 6 02:22:24.394650 kubelet[2754]: E0306 02:22:24.394494 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cnt7s" podUID="26067065-4369-4983-afda-5a2ac20a4352" Mar 6 02:22:26.393870 kubelet[2754]: E0306 02:22:26.393804 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cnt7s" podUID="26067065-4369-4983-afda-5a2ac20a4352" Mar 6 02:22:27.595190 containerd[1583]: time="2026-03-06T02:22:27.595008076Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:22:27.596336 containerd[1583]: time="2026-03-06T02:22:27.596237552Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Mar 6 02:22:27.598408 containerd[1583]: time="2026-03-06T02:22:27.598313074Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:22:27.626313 containerd[1583]: time="2026-03-06T02:22:27.626119754Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:22:27.627794 containerd[1583]: time="2026-03-06T02:22:27.627582372Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", 
repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 4.404186978s" Mar 6 02:22:27.627794 containerd[1583]: time="2026-03-06T02:22:27.627668163Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 6 02:22:27.631283 containerd[1583]: time="2026-03-06T02:22:27.631029182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 6 02:22:27.654667 containerd[1583]: time="2026-03-06T02:22:27.654538581Z" level=info msg="CreateContainer within sandbox \"96bc15866336e4a3be74803c5aac75e9b85e8e33c4e2b002e3c1c49b74448010\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 6 02:22:27.671083 containerd[1583]: time="2026-03-06T02:22:27.670989480Z" level=info msg="Container ba48826ad632973d33b50965315c068b1e44f0c7118c646ad883799cb4e3d587: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:22:27.687403 containerd[1583]: time="2026-03-06T02:22:27.687282110Z" level=info msg="CreateContainer within sandbox \"96bc15866336e4a3be74803c5aac75e9b85e8e33c4e2b002e3c1c49b74448010\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ba48826ad632973d33b50965315c068b1e44f0c7118c646ad883799cb4e3d587\"" Mar 6 02:22:27.688381 containerd[1583]: time="2026-03-06T02:22:27.688018737Z" level=info msg="StartContainer for \"ba48826ad632973d33b50965315c068b1e44f0c7118c646ad883799cb4e3d587\"" Mar 6 02:22:27.689693 containerd[1583]: time="2026-03-06T02:22:27.689646012Z" level=info msg="connecting to shim ba48826ad632973d33b50965315c068b1e44f0c7118c646ad883799cb4e3d587" address="unix:///run/containerd/s/8ca656122e4dc0755d7e8e1c79dad485e55c763caf55f29a0a89f9889e150278" protocol=ttrpc version=3 Mar 6 02:22:27.731331 systemd[1]: Started cri-containerd-ba48826ad632973d33b50965315c068b1e44f0c7118c646ad883799cb4e3d587.scope - libcontainer 
container ba48826ad632973d33b50965315c068b1e44f0c7118c646ad883799cb4e3d587. Mar 6 02:22:27.854301 containerd[1583]: time="2026-03-06T02:22:27.854158985Z" level=info msg="StartContainer for \"ba48826ad632973d33b50965315c068b1e44f0c7118c646ad883799cb4e3d587\" returns successfully" Mar 6 02:22:28.393966 kubelet[2754]: E0306 02:22:28.393834 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cnt7s" podUID="26067065-4369-4983-afda-5a2ac20a4352" Mar 6 02:22:28.677136 kubelet[2754]: E0306 02:22:28.676868 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:22:28.680001 kubelet[2754]: E0306 02:22:28.679948 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 02:22:28.680001 kubelet[2754]: W0306 02:22:28.679969 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 02:22:28.680001 kubelet[2754]: E0306 02:22:28.679989 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 02:22:28.680511 kubelet[2754]: E0306 02:22:28.680432 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 02:22:28.680511 kubelet[2754]: W0306 02:22:28.680480 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 02:22:28.680511 kubelet[2754]: E0306 02:22:28.680502 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 02:22:28.681010 kubelet[2754]: E0306 02:22:28.680969 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 02:22:28.681010 kubelet[2754]: W0306 02:22:28.681004 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 02:22:28.681210 kubelet[2754]: E0306 02:22:28.681029 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 02:22:28.681744 kubelet[2754]: E0306 02:22:28.681626 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 02:22:28.681744 kubelet[2754]: W0306 02:22:28.681667 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 02:22:28.681744 kubelet[2754]: E0306 02:22:28.681686 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 02:22:28.682160 kubelet[2754]: E0306 02:22:28.682123 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 02:22:28.682160 kubelet[2754]: W0306 02:22:28.682157 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 02:22:28.682259 kubelet[2754]: E0306 02:22:28.682176 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 02:22:28.683544 kubelet[2754]: E0306 02:22:28.682806 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 02:22:28.683544 kubelet[2754]: W0306 02:22:28.682821 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 02:22:28.683544 kubelet[2754]: E0306 02:22:28.682839 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 02:22:28.684271 kubelet[2754]: E0306 02:22:28.684209 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 02:22:28.684271 kubelet[2754]: W0306 02:22:28.684248 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 02:22:28.684363 kubelet[2754]: E0306 02:22:28.684303 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 02:22:28.685178 kubelet[2754]: E0306 02:22:28.684747 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 02:22:28.685178 kubelet[2754]: W0306 02:22:28.684854 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 02:22:28.685178 kubelet[2754]: E0306 02:22:28.684870 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 02:22:28.685369 kubelet[2754]: E0306 02:22:28.685316 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 02:22:28.685369 kubelet[2754]: W0306 02:22:28.685327 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 02:22:28.685369 kubelet[2754]: E0306 02:22:28.685341 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 02:22:28.685677 kubelet[2754]: E0306 02:22:28.685622 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 02:22:28.685677 kubelet[2754]: W0306 02:22:28.685667 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 02:22:28.685759 kubelet[2754]: E0306 02:22:28.685685 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 02:22:28.686034 kubelet[2754]: E0306 02:22:28.685958 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 02:22:28.686034 kubelet[2754]: W0306 02:22:28.686001 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 02:22:28.686034 kubelet[2754]: E0306 02:22:28.686019 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 02:22:28.686400 kubelet[2754]: E0306 02:22:28.686372 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 02:22:28.686400 kubelet[2754]: W0306 02:22:28.686397 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 02:22:28.686552 kubelet[2754]: E0306 02:22:28.686409 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 02:22:28.686895 kubelet[2754]: E0306 02:22:28.686806 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 02:22:28.686895 kubelet[2754]: W0306 02:22:28.686852 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 02:22:28.686895 kubelet[2754]: E0306 02:22:28.686869 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 02:22:28.687277 kubelet[2754]: E0306 02:22:28.687216 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 02:22:28.687277 kubelet[2754]: W0306 02:22:28.687231 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 02:22:28.687277 kubelet[2754]: E0306 02:22:28.687245 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 02:22:28.687670 kubelet[2754]: E0306 02:22:28.687477 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 02:22:28.687670 kubelet[2754]: W0306 02:22:28.687490 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 02:22:28.687670 kubelet[2754]: E0306 02:22:28.687502 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 02:22:28.700970 kubelet[2754]: I0306 02:22:28.700782 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-66b55d79cd-mzj6b" podStartSLOduration=2.293451721 podStartE2EDuration="6.700758938s" podCreationTimestamp="2026-03-06 02:22:22 +0000 UTC" firstStartedPulling="2026-03-06 02:22:23.222008439 +0000 UTC m=+18.093620724" lastFinishedPulling="2026-03-06 02:22:27.629315655 +0000 UTC m=+22.500927941" observedRunningTime="2026-03-06 02:22:28.699320795 +0000 UTC m=+23.570933082" watchObservedRunningTime="2026-03-06 02:22:28.700758938 +0000 UTC m=+23.572371224" Mar 6 02:22:28.749622 kubelet[2754]: E0306 02:22:28.749198 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 02:22:28.749622 kubelet[2754]: W0306 02:22:28.749227 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 02:22:28.749622 kubelet[2754]: E0306 02:22:28.749252 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 02:22:28.750316 kubelet[2754]: E0306 02:22:28.750184 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 02:22:28.750316 kubelet[2754]: W0306 02:22:28.750233 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 02:22:28.750316 kubelet[2754]: E0306 02:22:28.750262 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 02:22:28.750718 kubelet[2754]: E0306 02:22:28.750671 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 02:22:28.750718 kubelet[2754]: W0306 02:22:28.750688 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 02:22:28.750718 kubelet[2754]: E0306 02:22:28.750702 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 02:22:28.751346 kubelet[2754]: E0306 02:22:28.751265 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 02:22:28.751346 kubelet[2754]: W0306 02:22:28.751301 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 02:22:28.751346 kubelet[2754]: E0306 02:22:28.751316 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 02:22:28.751850 kubelet[2754]: E0306 02:22:28.751768 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 02:22:28.751850 kubelet[2754]: W0306 02:22:28.751805 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 02:22:28.751850 kubelet[2754]: E0306 02:22:28.751819 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 02:22:28.752265 kubelet[2754]: E0306 02:22:28.752201 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 02:22:28.752265 kubelet[2754]: W0306 02:22:28.752237 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 02:22:28.752265 kubelet[2754]: E0306 02:22:28.752249 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 02:22:28.752836 kubelet[2754]: E0306 02:22:28.752745 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 02:22:28.752836 kubelet[2754]: W0306 02:22:28.752778 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 02:22:28.752836 kubelet[2754]: E0306 02:22:28.752791 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 02:22:28.753206 kubelet[2754]: E0306 02:22:28.753165 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 02:22:28.753206 kubelet[2754]: W0306 02:22:28.753196 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 02:22:28.753206 kubelet[2754]: E0306 02:22:28.753209 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 02:22:28.753751 kubelet[2754]: E0306 02:22:28.753697 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 02:22:28.753751 kubelet[2754]: W0306 02:22:28.753732 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 02:22:28.753751 kubelet[2754]: E0306 02:22:28.753745 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 02:22:28.754172 kubelet[2754]: E0306 02:22:28.754136 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 02:22:28.754172 kubelet[2754]: W0306 02:22:28.754168 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 02:22:28.754295 kubelet[2754]: E0306 02:22:28.754180 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 02:22:28.754495 kubelet[2754]: E0306 02:22:28.754462 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 02:22:28.754544 kubelet[2754]: W0306 02:22:28.754493 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 02:22:28.754544 kubelet[2754]: E0306 02:22:28.754509 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 02:22:28.755017 kubelet[2754]: E0306 02:22:28.754982 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 02:22:28.755017 kubelet[2754]: W0306 02:22:28.755012 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 02:22:28.755188 kubelet[2754]: E0306 02:22:28.755027 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 02:22:28.755517 kubelet[2754]: E0306 02:22:28.755486 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 02:22:28.755517 kubelet[2754]: W0306 02:22:28.755514 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 02:22:28.755652 kubelet[2754]: E0306 02:22:28.755526 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 02:22:28.755937 kubelet[2754]: E0306 02:22:28.755875 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 02:22:28.755937 kubelet[2754]: W0306 02:22:28.755916 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 02:22:28.755937 kubelet[2754]: E0306 02:22:28.755933 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 02:22:28.756414 kubelet[2754]: E0306 02:22:28.756351 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 02:22:28.756414 kubelet[2754]: W0306 02:22:28.756387 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 02:22:28.756414 kubelet[2754]: E0306 02:22:28.756403 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 02:22:28.757002 kubelet[2754]: E0306 02:22:28.756850 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 02:22:28.757002 kubelet[2754]: W0306 02:22:28.756885 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 02:22:28.757002 kubelet[2754]: E0306 02:22:28.756901 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 02:22:28.757743 kubelet[2754]: E0306 02:22:28.757681 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 02:22:28.757743 kubelet[2754]: W0306 02:22:28.757719 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 02:22:28.757743 kubelet[2754]: E0306 02:22:28.757733 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 02:22:28.759892 kubelet[2754]: E0306 02:22:28.759802 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 02:22:28.759892 kubelet[2754]: W0306 02:22:28.759833 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 02:22:28.759892 kubelet[2754]: E0306 02:22:28.759849 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 02:22:29.257650 containerd[1583]: time="2026-03-06T02:22:29.257468326Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:22:29.258550 containerd[1583]: time="2026-03-06T02:22:29.258459407Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Mar 6 02:22:29.260244 containerd[1583]: time="2026-03-06T02:22:29.260128203Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:22:29.263097 containerd[1583]: time="2026-03-06T02:22:29.262954776Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:22:29.264439 containerd[1583]: time="2026-03-06T02:22:29.264350630Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.633015598s" Mar 6 02:22:29.264439 containerd[1583]: time="2026-03-06T02:22:29.264415511Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 6 02:22:29.270706 containerd[1583]: time="2026-03-06T02:22:29.270628918Z" level=info msg="CreateContainer within sandbox \"13407487b2cbd3f83227eeea8c087639d4d85e5eff06d1e664b062ea04a7c65f\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 6 02:22:29.282391 containerd[1583]: time="2026-03-06T02:22:29.282279894Z" level=info msg="Container 57084822df04bdd0d9a84b469725523d42d10905b2eadf52d3f3ddc3ce5aad92: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:22:29.294388 containerd[1583]: time="2026-03-06T02:22:29.294344327Z" level=info msg="CreateContainer within sandbox \"13407487b2cbd3f83227eeea8c087639d4d85e5eff06d1e664b062ea04a7c65f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"57084822df04bdd0d9a84b469725523d42d10905b2eadf52d3f3ddc3ce5aad92\"" Mar 6 02:22:29.296339 containerd[1583]: time="2026-03-06T02:22:29.296236988Z" level=info msg="StartContainer for \"57084822df04bdd0d9a84b469725523d42d10905b2eadf52d3f3ddc3ce5aad92\"" Mar 6 02:22:29.299110 containerd[1583]: time="2026-03-06T02:22:29.299029638Z" level=info msg="connecting to shim 57084822df04bdd0d9a84b469725523d42d10905b2eadf52d3f3ddc3ce5aad92" address="unix:///run/containerd/s/af7e64030c5d5bf953d9de6587ebd505b505e3aa31976784141541e3f085205e" protocol=ttrpc version=3 Mar 6 02:22:29.365748 systemd[1]: Started cri-containerd-57084822df04bdd0d9a84b469725523d42d10905b2eadf52d3f3ddc3ce5aad92.scope - libcontainer container 57084822df04bdd0d9a84b469725523d42d10905b2eadf52d3f3ddc3ce5aad92. Mar 6 02:22:29.544252 systemd[1]: cri-containerd-57084822df04bdd0d9a84b469725523d42d10905b2eadf52d3f3ddc3ce5aad92.scope: Deactivated successfully. Mar 6 02:22:29.545236 systemd[1]: cri-containerd-57084822df04bdd0d9a84b469725523d42d10905b2eadf52d3f3ddc3ce5aad92.scope: Consumed 94ms CPU time, 6.1M memory peak, 4.6M written to disk. 
Mar 6 02:22:29.620665 containerd[1583]: time="2026-03-06T02:22:29.620518684Z" level=info msg="received container exit event container_id:\"57084822df04bdd0d9a84b469725523d42d10905b2eadf52d3f3ddc3ce5aad92\" id:\"57084822df04bdd0d9a84b469725523d42d10905b2eadf52d3f3ddc3ce5aad92\" pid:3421 exited_at:{seconds:1772763749 nanos:546172344}" Mar 6 02:22:29.622803 containerd[1583]: time="2026-03-06T02:22:29.622742956Z" level=info msg="StartContainer for \"57084822df04bdd0d9a84b469725523d42d10905b2eadf52d3f3ddc3ce5aad92\" returns successfully" Mar 6 02:22:29.660634 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-57084822df04bdd0d9a84b469725523d42d10905b2eadf52d3f3ddc3ce5aad92-rootfs.mount: Deactivated successfully. Mar 6 02:22:29.683320 kubelet[2754]: I0306 02:22:29.683250 2754 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 6 02:22:29.683867 kubelet[2754]: E0306 02:22:29.683828 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:22:30.394733 kubelet[2754]: E0306 02:22:30.394562 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cnt7s" podUID="26067065-4369-4983-afda-5a2ac20a4352" Mar 6 02:22:30.689912 containerd[1583]: time="2026-03-06T02:22:30.689749713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 6 02:22:32.394989 kubelet[2754]: E0306 02:22:32.394900 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cnt7s" 
podUID="26067065-4369-4983-afda-5a2ac20a4352" Mar 6 02:22:34.394459 kubelet[2754]: E0306 02:22:34.394367 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cnt7s" podUID="26067065-4369-4983-afda-5a2ac20a4352" Mar 6 02:22:36.394519 kubelet[2754]: E0306 02:22:36.394408 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cnt7s" podUID="26067065-4369-4983-afda-5a2ac20a4352" Mar 6 02:22:37.051327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount711094503.mount: Deactivated successfully. Mar 6 02:22:37.174695 containerd[1583]: time="2026-03-06T02:22:37.174457570Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:22:37.176025 containerd[1583]: time="2026-03-06T02:22:37.175907258Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 6 02:22:37.177467 containerd[1583]: time="2026-03-06T02:22:37.177427391Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:22:37.179916 containerd[1583]: time="2026-03-06T02:22:37.179862236Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:22:37.180659 containerd[1583]: time="2026-03-06T02:22:37.180575194Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 6.490776812s" Mar 6 02:22:37.180659 containerd[1583]: time="2026-03-06T02:22:37.180649082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 6 02:22:37.186730 containerd[1583]: time="2026-03-06T02:22:37.186670846Z" level=info msg="CreateContainer within sandbox \"13407487b2cbd3f83227eeea8c087639d4d85e5eff06d1e664b062ea04a7c65f\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 6 02:22:37.215560 containerd[1583]: time="2026-03-06T02:22:37.215457647Z" level=info msg="Container c4d130d5468170e4afc1db7683521b337617e7d741d9538034f6499ac8d5b671: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:22:37.262119 containerd[1583]: time="2026-03-06T02:22:37.261934533Z" level=info msg="CreateContainer within sandbox \"13407487b2cbd3f83227eeea8c087639d4d85e5eff06d1e664b062ea04a7c65f\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"c4d130d5468170e4afc1db7683521b337617e7d741d9538034f6499ac8d5b671\"" Mar 6 02:22:37.263284 containerd[1583]: time="2026-03-06T02:22:37.263038949Z" level=info msg="StartContainer for \"c4d130d5468170e4afc1db7683521b337617e7d741d9538034f6499ac8d5b671\"" Mar 6 02:22:37.266263 containerd[1583]: time="2026-03-06T02:22:37.266140787Z" level=info msg="connecting to shim c4d130d5468170e4afc1db7683521b337617e7d741d9538034f6499ac8d5b671" address="unix:///run/containerd/s/af7e64030c5d5bf953d9de6587ebd505b505e3aa31976784141541e3f085205e" protocol=ttrpc version=3 Mar 6 02:22:37.302740 systemd[1]: Started 
cri-containerd-c4d130d5468170e4afc1db7683521b337617e7d741d9538034f6499ac8d5b671.scope - libcontainer container c4d130d5468170e4afc1db7683521b337617e7d741d9538034f6499ac8d5b671. Mar 6 02:22:37.451157 containerd[1583]: time="2026-03-06T02:22:37.450218387Z" level=info msg="StartContainer for \"c4d130d5468170e4afc1db7683521b337617e7d741d9538034f6499ac8d5b671\" returns successfully" Mar 6 02:22:37.507239 systemd[1]: cri-containerd-c4d130d5468170e4afc1db7683521b337617e7d741d9538034f6499ac8d5b671.scope: Deactivated successfully. Mar 6 02:22:37.511327 containerd[1583]: time="2026-03-06T02:22:37.511250182Z" level=info msg="received container exit event container_id:\"c4d130d5468170e4afc1db7683521b337617e7d741d9538034f6499ac8d5b671\" id:\"c4d130d5468170e4afc1db7683521b337617e7d741d9538034f6499ac8d5b671\" pid:3480 exited_at:{seconds:1772763757 nanos:509996772}" Mar 6 02:22:37.725949 containerd[1583]: time="2026-03-06T02:22:37.725156165Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 6 02:22:38.052532 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4d130d5468170e4afc1db7683521b337617e7d741d9538034f6499ac8d5b671-rootfs.mount: Deactivated successfully. 
Mar 6 02:22:38.059284 kubelet[2754]: I0306 02:22:38.059217 2754 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 6 02:22:38.060023 kubelet[2754]: E0306 02:22:38.059786 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:22:38.394945 kubelet[2754]: E0306 02:22:38.394723 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cnt7s" podUID="26067065-4369-4983-afda-5a2ac20a4352" Mar 6 02:22:38.737975 kubelet[2754]: E0306 02:22:38.737686 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:22:40.393909 kubelet[2754]: E0306 02:22:40.393853 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cnt7s" podUID="26067065-4369-4983-afda-5a2ac20a4352" Mar 6 02:22:41.137586 containerd[1583]: time="2026-03-06T02:22:41.137414461Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:22:41.139777 containerd[1583]: time="2026-03-06T02:22:41.139107147Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 6 02:22:41.141453 containerd[1583]: time="2026-03-06T02:22:41.141363999Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:22:41.144636 containerd[1583]: time="2026-03-06T02:22:41.144525623Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:22:41.145539 containerd[1583]: time="2026-03-06T02:22:41.145449872Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 3.420223626s" Mar 6 02:22:41.145539 containerd[1583]: time="2026-03-06T02:22:41.145503853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 6 02:22:41.152914 containerd[1583]: time="2026-03-06T02:22:41.152848756Z" level=info msg="CreateContainer within sandbox \"13407487b2cbd3f83227eeea8c087639d4d85e5eff06d1e664b062ea04a7c65f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 6 02:22:41.165333 containerd[1583]: time="2026-03-06T02:22:41.165234087Z" level=info msg="Container 3d6bb2155e3d2dede3c7eb7ca6902f46f568afa7f2d94886441edaee2ad5cb31: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:22:41.275523 containerd[1583]: time="2026-03-06T02:22:41.275384461Z" level=info msg="CreateContainer within sandbox \"13407487b2cbd3f83227eeea8c087639d4d85e5eff06d1e664b062ea04a7c65f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3d6bb2155e3d2dede3c7eb7ca6902f46f568afa7f2d94886441edaee2ad5cb31\"" Mar 6 02:22:41.276331 containerd[1583]: time="2026-03-06T02:22:41.276141153Z" level=info msg="StartContainer for 
\"3d6bb2155e3d2dede3c7eb7ca6902f46f568afa7f2d94886441edaee2ad5cb31\"" Mar 6 02:22:41.277508 containerd[1583]: time="2026-03-06T02:22:41.277457530Z" level=info msg="connecting to shim 3d6bb2155e3d2dede3c7eb7ca6902f46f568afa7f2d94886441edaee2ad5cb31" address="unix:///run/containerd/s/af7e64030c5d5bf953d9de6587ebd505b505e3aa31976784141541e3f085205e" protocol=ttrpc version=3 Mar 6 02:22:41.349421 systemd[1]: Started cri-containerd-3d6bb2155e3d2dede3c7eb7ca6902f46f568afa7f2d94886441edaee2ad5cb31.scope - libcontainer container 3d6bb2155e3d2dede3c7eb7ca6902f46f568afa7f2d94886441edaee2ad5cb31. Mar 6 02:22:41.511544 containerd[1583]: time="2026-03-06T02:22:41.510860612Z" level=info msg="StartContainer for \"3d6bb2155e3d2dede3c7eb7ca6902f46f568afa7f2d94886441edaee2ad5cb31\" returns successfully" Mar 6 02:22:42.366711 systemd[1]: cri-containerd-3d6bb2155e3d2dede3c7eb7ca6902f46f568afa7f2d94886441edaee2ad5cb31.scope: Deactivated successfully. Mar 6 02:22:42.368267 systemd[1]: cri-containerd-3d6bb2155e3d2dede3c7eb7ca6902f46f568afa7f2d94886441edaee2ad5cb31.scope: Consumed 914ms CPU time, 178.8M memory peak, 2.5M read from disk, 177M written to disk. 
Mar 6 02:22:42.369022 containerd[1583]: time="2026-03-06T02:22:42.368947431Z" level=info msg="received container exit event container_id:\"3d6bb2155e3d2dede3c7eb7ca6902f46f568afa7f2d94886441edaee2ad5cb31\" id:\"3d6bb2155e3d2dede3c7eb7ca6902f46f568afa7f2d94886441edaee2ad5cb31\" pid:3544 exited_at:{seconds:1772763762 nanos:368015836}" Mar 6 02:22:42.394471 kubelet[2754]: E0306 02:22:42.394374 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cnt7s" podUID="26067065-4369-4983-afda-5a2ac20a4352" Mar 6 02:22:42.398454 kubelet[2754]: I0306 02:22:42.398406 2754 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 6 02:22:42.408709 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d6bb2155e3d2dede3c7eb7ca6902f46f568afa7f2d94886441edaee2ad5cb31-rootfs.mount: Deactivated successfully. Mar 6 02:22:42.479757 systemd[1]: Created slice kubepods-besteffort-pod285fd2ec_c090_4ff4_8f31_634507f6ef5a.slice - libcontainer container kubepods-besteffort-pod285fd2ec_c090_4ff4_8f31_634507f6ef5a.slice. Mar 6 02:22:42.488460 systemd[1]: Created slice kubepods-burstable-podf469a10c_9545_4124_9867_97946c582789.slice - libcontainer container kubepods-burstable-podf469a10c_9545_4124_9867_97946c582789.slice. Mar 6 02:22:42.498392 systemd[1]: Created slice kubepods-burstable-pod97416aaf_d20d_44e3_9e01_e71854f14d41.slice - libcontainer container kubepods-burstable-pod97416aaf_d20d_44e3_9e01_e71854f14d41.slice. Mar 6 02:22:42.509301 systemd[1]: Created slice kubepods-besteffort-podeaeab074_b6a5_4262_9696_6e61476a4648.slice - libcontainer container kubepods-besteffort-podeaeab074_b6a5_4262_9696_6e61476a4648.slice. 
Mar 6 02:22:42.579030 kubelet[2754]: I0306 02:22:42.578938 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vmdl\" (UniqueName: \"kubernetes.io/projected/eaeab074-b6a5-4262-9696-6e61476a4648-kube-api-access-5vmdl\") pod \"calico-apiserver-fdf95c748-msxqc\" (UID: \"eaeab074-b6a5-4262-9696-6e61476a4648\") " pod="calico-system/calico-apiserver-fdf95c748-msxqc" Mar 6 02:22:42.579030 kubelet[2754]: I0306 02:22:42.579024 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz66q\" (UniqueName: \"kubernetes.io/projected/f469a10c-9545-4124-9867-97946c582789-kube-api-access-gz66q\") pod \"coredns-674b8bbfcf-64f8g\" (UID: \"f469a10c-9545-4124-9867-97946c582789\") " pod="kube-system/coredns-674b8bbfcf-64f8g" Mar 6 02:22:42.579387 kubelet[2754]: I0306 02:22:42.579151 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/285fd2ec-c090-4ff4-8f31-634507f6ef5a-whisker-backend-key-pair\") pod \"whisker-75c4ddc668-fw9pf\" (UID: \"285fd2ec-c090-4ff4-8f31-634507f6ef5a\") " pod="calico-system/whisker-75c4ddc668-fw9pf" Mar 6 02:22:42.579387 kubelet[2754]: I0306 02:22:42.579181 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/285fd2ec-c090-4ff4-8f31-634507f6ef5a-whisker-ca-bundle\") pod \"whisker-75c4ddc668-fw9pf\" (UID: \"285fd2ec-c090-4ff4-8f31-634507f6ef5a\") " pod="calico-system/whisker-75c4ddc668-fw9pf" Mar 6 02:22:42.579387 kubelet[2754]: I0306 02:22:42.579209 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/285fd2ec-c090-4ff4-8f31-634507f6ef5a-nginx-config\") pod \"whisker-75c4ddc668-fw9pf\" (UID: 
\"285fd2ec-c090-4ff4-8f31-634507f6ef5a\") " pod="calico-system/whisker-75c4ddc668-fw9pf" Mar 6 02:22:42.579387 kubelet[2754]: I0306 02:22:42.579238 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7drt\" (UniqueName: \"kubernetes.io/projected/285fd2ec-c090-4ff4-8f31-634507f6ef5a-kube-api-access-s7drt\") pod \"whisker-75c4ddc668-fw9pf\" (UID: \"285fd2ec-c090-4ff4-8f31-634507f6ef5a\") " pod="calico-system/whisker-75c4ddc668-fw9pf" Mar 6 02:22:42.579387 kubelet[2754]: I0306 02:22:42.579296 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgsmn\" (UniqueName: \"kubernetes.io/projected/97416aaf-d20d-44e3-9e01-e71854f14d41-kube-api-access-hgsmn\") pod \"coredns-674b8bbfcf-csmx6\" (UID: \"97416aaf-d20d-44e3-9e01-e71854f14d41\") " pod="kube-system/coredns-674b8bbfcf-csmx6" Mar 6 02:22:42.579565 kubelet[2754]: I0306 02:22:42.579338 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/eaeab074-b6a5-4262-9696-6e61476a4648-calico-apiserver-certs\") pod \"calico-apiserver-fdf95c748-msxqc\" (UID: \"eaeab074-b6a5-4262-9696-6e61476a4648\") " pod="calico-system/calico-apiserver-fdf95c748-msxqc" Mar 6 02:22:42.579565 kubelet[2754]: I0306 02:22:42.579368 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f469a10c-9545-4124-9867-97946c582789-config-volume\") pod \"coredns-674b8bbfcf-64f8g\" (UID: \"f469a10c-9545-4124-9867-97946c582789\") " pod="kube-system/coredns-674b8bbfcf-64f8g" Mar 6 02:22:42.579565 kubelet[2754]: I0306 02:22:42.579392 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/97416aaf-d20d-44e3-9e01-e71854f14d41-config-volume\") pod \"coredns-674b8bbfcf-csmx6\" (UID: \"97416aaf-d20d-44e3-9e01-e71854f14d41\") " pod="kube-system/coredns-674b8bbfcf-csmx6" Mar 6 02:22:42.789814 containerd[1583]: time="2026-03-06T02:22:42.789525738Z" level=info msg="CreateContainer within sandbox \"13407487b2cbd3f83227eeea8c087639d4d85e5eff06d1e664b062ea04a7c65f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 6 02:22:42.845973 containerd[1583]: time="2026-03-06T02:22:42.845770362Z" level=info msg="Container 24f05cf506ca3bf1ab71cd68e3de86ece21ce4429475478ce4334377f2b348ab: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:22:42.863895 containerd[1583]: time="2026-03-06T02:22:42.863766256Z" level=info msg="CreateContainer within sandbox \"13407487b2cbd3f83227eeea8c087639d4d85e5eff06d1e664b062ea04a7c65f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"24f05cf506ca3bf1ab71cd68e3de86ece21ce4429475478ce4334377f2b348ab\"" Mar 6 02:22:42.864889 containerd[1583]: time="2026-03-06T02:22:42.864784326Z" level=info msg="StartContainer for \"24f05cf506ca3bf1ab71cd68e3de86ece21ce4429475478ce4334377f2b348ab\"" Mar 6 02:22:42.867080 containerd[1583]: time="2026-03-06T02:22:42.867016582Z" level=info msg="connecting to shim 24f05cf506ca3bf1ab71cd68e3de86ece21ce4429475478ce4334377f2b348ab" address="unix:///run/containerd/s/af7e64030c5d5bf953d9de6587ebd505b505e3aa31976784141541e3f085205e" protocol=ttrpc version=3 Mar 6 02:22:42.939384 systemd[1]: Started cri-containerd-24f05cf506ca3bf1ab71cd68e3de86ece21ce4429475478ce4334377f2b348ab.scope - libcontainer container 24f05cf506ca3bf1ab71cd68e3de86ece21ce4429475478ce4334377f2b348ab. Mar 6 02:22:42.946872 systemd[1]: Created slice kubepods-besteffort-pod7fb5e21c_5bf4_4f7d_a9eb_5209ee5ef76e.slice - libcontainer container kubepods-besteffort-pod7fb5e21c_5bf4_4f7d_a9eb_5209ee5ef76e.slice. 
Mar 6 02:22:42.962777 systemd[1]: Created slice kubepods-besteffort-podae401966_c657_4b7e_b0b4_3287b83c8954.slice - libcontainer container kubepods-besteffort-podae401966_c657_4b7e_b0b4_3287b83c8954.slice. Mar 6 02:22:42.972971 systemd[1]: Created slice kubepods-besteffort-pod1556a76d_b801_4f94_85d1_3c1662a146b7.slice - libcontainer container kubepods-besteffort-pod1556a76d_b801_4f94_85d1_3c1662a146b7.slice. Mar 6 02:22:42.984742 kubelet[2754]: I0306 02:22:42.984587 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1556a76d-b801-4f94-85d1-3c1662a146b7-calico-apiserver-certs\") pod \"calico-apiserver-fdf95c748-z4qz4\" (UID: \"1556a76d-b801-4f94-85d1-3c1662a146b7\") " pod="calico-system/calico-apiserver-fdf95c748-z4qz4" Mar 6 02:22:42.984742 kubelet[2754]: I0306 02:22:42.984726 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7fb5e21c-5bf4-4f7d-a9eb-5209ee5ef76e-tigera-ca-bundle\") pod \"calico-kube-controllers-c6d79556b-hmx8m\" (UID: \"7fb5e21c-5bf4-4f7d-a9eb-5209ee5ef76e\") " pod="calico-system/calico-kube-controllers-c6d79556b-hmx8m" Mar 6 02:22:42.984926 kubelet[2754]: I0306 02:22:42.984767 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5x9x\" (UniqueName: \"kubernetes.io/projected/7fb5e21c-5bf4-4f7d-a9eb-5209ee5ef76e-kube-api-access-f5x9x\") pod \"calico-kube-controllers-c6d79556b-hmx8m\" (UID: \"7fb5e21c-5bf4-4f7d-a9eb-5209ee5ef76e\") " pod="calico-system/calico-kube-controllers-c6d79556b-hmx8m" Mar 6 02:22:42.984926 kubelet[2754]: I0306 02:22:42.984821 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2vps\" (UniqueName: 
\"kubernetes.io/projected/1556a76d-b801-4f94-85d1-3c1662a146b7-kube-api-access-w2vps\") pod \"calico-apiserver-fdf95c748-z4qz4\" (UID: \"1556a76d-b801-4f94-85d1-3c1662a146b7\") " pod="calico-system/calico-apiserver-fdf95c748-z4qz4" Mar 6 02:22:43.085638 kubelet[2754]: I0306 02:22:43.085412 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tjg5\" (UniqueName: \"kubernetes.io/projected/ae401966-c657-4b7e-b0b4-3287b83c8954-kube-api-access-4tjg5\") pod \"goldmane-5b85766d88-rmjwl\" (UID: \"ae401966-c657-4b7e-b0b4-3287b83c8954\") " pod="calico-system/goldmane-5b85766d88-rmjwl" Mar 6 02:22:43.085638 kubelet[2754]: I0306 02:22:43.085498 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ae401966-c657-4b7e-b0b4-3287b83c8954-goldmane-key-pair\") pod \"goldmane-5b85766d88-rmjwl\" (UID: \"ae401966-c657-4b7e-b0b4-3287b83c8954\") " pod="calico-system/goldmane-5b85766d88-rmjwl" Mar 6 02:22:43.085638 kubelet[2754]: I0306 02:22:43.085549 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae401966-c657-4b7e-b0b4-3287b83c8954-config\") pod \"goldmane-5b85766d88-rmjwl\" (UID: \"ae401966-c657-4b7e-b0b4-3287b83c8954\") " pod="calico-system/goldmane-5b85766d88-rmjwl" Mar 6 02:22:43.085931 kubelet[2754]: I0306 02:22:43.085647 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae401966-c657-4b7e-b0b4-3287b83c8954-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-rmjwl\" (UID: \"ae401966-c657-4b7e-b0b4-3287b83c8954\") " pod="calico-system/goldmane-5b85766d88-rmjwl" Mar 6 02:22:43.087569 containerd[1583]: time="2026-03-06T02:22:43.087493021Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-75c4ddc668-fw9pf,Uid:285fd2ec-c090-4ff4-8f31-634507f6ef5a,Namespace:calico-system,Attempt:0,}" Mar 6 02:22:43.095833 kubelet[2754]: E0306 02:22:43.095185 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:22:43.103847 kubelet[2754]: E0306 02:22:43.103720 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:22:43.124451 containerd[1583]: time="2026-03-06T02:22:43.116882823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-csmx6,Uid:97416aaf-d20d-44e3-9e01-e71854f14d41,Namespace:kube-system,Attempt:0,}" Mar 6 02:22:43.125690 containerd[1583]: time="2026-03-06T02:22:43.123043310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-64f8g,Uid:f469a10c-9545-4124-9867-97946c582789,Namespace:kube-system,Attempt:0,}" Mar 6 02:22:43.126029 containerd[1583]: time="2026-03-06T02:22:43.125452908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fdf95c748-msxqc,Uid:eaeab074-b6a5-4262-9696-6e61476a4648,Namespace:calico-system,Attempt:0,}" Mar 6 02:22:43.148461 containerd[1583]: time="2026-03-06T02:22:43.148374709Z" level=info msg="StartContainer for \"24f05cf506ca3bf1ab71cd68e3de86ece21ce4429475478ce4334377f2b348ab\" returns successfully" Mar 6 02:22:43.262231 containerd[1583]: time="2026-03-06T02:22:43.262132701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c6d79556b-hmx8m,Uid:7fb5e21c-5bf4-4f7d-a9eb-5209ee5ef76e,Namespace:calico-system,Attempt:0,}" Mar 6 02:22:43.275224 containerd[1583]: time="2026-03-06T02:22:43.274999808Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-5b85766d88-rmjwl,Uid:ae401966-c657-4b7e-b0b4-3287b83c8954,Namespace:calico-system,Attempt:0,}" Mar 6 02:22:43.279970 containerd[1583]: time="2026-03-06T02:22:43.279916311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fdf95c748-z4qz4,Uid:1556a76d-b801-4f94-85d1-3c1662a146b7,Namespace:calico-system,Attempt:0,}" Mar 6 02:22:43.420242 containerd[1583]: time="2026-03-06T02:22:43.419655673Z" level=error msg="Failed to destroy network for sandbox \"b4c1cb2636d781480e38ff70aaa3195a0e29949f280c010e1e970fed63be1286\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 02:22:43.425988 containerd[1583]: time="2026-03-06T02:22:43.425902824Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75c4ddc668-fw9pf,Uid:285fd2ec-c090-4ff4-8f31-634507f6ef5a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4c1cb2636d781480e38ff70aaa3195a0e29949f280c010e1e970fed63be1286\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 02:22:43.430383 containerd[1583]: time="2026-03-06T02:22:43.430248794Z" level=error msg="Failed to destroy network for sandbox \"33ffd4490efdebe1cb3d57253643bacfab40d7bf145365f0be0365e82c2f7dc1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 02:22:43.436218 containerd[1583]: time="2026-03-06T02:22:43.436179003Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-csmx6,Uid:97416aaf-d20d-44e3-9e01-e71854f14d41,Namespace:kube-system,Attempt:0,} failed, error" error="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"33ffd4490efdebe1cb3d57253643bacfab40d7bf145365f0be0365e82c2f7dc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 02:22:43.443367 kubelet[2754]: E0306 02:22:43.443292 2754 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33ffd4490efdebe1cb3d57253643bacfab40d7bf145365f0be0365e82c2f7dc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 02:22:43.443925 kubelet[2754]: E0306 02:22:43.443422 2754 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33ffd4490efdebe1cb3d57253643bacfab40d7bf145365f0be0365e82c2f7dc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-csmx6" Mar 6 02:22:43.443925 kubelet[2754]: E0306 02:22:43.443457 2754 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33ffd4490efdebe1cb3d57253643bacfab40d7bf145365f0be0365e82c2f7dc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-csmx6" Mar 6 02:22:43.444244 kubelet[2754]: E0306 02:22:43.444164 2754 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4c1cb2636d781480e38ff70aaa3195a0e29949f280c010e1e970fed63be1286\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 02:22:43.444244 kubelet[2754]: E0306 02:22:43.444237 2754 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4c1cb2636d781480e38ff70aaa3195a0e29949f280c010e1e970fed63be1286\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-75c4ddc668-fw9pf" Mar 6 02:22:43.444354 kubelet[2754]: E0306 02:22:43.444263 2754 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4c1cb2636d781480e38ff70aaa3195a0e29949f280c010e1e970fed63be1286\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-75c4ddc668-fw9pf" Mar 6 02:22:43.445698 kubelet[2754]: E0306 02:22:43.445648 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-csmx6_kube-system(97416aaf-d20d-44e3-9e01-e71854f14d41)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-csmx6_kube-system(97416aaf-d20d-44e3-9e01-e71854f14d41)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"33ffd4490efdebe1cb3d57253643bacfab40d7bf145365f0be0365e82c2f7dc1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-csmx6" podUID="97416aaf-d20d-44e3-9e01-e71854f14d41" Mar 6 02:22:43.445954 kubelet[2754]: 
E0306 02:22:43.445730 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-75c4ddc668-fw9pf_calico-system(285fd2ec-c090-4ff4-8f31-634507f6ef5a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-75c4ddc668-fw9pf_calico-system(285fd2ec-c090-4ff4-8f31-634507f6ef5a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b4c1cb2636d781480e38ff70aaa3195a0e29949f280c010e1e970fed63be1286\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-75c4ddc668-fw9pf" podUID="285fd2ec-c090-4ff4-8f31-634507f6ef5a" Mar 6 02:22:43.451736 systemd[1]: run-netns-cni\x2d4e0c6c0d\x2d9cc9\x2d0238\x2de13f\x2dcf80564c4b0b.mount: Deactivated successfully. Mar 6 02:22:43.453720 systemd[1]: run-netns-cni\x2d5dee29c1\x2d730f\x2d36ca\x2d5a89\x2dcf79ca6bf1c3.mount: Deactivated successfully. 
Mar 6 02:22:43.508226 containerd[1583]: time="2026-03-06T02:22:43.507738869Z" level=error msg="Failed to destroy network for sandbox \"a5a2f97ee968c919af827bd324d5ef490d85b4cf6d12cfb89ee8cd9edab4dcc8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 02:22:43.513763 containerd[1583]: time="2026-03-06T02:22:43.513476538Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-64f8g,Uid:f469a10c-9545-4124-9867-97946c582789,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5a2f97ee968c919af827bd324d5ef490d85b4cf6d12cfb89ee8cd9edab4dcc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 02:22:43.515137 kubelet[2754]: E0306 02:22:43.514271 2754 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5a2f97ee968c919af827bd324d5ef490d85b4cf6d12cfb89ee8cd9edab4dcc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 02:22:43.515137 kubelet[2754]: E0306 02:22:43.514366 2754 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5a2f97ee968c919af827bd324d5ef490d85b4cf6d12cfb89ee8cd9edab4dcc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-64f8g" Mar 6 02:22:43.515137 kubelet[2754]: E0306 02:22:43.514394 2754 kuberuntime_manager.go:1252] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5a2f97ee968c919af827bd324d5ef490d85b4cf6d12cfb89ee8cd9edab4dcc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-64f8g" Mar 6 02:22:43.515322 kubelet[2754]: E0306 02:22:43.514475 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-64f8g_kube-system(f469a10c-9545-4124-9867-97946c582789)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-64f8g_kube-system(f469a10c-9545-4124-9867-97946c582789)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a5a2f97ee968c919af827bd324d5ef490d85b4cf6d12cfb89ee8cd9edab4dcc8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-64f8g" podUID="f469a10c-9545-4124-9867-97946c582789" Mar 6 02:22:43.515837 systemd[1]: run-netns-cni\x2d14933a2c\x2d18fa\x2da7a3\x2d76d0\x2da31a1a966f1f.mount: Deactivated successfully. 
Mar 6 02:22:43.520199 containerd[1583]: time="2026-03-06T02:22:43.519663021Z" level=error msg="Failed to destroy network for sandbox \"7c4a50ad962e633127efa59802e9bf45f459f09f19bb52704547f6fe323f4bd6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 02:22:43.525684 containerd[1583]: time="2026-03-06T02:22:43.524742579Z" level=error msg="Failed to destroy network for sandbox \"97d99d8613de8952cc5a9606ec2e35343581b90c28dbfe4b0ce9aca0e64354ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 02:22:43.525853 systemd[1]: run-netns-cni\x2db56f7bf6\x2d0c32\x2d844a\x2d9b48\x2d9f7d7b2079b0.mount: Deactivated successfully. Mar 6 02:22:43.528297 containerd[1583]: time="2026-03-06T02:22:43.528229920Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fdf95c748-msxqc,Uid:eaeab074-b6a5-4262-9696-6e61476a4648,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c4a50ad962e633127efa59802e9bf45f459f09f19bb52704547f6fe323f4bd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 02:22:43.530881 systemd[1]: run-netns-cni\x2d1805654a\x2d69f5\x2dbb96\x2d2064\x2d099abc3acb64.mount: Deactivated successfully. 
Mar 6 02:22:43.532116 kubelet[2754]: E0306 02:22:43.531535 2754 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c4a50ad962e633127efa59802e9bf45f459f09f19bb52704547f6fe323f4bd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 02:22:43.532116 kubelet[2754]: E0306 02:22:43.531659 2754 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c4a50ad962e633127efa59802e9bf45f459f09f19bb52704547f6fe323f4bd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-fdf95c748-msxqc" Mar 6 02:22:43.532116 kubelet[2754]: E0306 02:22:43.531691 2754 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c4a50ad962e633127efa59802e9bf45f459f09f19bb52704547f6fe323f4bd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-fdf95c748-msxqc" Mar 6 02:22:43.532294 kubelet[2754]: E0306 02:22:43.531761 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-fdf95c748-msxqc_calico-system(eaeab074-b6a5-4262-9696-6e61476a4648)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-fdf95c748-msxqc_calico-system(eaeab074-b6a5-4262-9696-6e61476a4648)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c4a50ad962e633127efa59802e9bf45f459f09f19bb52704547f6fe323f4bd6\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-fdf95c748-msxqc" podUID="eaeab074-b6a5-4262-9696-6e61476a4648" Mar 6 02:22:43.534349 containerd[1583]: time="2026-03-06T02:22:43.534226171Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fdf95c748-z4qz4,Uid:1556a76d-b801-4f94-85d1-3c1662a146b7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"97d99d8613de8952cc5a9606ec2e35343581b90c28dbfe4b0ce9aca0e64354ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 02:22:43.534688 kubelet[2754]: E0306 02:22:43.534643 2754 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97d99d8613de8952cc5a9606ec2e35343581b90c28dbfe4b0ce9aca0e64354ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 02:22:43.534752 kubelet[2754]: E0306 02:22:43.534697 2754 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97d99d8613de8952cc5a9606ec2e35343581b90c28dbfe4b0ce9aca0e64354ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-fdf95c748-z4qz4" Mar 6 02:22:43.534752 kubelet[2754]: E0306 02:22:43.534717 2754 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"97d99d8613de8952cc5a9606ec2e35343581b90c28dbfe4b0ce9aca0e64354ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-fdf95c748-z4qz4" Mar 6 02:22:43.534821 kubelet[2754]: E0306 02:22:43.534761 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-fdf95c748-z4qz4_calico-system(1556a76d-b801-4f94-85d1-3c1662a146b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-fdf95c748-z4qz4_calico-system(1556a76d-b801-4f94-85d1-3c1662a146b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"97d99d8613de8952cc5a9606ec2e35343581b90c28dbfe4b0ce9aca0e64354ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-fdf95c748-z4qz4" podUID="1556a76d-b801-4f94-85d1-3c1662a146b7" Mar 6 02:22:43.560844 containerd[1583]: time="2026-03-06T02:22:43.560538553Z" level=error msg="Failed to destroy network for sandbox \"36d2dfc841d01e8987459058eb9bef638827995d992f53c81720c5ab73c647dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 02:22:43.563296 containerd[1583]: time="2026-03-06T02:22:43.563147872Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-rmjwl,Uid:ae401966-c657-4b7e-b0b4-3287b83c8954,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"36d2dfc841d01e8987459058eb9bef638827995d992f53c81720c5ab73c647dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 02:22:43.564186 kubelet[2754]: E0306 02:22:43.564002 2754 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36d2dfc841d01e8987459058eb9bef638827995d992f53c81720c5ab73c647dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 02:22:43.564705 kubelet[2754]: E0306 02:22:43.564367 2754 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36d2dfc841d01e8987459058eb9bef638827995d992f53c81720c5ab73c647dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-rmjwl" Mar 6 02:22:43.564705 kubelet[2754]: E0306 02:22:43.564407 2754 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36d2dfc841d01e8987459058eb9bef638827995d992f53c81720c5ab73c647dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-rmjwl" Mar 6 02:22:43.564834 kubelet[2754]: E0306 02:22:43.564753 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-rmjwl_calico-system(ae401966-c657-4b7e-b0b4-3287b83c8954)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-rmjwl_calico-system(ae401966-c657-4b7e-b0b4-3287b83c8954)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"36d2dfc841d01e8987459058eb9bef638827995d992f53c81720c5ab73c647dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-rmjwl" podUID="ae401966-c657-4b7e-b0b4-3287b83c8954" Mar 6 02:22:43.598349 containerd[1583]: time="2026-03-06T02:22:43.598202654Z" level=error msg="Failed to destroy network for sandbox \"686b71808bde533fb305f26b0e1a527d7dcd6b0d45b627aa1e55afe3891dc114\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 02:22:43.609277 containerd[1583]: time="2026-03-06T02:22:43.609207841Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c6d79556b-hmx8m,Uid:7fb5e21c-5bf4-4f7d-a9eb-5209ee5ef76e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"686b71808bde533fb305f26b0e1a527d7dcd6b0d45b627aa1e55afe3891dc114\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 02:22:43.610159 kubelet[2754]: E0306 02:22:43.609987 2754 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"686b71808bde533fb305f26b0e1a527d7dcd6b0d45b627aa1e55afe3891dc114\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 02:22:43.610159 kubelet[2754]: E0306 02:22:43.610129 2754 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"686b71808bde533fb305f26b0e1a527d7dcd6b0d45b627aa1e55afe3891dc114\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c6d79556b-hmx8m" Mar 6 02:22:43.610505 kubelet[2754]: E0306 02:22:43.610162 2754 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"686b71808bde533fb305f26b0e1a527d7dcd6b0d45b627aa1e55afe3891dc114\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c6d79556b-hmx8m" Mar 6 02:22:43.610505 kubelet[2754]: E0306 02:22:43.610249 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-c6d79556b-hmx8m_calico-system(7fb5e21c-5bf4-4f7d-a9eb-5209ee5ef76e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-c6d79556b-hmx8m_calico-system(7fb5e21c-5bf4-4f7d-a9eb-5209ee5ef76e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"686b71808bde533fb305f26b0e1a527d7dcd6b0d45b627aa1e55afe3891dc114\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c6d79556b-hmx8m" podUID="7fb5e21c-5bf4-4f7d-a9eb-5209ee5ef76e" Mar 6 02:22:43.902471 kubelet[2754]: I0306 02:22:43.902424 2754 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/285fd2ec-c090-4ff4-8f31-634507f6ef5a-nginx-config\") pod \"285fd2ec-c090-4ff4-8f31-634507f6ef5a\" (UID: 
\"285fd2ec-c090-4ff4-8f31-634507f6ef5a\") " Mar 6 02:22:43.902664 kubelet[2754]: I0306 02:22:43.902488 2754 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/285fd2ec-c090-4ff4-8f31-634507f6ef5a-whisker-backend-key-pair\") pod \"285fd2ec-c090-4ff4-8f31-634507f6ef5a\" (UID: \"285fd2ec-c090-4ff4-8f31-634507f6ef5a\") " Mar 6 02:22:43.902664 kubelet[2754]: I0306 02:22:43.902503 2754 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/285fd2ec-c090-4ff4-8f31-634507f6ef5a-whisker-ca-bundle\") pod \"285fd2ec-c090-4ff4-8f31-634507f6ef5a\" (UID: \"285fd2ec-c090-4ff4-8f31-634507f6ef5a\") " Mar 6 02:22:43.902664 kubelet[2754]: I0306 02:22:43.902539 2754 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7drt\" (UniqueName: \"kubernetes.io/projected/285fd2ec-c090-4ff4-8f31-634507f6ef5a-kube-api-access-s7drt\") pod \"285fd2ec-c090-4ff4-8f31-634507f6ef5a\" (UID: \"285fd2ec-c090-4ff4-8f31-634507f6ef5a\") " Mar 6 02:22:43.905881 kubelet[2754]: I0306 02:22:43.904001 2754 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/285fd2ec-c090-4ff4-8f31-634507f6ef5a-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "285fd2ec-c090-4ff4-8f31-634507f6ef5a" (UID: "285fd2ec-c090-4ff4-8f31-634507f6ef5a"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 6 02:22:43.908992 kubelet[2754]: I0306 02:22:43.907595 2754 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/285fd2ec-c090-4ff4-8f31-634507f6ef5a-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "285fd2ec-c090-4ff4-8f31-634507f6ef5a" (UID: "285fd2ec-c090-4ff4-8f31-634507f6ef5a"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 6 02:22:43.917003 kubelet[2754]: I0306 02:22:43.916919 2754 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/285fd2ec-c090-4ff4-8f31-634507f6ef5a-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "285fd2ec-c090-4ff4-8f31-634507f6ef5a" (UID: "285fd2ec-c090-4ff4-8f31-634507f6ef5a"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 6 02:22:43.917178 kubelet[2754]: I0306 02:22:43.917039 2754 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/285fd2ec-c090-4ff4-8f31-634507f6ef5a-kube-api-access-s7drt" (OuterVolumeSpecName: "kube-api-access-s7drt") pod "285fd2ec-c090-4ff4-8f31-634507f6ef5a" (UID: "285fd2ec-c090-4ff4-8f31-634507f6ef5a"). InnerVolumeSpecName "kube-api-access-s7drt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 6 02:22:44.006013 kubelet[2754]: I0306 02:22:44.005757 2754 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/285fd2ec-c090-4ff4-8f31-634507f6ef5a-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Mar 6 02:22:44.006013 kubelet[2754]: I0306 02:22:44.005862 2754 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/285fd2ec-c090-4ff4-8f31-634507f6ef5a-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 6 02:22:44.006013 kubelet[2754]: I0306 02:22:44.005877 2754 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s7drt\" (UniqueName: \"kubernetes.io/projected/285fd2ec-c090-4ff4-8f31-634507f6ef5a-kube-api-access-s7drt\") on node \"localhost\" DevicePath \"\"" Mar 6 02:22:44.006013 kubelet[2754]: I0306 02:22:44.005889 2754 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: 
\"kubernetes.io/configmap/285fd2ec-c090-4ff4-8f31-634507f6ef5a-nginx-config\") on node \"localhost\" DevicePath \"\"" Mar 6 02:22:44.420116 systemd[1]: run-netns-cni\x2d32267ad4\x2d3c6d\x2d617a\x2d4508\x2dd697ac80cd15.mount: Deactivated successfully. Mar 6 02:22:44.420260 systemd[1]: run-netns-cni\x2d9e798740\x2d6266\x2d8a60\x2dcd53\x2dd0dc5712e152.mount: Deactivated successfully. Mar 6 02:22:44.420333 systemd[1]: var-lib-kubelet-pods-285fd2ec\x2dc090\x2d4ff4\x2d8f31\x2d634507f6ef5a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ds7drt.mount: Deactivated successfully. Mar 6 02:22:44.420431 systemd[1]: var-lib-kubelet-pods-285fd2ec\x2dc090\x2d4ff4\x2d8f31\x2d634507f6ef5a-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 6 02:22:44.424743 systemd[1]: Created slice kubepods-besteffort-pod26067065_4369_4983_afda_5a2ac20a4352.slice - libcontainer container kubepods-besteffort-pod26067065_4369_4983_afda_5a2ac20a4352.slice. Mar 6 02:22:44.428742 containerd[1583]: time="2026-03-06T02:22:44.428553889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cnt7s,Uid:26067065-4369-4983-afda-5a2ac20a4352,Namespace:calico-system,Attempt:0,}" Mar 6 02:22:44.662988 systemd-networkd[1479]: cali3db8d3c0af8: Link UP Mar 6 02:22:44.664166 systemd-networkd[1479]: cali3db8d3c0af8: Gained carrier Mar 6 02:22:44.683977 kubelet[2754]: I0306 02:22:44.683723 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-nxsxm" podStartSLOduration=4.801345417 podStartE2EDuration="22.683699263s" podCreationTimestamp="2026-03-06 02:22:22 +0000 UTC" firstStartedPulling="2026-03-06 02:22:23.264140701 +0000 UTC m=+18.135752987" lastFinishedPulling="2026-03-06 02:22:41.146494548 +0000 UTC m=+36.018106833" observedRunningTime="2026-03-06 02:22:43.855687967 +0000 UTC m=+38.727300253" watchObservedRunningTime="2026-03-06 02:22:44.683699263 +0000 UTC m=+39.555311540" Mar 6 
02:22:44.691568 containerd[1583]: 2026-03-06 02:22:44.471 [ERROR][3906] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 6 02:22:44.691568 containerd[1583]: 2026-03-06 02:22:44.508 [INFO][3906] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--cnt7s-eth0 csi-node-driver- calico-system 26067065-4369-4983-afda-5a2ac20a4352 727 0 2026-03-06 02:22:22 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-cnt7s eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3db8d3c0af8 [] [] }} ContainerID="8b12c3d1584ca57587fa244cb3bc75f5b01ad1245b860ff75a5d2809663ca64a" Namespace="calico-system" Pod="csi-node-driver-cnt7s" WorkloadEndpoint="localhost-k8s-csi--node--driver--cnt7s-" Mar 6 02:22:44.691568 containerd[1583]: 2026-03-06 02:22:44.508 [INFO][3906] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8b12c3d1584ca57587fa244cb3bc75f5b01ad1245b860ff75a5d2809663ca64a" Namespace="calico-system" Pod="csi-node-driver-cnt7s" WorkloadEndpoint="localhost-k8s-csi--node--driver--cnt7s-eth0" Mar 6 02:22:44.691568 containerd[1583]: 2026-03-06 02:22:44.566 [INFO][3919] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8b12c3d1584ca57587fa244cb3bc75f5b01ad1245b860ff75a5d2809663ca64a" HandleID="k8s-pod-network.8b12c3d1584ca57587fa244cb3bc75f5b01ad1245b860ff75a5d2809663ca64a" Workload="localhost-k8s-csi--node--driver--cnt7s-eth0" Mar 6 02:22:44.692025 containerd[1583]: 2026-03-06 02:22:44.577 [INFO][3919] 
ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8b12c3d1584ca57587fa244cb3bc75f5b01ad1245b860ff75a5d2809663ca64a" HandleID="k8s-pod-network.8b12c3d1584ca57587fa244cb3bc75f5b01ad1245b860ff75a5d2809663ca64a" Workload="localhost-k8s-csi--node--driver--cnt7s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00048f6d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-cnt7s", "timestamp":"2026-03-06 02:22:44.566261651 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002f2dc0)} Mar 6 02:22:44.692025 containerd[1583]: 2026-03-06 02:22:44.577 [INFO][3919] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 02:22:44.692025 containerd[1583]: 2026-03-06 02:22:44.577 [INFO][3919] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 02:22:44.692025 containerd[1583]: 2026-03-06 02:22:44.577 [INFO][3919] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 02:22:44.692025 containerd[1583]: 2026-03-06 02:22:44.581 [INFO][3919] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8b12c3d1584ca57587fa244cb3bc75f5b01ad1245b860ff75a5d2809663ca64a" host="localhost" Mar 6 02:22:44.692025 containerd[1583]: 2026-03-06 02:22:44.588 [INFO][3919] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 02:22:44.692025 containerd[1583]: 2026-03-06 02:22:44.596 [INFO][3919] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 02:22:44.692025 containerd[1583]: 2026-03-06 02:22:44.600 [INFO][3919] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 02:22:44.692025 containerd[1583]: 2026-03-06 02:22:44.607 [INFO][3919] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 02:22:44.692025 containerd[1583]: 2026-03-06 02:22:44.608 [INFO][3919] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8b12c3d1584ca57587fa244cb3bc75f5b01ad1245b860ff75a5d2809663ca64a" host="localhost" Mar 6 02:22:44.692527 containerd[1583]: 2026-03-06 02:22:44.624 [INFO][3919] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8b12c3d1584ca57587fa244cb3bc75f5b01ad1245b860ff75a5d2809663ca64a Mar 6 02:22:44.692527 containerd[1583]: 2026-03-06 02:22:44.631 [INFO][3919] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8b12c3d1584ca57587fa244cb3bc75f5b01ad1245b860ff75a5d2809663ca64a" host="localhost" Mar 6 02:22:44.692527 containerd[1583]: 2026-03-06 02:22:44.640 [INFO][3919] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.8b12c3d1584ca57587fa244cb3bc75f5b01ad1245b860ff75a5d2809663ca64a" host="localhost" Mar 6 02:22:44.692527 containerd[1583]: 2026-03-06 02:22:44.640 [INFO][3919] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.8b12c3d1584ca57587fa244cb3bc75f5b01ad1245b860ff75a5d2809663ca64a" host="localhost" Mar 6 02:22:44.692527 containerd[1583]: 2026-03-06 02:22:44.640 [INFO][3919] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 02:22:44.692527 containerd[1583]: 2026-03-06 02:22:44.640 [INFO][3919] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="8b12c3d1584ca57587fa244cb3bc75f5b01ad1245b860ff75a5d2809663ca64a" HandleID="k8s-pod-network.8b12c3d1584ca57587fa244cb3bc75f5b01ad1245b860ff75a5d2809663ca64a" Workload="localhost-k8s-csi--node--driver--cnt7s-eth0" Mar 6 02:22:44.692771 containerd[1583]: 2026-03-06 02:22:44.645 [INFO][3906] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8b12c3d1584ca57587fa244cb3bc75f5b01ad1245b860ff75a5d2809663ca64a" Namespace="calico-system" Pod="csi-node-driver-cnt7s" WorkloadEndpoint="localhost-k8s-csi--node--driver--cnt7s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cnt7s-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"26067065-4369-4983-afda-5a2ac20a4352", ResourceVersion:"727", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 2, 22, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-cnt7s", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3db8d3c0af8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 02:22:44.692899 containerd[1583]: 2026-03-06 02:22:44.645 [INFO][3906] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="8b12c3d1584ca57587fa244cb3bc75f5b01ad1245b860ff75a5d2809663ca64a" Namespace="calico-system" Pod="csi-node-driver-cnt7s" WorkloadEndpoint="localhost-k8s-csi--node--driver--cnt7s-eth0" Mar 6 02:22:44.692899 containerd[1583]: 2026-03-06 02:22:44.646 [INFO][3906] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3db8d3c0af8 ContainerID="8b12c3d1584ca57587fa244cb3bc75f5b01ad1245b860ff75a5d2809663ca64a" Namespace="calico-system" Pod="csi-node-driver-cnt7s" WorkloadEndpoint="localhost-k8s-csi--node--driver--cnt7s-eth0" Mar 6 02:22:44.692899 containerd[1583]: 2026-03-06 02:22:44.664 [INFO][3906] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8b12c3d1584ca57587fa244cb3bc75f5b01ad1245b860ff75a5d2809663ca64a" Namespace="calico-system" Pod="csi-node-driver-cnt7s" WorkloadEndpoint="localhost-k8s-csi--node--driver--cnt7s-eth0" Mar 6 02:22:44.693005 containerd[1583]: 2026-03-06 02:22:44.665 [INFO][3906] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8b12c3d1584ca57587fa244cb3bc75f5b01ad1245b860ff75a5d2809663ca64a" 
Namespace="calico-system" Pod="csi-node-driver-cnt7s" WorkloadEndpoint="localhost-k8s-csi--node--driver--cnt7s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cnt7s-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"26067065-4369-4983-afda-5a2ac20a4352", ResourceVersion:"727", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 2, 22, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8b12c3d1584ca57587fa244cb3bc75f5b01ad1245b860ff75a5d2809663ca64a", Pod:"csi-node-driver-cnt7s", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3db8d3c0af8", MAC:"3a:c8:84:48:a7:66", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 02:22:44.693192 containerd[1583]: 2026-03-06 02:22:44.683 [INFO][3906] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8b12c3d1584ca57587fa244cb3bc75f5b01ad1245b860ff75a5d2809663ca64a" Namespace="calico-system" Pod="csi-node-driver-cnt7s" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--cnt7s-eth0" Mar 6 02:22:44.804815 systemd[1]: Removed slice kubepods-besteffort-pod285fd2ec_c090_4ff4_8f31_634507f6ef5a.slice - libcontainer container kubepods-besteffort-pod285fd2ec_c090_4ff4_8f31_634507f6ef5a.slice. Mar 6 02:22:44.834799 containerd[1583]: time="2026-03-06T02:22:44.834288791Z" level=info msg="connecting to shim 8b12c3d1584ca57587fa244cb3bc75f5b01ad1245b860ff75a5d2809663ca64a" address="unix:///run/containerd/s/86796e2e5984040c3167a899c116ae61c339c347edd6eda8220baed5d3312fb3" namespace=k8s.io protocol=ttrpc version=3 Mar 6 02:22:44.910882 systemd[1]: Started cri-containerd-8b12c3d1584ca57587fa244cb3bc75f5b01ad1245b860ff75a5d2809663ca64a.scope - libcontainer container 8b12c3d1584ca57587fa244cb3bc75f5b01ad1245b860ff75a5d2809663ca64a. Mar 6 02:22:44.950389 systemd[1]: Created slice kubepods-besteffort-podd934bd28_4d5c_4ef1_825e_d27fd6966616.slice - libcontainer container kubepods-besteffort-podd934bd28_4d5c_4ef1_825e_d27fd6966616.slice. 
Mar 6 02:22:44.961710 systemd-resolved[1394]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 02:22:44.994915 containerd[1583]: time="2026-03-06T02:22:44.994846633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cnt7s,Uid:26067065-4369-4983-afda-5a2ac20a4352,Namespace:calico-system,Attempt:0,} returns sandbox id \"8b12c3d1584ca57587fa244cb3bc75f5b01ad1245b860ff75a5d2809663ca64a\"" Mar 6 02:22:44.998174 containerd[1583]: time="2026-03-06T02:22:44.997990260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 6 02:22:45.030718 kubelet[2754]: I0306 02:22:45.030528 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2fvb\" (UniqueName: \"kubernetes.io/projected/d934bd28-4d5c-4ef1-825e-d27fd6966616-kube-api-access-d2fvb\") pod \"whisker-6dd8775bf-xk6pb\" (UID: \"d934bd28-4d5c-4ef1-825e-d27fd6966616\") " pod="calico-system/whisker-6dd8775bf-xk6pb" Mar 6 02:22:45.030718 kubelet[2754]: I0306 02:22:45.030672 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d934bd28-4d5c-4ef1-825e-d27fd6966616-whisker-backend-key-pair\") pod \"whisker-6dd8775bf-xk6pb\" (UID: \"d934bd28-4d5c-4ef1-825e-d27fd6966616\") " pod="calico-system/whisker-6dd8775bf-xk6pb" Mar 6 02:22:45.030718 kubelet[2754]: I0306 02:22:45.030711 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d934bd28-4d5c-4ef1-825e-d27fd6966616-whisker-ca-bundle\") pod \"whisker-6dd8775bf-xk6pb\" (UID: \"d934bd28-4d5c-4ef1-825e-d27fd6966616\") " pod="calico-system/whisker-6dd8775bf-xk6pb" Mar 6 02:22:45.030718 kubelet[2754]: I0306 02:22:45.030750 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/d934bd28-4d5c-4ef1-825e-d27fd6966616-nginx-config\") pod \"whisker-6dd8775bf-xk6pb\" (UID: \"d934bd28-4d5c-4ef1-825e-d27fd6966616\") " pod="calico-system/whisker-6dd8775bf-xk6pb" Mar 6 02:22:45.259737 containerd[1583]: time="2026-03-06T02:22:45.259526018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dd8775bf-xk6pb,Uid:d934bd28-4d5c-4ef1-825e-d27fd6966616,Namespace:calico-system,Attempt:0,}" Mar 6 02:22:45.408178 kubelet[2754]: I0306 02:22:45.407848 2754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="285fd2ec-c090-4ff4-8f31-634507f6ef5a" path="/var/lib/kubelet/pods/285fd2ec-c090-4ff4-8f31-634507f6ef5a/volumes" Mar 6 02:22:45.720817 systemd-networkd[1479]: cali83a8b0a2eeb: Link UP Mar 6 02:22:45.724203 systemd-networkd[1479]: cali83a8b0a2eeb: Gained carrier Mar 6 02:22:45.753911 containerd[1583]: 2026-03-06 02:22:45.380 [ERROR][4073] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 6 02:22:45.753911 containerd[1583]: 2026-03-06 02:22:45.437 [INFO][4073] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6dd8775bf--xk6pb-eth0 whisker-6dd8775bf- calico-system d934bd28-4d5c-4ef1-825e-d27fd6966616 932 0 2026-03-06 02:22:44 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6dd8775bf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6dd8775bf-xk6pb eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali83a8b0a2eeb [] [] }} ContainerID="192b8fb603121a9e6df1d2e27c556d5bbb73a0f12f52551f4a631f3f2745851d" Namespace="calico-system" Pod="whisker-6dd8775bf-xk6pb" 
WorkloadEndpoint="localhost-k8s-whisker--6dd8775bf--xk6pb-" Mar 6 02:22:45.753911 containerd[1583]: 2026-03-06 02:22:45.437 [INFO][4073] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="192b8fb603121a9e6df1d2e27c556d5bbb73a0f12f52551f4a631f3f2745851d" Namespace="calico-system" Pod="whisker-6dd8775bf-xk6pb" WorkloadEndpoint="localhost-k8s-whisker--6dd8775bf--xk6pb-eth0" Mar 6 02:22:45.753911 containerd[1583]: 2026-03-06 02:22:45.594 [INFO][4126] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="192b8fb603121a9e6df1d2e27c556d5bbb73a0f12f52551f4a631f3f2745851d" HandleID="k8s-pod-network.192b8fb603121a9e6df1d2e27c556d5bbb73a0f12f52551f4a631f3f2745851d" Workload="localhost-k8s-whisker--6dd8775bf--xk6pb-eth0" Mar 6 02:22:45.754952 containerd[1583]: 2026-03-06 02:22:45.629 [INFO][4126] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="192b8fb603121a9e6df1d2e27c556d5bbb73a0f12f52551f4a631f3f2745851d" HandleID="k8s-pod-network.192b8fb603121a9e6df1d2e27c556d5bbb73a0f12f52551f4a631f3f2745851d" Workload="localhost-k8s-whisker--6dd8775bf--xk6pb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000410360), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6dd8775bf-xk6pb", "timestamp":"2026-03-06 02:22:45.593955427 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00030c420)} Mar 6 02:22:45.754952 containerd[1583]: 2026-03-06 02:22:45.630 [INFO][4126] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 02:22:45.754952 containerd[1583]: 2026-03-06 02:22:45.631 [INFO][4126] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 02:22:45.754952 containerd[1583]: 2026-03-06 02:22:45.631 [INFO][4126] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 02:22:45.754952 containerd[1583]: 2026-03-06 02:22:45.638 [INFO][4126] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.192b8fb603121a9e6df1d2e27c556d5bbb73a0f12f52551f4a631f3f2745851d" host="localhost" Mar 6 02:22:45.754952 containerd[1583]: 2026-03-06 02:22:45.646 [INFO][4126] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 02:22:45.754952 containerd[1583]: 2026-03-06 02:22:45.654 [INFO][4126] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 02:22:45.754952 containerd[1583]: 2026-03-06 02:22:45.657 [INFO][4126] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 02:22:45.754952 containerd[1583]: 2026-03-06 02:22:45.665 [INFO][4126] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 02:22:45.754952 containerd[1583]: 2026-03-06 02:22:45.665 [INFO][4126] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.192b8fb603121a9e6df1d2e27c556d5bbb73a0f12f52551f4a631f3f2745851d" host="localhost" Mar 6 02:22:45.755366 containerd[1583]: 2026-03-06 02:22:45.669 [INFO][4126] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.192b8fb603121a9e6df1d2e27c556d5bbb73a0f12f52551f4a631f3f2745851d Mar 6 02:22:45.755366 containerd[1583]: 2026-03-06 02:22:45.681 [INFO][4126] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.192b8fb603121a9e6df1d2e27c556d5bbb73a0f12f52551f4a631f3f2745851d" host="localhost" Mar 6 02:22:45.755366 containerd[1583]: 2026-03-06 02:22:45.692 [INFO][4126] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.192b8fb603121a9e6df1d2e27c556d5bbb73a0f12f52551f4a631f3f2745851d" host="localhost" Mar 6 02:22:45.755366 containerd[1583]: 2026-03-06 02:22:45.693 [INFO][4126] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.192b8fb603121a9e6df1d2e27c556d5bbb73a0f12f52551f4a631f3f2745851d" host="localhost" Mar 6 02:22:45.755366 containerd[1583]: 2026-03-06 02:22:45.693 [INFO][4126] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 02:22:45.755366 containerd[1583]: 2026-03-06 02:22:45.693 [INFO][4126] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="192b8fb603121a9e6df1d2e27c556d5bbb73a0f12f52551f4a631f3f2745851d" HandleID="k8s-pod-network.192b8fb603121a9e6df1d2e27c556d5bbb73a0f12f52551f4a631f3f2745851d" Workload="localhost-k8s-whisker--6dd8775bf--xk6pb-eth0" Mar 6 02:22:45.755535 containerd[1583]: 2026-03-06 02:22:45.698 [INFO][4073] cni-plugin/k8s.go 418: Populated endpoint ContainerID="192b8fb603121a9e6df1d2e27c556d5bbb73a0f12f52551f4a631f3f2745851d" Namespace="calico-system" Pod="whisker-6dd8775bf-xk6pb" WorkloadEndpoint="localhost-k8s-whisker--6dd8775bf--xk6pb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6dd8775bf--xk6pb-eth0", GenerateName:"whisker-6dd8775bf-", Namespace:"calico-system", SelfLink:"", UID:"d934bd28-4d5c-4ef1-825e-d27fd6966616", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 2, 22, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6dd8775bf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6dd8775bf-xk6pb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali83a8b0a2eeb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 02:22:45.755535 containerd[1583]: 2026-03-06 02:22:45.698 [INFO][4073] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="192b8fb603121a9e6df1d2e27c556d5bbb73a0f12f52551f4a631f3f2745851d" Namespace="calico-system" Pod="whisker-6dd8775bf-xk6pb" WorkloadEndpoint="localhost-k8s-whisker--6dd8775bf--xk6pb-eth0" Mar 6 02:22:45.755727 containerd[1583]: 2026-03-06 02:22:45.698 [INFO][4073] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali83a8b0a2eeb ContainerID="192b8fb603121a9e6df1d2e27c556d5bbb73a0f12f52551f4a631f3f2745851d" Namespace="calico-system" Pod="whisker-6dd8775bf-xk6pb" WorkloadEndpoint="localhost-k8s-whisker--6dd8775bf--xk6pb-eth0" Mar 6 02:22:45.755727 containerd[1583]: 2026-03-06 02:22:45.722 [INFO][4073] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="192b8fb603121a9e6df1d2e27c556d5bbb73a0f12f52551f4a631f3f2745851d" Namespace="calico-system" Pod="whisker-6dd8775bf-xk6pb" WorkloadEndpoint="localhost-k8s-whisker--6dd8775bf--xk6pb-eth0" Mar 6 02:22:45.755795 containerd[1583]: 2026-03-06 02:22:45.729 [INFO][4073] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="192b8fb603121a9e6df1d2e27c556d5bbb73a0f12f52551f4a631f3f2745851d" Namespace="calico-system" Pod="whisker-6dd8775bf-xk6pb" 
WorkloadEndpoint="localhost-k8s-whisker--6dd8775bf--xk6pb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6dd8775bf--xk6pb-eth0", GenerateName:"whisker-6dd8775bf-", Namespace:"calico-system", SelfLink:"", UID:"d934bd28-4d5c-4ef1-825e-d27fd6966616", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 2, 22, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6dd8775bf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"192b8fb603121a9e6df1d2e27c556d5bbb73a0f12f52551f4a631f3f2745851d", Pod:"whisker-6dd8775bf-xk6pb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali83a8b0a2eeb", MAC:"1a:f9:01:1e:5a:7c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 02:22:45.755901 containerd[1583]: 2026-03-06 02:22:45.744 [INFO][4073] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="192b8fb603121a9e6df1d2e27c556d5bbb73a0f12f52551f4a631f3f2745851d" Namespace="calico-system" Pod="whisker-6dd8775bf-xk6pb" WorkloadEndpoint="localhost-k8s-whisker--6dd8775bf--xk6pb-eth0" Mar 6 02:22:45.813507 containerd[1583]: time="2026-03-06T02:22:45.813352613Z" level=info msg="connecting to shim 
192b8fb603121a9e6df1d2e27c556d5bbb73a0f12f52551f4a631f3f2745851d" address="unix:///run/containerd/s/5c819ab68f6ca3b4629e931ceb4749dc94a043b3ddcbd7d8b89c252fd141b313" namespace=k8s.io protocol=ttrpc version=3 Mar 6 02:22:45.868546 systemd[1]: Started cri-containerd-192b8fb603121a9e6df1d2e27c556d5bbb73a0f12f52551f4a631f3f2745851d.scope - libcontainer container 192b8fb603121a9e6df1d2e27c556d5bbb73a0f12f52551f4a631f3f2745851d. Mar 6 02:22:45.929206 systemd-resolved[1394]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 02:22:46.056979 containerd[1583]: time="2026-03-06T02:22:46.056882699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dd8775bf-xk6pb,Uid:d934bd28-4d5c-4ef1-825e-d27fd6966616,Namespace:calico-system,Attempt:0,} returns sandbox id \"192b8fb603121a9e6df1d2e27c556d5bbb73a0f12f52551f4a631f3f2745851d\"" Mar 6 02:22:46.072511 containerd[1583]: time="2026-03-06T02:22:46.072356541Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:22:46.074315 containerd[1583]: time="2026-03-06T02:22:46.074288136Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 6 02:22:46.075591 containerd[1583]: time="2026-03-06T02:22:46.075530626Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:22:46.079993 containerd[1583]: time="2026-03-06T02:22:46.079923409Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:22:46.080834 containerd[1583]: time="2026-03-06T02:22:46.080807765Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id 
\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.082780116s" Mar 6 02:22:46.080926 containerd[1583]: time="2026-03-06T02:22:46.080912340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 6 02:22:46.083282 containerd[1583]: time="2026-03-06T02:22:46.083201471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 6 02:22:46.088508 containerd[1583]: time="2026-03-06T02:22:46.088465782Z" level=info msg="CreateContainer within sandbox \"8b12c3d1584ca57587fa244cb3bc75f5b01ad1245b860ff75a5d2809663ca64a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 6 02:22:46.106398 containerd[1583]: time="2026-03-06T02:22:46.106323790Z" level=info msg="Container 67ac816e1a2ab43a3f72681e40ba80445960c485e4648d24e85f7f597cef8121: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:22:46.110457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2696406905.mount: Deactivated successfully. 
Mar 6 02:22:46.123740 containerd[1583]: time="2026-03-06T02:22:46.123700377Z" level=info msg="CreateContainer within sandbox \"8b12c3d1584ca57587fa244cb3bc75f5b01ad1245b860ff75a5d2809663ca64a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"67ac816e1a2ab43a3f72681e40ba80445960c485e4648d24e85f7f597cef8121\"" Mar 6 02:22:46.130433 containerd[1583]: time="2026-03-06T02:22:46.130332736Z" level=info msg="StartContainer for \"67ac816e1a2ab43a3f72681e40ba80445960c485e4648d24e85f7f597cef8121\"" Mar 6 02:22:46.135723 containerd[1583]: time="2026-03-06T02:22:46.135674295Z" level=info msg="connecting to shim 67ac816e1a2ab43a3f72681e40ba80445960c485e4648d24e85f7f597cef8121" address="unix:///run/containerd/s/86796e2e5984040c3167a899c116ae61c339c347edd6eda8220baed5d3312fb3" protocol=ttrpc version=3 Mar 6 02:22:46.163355 systemd[1]: Started cri-containerd-67ac816e1a2ab43a3f72681e40ba80445960c485e4648d24e85f7f597cef8121.scope - libcontainer container 67ac816e1a2ab43a3f72681e40ba80445960c485e4648d24e85f7f597cef8121. 
Mar 6 02:22:46.350443 containerd[1583]: time="2026-03-06T02:22:46.350024777Z" level=info msg="StartContainer for \"67ac816e1a2ab43a3f72681e40ba80445960c485e4648d24e85f7f597cef8121\" returns successfully" Mar 6 02:22:46.447358 systemd-networkd[1479]: cali3db8d3c0af8: Gained IPv6LL Mar 6 02:22:46.721570 systemd-networkd[1479]: vxlan.calico: Link UP Mar 6 02:22:46.721583 systemd-networkd[1479]: vxlan.calico: Gained carrier Mar 6 02:22:47.727904 systemd-networkd[1479]: cali83a8b0a2eeb: Gained IPv6LL Mar 6 02:22:47.736489 containerd[1583]: time="2026-03-06T02:22:47.735743955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:22:47.740372 containerd[1583]: time="2026-03-06T02:22:47.738450706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 6 02:22:47.744498 containerd[1583]: time="2026-03-06T02:22:47.744449774Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:22:47.750870 containerd[1583]: time="2026-03-06T02:22:47.750577296Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:22:47.754774 containerd[1583]: time="2026-03-06T02:22:47.754703726Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.671398121s" Mar 6 02:22:47.754967 containerd[1583]: time="2026-03-06T02:22:47.754920170Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 6 02:22:47.757130 containerd[1583]: time="2026-03-06T02:22:47.756480630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 6 02:22:47.765922 containerd[1583]: time="2026-03-06T02:22:47.765844317Z" level=info msg="CreateContainer within sandbox \"192b8fb603121a9e6df1d2e27c556d5bbb73a0f12f52551f4a631f3f2745851d\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 6 02:22:47.792461 containerd[1583]: time="2026-03-06T02:22:47.792359781Z" level=info msg="Container a1e7109d98ee5d9bad98561d5c6abf458ca83f47ae1b872e8b61af387c8e7902: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:22:47.815552 containerd[1583]: time="2026-03-06T02:22:47.815449464Z" level=info msg="CreateContainer within sandbox \"192b8fb603121a9e6df1d2e27c556d5bbb73a0f12f52551f4a631f3f2745851d\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"a1e7109d98ee5d9bad98561d5c6abf458ca83f47ae1b872e8b61af387c8e7902\"" Mar 6 02:22:47.816483 containerd[1583]: time="2026-03-06T02:22:47.816406331Z" level=info msg="StartContainer for \"a1e7109d98ee5d9bad98561d5c6abf458ca83f47ae1b872e8b61af387c8e7902\"" Mar 6 02:22:47.818392 containerd[1583]: time="2026-03-06T02:22:47.818241867Z" level=info msg="connecting to shim a1e7109d98ee5d9bad98561d5c6abf458ca83f47ae1b872e8b61af387c8e7902" address="unix:///run/containerd/s/5c819ab68f6ca3b4629e931ceb4749dc94a043b3ddcbd7d8b89c252fd141b313" protocol=ttrpc version=3 Mar 6 02:22:47.854371 systemd[1]: Started cri-containerd-a1e7109d98ee5d9bad98561d5c6abf458ca83f47ae1b872e8b61af387c8e7902.scope - libcontainer container a1e7109d98ee5d9bad98561d5c6abf458ca83f47ae1b872e8b61af387c8e7902. 
Mar 6 02:22:47.954864 containerd[1583]: time="2026-03-06T02:22:47.954706676Z" level=info msg="StartContainer for \"a1e7109d98ee5d9bad98561d5c6abf458ca83f47ae1b872e8b61af387c8e7902\" returns successfully" Mar 6 02:22:47.984362 systemd-networkd[1479]: vxlan.calico: Gained IPv6LL Mar 6 02:22:48.560293 containerd[1583]: time="2026-03-06T02:22:48.560183930Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:22:48.561461 containerd[1583]: time="2026-03-06T02:22:48.561396363Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 6 02:22:48.563076 containerd[1583]: time="2026-03-06T02:22:48.562954156Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:22:48.566689 containerd[1583]: time="2026-03-06T02:22:48.566575745Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:22:48.567313 containerd[1583]: time="2026-03-06T02:22:48.567220562Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 810.684179ms" Mar 6 02:22:48.567313 containerd[1583]: time="2026-03-06T02:22:48.567278360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference 
\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 6 02:22:48.568582 containerd[1583]: time="2026-03-06T02:22:48.568504960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 6 02:22:48.575281 containerd[1583]: time="2026-03-06T02:22:48.574970796Z" level=info msg="CreateContainer within sandbox \"8b12c3d1584ca57587fa244cb3bc75f5b01ad1245b860ff75a5d2809663ca64a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 6 02:22:48.592789 containerd[1583]: time="2026-03-06T02:22:48.592581445Z" level=info msg="Container 555231d0cbc8700cbd8d65bffbd940d3f06411726220ac7a4f79fba1588f6bb6: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:22:48.620383 containerd[1583]: time="2026-03-06T02:22:48.620246745Z" level=info msg="CreateContainer within sandbox \"8b12c3d1584ca57587fa244cb3bc75f5b01ad1245b860ff75a5d2809663ca64a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"555231d0cbc8700cbd8d65bffbd940d3f06411726220ac7a4f79fba1588f6bb6\"" Mar 6 02:22:48.621286 containerd[1583]: time="2026-03-06T02:22:48.621218777Z" level=info msg="StartContainer for \"555231d0cbc8700cbd8d65bffbd940d3f06411726220ac7a4f79fba1588f6bb6\"" Mar 6 02:22:48.623799 containerd[1583]: time="2026-03-06T02:22:48.623700390Z" level=info msg="connecting to shim 555231d0cbc8700cbd8d65bffbd940d3f06411726220ac7a4f79fba1588f6bb6" address="unix:///run/containerd/s/86796e2e5984040c3167a899c116ae61c339c347edd6eda8220baed5d3312fb3" protocol=ttrpc version=3 Mar 6 02:22:48.658366 systemd[1]: Started cri-containerd-555231d0cbc8700cbd8d65bffbd940d3f06411726220ac7a4f79fba1588f6bb6.scope - libcontainer container 555231d0cbc8700cbd8d65bffbd940d3f06411726220ac7a4f79fba1588f6bb6. 
Mar 6 02:22:48.831729 containerd[1583]: time="2026-03-06T02:22:48.831454079Z" level=info msg="StartContainer for \"555231d0cbc8700cbd8d65bffbd940d3f06411726220ac7a4f79fba1588f6bb6\" returns successfully" Mar 6 02:22:48.868681 kubelet[2754]: I0306 02:22:48.868433 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-cnt7s" podStartSLOduration=23.296776434 podStartE2EDuration="26.86832681s" podCreationTimestamp="2026-03-06 02:22:22 +0000 UTC" firstStartedPulling="2026-03-06 02:22:44.996737975 +0000 UTC m=+39.868350261" lastFinishedPulling="2026-03-06 02:22:48.568288351 +0000 UTC m=+43.439900637" observedRunningTime="2026-03-06 02:22:48.867829953 +0000 UTC m=+43.739442269" watchObservedRunningTime="2026-03-06 02:22:48.86832681 +0000 UTC m=+43.739939096" Mar 6 02:22:49.421159 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3591171116.mount: Deactivated successfully. Mar 6 02:22:49.448815 containerd[1583]: time="2026-03-06T02:22:49.448399522Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:22:49.449886 containerd[1583]: time="2026-03-06T02:22:49.449835812Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 6 02:22:49.451417 containerd[1583]: time="2026-03-06T02:22:49.451374735Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:22:49.454485 containerd[1583]: time="2026-03-06T02:22:49.454379342Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:22:49.455185 containerd[1583]: time="2026-03-06T02:22:49.455115900Z" 
level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 886.581364ms" Mar 6 02:22:49.455255 containerd[1583]: time="2026-03-06T02:22:49.455181312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 6 02:22:49.463027 containerd[1583]: time="2026-03-06T02:22:49.462321182Z" level=info msg="CreateContainer within sandbox \"192b8fb603121a9e6df1d2e27c556d5bbb73a0f12f52551f4a631f3f2745851d\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 6 02:22:49.473901 containerd[1583]: time="2026-03-06T02:22:49.473825315Z" level=info msg="Container d798fc69cc65e110925fd1a6a7a1a91a012aeb33beafbaa000e47e42bb2b7128: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:22:49.489464 containerd[1583]: time="2026-03-06T02:22:49.489392898Z" level=info msg="CreateContainer within sandbox \"192b8fb603121a9e6df1d2e27c556d5bbb73a0f12f52551f4a631f3f2745851d\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"d798fc69cc65e110925fd1a6a7a1a91a012aeb33beafbaa000e47e42bb2b7128\"" Mar 6 02:22:49.490195 containerd[1583]: time="2026-03-06T02:22:49.490163797Z" level=info msg="StartContainer for \"d798fc69cc65e110925fd1a6a7a1a91a012aeb33beafbaa000e47e42bb2b7128\"" Mar 6 02:22:49.491656 containerd[1583]: time="2026-03-06T02:22:49.491511732Z" level=info msg="connecting to shim d798fc69cc65e110925fd1a6a7a1a91a012aeb33beafbaa000e47e42bb2b7128" address="unix:///run/containerd/s/5c819ab68f6ca3b4629e931ceb4749dc94a043b3ddcbd7d8b89c252fd141b313" protocol=ttrpc version=3 Mar 6 02:22:49.546551 systemd[1]: Started 
cri-containerd-d798fc69cc65e110925fd1a6a7a1a91a012aeb33beafbaa000e47e42bb2b7128.scope - libcontainer container d798fc69cc65e110925fd1a6a7a1a91a012aeb33beafbaa000e47e42bb2b7128. Mar 6 02:22:49.639452 containerd[1583]: time="2026-03-06T02:22:49.639398119Z" level=info msg="StartContainer for \"d798fc69cc65e110925fd1a6a7a1a91a012aeb33beafbaa000e47e42bb2b7128\" returns successfully" Mar 6 02:22:49.669525 kubelet[2754]: I0306 02:22:49.669473 2754 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 6 02:22:49.672285 kubelet[2754]: I0306 02:22:49.671194 2754 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 6 02:22:49.873008 kubelet[2754]: I0306 02:22:49.872776 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6dd8775bf-xk6pb" podStartSLOduration=2.475768379 podStartE2EDuration="5.872692701s" podCreationTimestamp="2026-03-06 02:22:44 +0000 UTC" firstStartedPulling="2026-03-06 02:22:46.059276312 +0000 UTC m=+40.930888598" lastFinishedPulling="2026-03-06 02:22:49.456200635 +0000 UTC m=+44.327812920" observedRunningTime="2026-03-06 02:22:49.871789841 +0000 UTC m=+44.743402157" watchObservedRunningTime="2026-03-06 02:22:49.872692701 +0000 UTC m=+44.744305008" Mar 6 02:22:54.395095 kubelet[2754]: E0306 02:22:54.394949 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:22:54.395828 containerd[1583]: time="2026-03-06T02:22:54.395686428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-64f8g,Uid:f469a10c-9545-4124-9867-97946c582789,Namespace:kube-system,Attempt:0,}" Mar 6 02:22:54.456984 containerd[1583]: time="2026-03-06T02:22:54.456830783Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fdf95c748-msxqc,Uid:eaeab074-b6a5-4262-9696-6e61476a4648,Namespace:calico-system,Attempt:0,}" Mar 6 02:22:54.663598 systemd-networkd[1479]: cali05b4baec653: Link UP Mar 6 02:22:54.664733 systemd-networkd[1479]: cali05b4baec653: Gained carrier Mar 6 02:22:54.685140 containerd[1583]: 2026-03-06 02:22:54.526 [INFO][4489] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--64f8g-eth0 coredns-674b8bbfcf- kube-system f469a10c-9545-4124-9867-97946c582789 878 0 2026-03-06 02:22:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-64f8g eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali05b4baec653 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4ddc2b1958a14e90391811f27e62e7220f73487194aa3f9a82c274418634bc79" Namespace="kube-system" Pod="coredns-674b8bbfcf-64f8g" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--64f8g-" Mar 6 02:22:54.685140 containerd[1583]: 2026-03-06 02:22:54.527 [INFO][4489] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4ddc2b1958a14e90391811f27e62e7220f73487194aa3f9a82c274418634bc79" Namespace="kube-system" Pod="coredns-674b8bbfcf-64f8g" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--64f8g-eth0" Mar 6 02:22:54.685140 containerd[1583]: 2026-03-06 02:22:54.578 [INFO][4516] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4ddc2b1958a14e90391811f27e62e7220f73487194aa3f9a82c274418634bc79" HandleID="k8s-pod-network.4ddc2b1958a14e90391811f27e62e7220f73487194aa3f9a82c274418634bc79" Workload="localhost-k8s-coredns--674b8bbfcf--64f8g-eth0" Mar 6 02:22:54.685366 containerd[1583]: 2026-03-06 02:22:54.587 [INFO][4516] 
ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4ddc2b1958a14e90391811f27e62e7220f73487194aa3f9a82c274418634bc79" HandleID="k8s-pod-network.4ddc2b1958a14e90391811f27e62e7220f73487194aa3f9a82c274418634bc79" Workload="localhost-k8s-coredns--674b8bbfcf--64f8g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004b2070), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-64f8g", "timestamp":"2026-03-06 02:22:54.578784425 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00001e580)} Mar 6 02:22:54.685366 containerd[1583]: 2026-03-06 02:22:54.588 [INFO][4516] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 02:22:54.685366 containerd[1583]: 2026-03-06 02:22:54.588 [INFO][4516] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 02:22:54.685366 containerd[1583]: 2026-03-06 02:22:54.588 [INFO][4516] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 02:22:54.685366 containerd[1583]: 2026-03-06 02:22:54.597 [INFO][4516] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4ddc2b1958a14e90391811f27e62e7220f73487194aa3f9a82c274418634bc79" host="localhost" Mar 6 02:22:54.685366 containerd[1583]: 2026-03-06 02:22:54.618 [INFO][4516] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 02:22:54.685366 containerd[1583]: 2026-03-06 02:22:54.626 [INFO][4516] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 02:22:54.685366 containerd[1583]: 2026-03-06 02:22:54.629 [INFO][4516] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 02:22:54.685366 containerd[1583]: 2026-03-06 02:22:54.633 [INFO][4516] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 02:22:54.685366 containerd[1583]: 2026-03-06 02:22:54.633 [INFO][4516] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4ddc2b1958a14e90391811f27e62e7220f73487194aa3f9a82c274418634bc79" host="localhost" Mar 6 02:22:54.685609 containerd[1583]: 2026-03-06 02:22:54.636 [INFO][4516] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4ddc2b1958a14e90391811f27e62e7220f73487194aa3f9a82c274418634bc79 Mar 6 02:22:54.685609 containerd[1583]: 2026-03-06 02:22:54.641 [INFO][4516] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4ddc2b1958a14e90391811f27e62e7220f73487194aa3f9a82c274418634bc79" host="localhost" Mar 6 02:22:54.685609 containerd[1583]: 2026-03-06 02:22:54.656 [INFO][4516] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.4ddc2b1958a14e90391811f27e62e7220f73487194aa3f9a82c274418634bc79" host="localhost" Mar 6 02:22:54.685609 containerd[1583]: 2026-03-06 02:22:54.656 [INFO][4516] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.4ddc2b1958a14e90391811f27e62e7220f73487194aa3f9a82c274418634bc79" host="localhost" Mar 6 02:22:54.685609 containerd[1583]: 2026-03-06 02:22:54.656 [INFO][4516] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 02:22:54.685609 containerd[1583]: 2026-03-06 02:22:54.656 [INFO][4516] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="4ddc2b1958a14e90391811f27e62e7220f73487194aa3f9a82c274418634bc79" HandleID="k8s-pod-network.4ddc2b1958a14e90391811f27e62e7220f73487194aa3f9a82c274418634bc79" Workload="localhost-k8s-coredns--674b8bbfcf--64f8g-eth0" Mar 6 02:22:54.685773 containerd[1583]: 2026-03-06 02:22:54.658 [INFO][4489] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4ddc2b1958a14e90391811f27e62e7220f73487194aa3f9a82c274418634bc79" Namespace="kube-system" Pod="coredns-674b8bbfcf-64f8g" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--64f8g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--64f8g-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f469a10c-9545-4124-9867-97946c582789", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 2, 22, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-64f8g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali05b4baec653", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 02:22:54.686004 containerd[1583]: 2026-03-06 02:22:54.659 [INFO][4489] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="4ddc2b1958a14e90391811f27e62e7220f73487194aa3f9a82c274418634bc79" Namespace="kube-system" Pod="coredns-674b8bbfcf-64f8g" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--64f8g-eth0" Mar 6 02:22:54.686004 containerd[1583]: 2026-03-06 02:22:54.659 [INFO][4489] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali05b4baec653 ContainerID="4ddc2b1958a14e90391811f27e62e7220f73487194aa3f9a82c274418634bc79" Namespace="kube-system" Pod="coredns-674b8bbfcf-64f8g" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--64f8g-eth0" Mar 6 02:22:54.686004 containerd[1583]: 2026-03-06 02:22:54.665 [INFO][4489] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4ddc2b1958a14e90391811f27e62e7220f73487194aa3f9a82c274418634bc79" Namespace="kube-system" Pod="coredns-674b8bbfcf-64f8g" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--64f8g-eth0" Mar 6 02:22:54.686121 containerd[1583]: 2026-03-06 02:22:54.665 [INFO][4489] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4ddc2b1958a14e90391811f27e62e7220f73487194aa3f9a82c274418634bc79" Namespace="kube-system" Pod="coredns-674b8bbfcf-64f8g" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--64f8g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--64f8g-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f469a10c-9545-4124-9867-97946c582789", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 2, 22, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4ddc2b1958a14e90391811f27e62e7220f73487194aa3f9a82c274418634bc79", Pod:"coredns-674b8bbfcf-64f8g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali05b4baec653", MAC:"46:57:9c:3f:51:1d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 02:22:54.686121 containerd[1583]: 2026-03-06 02:22:54.680 [INFO][4489] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4ddc2b1958a14e90391811f27e62e7220f73487194aa3f9a82c274418634bc79" Namespace="kube-system" Pod="coredns-674b8bbfcf-64f8g" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--64f8g-eth0" Mar 6 02:22:54.748965 containerd[1583]: time="2026-03-06T02:22:54.748878749Z" level=info msg="connecting to shim 4ddc2b1958a14e90391811f27e62e7220f73487194aa3f9a82c274418634bc79" address="unix:///run/containerd/s/747226a867cf01d7bbc3e1983d86380477198c5e2963512679c37e79255c0266" namespace=k8s.io protocol=ttrpc version=3 Mar 6 02:22:54.785696 systemd-networkd[1479]: calieb9b9a07401: Link UP Mar 6 02:22:54.786250 systemd-networkd[1479]: calieb9b9a07401: Gained carrier Mar 6 02:22:54.803475 systemd[1]: Started cri-containerd-4ddc2b1958a14e90391811f27e62e7220f73487194aa3f9a82c274418634bc79.scope - libcontainer container 4ddc2b1958a14e90391811f27e62e7220f73487194aa3f9a82c274418634bc79. 
Mar 6 02:22:54.825130 containerd[1583]: 2026-03-06 02:22:54.540 [INFO][4500] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--fdf95c748--msxqc-eth0 calico-apiserver-fdf95c748- calico-system eaeab074-b6a5-4262-9696-6e61476a4648 872 0 2026-03-06 02:22:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:fdf95c748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-fdf95c748-msxqc eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calieb9b9a07401 [] [] }} ContainerID="dcf36ad51153ea79a5c893b14bc25ad6dc5ba61631fa922fcaf3bd10d8d0bb6d" Namespace="calico-system" Pod="calico-apiserver-fdf95c748-msxqc" WorkloadEndpoint="localhost-k8s-calico--apiserver--fdf95c748--msxqc-" Mar 6 02:22:54.825130 containerd[1583]: 2026-03-06 02:22:54.540 [INFO][4500] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dcf36ad51153ea79a5c893b14bc25ad6dc5ba61631fa922fcaf3bd10d8d0bb6d" Namespace="calico-system" Pod="calico-apiserver-fdf95c748-msxqc" WorkloadEndpoint="localhost-k8s-calico--apiserver--fdf95c748--msxqc-eth0" Mar 6 02:22:54.825130 containerd[1583]: 2026-03-06 02:22:54.588 [INFO][4523] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dcf36ad51153ea79a5c893b14bc25ad6dc5ba61631fa922fcaf3bd10d8d0bb6d" HandleID="k8s-pod-network.dcf36ad51153ea79a5c893b14bc25ad6dc5ba61631fa922fcaf3bd10d8d0bb6d" Workload="localhost-k8s-calico--apiserver--fdf95c748--msxqc-eth0" Mar 6 02:22:54.825130 containerd[1583]: 2026-03-06 02:22:54.598 [INFO][4523] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="dcf36ad51153ea79a5c893b14bc25ad6dc5ba61631fa922fcaf3bd10d8d0bb6d" 
HandleID="k8s-pod-network.dcf36ad51153ea79a5c893b14bc25ad6dc5ba61631fa922fcaf3bd10d8d0bb6d" Workload="localhost-k8s-calico--apiserver--fdf95c748--msxqc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fb90), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-fdf95c748-msxqc", "timestamp":"2026-03-06 02:22:54.588398188 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000fe2c0)} Mar 6 02:22:54.825130 containerd[1583]: 2026-03-06 02:22:54.599 [INFO][4523] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 02:22:54.825130 containerd[1583]: 2026-03-06 02:22:54.656 [INFO][4523] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 02:22:54.825130 containerd[1583]: 2026-03-06 02:22:54.657 [INFO][4523] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 02:22:54.825130 containerd[1583]: 2026-03-06 02:22:54.696 [INFO][4523] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.dcf36ad51153ea79a5c893b14bc25ad6dc5ba61631fa922fcaf3bd10d8d0bb6d" host="localhost" Mar 6 02:22:54.825130 containerd[1583]: 2026-03-06 02:22:54.719 [INFO][4523] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 02:22:54.825130 containerd[1583]: 2026-03-06 02:22:54.735 [INFO][4523] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 02:22:54.825130 containerd[1583]: 2026-03-06 02:22:54.741 [INFO][4523] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 02:22:54.825130 containerd[1583]: 2026-03-06 02:22:54.750 [INFO][4523] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 
02:22:54.825130 containerd[1583]: 2026-03-06 02:22:54.751 [INFO][4523] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dcf36ad51153ea79a5c893b14bc25ad6dc5ba61631fa922fcaf3bd10d8d0bb6d" host="localhost" Mar 6 02:22:54.825130 containerd[1583]: 2026-03-06 02:22:54.755 [INFO][4523] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.dcf36ad51153ea79a5c893b14bc25ad6dc5ba61631fa922fcaf3bd10d8d0bb6d Mar 6 02:22:54.825130 containerd[1583]: 2026-03-06 02:22:54.767 [INFO][4523] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dcf36ad51153ea79a5c893b14bc25ad6dc5ba61631fa922fcaf3bd10d8d0bb6d" host="localhost" Mar 6 02:22:54.825130 containerd[1583]: 2026-03-06 02:22:54.775 [INFO][4523] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.dcf36ad51153ea79a5c893b14bc25ad6dc5ba61631fa922fcaf3bd10d8d0bb6d" host="localhost" Mar 6 02:22:54.825130 containerd[1583]: 2026-03-06 02:22:54.775 [INFO][4523] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.dcf36ad51153ea79a5c893b14bc25ad6dc5ba61631fa922fcaf3bd10d8d0bb6d" host="localhost" Mar 6 02:22:54.825130 containerd[1583]: 2026-03-06 02:22:54.775 [INFO][4523] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 6 02:22:54.825130 containerd[1583]: 2026-03-06 02:22:54.775 [INFO][4523] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="dcf36ad51153ea79a5c893b14bc25ad6dc5ba61631fa922fcaf3bd10d8d0bb6d" HandleID="k8s-pod-network.dcf36ad51153ea79a5c893b14bc25ad6dc5ba61631fa922fcaf3bd10d8d0bb6d" Workload="localhost-k8s-calico--apiserver--fdf95c748--msxqc-eth0" Mar 6 02:22:54.825969 containerd[1583]: 2026-03-06 02:22:54.781 [INFO][4500] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dcf36ad51153ea79a5c893b14bc25ad6dc5ba61631fa922fcaf3bd10d8d0bb6d" Namespace="calico-system" Pod="calico-apiserver-fdf95c748-msxqc" WorkloadEndpoint="localhost-k8s-calico--apiserver--fdf95c748--msxqc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--fdf95c748--msxqc-eth0", GenerateName:"calico-apiserver-fdf95c748-", Namespace:"calico-system", SelfLink:"", UID:"eaeab074-b6a5-4262-9696-6e61476a4648", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 2, 22, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fdf95c748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-fdf95c748-msxqc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calieb9b9a07401", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 02:22:54.825969 containerd[1583]: 2026-03-06 02:22:54.781 [INFO][4500] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="dcf36ad51153ea79a5c893b14bc25ad6dc5ba61631fa922fcaf3bd10d8d0bb6d" Namespace="calico-system" Pod="calico-apiserver-fdf95c748-msxqc" WorkloadEndpoint="localhost-k8s-calico--apiserver--fdf95c748--msxqc-eth0" Mar 6 02:22:54.825969 containerd[1583]: 2026-03-06 02:22:54.781 [INFO][4500] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieb9b9a07401 ContainerID="dcf36ad51153ea79a5c893b14bc25ad6dc5ba61631fa922fcaf3bd10d8d0bb6d" Namespace="calico-system" Pod="calico-apiserver-fdf95c748-msxqc" WorkloadEndpoint="localhost-k8s-calico--apiserver--fdf95c748--msxqc-eth0" Mar 6 02:22:54.825969 containerd[1583]: 2026-03-06 02:22:54.787 [INFO][4500] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dcf36ad51153ea79a5c893b14bc25ad6dc5ba61631fa922fcaf3bd10d8d0bb6d" Namespace="calico-system" Pod="calico-apiserver-fdf95c748-msxqc" WorkloadEndpoint="localhost-k8s-calico--apiserver--fdf95c748--msxqc-eth0" Mar 6 02:22:54.825969 containerd[1583]: 2026-03-06 02:22:54.789 [INFO][4500] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dcf36ad51153ea79a5c893b14bc25ad6dc5ba61631fa922fcaf3bd10d8d0bb6d" Namespace="calico-system" Pod="calico-apiserver-fdf95c748-msxqc" WorkloadEndpoint="localhost-k8s-calico--apiserver--fdf95c748--msxqc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--fdf95c748--msxqc-eth0", GenerateName:"calico-apiserver-fdf95c748-", Namespace:"calico-system", SelfLink:"", 
UID:"eaeab074-b6a5-4262-9696-6e61476a4648", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 2, 22, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fdf95c748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dcf36ad51153ea79a5c893b14bc25ad6dc5ba61631fa922fcaf3bd10d8d0bb6d", Pod:"calico-apiserver-fdf95c748-msxqc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calieb9b9a07401", MAC:"b2:9c:32:64:61:8a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 02:22:54.825969 containerd[1583]: 2026-03-06 02:22:54.819 [INFO][4500] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dcf36ad51153ea79a5c893b14bc25ad6dc5ba61631fa922fcaf3bd10d8d0bb6d" Namespace="calico-system" Pod="calico-apiserver-fdf95c748-msxqc" WorkloadEndpoint="localhost-k8s-calico--apiserver--fdf95c748--msxqc-eth0" Mar 6 02:22:54.835524 systemd-resolved[1394]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 02:22:54.869104 containerd[1583]: time="2026-03-06T02:22:54.868239682Z" level=info msg="connecting to shim dcf36ad51153ea79a5c893b14bc25ad6dc5ba61631fa922fcaf3bd10d8d0bb6d" 
address="unix:///run/containerd/s/72219a0e40b4c826002ec757f58f570c455ea3651d91d7ae33603787dc2f4905" namespace=k8s.io protocol=ttrpc version=3 Mar 6 02:22:54.917426 systemd[1]: Started cri-containerd-dcf36ad51153ea79a5c893b14bc25ad6dc5ba61631fa922fcaf3bd10d8d0bb6d.scope - libcontainer container dcf36ad51153ea79a5c893b14bc25ad6dc5ba61631fa922fcaf3bd10d8d0bb6d. Mar 6 02:22:54.928845 containerd[1583]: time="2026-03-06T02:22:54.928774531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-64f8g,Uid:f469a10c-9545-4124-9867-97946c582789,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ddc2b1958a14e90391811f27e62e7220f73487194aa3f9a82c274418634bc79\"" Mar 6 02:22:54.930683 kubelet[2754]: E0306 02:22:54.930577 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:22:54.943348 containerd[1583]: time="2026-03-06T02:22:54.943281611Z" level=info msg="CreateContainer within sandbox \"4ddc2b1958a14e90391811f27e62e7220f73487194aa3f9a82c274418634bc79\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 6 02:22:54.966711 systemd-resolved[1394]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 02:22:54.972796 containerd[1583]: time="2026-03-06T02:22:54.972738991Z" level=info msg="Container 11b7de8dcdcdde0a42319b57476ae4f887865d84c96330ff207dd0ab82606d0d: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:22:54.985622 containerd[1583]: time="2026-03-06T02:22:54.985515514Z" level=info msg="CreateContainer within sandbox \"4ddc2b1958a14e90391811f27e62e7220f73487194aa3f9a82c274418634bc79\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"11b7de8dcdcdde0a42319b57476ae4f887865d84c96330ff207dd0ab82606d0d\"" Mar 6 02:22:54.987037 containerd[1583]: time="2026-03-06T02:22:54.986948378Z" level=info msg="StartContainer for 
\"11b7de8dcdcdde0a42319b57476ae4f887865d84c96330ff207dd0ab82606d0d\"" Mar 6 02:22:54.988839 containerd[1583]: time="2026-03-06T02:22:54.988617474Z" level=info msg="connecting to shim 11b7de8dcdcdde0a42319b57476ae4f887865d84c96330ff207dd0ab82606d0d" address="unix:///run/containerd/s/747226a867cf01d7bbc3e1983d86380477198c5e2963512679c37e79255c0266" protocol=ttrpc version=3 Mar 6 02:22:55.025397 systemd[1]: Started cri-containerd-11b7de8dcdcdde0a42319b57476ae4f887865d84c96330ff207dd0ab82606d0d.scope - libcontainer container 11b7de8dcdcdde0a42319b57476ae4f887865d84c96330ff207dd0ab82606d0d. Mar 6 02:22:55.032142 containerd[1583]: time="2026-03-06T02:22:55.031994700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fdf95c748-msxqc,Uid:eaeab074-b6a5-4262-9696-6e61476a4648,Namespace:calico-system,Attempt:0,} returns sandbox id \"dcf36ad51153ea79a5c893b14bc25ad6dc5ba61631fa922fcaf3bd10d8d0bb6d\"" Mar 6 02:22:55.051804 containerd[1583]: time="2026-03-06T02:22:55.051741911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 6 02:22:55.086021 containerd[1583]: time="2026-03-06T02:22:55.085781556Z" level=info msg="StartContainer for \"11b7de8dcdcdde0a42319b57476ae4f887865d84c96330ff207dd0ab82606d0d\" returns successfully" Mar 6 02:22:55.396031 containerd[1583]: time="2026-03-06T02:22:55.395876091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c6d79556b-hmx8m,Uid:7fb5e21c-5bf4-4f7d-a9eb-5209ee5ef76e,Namespace:calico-system,Attempt:0,}" Mar 6 02:22:55.622699 systemd-networkd[1479]: caliccb3b0103f8: Link UP Mar 6 02:22:55.623186 systemd-networkd[1479]: caliccb3b0103f8: Gained carrier Mar 6 02:22:55.655973 containerd[1583]: 2026-03-06 02:22:55.444 [INFO][4702] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--c6d79556b--hmx8m-eth0 calico-kube-controllers-c6d79556b- calico-system 
7fb5e21c-5bf4-4f7d-a9eb-5209ee5ef76e 874 0 2026-03-06 02:22:22 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:c6d79556b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-c6d79556b-hmx8m eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliccb3b0103f8 [] [] }} ContainerID="b7188461f0af6af2eae6361d9d3097bb8e3ad4236a3ab60097b3ebb55adf5f0e" Namespace="calico-system" Pod="calico-kube-controllers-c6d79556b-hmx8m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6d79556b--hmx8m-" Mar 6 02:22:55.655973 containerd[1583]: 2026-03-06 02:22:55.444 [INFO][4702] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b7188461f0af6af2eae6361d9d3097bb8e3ad4236a3ab60097b3ebb55adf5f0e" Namespace="calico-system" Pod="calico-kube-controllers-c6d79556b-hmx8m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6d79556b--hmx8m-eth0" Mar 6 02:22:55.655973 containerd[1583]: 2026-03-06 02:22:55.497 [INFO][4716] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b7188461f0af6af2eae6361d9d3097bb8e3ad4236a3ab60097b3ebb55adf5f0e" HandleID="k8s-pod-network.b7188461f0af6af2eae6361d9d3097bb8e3ad4236a3ab60097b3ebb55adf5f0e" Workload="localhost-k8s-calico--kube--controllers--c6d79556b--hmx8m-eth0" Mar 6 02:22:55.655973 containerd[1583]: 2026-03-06 02:22:55.515 [INFO][4716] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b7188461f0af6af2eae6361d9d3097bb8e3ad4236a3ab60097b3ebb55adf5f0e" HandleID="k8s-pod-network.b7188461f0af6af2eae6361d9d3097bb8e3ad4236a3ab60097b3ebb55adf5f0e" Workload="localhost-k8s-calico--kube--controllers--c6d79556b--hmx8m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000503930), Attrs:map[string]string{"namespace":"calico-system", 
"node":"localhost", "pod":"calico-kube-controllers-c6d79556b-hmx8m", "timestamp":"2026-03-06 02:22:55.497777819 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001731e0)} Mar 6 02:22:55.655973 containerd[1583]: 2026-03-06 02:22:55.515 [INFO][4716] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 02:22:55.655973 containerd[1583]: 2026-03-06 02:22:55.515 [INFO][4716] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 02:22:55.655973 containerd[1583]: 2026-03-06 02:22:55.515 [INFO][4716] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 02:22:55.655973 containerd[1583]: 2026-03-06 02:22:55.520 [INFO][4716] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b7188461f0af6af2eae6361d9d3097bb8e3ad4236a3ab60097b3ebb55adf5f0e" host="localhost" Mar 6 02:22:55.655973 containerd[1583]: 2026-03-06 02:22:55.531 [INFO][4716] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 02:22:55.655973 containerd[1583]: 2026-03-06 02:22:55.540 [INFO][4716] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 02:22:55.655973 containerd[1583]: 2026-03-06 02:22:55.547 [INFO][4716] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 02:22:55.655973 containerd[1583]: 2026-03-06 02:22:55.561 [INFO][4716] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 02:22:55.655973 containerd[1583]: 2026-03-06 02:22:55.562 [INFO][4716] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b7188461f0af6af2eae6361d9d3097bb8e3ad4236a3ab60097b3ebb55adf5f0e" host="localhost" Mar 6 02:22:55.655973 
containerd[1583]: 2026-03-06 02:22:55.565 [INFO][4716] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b7188461f0af6af2eae6361d9d3097bb8e3ad4236a3ab60097b3ebb55adf5f0e Mar 6 02:22:55.655973 containerd[1583]: 2026-03-06 02:22:55.572 [INFO][4716] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b7188461f0af6af2eae6361d9d3097bb8e3ad4236a3ab60097b3ebb55adf5f0e" host="localhost" Mar 6 02:22:55.655973 containerd[1583]: 2026-03-06 02:22:55.589 [INFO][4716] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.b7188461f0af6af2eae6361d9d3097bb8e3ad4236a3ab60097b3ebb55adf5f0e" host="localhost" Mar 6 02:22:55.655973 containerd[1583]: 2026-03-06 02:22:55.590 [INFO][4716] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.b7188461f0af6af2eae6361d9d3097bb8e3ad4236a3ab60097b3ebb55adf5f0e" host="localhost" Mar 6 02:22:55.655973 containerd[1583]: 2026-03-06 02:22:55.590 [INFO][4716] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 6 02:22:55.655973 containerd[1583]: 2026-03-06 02:22:55.590 [INFO][4716] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="b7188461f0af6af2eae6361d9d3097bb8e3ad4236a3ab60097b3ebb55adf5f0e" HandleID="k8s-pod-network.b7188461f0af6af2eae6361d9d3097bb8e3ad4236a3ab60097b3ebb55adf5f0e" Workload="localhost-k8s-calico--kube--controllers--c6d79556b--hmx8m-eth0" Mar 6 02:22:55.656728 containerd[1583]: 2026-03-06 02:22:55.618 [INFO][4702] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b7188461f0af6af2eae6361d9d3097bb8e3ad4236a3ab60097b3ebb55adf5f0e" Namespace="calico-system" Pod="calico-kube-controllers-c6d79556b-hmx8m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6d79556b--hmx8m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c6d79556b--hmx8m-eth0", GenerateName:"calico-kube-controllers-c6d79556b-", Namespace:"calico-system", SelfLink:"", UID:"7fb5e21c-5bf4-4f7d-a9eb-5209ee5ef76e", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 2, 22, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c6d79556b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-c6d79556b-hmx8m", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliccb3b0103f8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 02:22:55.656728 containerd[1583]: 2026-03-06 02:22:55.618 [INFO][4702] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="b7188461f0af6af2eae6361d9d3097bb8e3ad4236a3ab60097b3ebb55adf5f0e" Namespace="calico-system" Pod="calico-kube-controllers-c6d79556b-hmx8m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6d79556b--hmx8m-eth0" Mar 6 02:22:55.656728 containerd[1583]: 2026-03-06 02:22:55.618 [INFO][4702] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliccb3b0103f8 ContainerID="b7188461f0af6af2eae6361d9d3097bb8e3ad4236a3ab60097b3ebb55adf5f0e" Namespace="calico-system" Pod="calico-kube-controllers-c6d79556b-hmx8m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6d79556b--hmx8m-eth0" Mar 6 02:22:55.656728 containerd[1583]: 2026-03-06 02:22:55.626 [INFO][4702] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b7188461f0af6af2eae6361d9d3097bb8e3ad4236a3ab60097b3ebb55adf5f0e" Namespace="calico-system" Pod="calico-kube-controllers-c6d79556b-hmx8m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6d79556b--hmx8m-eth0" Mar 6 02:22:55.656728 containerd[1583]: 2026-03-06 02:22:55.627 [INFO][4702] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b7188461f0af6af2eae6361d9d3097bb8e3ad4236a3ab60097b3ebb55adf5f0e" Namespace="calico-system" Pod="calico-kube-controllers-c6d79556b-hmx8m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6d79556b--hmx8m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c6d79556b--hmx8m-eth0", GenerateName:"calico-kube-controllers-c6d79556b-", Namespace:"calico-system", SelfLink:"", UID:"7fb5e21c-5bf4-4f7d-a9eb-5209ee5ef76e", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 2, 22, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c6d79556b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b7188461f0af6af2eae6361d9d3097bb8e3ad4236a3ab60097b3ebb55adf5f0e", Pod:"calico-kube-controllers-c6d79556b-hmx8m", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliccb3b0103f8", MAC:"82:05:87:2a:07:74", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 02:22:55.656728 containerd[1583]: 2026-03-06 02:22:55.649 [INFO][4702] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b7188461f0af6af2eae6361d9d3097bb8e3ad4236a3ab60097b3ebb55adf5f0e" Namespace="calico-system" Pod="calico-kube-controllers-c6d79556b-hmx8m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6d79556b--hmx8m-eth0" Mar 6 02:22:55.771446 containerd[1583]: time="2026-03-06T02:22:55.771371482Z" level=info msg="connecting to shim 
b7188461f0af6af2eae6361d9d3097bb8e3ad4236a3ab60097b3ebb55adf5f0e" address="unix:///run/containerd/s/5d141d8ba9e7739459a65baecbd3a71a40f2803f884e7f302e9b283dac473079" namespace=k8s.io protocol=ttrpc version=3 Mar 6 02:22:55.837202 systemd[1]: Started cri-containerd-b7188461f0af6af2eae6361d9d3097bb8e3ad4236a3ab60097b3ebb55adf5f0e.scope - libcontainer container b7188461f0af6af2eae6361d9d3097bb8e3ad4236a3ab60097b3ebb55adf5f0e. Mar 6 02:22:55.871921 systemd-resolved[1394]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 02:22:55.887272 kubelet[2754]: E0306 02:22:55.887231 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:22:55.941713 kubelet[2754]: I0306 02:22:55.941322 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-64f8g" podStartSLOduration=46.941295523 podStartE2EDuration="46.941295523s" podCreationTimestamp="2026-03-06 02:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 02:22:55.933600465 +0000 UTC m=+50.805212751" watchObservedRunningTime="2026-03-06 02:22:55.941295523 +0000 UTC m=+50.812907820" Mar 6 02:22:55.972760 containerd[1583]: time="2026-03-06T02:22:55.972561439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c6d79556b-hmx8m,Uid:7fb5e21c-5bf4-4f7d-a9eb-5209ee5ef76e,Namespace:calico-system,Attempt:0,} returns sandbox id \"b7188461f0af6af2eae6361d9d3097bb8e3ad4236a3ab60097b3ebb55adf5f0e\"" Mar 6 02:22:56.048489 systemd-networkd[1479]: cali05b4baec653: Gained IPv6LL Mar 6 02:22:56.113592 systemd-networkd[1479]: calieb9b9a07401: Gained IPv6LL Mar 6 02:22:56.403425 kubelet[2754]: E0306 02:22:56.402933 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:22:56.431924 containerd[1583]: time="2026-03-06T02:22:56.430450109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fdf95c748-z4qz4,Uid:1556a76d-b801-4f94-85d1-3c1662a146b7,Namespace:calico-system,Attempt:0,}" Mar 6 02:22:56.464333 containerd[1583]: time="2026-03-06T02:22:56.464268622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-csmx6,Uid:97416aaf-d20d-44e3-9e01-e71854f14d41,Namespace:kube-system,Attempt:0,}" Mar 6 02:22:56.688423 systemd-networkd[1479]: caliccb3b0103f8: Gained IPv6LL Mar 6 02:22:56.801865 systemd-networkd[1479]: calidf657671a76: Link UP Mar 6 02:22:56.814542 systemd-networkd[1479]: calidf657671a76: Gained carrier Mar 6 02:22:56.853358 containerd[1583]: 2026-03-06 02:22:56.574 [INFO][4815] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--csmx6-eth0 coredns-674b8bbfcf- kube-system 97416aaf-d20d-44e3-9e01-e71854f14d41 879 0 2026-03-06 02:22:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-csmx6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidf657671a76 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5c4947933033c491c17ad41bd00b34833dce0626972ac40e7fcb71d5bcfda597" Namespace="kube-system" Pod="coredns-674b8bbfcf-csmx6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--csmx6-" Mar 6 02:22:56.853358 containerd[1583]: 2026-03-06 02:22:56.576 [INFO][4815] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5c4947933033c491c17ad41bd00b34833dce0626972ac40e7fcb71d5bcfda597" Namespace="kube-system" Pod="coredns-674b8bbfcf-csmx6" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--csmx6-eth0" Mar 6 02:22:56.853358 containerd[1583]: 2026-03-06 02:22:56.683 [INFO][4834] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5c4947933033c491c17ad41bd00b34833dce0626972ac40e7fcb71d5bcfda597" HandleID="k8s-pod-network.5c4947933033c491c17ad41bd00b34833dce0626972ac40e7fcb71d5bcfda597" Workload="localhost-k8s-coredns--674b8bbfcf--csmx6-eth0" Mar 6 02:22:56.853358 containerd[1583]: 2026-03-06 02:22:56.699 [INFO][4834] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5c4947933033c491c17ad41bd00b34833dce0626972ac40e7fcb71d5bcfda597" HandleID="k8s-pod-network.5c4947933033c491c17ad41bd00b34833dce0626972ac40e7fcb71d5bcfda597" Workload="localhost-k8s-coredns--674b8bbfcf--csmx6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fe80), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-csmx6", "timestamp":"2026-03-06 02:22:56.683314207 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000443b80)} Mar 6 02:22:56.853358 containerd[1583]: 2026-03-06 02:22:56.699 [INFO][4834] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 02:22:56.853358 containerd[1583]: 2026-03-06 02:22:56.699 [INFO][4834] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 02:22:56.853358 containerd[1583]: 2026-03-06 02:22:56.699 [INFO][4834] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 02:22:56.853358 containerd[1583]: 2026-03-06 02:22:56.717 [INFO][4834] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5c4947933033c491c17ad41bd00b34833dce0626972ac40e7fcb71d5bcfda597" host="localhost" Mar 6 02:22:56.853358 containerd[1583]: 2026-03-06 02:22:56.731 [INFO][4834] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 02:22:56.853358 containerd[1583]: 2026-03-06 02:22:56.739 [INFO][4834] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 02:22:56.853358 containerd[1583]: 2026-03-06 02:22:56.744 [INFO][4834] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 02:22:56.853358 containerd[1583]: 2026-03-06 02:22:56.753 [INFO][4834] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 02:22:56.853358 containerd[1583]: 2026-03-06 02:22:56.754 [INFO][4834] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5c4947933033c491c17ad41bd00b34833dce0626972ac40e7fcb71d5bcfda597" host="localhost" Mar 6 02:22:56.853358 containerd[1583]: 2026-03-06 02:22:56.760 [INFO][4834] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5c4947933033c491c17ad41bd00b34833dce0626972ac40e7fcb71d5bcfda597 Mar 6 02:22:56.853358 containerd[1583]: 2026-03-06 02:22:56.773 [INFO][4834] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5c4947933033c491c17ad41bd00b34833dce0626972ac40e7fcb71d5bcfda597" host="localhost" Mar 6 02:22:56.853358 containerd[1583]: 2026-03-06 02:22:56.790 [INFO][4834] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.5c4947933033c491c17ad41bd00b34833dce0626972ac40e7fcb71d5bcfda597" host="localhost" Mar 6 02:22:56.853358 containerd[1583]: 2026-03-06 02:22:56.791 [INFO][4834] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.5c4947933033c491c17ad41bd00b34833dce0626972ac40e7fcb71d5bcfda597" host="localhost" Mar 6 02:22:56.853358 containerd[1583]: 2026-03-06 02:22:56.791 [INFO][4834] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 02:22:56.853358 containerd[1583]: 2026-03-06 02:22:56.792 [INFO][4834] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="5c4947933033c491c17ad41bd00b34833dce0626972ac40e7fcb71d5bcfda597" HandleID="k8s-pod-network.5c4947933033c491c17ad41bd00b34833dce0626972ac40e7fcb71d5bcfda597" Workload="localhost-k8s-coredns--674b8bbfcf--csmx6-eth0" Mar 6 02:22:56.855716 containerd[1583]: 2026-03-06 02:22:56.798 [INFO][4815] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5c4947933033c491c17ad41bd00b34833dce0626972ac40e7fcb71d5bcfda597" Namespace="kube-system" Pod="coredns-674b8bbfcf-csmx6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--csmx6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--csmx6-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"97416aaf-d20d-44e3-9e01-e71854f14d41", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 2, 22, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-csmx6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidf657671a76", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 02:22:56.855716 containerd[1583]: 2026-03-06 02:22:56.798 [INFO][4815] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="5c4947933033c491c17ad41bd00b34833dce0626972ac40e7fcb71d5bcfda597" Namespace="kube-system" Pod="coredns-674b8bbfcf-csmx6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--csmx6-eth0" Mar 6 02:22:56.855716 containerd[1583]: 2026-03-06 02:22:56.798 [INFO][4815] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidf657671a76 ContainerID="5c4947933033c491c17ad41bd00b34833dce0626972ac40e7fcb71d5bcfda597" Namespace="kube-system" Pod="coredns-674b8bbfcf-csmx6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--csmx6-eth0" Mar 6 02:22:56.855716 containerd[1583]: 2026-03-06 02:22:56.815 [INFO][4815] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5c4947933033c491c17ad41bd00b34833dce0626972ac40e7fcb71d5bcfda597" Namespace="kube-system" Pod="coredns-674b8bbfcf-csmx6" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--csmx6-eth0" Mar 6 02:22:56.855716 containerd[1583]: 2026-03-06 02:22:56.819 [INFO][4815] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5c4947933033c491c17ad41bd00b34833dce0626972ac40e7fcb71d5bcfda597" Namespace="kube-system" Pod="coredns-674b8bbfcf-csmx6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--csmx6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--csmx6-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"97416aaf-d20d-44e3-9e01-e71854f14d41", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 2, 22, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5c4947933033c491c17ad41bd00b34833dce0626972ac40e7fcb71d5bcfda597", Pod:"coredns-674b8bbfcf-csmx6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidf657671a76", MAC:"c6:97:85:05:80:13", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 02:22:56.855716 containerd[1583]: 2026-03-06 02:22:56.845 [INFO][4815] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5c4947933033c491c17ad41bd00b34833dce0626972ac40e7fcb71d5bcfda597" Namespace="kube-system" Pod="coredns-674b8bbfcf-csmx6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--csmx6-eth0" Mar 6 02:22:56.921472 kubelet[2754]: E0306 02:22:56.921345 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:22:56.954516 containerd[1583]: time="2026-03-06T02:22:56.954401397Z" level=info msg="connecting to shim 5c4947933033c491c17ad41bd00b34833dce0626972ac40e7fcb71d5bcfda597" address="unix:///run/containerd/s/48d218e9bccdf46813c5b5f5cff4215789fe231d9f566bcfbdb02133cb4e9c7f" namespace=k8s.io protocol=ttrpc version=3 Mar 6 02:22:56.958625 systemd-networkd[1479]: cali0063d471b45: Link UP Mar 6 02:22:56.960377 systemd-networkd[1479]: cali0063d471b45: Gained carrier Mar 6 02:22:57.016301 containerd[1583]: 2026-03-06 02:22:56.586 [INFO][4805] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--fdf95c748--z4qz4-eth0 calico-apiserver-fdf95c748- calico-system 1556a76d-b801-4f94-85d1-3c1662a146b7 877 0 2026-03-06 02:22:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:fdf95c748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-fdf95c748-z4qz4 eth0 calico-apiserver [] 
[] [kns.calico-system ksa.calico-system.calico-apiserver] cali0063d471b45 [] [] }} ContainerID="d0f671cfc8c9b7a0d58925c787fef66bd9fa009e7be4a0dfed18173bd53d5d77" Namespace="calico-system" Pod="calico-apiserver-fdf95c748-z4qz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--fdf95c748--z4qz4-" Mar 6 02:22:57.016301 containerd[1583]: 2026-03-06 02:22:56.587 [INFO][4805] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d0f671cfc8c9b7a0d58925c787fef66bd9fa009e7be4a0dfed18173bd53d5d77" Namespace="calico-system" Pod="calico-apiserver-fdf95c748-z4qz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--fdf95c748--z4qz4-eth0" Mar 6 02:22:57.016301 containerd[1583]: 2026-03-06 02:22:56.680 [INFO][4840] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d0f671cfc8c9b7a0d58925c787fef66bd9fa009e7be4a0dfed18173bd53d5d77" HandleID="k8s-pod-network.d0f671cfc8c9b7a0d58925c787fef66bd9fa009e7be4a0dfed18173bd53d5d77" Workload="localhost-k8s-calico--apiserver--fdf95c748--z4qz4-eth0" Mar 6 02:22:57.016301 containerd[1583]: 2026-03-06 02:22:56.699 [INFO][4840] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d0f671cfc8c9b7a0d58925c787fef66bd9fa009e7be4a0dfed18173bd53d5d77" HandleID="k8s-pod-network.d0f671cfc8c9b7a0d58925c787fef66bd9fa009e7be4a0dfed18173bd53d5d77" Workload="localhost-k8s-calico--apiserver--fdf95c748--z4qz4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001246b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-fdf95c748-z4qz4", "timestamp":"2026-03-06 02:22:56.680805393 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000fe840)} Mar 6 02:22:57.016301 containerd[1583]: 2026-03-06 02:22:56.699 [INFO][4840] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 02:22:57.016301 containerd[1583]: 2026-03-06 02:22:56.793 [INFO][4840] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 02:22:57.016301 containerd[1583]: 2026-03-06 02:22:56.793 [INFO][4840] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 02:22:57.016301 containerd[1583]: 2026-03-06 02:22:56.819 [INFO][4840] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d0f671cfc8c9b7a0d58925c787fef66bd9fa009e7be4a0dfed18173bd53d5d77" host="localhost" Mar 6 02:22:57.016301 containerd[1583]: 2026-03-06 02:22:56.848 [INFO][4840] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 02:22:57.016301 containerd[1583]: 2026-03-06 02:22:56.859 [INFO][4840] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 02:22:57.016301 containerd[1583]: 2026-03-06 02:22:56.864 [INFO][4840] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 02:22:57.016301 containerd[1583]: 2026-03-06 02:22:56.869 [INFO][4840] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 02:22:57.016301 containerd[1583]: 2026-03-06 02:22:56.869 [INFO][4840] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d0f671cfc8c9b7a0d58925c787fef66bd9fa009e7be4a0dfed18173bd53d5d77" host="localhost" Mar 6 02:22:57.016301 containerd[1583]: 2026-03-06 02:22:56.874 [INFO][4840] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d0f671cfc8c9b7a0d58925c787fef66bd9fa009e7be4a0dfed18173bd53d5d77 Mar 6 02:22:57.016301 containerd[1583]: 2026-03-06 02:22:56.880 [INFO][4840] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d0f671cfc8c9b7a0d58925c787fef66bd9fa009e7be4a0dfed18173bd53d5d77" host="localhost" Mar 6 02:22:57.016301 
containerd[1583]: 2026-03-06 02:22:56.896 [INFO][4840] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.d0f671cfc8c9b7a0d58925c787fef66bd9fa009e7be4a0dfed18173bd53d5d77" host="localhost" Mar 6 02:22:57.016301 containerd[1583]: 2026-03-06 02:22:56.898 [INFO][4840] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.d0f671cfc8c9b7a0d58925c787fef66bd9fa009e7be4a0dfed18173bd53d5d77" host="localhost" Mar 6 02:22:57.016301 containerd[1583]: 2026-03-06 02:22:56.910 [INFO][4840] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 02:22:57.016301 containerd[1583]: 2026-03-06 02:22:56.911 [INFO][4840] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="d0f671cfc8c9b7a0d58925c787fef66bd9fa009e7be4a0dfed18173bd53d5d77" HandleID="k8s-pod-network.d0f671cfc8c9b7a0d58925c787fef66bd9fa009e7be4a0dfed18173bd53d5d77" Workload="localhost-k8s-calico--apiserver--fdf95c748--z4qz4-eth0" Mar 6 02:22:57.017261 containerd[1583]: 2026-03-06 02:22:56.929 [INFO][4805] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d0f671cfc8c9b7a0d58925c787fef66bd9fa009e7be4a0dfed18173bd53d5d77" Namespace="calico-system" Pod="calico-apiserver-fdf95c748-z4qz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--fdf95c748--z4qz4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--fdf95c748--z4qz4-eth0", GenerateName:"calico-apiserver-fdf95c748-", Namespace:"calico-system", SelfLink:"", UID:"1556a76d-b801-4f94-85d1-3c1662a146b7", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 2, 22, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"fdf95c748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-fdf95c748-z4qz4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0063d471b45", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 02:22:57.017261 containerd[1583]: 2026-03-06 02:22:56.929 [INFO][4805] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="d0f671cfc8c9b7a0d58925c787fef66bd9fa009e7be4a0dfed18173bd53d5d77" Namespace="calico-system" Pod="calico-apiserver-fdf95c748-z4qz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--fdf95c748--z4qz4-eth0" Mar 6 02:22:57.017261 containerd[1583]: 2026-03-06 02:22:56.929 [INFO][4805] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0063d471b45 ContainerID="d0f671cfc8c9b7a0d58925c787fef66bd9fa009e7be4a0dfed18173bd53d5d77" Namespace="calico-system" Pod="calico-apiserver-fdf95c748-z4qz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--fdf95c748--z4qz4-eth0" Mar 6 02:22:57.017261 containerd[1583]: 2026-03-06 02:22:56.959 [INFO][4805] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d0f671cfc8c9b7a0d58925c787fef66bd9fa009e7be4a0dfed18173bd53d5d77" Namespace="calico-system" Pod="calico-apiserver-fdf95c748-z4qz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--fdf95c748--z4qz4-eth0" Mar 6 02:22:57.017261 
containerd[1583]: 2026-03-06 02:22:56.961 [INFO][4805] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d0f671cfc8c9b7a0d58925c787fef66bd9fa009e7be4a0dfed18173bd53d5d77" Namespace="calico-system" Pod="calico-apiserver-fdf95c748-z4qz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--fdf95c748--z4qz4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--fdf95c748--z4qz4-eth0", GenerateName:"calico-apiserver-fdf95c748-", Namespace:"calico-system", SelfLink:"", UID:"1556a76d-b801-4f94-85d1-3c1662a146b7", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 2, 22, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fdf95c748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d0f671cfc8c9b7a0d58925c787fef66bd9fa009e7be4a0dfed18173bd53d5d77", Pod:"calico-apiserver-fdf95c748-z4qz4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0063d471b45", MAC:"72:ee:08:7c:a6:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 02:22:57.017261 containerd[1583]: 2026-03-06 02:22:56.992 [INFO][4805] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d0f671cfc8c9b7a0d58925c787fef66bd9fa009e7be4a0dfed18173bd53d5d77" Namespace="calico-system" Pod="calico-apiserver-fdf95c748-z4qz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--fdf95c748--z4qz4-eth0" Mar 6 02:22:57.044868 systemd[1]: Started cri-containerd-5c4947933033c491c17ad41bd00b34833dce0626972ac40e7fcb71d5bcfda597.scope - libcontainer container 5c4947933033c491c17ad41bd00b34833dce0626972ac40e7fcb71d5bcfda597. Mar 6 02:22:57.129715 systemd-resolved[1394]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 02:22:57.149195 containerd[1583]: time="2026-03-06T02:22:57.148958271Z" level=info msg="connecting to shim d0f671cfc8c9b7a0d58925c787fef66bd9fa009e7be4a0dfed18173bd53d5d77" address="unix:///run/containerd/s/24e66ae55060207f94307b0dbd5870b20050366dba4ef6b883eafcca661e5e76" namespace=k8s.io protocol=ttrpc version=3 Mar 6 02:22:57.223604 containerd[1583]: time="2026-03-06T02:22:57.222554194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-csmx6,Uid:97416aaf-d20d-44e3-9e01-e71854f14d41,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c4947933033c491c17ad41bd00b34833dce0626972ac40e7fcb71d5bcfda597\"" Mar 6 02:22:57.225408 kubelet[2754]: E0306 02:22:57.224371 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:22:57.239252 containerd[1583]: time="2026-03-06T02:22:57.238900901Z" level=info msg="CreateContainer within sandbox \"5c4947933033c491c17ad41bd00b34833dce0626972ac40e7fcb71d5bcfda597\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 6 02:22:57.240288 systemd[1]: Started cri-containerd-d0f671cfc8c9b7a0d58925c787fef66bd9fa009e7be4a0dfed18173bd53d5d77.scope - libcontainer container d0f671cfc8c9b7a0d58925c787fef66bd9fa009e7be4a0dfed18173bd53d5d77. 
Mar 6 02:22:57.259617 containerd[1583]: time="2026-03-06T02:22:57.259512679Z" level=info msg="Container ca764b8a18f681e08ce4d56069573c0ccc56b2ecf03e0703702468d8994357ec: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:22:57.275106 containerd[1583]: time="2026-03-06T02:22:57.274798579Z" level=info msg="CreateContainer within sandbox \"5c4947933033c491c17ad41bd00b34833dce0626972ac40e7fcb71d5bcfda597\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ca764b8a18f681e08ce4d56069573c0ccc56b2ecf03e0703702468d8994357ec\"" Mar 6 02:22:57.277162 containerd[1583]: time="2026-03-06T02:22:57.277031326Z" level=info msg="StartContainer for \"ca764b8a18f681e08ce4d56069573c0ccc56b2ecf03e0703702468d8994357ec\"" Mar 6 02:22:57.281380 containerd[1583]: time="2026-03-06T02:22:57.280919894Z" level=info msg="connecting to shim ca764b8a18f681e08ce4d56069573c0ccc56b2ecf03e0703702468d8994357ec" address="unix:///run/containerd/s/48d218e9bccdf46813c5b5f5cff4215789fe231d9f566bcfbdb02133cb4e9c7f" protocol=ttrpc version=3 Mar 6 02:22:57.287308 systemd-resolved[1394]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 02:22:57.334449 systemd[1]: Started cri-containerd-ca764b8a18f681e08ce4d56069573c0ccc56b2ecf03e0703702468d8994357ec.scope - libcontainer container ca764b8a18f681e08ce4d56069573c0ccc56b2ecf03e0703702468d8994357ec. 
Mar 6 02:22:57.390603 containerd[1583]: time="2026-03-06T02:22:57.390488061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fdf95c748-z4qz4,Uid:1556a76d-b801-4f94-85d1-3c1662a146b7,Namespace:calico-system,Attempt:0,} returns sandbox id \"d0f671cfc8c9b7a0d58925c787fef66bd9fa009e7be4a0dfed18173bd53d5d77\"" Mar 6 02:22:57.509557 containerd[1583]: time="2026-03-06T02:22:57.508867072Z" level=info msg="StartContainer for \"ca764b8a18f681e08ce4d56069573c0ccc56b2ecf03e0703702468d8994357ec\" returns successfully" Mar 6 02:22:57.926678 kubelet[2754]: E0306 02:22:57.926587 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:22:57.927381 kubelet[2754]: E0306 02:22:57.927106 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:22:57.938384 containerd[1583]: time="2026-03-06T02:22:57.938296224Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 6 02:22:57.940313 containerd[1583]: time="2026-03-06T02:22:57.940252602Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:22:57.943147 containerd[1583]: time="2026-03-06T02:22:57.942921926Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:22:57.947797 containerd[1583]: time="2026-03-06T02:22:57.946459130Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:22:57.947797 
containerd[1583]: time="2026-03-06T02:22:57.947580324Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 2.895780566s" Mar 6 02:22:57.947797 containerd[1583]: time="2026-03-06T02:22:57.947616892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 6 02:22:57.952229 kubelet[2754]: I0306 02:22:57.950620 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-csmx6" podStartSLOduration=48.950598171 podStartE2EDuration="48.950598171s" podCreationTimestamp="2026-03-06 02:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 02:22:57.950499822 +0000 UTC m=+52.822112108" watchObservedRunningTime="2026-03-06 02:22:57.950598171 +0000 UTC m=+52.822210457" Mar 6 02:22:57.954488 containerd[1583]: time="2026-03-06T02:22:57.954303079Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 6 02:22:57.961925 containerd[1583]: time="2026-03-06T02:22:57.961860378Z" level=info msg="CreateContainer within sandbox \"dcf36ad51153ea79a5c893b14bc25ad6dc5ba61631fa922fcaf3bd10d8d0bb6d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 6 02:22:57.991285 containerd[1583]: time="2026-03-06T02:22:57.988891485Z" level=info msg="Container 6404c493afcbd7996587676407d318ed4c5f764851c87511cf26e6dfe69d6bdb: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:22:57.999571 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount346345918.mount: Deactivated 
successfully. Mar 6 02:22:58.040936 containerd[1583]: time="2026-03-06T02:22:58.040835617Z" level=info msg="CreateContainer within sandbox \"dcf36ad51153ea79a5c893b14bc25ad6dc5ba61631fa922fcaf3bd10d8d0bb6d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6404c493afcbd7996587676407d318ed4c5f764851c87511cf26e6dfe69d6bdb\"" Mar 6 02:22:58.043810 containerd[1583]: time="2026-03-06T02:22:58.041946969Z" level=info msg="StartContainer for \"6404c493afcbd7996587676407d318ed4c5f764851c87511cf26e6dfe69d6bdb\"" Mar 6 02:22:58.044042 containerd[1583]: time="2026-03-06T02:22:58.044011363Z" level=info msg="connecting to shim 6404c493afcbd7996587676407d318ed4c5f764851c87511cf26e6dfe69d6bdb" address="unix:///run/containerd/s/72219a0e40b4c826002ec757f58f570c455ea3651d91d7ae33603787dc2f4905" protocol=ttrpc version=3 Mar 6 02:22:58.122532 systemd[1]: Started cri-containerd-6404c493afcbd7996587676407d318ed4c5f764851c87511cf26e6dfe69d6bdb.scope - libcontainer container 6404c493afcbd7996587676407d318ed4c5f764851c87511cf26e6dfe69d6bdb. 
Mar 6 02:22:58.231213 containerd[1583]: time="2026-03-06T02:22:58.230893470Z" level=info msg="StartContainer for \"6404c493afcbd7996587676407d318ed4c5f764851c87511cf26e6dfe69d6bdb\" returns successfully" Mar 6 02:22:58.395323 containerd[1583]: time="2026-03-06T02:22:58.395165509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-rmjwl,Uid:ae401966-c657-4b7e-b0b4-3287b83c8954,Namespace:calico-system,Attempt:0,}" Mar 6 02:22:58.418622 systemd-networkd[1479]: cali0063d471b45: Gained IPv6LL Mar 6 02:22:58.736696 systemd-networkd[1479]: calidf657671a76: Gained IPv6LL Mar 6 02:22:58.896036 systemd-networkd[1479]: calicab8b0e776c: Link UP Mar 6 02:22:58.906040 systemd-networkd[1479]: calicab8b0e776c: Gained carrier Mar 6 02:22:58.937315 kubelet[2754]: E0306 02:22:58.936822 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:22:58.957000 containerd[1583]: 2026-03-06 02:22:58.553 [INFO][5073] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--5b85766d88--rmjwl-eth0 goldmane-5b85766d88- calico-system ae401966-c657-4b7e-b0b4-3287b83c8954 876 0 2026-03-06 02:22:22 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-5b85766d88-rmjwl eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calicab8b0e776c [] [] }} ContainerID="bf55e236a771377403ead9f91a1652adc5d5d9a41a6b50dce655f1dee278e090" Namespace="calico-system" Pod="goldmane-5b85766d88-rmjwl" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--rmjwl-" Mar 6 02:22:58.957000 containerd[1583]: 2026-03-06 02:22:58.553 [INFO][5073] cni-plugin/k8s.go 74: Extracted identifiers for 
CmdAddK8s ContainerID="bf55e236a771377403ead9f91a1652adc5d5d9a41a6b50dce655f1dee278e090" Namespace="calico-system" Pod="goldmane-5b85766d88-rmjwl" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--rmjwl-eth0" Mar 6 02:22:58.957000 containerd[1583]: 2026-03-06 02:22:58.630 [INFO][5087] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bf55e236a771377403ead9f91a1652adc5d5d9a41a6b50dce655f1dee278e090" HandleID="k8s-pod-network.bf55e236a771377403ead9f91a1652adc5d5d9a41a6b50dce655f1dee278e090" Workload="localhost-k8s-goldmane--5b85766d88--rmjwl-eth0" Mar 6 02:22:58.957000 containerd[1583]: 2026-03-06 02:22:58.644 [INFO][5087] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="bf55e236a771377403ead9f91a1652adc5d5d9a41a6b50dce655f1dee278e090" HandleID="k8s-pod-network.bf55e236a771377403ead9f91a1652adc5d5d9a41a6b50dce655f1dee278e090" Workload="localhost-k8s-goldmane--5b85766d88--rmjwl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000582a80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-5b85766d88-rmjwl", "timestamp":"2026-03-06 02:22:58.630199787 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001ae580)} Mar 6 02:22:58.957000 containerd[1583]: 2026-03-06 02:22:58.644 [INFO][5087] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 02:22:58.957000 containerd[1583]: 2026-03-06 02:22:58.645 [INFO][5087] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 02:22:58.957000 containerd[1583]: 2026-03-06 02:22:58.645 [INFO][5087] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 02:22:58.957000 containerd[1583]: 2026-03-06 02:22:58.654 [INFO][5087] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.bf55e236a771377403ead9f91a1652adc5d5d9a41a6b50dce655f1dee278e090" host="localhost" Mar 6 02:22:58.957000 containerd[1583]: 2026-03-06 02:22:58.679 [INFO][5087] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 02:22:58.957000 containerd[1583]: 2026-03-06 02:22:58.757 [INFO][5087] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 02:22:58.957000 containerd[1583]: 2026-03-06 02:22:58.769 [INFO][5087] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 02:22:58.957000 containerd[1583]: 2026-03-06 02:22:58.789 [INFO][5087] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 02:22:58.957000 containerd[1583]: 2026-03-06 02:22:58.790 [INFO][5087] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bf55e236a771377403ead9f91a1652adc5d5d9a41a6b50dce655f1dee278e090" host="localhost" Mar 6 02:22:58.957000 containerd[1583]: 2026-03-06 02:22:58.805 [INFO][5087] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.bf55e236a771377403ead9f91a1652adc5d5d9a41a6b50dce655f1dee278e090 Mar 6 02:22:58.957000 containerd[1583]: 2026-03-06 02:22:58.840 [INFO][5087] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bf55e236a771377403ead9f91a1652adc5d5d9a41a6b50dce655f1dee278e090" host="localhost" Mar 6 02:22:58.957000 containerd[1583]: 2026-03-06 02:22:58.866 [INFO][5087] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.bf55e236a771377403ead9f91a1652adc5d5d9a41a6b50dce655f1dee278e090" host="localhost" Mar 6 02:22:58.957000 containerd[1583]: 2026-03-06 02:22:58.866 [INFO][5087] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.bf55e236a771377403ead9f91a1652adc5d5d9a41a6b50dce655f1dee278e090" host="localhost" Mar 6 02:22:58.957000 containerd[1583]: 2026-03-06 02:22:58.866 [INFO][5087] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 02:22:58.957000 containerd[1583]: 2026-03-06 02:22:58.866 [INFO][5087] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="bf55e236a771377403ead9f91a1652adc5d5d9a41a6b50dce655f1dee278e090" HandleID="k8s-pod-network.bf55e236a771377403ead9f91a1652adc5d5d9a41a6b50dce655f1dee278e090" Workload="localhost-k8s-goldmane--5b85766d88--rmjwl-eth0" Mar 6 02:22:58.960006 containerd[1583]: 2026-03-06 02:22:58.875 [INFO][5073] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bf55e236a771377403ead9f91a1652adc5d5d9a41a6b50dce655f1dee278e090" Namespace="calico-system" Pod="goldmane-5b85766d88-rmjwl" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--rmjwl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--rmjwl-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"ae401966-c657-4b7e-b0b4-3287b83c8954", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 2, 22, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-5b85766d88-rmjwl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicab8b0e776c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 02:22:58.960006 containerd[1583]: 2026-03-06 02:22:58.875 [INFO][5073] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="bf55e236a771377403ead9f91a1652adc5d5d9a41a6b50dce655f1dee278e090" Namespace="calico-system" Pod="goldmane-5b85766d88-rmjwl" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--rmjwl-eth0" Mar 6 02:22:58.960006 containerd[1583]: 2026-03-06 02:22:58.875 [INFO][5073] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicab8b0e776c ContainerID="bf55e236a771377403ead9f91a1652adc5d5d9a41a6b50dce655f1dee278e090" Namespace="calico-system" Pod="goldmane-5b85766d88-rmjwl" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--rmjwl-eth0" Mar 6 02:22:58.960006 containerd[1583]: 2026-03-06 02:22:58.907 [INFO][5073] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bf55e236a771377403ead9f91a1652adc5d5d9a41a6b50dce655f1dee278e090" Namespace="calico-system" Pod="goldmane-5b85766d88-rmjwl" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--rmjwl-eth0" Mar 6 02:22:58.960006 containerd[1583]: 2026-03-06 02:22:58.913 [INFO][5073] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bf55e236a771377403ead9f91a1652adc5d5d9a41a6b50dce655f1dee278e090" Namespace="calico-system" Pod="goldmane-5b85766d88-rmjwl" 
WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--rmjwl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--rmjwl-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"ae401966-c657-4b7e-b0b4-3287b83c8954", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 2, 22, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bf55e236a771377403ead9f91a1652adc5d5d9a41a6b50dce655f1dee278e090", Pod:"goldmane-5b85766d88-rmjwl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicab8b0e776c", MAC:"56:8a:17:ed:e5:3d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 02:22:58.960006 containerd[1583]: 2026-03-06 02:22:58.947 [INFO][5073] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bf55e236a771377403ead9f91a1652adc5d5d9a41a6b50dce655f1dee278e090" Namespace="calico-system" Pod="goldmane-5b85766d88-rmjwl" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--rmjwl-eth0" Mar 6 02:22:58.970162 kubelet[2754]: I0306 02:22:58.969661 2754 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="calico-system/calico-apiserver-fdf95c748-msxqc" podStartSLOduration=34.066330121 podStartE2EDuration="36.969314125s" podCreationTimestamp="2026-03-06 02:22:22 +0000 UTC" firstStartedPulling="2026-03-06 02:22:55.050013335 +0000 UTC m=+49.921625621" lastFinishedPulling="2026-03-06 02:22:57.952997339 +0000 UTC m=+52.824609625" observedRunningTime="2026-03-06 02:22:58.964559523 +0000 UTC m=+53.836171829" watchObservedRunningTime="2026-03-06 02:22:58.969314125 +0000 UTC m=+53.840926431" Mar 6 02:22:59.056104 containerd[1583]: time="2026-03-06T02:22:59.055988743Z" level=info msg="connecting to shim bf55e236a771377403ead9f91a1652adc5d5d9a41a6b50dce655f1dee278e090" address="unix:///run/containerd/s/d4c1cd2a20667ea7216912caf6b1924784bfee3219f4428776c4ec345f6b6620" namespace=k8s.io protocol=ttrpc version=3 Mar 6 02:22:59.139566 systemd[1]: Started cri-containerd-bf55e236a771377403ead9f91a1652adc5d5d9a41a6b50dce655f1dee278e090.scope - libcontainer container bf55e236a771377403ead9f91a1652adc5d5d9a41a6b50dce655f1dee278e090. 
Mar 6 02:22:59.174862 systemd-resolved[1394]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 6 02:22:59.254816 containerd[1583]: time="2026-03-06T02:22:59.254597930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-rmjwl,Uid:ae401966-c657-4b7e-b0b4-3287b83c8954,Namespace:calico-system,Attempt:0,} returns sandbox id \"bf55e236a771377403ead9f91a1652adc5d5d9a41a6b50dce655f1dee278e090\""
Mar 6 02:22:59.967504 kubelet[2754]: I0306 02:22:59.967434 2754 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 6 02:22:59.971291 kubelet[2754]: E0306 02:22:59.967709 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 02:23:00.914306 systemd-networkd[1479]: calicab8b0e776c: Gained IPv6LL
Mar 6 02:23:02.185342 containerd[1583]: time="2026-03-06T02:23:02.185261059Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:23:02.187296 containerd[1583]: time="2026-03-06T02:23:02.187011188Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348"
Mar 6 02:23:02.189603 containerd[1583]: time="2026-03-06T02:23:02.189445141Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:23:02.193584 containerd[1583]: time="2026-03-06T02:23:02.193522253Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:23:02.197310 containerd[1583]: time="2026-03-06T02:23:02.197219900Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 4.242722862s"
Mar 6 02:23:02.197310 containerd[1583]: time="2026-03-06T02:23:02.197276326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\""
Mar 6 02:23:02.199966 containerd[1583]: time="2026-03-06T02:23:02.199005196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\""
Mar 6 02:23:02.243839 containerd[1583]: time="2026-03-06T02:23:02.243678926Z" level=info msg="CreateContainer within sandbox \"b7188461f0af6af2eae6361d9d3097bb8e3ad4236a3ab60097b3ebb55adf5f0e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Mar 6 02:23:02.259828 containerd[1583]: time="2026-03-06T02:23:02.259750265Z" level=info msg="Container 2a303fd46eab4b27d8827f9c2d324a413ef84efbc30ed56a74ebdc6431171011: CDI devices from CRI Config.CDIDevices: []"
Mar 6 02:23:02.277176 containerd[1583]: time="2026-03-06T02:23:02.276974808Z" level=info msg="CreateContainer within sandbox \"b7188461f0af6af2eae6361d9d3097bb8e3ad4236a3ab60097b3ebb55adf5f0e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"2a303fd46eab4b27d8827f9c2d324a413ef84efbc30ed56a74ebdc6431171011\""
Mar 6 02:23:02.279481 containerd[1583]: time="2026-03-06T02:23:02.279450175Z" level=info msg="StartContainer for \"2a303fd46eab4b27d8827f9c2d324a413ef84efbc30ed56a74ebdc6431171011\""
Mar 6 02:23:02.282031 containerd[1583]: time="2026-03-06T02:23:02.281481798Z" level=info msg="connecting to shim 2a303fd46eab4b27d8827f9c2d324a413ef84efbc30ed56a74ebdc6431171011" address="unix:///run/containerd/s/5d141d8ba9e7739459a65baecbd3a71a40f2803f884e7f302e9b283dac473079" protocol=ttrpc version=3
Mar 6 02:23:02.361268 systemd[1]: Started cri-containerd-2a303fd46eab4b27d8827f9c2d324a413ef84efbc30ed56a74ebdc6431171011.scope - libcontainer container 2a303fd46eab4b27d8827f9c2d324a413ef84efbc30ed56a74ebdc6431171011.
Mar 6 02:23:02.429430 containerd[1583]: time="2026-03-06T02:23:02.429341666Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:23:02.431509 containerd[1583]: time="2026-03-06T02:23:02.431414198Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77"
Mar 6 02:23:02.439742 containerd[1583]: time="2026-03-06T02:23:02.439548987Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 240.499218ms"
Mar 6 02:23:02.439742 containerd[1583]: time="2026-03-06T02:23:02.439607016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\""
Mar 6 02:23:02.443286 containerd[1583]: time="2026-03-06T02:23:02.443222904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\""
Mar 6 02:23:02.450903 containerd[1583]: time="2026-03-06T02:23:02.450554442Z" level=info msg="CreateContainer within sandbox \"d0f671cfc8c9b7a0d58925c787fef66bd9fa009e7be4a0dfed18173bd53d5d77\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Mar 6 02:23:02.483341 containerd[1583]: time="2026-03-06T02:23:02.482872493Z" level=info msg="StartContainer for \"2a303fd46eab4b27d8827f9c2d324a413ef84efbc30ed56a74ebdc6431171011\" returns successfully"
Mar 6 02:23:02.491029 containerd[1583]: time="2026-03-06T02:23:02.490352615Z" level=info msg="Container d5f0ef6eb0fa5b14fe77fa58c551d3de5751c04bd502b15b36dd99505d2f34a9: CDI devices from CRI Config.CDIDevices: []"
Mar 6 02:23:02.498177 kubelet[2754]: I0306 02:23:02.498034 2754 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 6 02:23:02.532759 containerd[1583]: time="2026-03-06T02:23:02.532581022Z" level=info msg="CreateContainer within sandbox \"d0f671cfc8c9b7a0d58925c787fef66bd9fa009e7be4a0dfed18173bd53d5d77\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d5f0ef6eb0fa5b14fe77fa58c551d3de5751c04bd502b15b36dd99505d2f34a9\""
Mar 6 02:23:02.535959 containerd[1583]: time="2026-03-06T02:23:02.535763465Z" level=info msg="StartContainer for \"d5f0ef6eb0fa5b14fe77fa58c551d3de5751c04bd502b15b36dd99505d2f34a9\""
Mar 6 02:23:02.544544 containerd[1583]: time="2026-03-06T02:23:02.544409726Z" level=info msg="connecting to shim d5f0ef6eb0fa5b14fe77fa58c551d3de5751c04bd502b15b36dd99505d2f34a9" address="unix:///run/containerd/s/24e66ae55060207f94307b0dbd5870b20050366dba4ef6b883eafcca661e5e76" protocol=ttrpc version=3
Mar 6 02:23:02.631689 systemd[1]: Started cri-containerd-d5f0ef6eb0fa5b14fe77fa58c551d3de5751c04bd502b15b36dd99505d2f34a9.scope - libcontainer container d5f0ef6eb0fa5b14fe77fa58c551d3de5751c04bd502b15b36dd99505d2f34a9.
Mar 6 02:23:02.892131 containerd[1583]: time="2026-03-06T02:23:02.889713347Z" level=info msg="StartContainer for \"d5f0ef6eb0fa5b14fe77fa58c551d3de5751c04bd502b15b36dd99505d2f34a9\" returns successfully"
Mar 6 02:23:03.045532 kubelet[2754]: I0306 02:23:03.045409 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-c6d79556b-hmx8m" podStartSLOduration=34.82145391 podStartE2EDuration="41.045383543s" podCreationTimestamp="2026-03-06 02:22:22 +0000 UTC" firstStartedPulling="2026-03-06 02:22:55.974868835 +0000 UTC m=+50.846481121" lastFinishedPulling="2026-03-06 02:23:02.198798469 +0000 UTC m=+57.070410754" observedRunningTime="2026-03-06 02:23:03.04093598 +0000 UTC m=+57.912548276" watchObservedRunningTime="2026-03-06 02:23:03.045383543 +0000 UTC m=+57.916995829"
Mar 6 02:23:03.259694 kubelet[2754]: I0306 02:23:03.259451 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-fdf95c748-z4qz4" podStartSLOduration=37.216524192 podStartE2EDuration="42.259431369s" podCreationTimestamp="2026-03-06 02:22:21 +0000 UTC" firstStartedPulling="2026-03-06 02:22:57.398874927 +0000 UTC m=+52.270487204" lastFinishedPulling="2026-03-06 02:23:02.441782106 +0000 UTC m=+57.313394381" observedRunningTime="2026-03-06 02:23:03.094220996 +0000 UTC m=+57.965833311" watchObservedRunningTime="2026-03-06 02:23:03.259431369 +0000 UTC m=+58.131043655"
Mar 6 02:23:04.003399 kubelet[2754]: I0306 02:23:04.002003 2754 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 6 02:23:04.221114 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2913229441.mount: Deactivated successfully.
Mar 6 02:23:04.960364 containerd[1583]: time="2026-03-06T02:23:04.960259170Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:23:04.962354 containerd[1583]: time="2026-03-06T02:23:04.962320133Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386"
Mar 6 02:23:04.971258 containerd[1583]: time="2026-03-06T02:23:04.971175143Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:23:04.974194 containerd[1583]: time="2026-03-06T02:23:04.974112002Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:23:04.974936 containerd[1583]: time="2026-03-06T02:23:04.974837036Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 2.531583255s"
Mar 6 02:23:04.974936 containerd[1583]: time="2026-03-06T02:23:04.974883362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\""
Mar 6 02:23:04.981447 containerd[1583]: time="2026-03-06T02:23:04.981252591Z" level=info msg="CreateContainer within sandbox \"bf55e236a771377403ead9f91a1652adc5d5d9a41a6b50dce655f1dee278e090\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Mar 6 02:23:04.990459 containerd[1583]: time="2026-03-06T02:23:04.990357399Z" level=info msg="Container 92a03e843025066da224cb236bd5cc0ea8c9db12804aba422b7e9bf529021b4e: CDI devices from CRI Config.CDIDevices: []"
Mar 6 02:23:05.005806 containerd[1583]: time="2026-03-06T02:23:05.005373808Z" level=info msg="CreateContainer within sandbox \"bf55e236a771377403ead9f91a1652adc5d5d9a41a6b50dce655f1dee278e090\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"92a03e843025066da224cb236bd5cc0ea8c9db12804aba422b7e9bf529021b4e\""
Mar 6 02:23:05.007600 containerd[1583]: time="2026-03-06T02:23:05.007509786Z" level=info msg="StartContainer for \"92a03e843025066da224cb236bd5cc0ea8c9db12804aba422b7e9bf529021b4e\""
Mar 6 02:23:05.015306 containerd[1583]: time="2026-03-06T02:23:05.015168762Z" level=info msg="connecting to shim 92a03e843025066da224cb236bd5cc0ea8c9db12804aba422b7e9bf529021b4e" address="unix:///run/containerd/s/d4c1cd2a20667ea7216912caf6b1924784bfee3219f4428776c4ec345f6b6620" protocol=ttrpc version=3
Mar 6 02:23:05.048270 systemd[1]: Started cri-containerd-92a03e843025066da224cb236bd5cc0ea8c9db12804aba422b7e9bf529021b4e.scope - libcontainer container 92a03e843025066da224cb236bd5cc0ea8c9db12804aba422b7e9bf529021b4e.
Mar 6 02:23:05.147021 containerd[1583]: time="2026-03-06T02:23:05.146821335Z" level=info msg="StartContainer for \"92a03e843025066da224cb236bd5cc0ea8c9db12804aba422b7e9bf529021b4e\" returns successfully"
Mar 6 02:23:06.048640 kubelet[2754]: I0306 02:23:06.048433 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-rmjwl" podStartSLOduration=38.330406381 podStartE2EDuration="44.047412589s" podCreationTimestamp="2026-03-06 02:22:22 +0000 UTC" firstStartedPulling="2026-03-06 02:22:59.25887001 +0000 UTC m=+54.130482296" lastFinishedPulling="2026-03-06 02:23:04.975876208 +0000 UTC m=+59.847488504" observedRunningTime="2026-03-06 02:23:06.042352233 +0000 UTC m=+60.913964519" watchObservedRunningTime="2026-03-06 02:23:06.047412589 +0000 UTC m=+60.919024946"
Mar 6 02:23:18.095458 systemd[1]: Started sshd@7-10.0.0.33:22-10.0.0.1:46336.service - OpenSSH per-connection server daemon (10.0.0.1:46336).
Mar 6 02:23:18.213788 sshd[5445]: Accepted publickey for core from 10.0.0.1 port 46336 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM
Mar 6 02:23:18.216553 sshd-session[5445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:23:18.226886 systemd-logind[1555]: New session 8 of user core.
Mar 6 02:23:18.241457 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 6 02:23:18.460353 sshd[5448]: Connection closed by 10.0.0.1 port 46336
Mar 6 02:23:18.460765 sshd-session[5445]: pam_unix(sshd:session): session closed for user core
Mar 6 02:23:18.465479 systemd[1]: sshd@7-10.0.0.33:22-10.0.0.1:46336.service: Deactivated successfully.
Mar 6 02:23:18.467759 systemd[1]: session-8.scope: Deactivated successfully.
Mar 6 02:23:18.470880 systemd-logind[1555]: Session 8 logged out. Waiting for processes to exit.
Mar 6 02:23:18.472506 systemd-logind[1555]: Removed session 8.
Mar 6 02:23:23.473124 systemd[1]: Started sshd@8-10.0.0.33:22-10.0.0.1:52968.service - OpenSSH per-connection server daemon (10.0.0.1:52968).
Mar 6 02:23:23.555950 sshd[5530]: Accepted publickey for core from 10.0.0.1 port 52968 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM
Mar 6 02:23:23.557705 sshd-session[5530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:23:23.563983 systemd-logind[1555]: New session 9 of user core.
Mar 6 02:23:23.574282 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 6 02:23:23.701445 sshd[5533]: Connection closed by 10.0.0.1 port 52968
Mar 6 02:23:23.701904 sshd-session[5530]: pam_unix(sshd:session): session closed for user core
Mar 6 02:23:23.707284 systemd[1]: sshd@8-10.0.0.33:22-10.0.0.1:52968.service: Deactivated successfully.
Mar 6 02:23:23.709484 systemd[1]: session-9.scope: Deactivated successfully.
Mar 6 02:23:23.710665 systemd-logind[1555]: Session 9 logged out. Waiting for processes to exit.
Mar 6 02:23:23.712787 systemd-logind[1555]: Removed session 9.
Mar 6 02:23:27.395144 kubelet[2754]: E0306 02:23:27.395011 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 02:23:28.395166 kubelet[2754]: E0306 02:23:28.395033 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 02:23:28.718169 systemd[1]: Started sshd@9-10.0.0.33:22-10.0.0.1:52984.service - OpenSSH per-connection server daemon (10.0.0.1:52984).
Mar 6 02:23:28.786268 sshd[5558]: Accepted publickey for core from 10.0.0.1 port 52984 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM
Mar 6 02:23:28.787834 sshd-session[5558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:23:28.794290 systemd-logind[1555]: New session 10 of user core.
Mar 6 02:23:28.801377 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 6 02:23:28.917369 sshd[5561]: Connection closed by 10.0.0.1 port 52984
Mar 6 02:23:28.918384 sshd-session[5558]: pam_unix(sshd:session): session closed for user core
Mar 6 02:23:28.925513 systemd[1]: sshd@9-10.0.0.33:22-10.0.0.1:52984.service: Deactivated successfully.
Mar 6 02:23:28.928490 systemd[1]: session-10.scope: Deactivated successfully.
Mar 6 02:23:28.930335 systemd-logind[1555]: Session 10 logged out. Waiting for processes to exit.
Mar 6 02:23:28.932470 systemd-logind[1555]: Removed session 10.
Mar 6 02:23:30.395321 kubelet[2754]: E0306 02:23:30.395225 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 02:23:32.395930 kubelet[2754]: E0306 02:23:32.395423 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 02:23:33.955273 systemd[1]: Started sshd@10-10.0.0.33:22-10.0.0.1:56606.service - OpenSSH per-connection server daemon (10.0.0.1:56606).
Mar 6 02:23:34.479849 sshd[5619]: Accepted publickey for core from 10.0.0.1 port 56606 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM
Mar 6 02:23:34.498936 sshd-session[5619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:23:34.533529 systemd-logind[1555]: New session 11 of user core.
Mar 6 02:23:34.539856 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 6 02:23:35.501902 sshd[5622]: Connection closed by 10.0.0.1 port 56606
Mar 6 02:23:35.505292 sshd-session[5619]: pam_unix(sshd:session): session closed for user core
Mar 6 02:23:35.531556 systemd[1]: sshd@10-10.0.0.33:22-10.0.0.1:56606.service: Deactivated successfully.
Mar 6 02:23:35.543348 systemd[1]: session-11.scope: Deactivated successfully.
Mar 6 02:23:35.548283 systemd-logind[1555]: Session 11 logged out. Waiting for processes to exit.
Mar 6 02:23:35.558844 systemd[1]: Started sshd@11-10.0.0.33:22-10.0.0.1:56618.service - OpenSSH per-connection server daemon (10.0.0.1:56618).
Mar 6 02:23:35.568916 systemd-logind[1555]: Removed session 11.
Mar 6 02:23:35.745280 sshd[5636]: Accepted publickey for core from 10.0.0.1 port 56618 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM
Mar 6 02:23:35.749903 sshd-session[5636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:23:35.772909 systemd-logind[1555]: New session 12 of user core.
Mar 6 02:23:35.784449 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 6 02:23:36.303364 sshd[5639]: Connection closed by 10.0.0.1 port 56618
Mar 6 02:23:36.302596 sshd-session[5636]: pam_unix(sshd:session): session closed for user core
Mar 6 02:23:36.363551 systemd[1]: sshd@11-10.0.0.33:22-10.0.0.1:56618.service: Deactivated successfully.
Mar 6 02:23:36.381847 systemd[1]: session-12.scope: Deactivated successfully.
Mar 6 02:23:36.386788 systemd-logind[1555]: Session 12 logged out. Waiting for processes to exit.
Mar 6 02:23:36.409324 systemd[1]: Started sshd@12-10.0.0.33:22-10.0.0.1:56632.service - OpenSSH per-connection server daemon (10.0.0.1:56632).
Mar 6 02:23:36.438560 systemd-logind[1555]: Removed session 12.
Mar 6 02:23:36.666251 kubelet[2754]: I0306 02:23:36.665358 2754 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 6 02:23:36.749552 sshd[5652]: Accepted publickey for core from 10.0.0.1 port 56632 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM
Mar 6 02:23:36.757902 sshd-session[5652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:23:36.784942 systemd-logind[1555]: New session 13 of user core.
Mar 6 02:23:36.794854 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 6 02:23:37.365306 sshd[5655]: Connection closed by 10.0.0.1 port 56632
Mar 6 02:23:37.364456 sshd-session[5652]: pam_unix(sshd:session): session closed for user core
Mar 6 02:23:37.384474 systemd[1]: sshd@12-10.0.0.33:22-10.0.0.1:56632.service: Deactivated successfully.
Mar 6 02:23:37.390844 systemd[1]: session-13.scope: Deactivated successfully.
Mar 6 02:23:37.407850 systemd-logind[1555]: Session 13 logged out. Waiting for processes to exit.
Mar 6 02:23:37.419309 systemd-logind[1555]: Removed session 13.
Mar 6 02:23:42.387309 systemd[1]: Started sshd@13-10.0.0.33:22-10.0.0.1:42904.service - OpenSSH per-connection server daemon (10.0.0.1:42904).
Mar 6 02:23:42.610530 sshd[5708]: Accepted publickey for core from 10.0.0.1 port 42904 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM
Mar 6 02:23:42.616467 sshd-session[5708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:23:42.641376 systemd-logind[1555]: New session 14 of user core.
Mar 6 02:23:42.658459 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 6 02:23:43.378609 sshd[5711]: Connection closed by 10.0.0.1 port 42904
Mar 6 02:23:43.383033 sshd-session[5708]: pam_unix(sshd:session): session closed for user core
Mar 6 02:23:43.393932 systemd[1]: sshd@13-10.0.0.33:22-10.0.0.1:42904.service: Deactivated successfully.
Mar 6 02:23:43.407980 systemd[1]: session-14.scope: Deactivated successfully.
Mar 6 02:23:43.420834 systemd-logind[1555]: Session 14 logged out. Waiting for processes to exit.
Mar 6 02:23:43.433301 systemd-logind[1555]: Removed session 14.
Mar 6 02:23:48.398398 systemd[1]: Started sshd@14-10.0.0.33:22-10.0.0.1:42906.service - OpenSSH per-connection server daemon (10.0.0.1:42906).
Mar 6 02:23:48.604439 sshd[5751]: Accepted publickey for core from 10.0.0.1 port 42906 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM
Mar 6 02:23:48.609841 sshd-session[5751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:23:48.658630 systemd-logind[1555]: New session 15 of user core.
Mar 6 02:23:48.677337 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 6 02:23:49.162631 sshd[5754]: Connection closed by 10.0.0.1 port 42906
Mar 6 02:23:49.164600 sshd-session[5751]: pam_unix(sshd:session): session closed for user core
Mar 6 02:23:49.180459 systemd[1]: sshd@14-10.0.0.33:22-10.0.0.1:42906.service: Deactivated successfully.
Mar 6 02:23:49.189575 systemd[1]: session-15.scope: Deactivated successfully.
Mar 6 02:23:49.197385 systemd-logind[1555]: Session 15 logged out. Waiting for processes to exit.
Mar 6 02:23:49.208519 systemd-logind[1555]: Removed session 15.
Mar 6 02:23:54.222930 systemd[1]: Started sshd@15-10.0.0.33:22-10.0.0.1:55060.service - OpenSSH per-connection server daemon (10.0.0.1:55060).
Mar 6 02:23:54.424950 sshd[5767]: Accepted publickey for core from 10.0.0.1 port 55060 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM
Mar 6 02:23:54.426946 sshd-session[5767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:23:54.451861 systemd-logind[1555]: New session 16 of user core.
Mar 6 02:23:54.466576 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 6 02:23:55.251915 sshd[5770]: Connection closed by 10.0.0.1 port 55060
Mar 6 02:23:55.252566 sshd-session[5767]: pam_unix(sshd:session): session closed for user core
Mar 6 02:23:55.275318 systemd[1]: sshd@15-10.0.0.33:22-10.0.0.1:55060.service: Deactivated successfully.
Mar 6 02:23:55.284870 systemd[1]: session-16.scope: Deactivated successfully.
Mar 6 02:23:55.289023 systemd-logind[1555]: Session 16 logged out. Waiting for processes to exit.
Mar 6 02:23:55.297028 systemd-logind[1555]: Removed session 16.
Mar 6 02:24:00.293477 systemd[1]: Started sshd@16-10.0.0.33:22-10.0.0.1:43874.service - OpenSSH per-connection server daemon (10.0.0.1:43874).
Mar 6 02:24:00.748418 sshd[5784]: Accepted publickey for core from 10.0.0.1 port 43874 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM
Mar 6 02:24:00.754596 sshd-session[5784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:24:00.785515 systemd-logind[1555]: New session 17 of user core.
Mar 6 02:24:00.801656 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 6 02:24:01.660009 sshd[5787]: Connection closed by 10.0.0.1 port 43874
Mar 6 02:24:01.661469 sshd-session[5784]: pam_unix(sshd:session): session closed for user core
Mar 6 02:24:01.674484 systemd[1]: sshd@16-10.0.0.33:22-10.0.0.1:43874.service: Deactivated successfully.
Mar 6 02:24:01.682830 systemd[1]: session-17.scope: Deactivated successfully.
Mar 6 02:24:01.689955 systemd-logind[1555]: Session 17 logged out. Waiting for processes to exit.
Mar 6 02:24:01.695417 systemd-logind[1555]: Removed session 17.
Mar 6 02:24:03.398370 kubelet[2754]: E0306 02:24:03.396470 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 02:24:06.397284 kubelet[2754]: E0306 02:24:06.395626 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 02:24:06.734893 systemd[1]: Started sshd@17-10.0.0.33:22-10.0.0.1:43878.service - OpenSSH per-connection server daemon (10.0.0.1:43878).
Mar 6 02:24:06.933909 sshd[5823]: Accepted publickey for core from 10.0.0.1 port 43878 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM
Mar 6 02:24:06.943559 sshd-session[5823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:24:06.973657 systemd-logind[1555]: New session 18 of user core.
Mar 6 02:24:06.982276 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 6 02:24:07.801425 sshd[5826]: Connection closed by 10.0.0.1 port 43878
Mar 6 02:24:07.798891 sshd-session[5823]: pam_unix(sshd:session): session closed for user core
Mar 6 02:24:07.813578 systemd-logind[1555]: Session 18 logged out. Waiting for processes to exit.
Mar 6 02:24:07.832637 systemd[1]: sshd@17-10.0.0.33:22-10.0.0.1:43878.service: Deactivated successfully.
Mar 6 02:24:07.842501 systemd[1]: session-18.scope: Deactivated successfully.
Mar 6 02:24:07.855375 systemd-logind[1555]: Removed session 18.
Mar 6 02:24:12.827316 systemd[1]: Started sshd@18-10.0.0.33:22-10.0.0.1:56642.service - OpenSSH per-connection server daemon (10.0.0.1:56642).
Mar 6 02:24:13.123433 sshd[5875]: Accepted publickey for core from 10.0.0.1 port 56642 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM
Mar 6 02:24:13.128590 sshd-session[5875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:24:13.142860 systemd-logind[1555]: New session 19 of user core.
Mar 6 02:24:13.151312 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 6 02:24:13.975555 sshd[5878]: Connection closed by 10.0.0.1 port 56642
Mar 6 02:24:13.977864 sshd-session[5875]: pam_unix(sshd:session): session closed for user core
Mar 6 02:24:13.990666 systemd[1]: sshd@18-10.0.0.33:22-10.0.0.1:56642.service: Deactivated successfully.
Mar 6 02:24:14.000595 systemd[1]: session-19.scope: Deactivated successfully.
Mar 6 02:24:14.009038 systemd-logind[1555]: Session 19 logged out. Waiting for processes to exit.
Mar 6 02:24:14.013876 systemd[1]: Started sshd@19-10.0.0.33:22-10.0.0.1:56646.service - OpenSSH per-connection server daemon (10.0.0.1:56646).
Mar 6 02:24:14.029996 systemd-logind[1555]: Removed session 19.
Mar 6 02:24:14.163401 sshd[5892]: Accepted publickey for core from 10.0.0.1 port 56646 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM
Mar 6 02:24:14.167294 sshd-session[5892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:24:14.189538 systemd-logind[1555]: New session 20 of user core.
Mar 6 02:24:14.204538 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 6 02:24:15.163925 sshd[5895]: Connection closed by 10.0.0.1 port 56646
Mar 6 02:24:15.164965 sshd-session[5892]: pam_unix(sshd:session): session closed for user core
Mar 6 02:24:15.189659 systemd[1]: Started sshd@20-10.0.0.33:22-10.0.0.1:56662.service - OpenSSH per-connection server daemon (10.0.0.1:56662).
Mar 6 02:24:15.190643 systemd[1]: sshd@19-10.0.0.33:22-10.0.0.1:56646.service: Deactivated successfully.
Mar 6 02:24:15.209986 systemd[1]: session-20.scope: Deactivated successfully.
Mar 6 02:24:15.225462 systemd-logind[1555]: Session 20 logged out. Waiting for processes to exit.
Mar 6 02:24:15.251986 systemd-logind[1555]: Removed session 20.
Mar 6 02:24:15.602865 sshd[5928]: Accepted publickey for core from 10.0.0.1 port 56662 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM
Mar 6 02:24:15.606964 sshd-session[5928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:24:15.641449 systemd-logind[1555]: New session 21 of user core.
Mar 6 02:24:15.657355 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 6 02:24:17.465387 sshd[5936]: Connection closed by 10.0.0.1 port 56662
Mar 6 02:24:17.465322 sshd-session[5928]: pam_unix(sshd:session): session closed for user core
Mar 6 02:24:17.483582 systemd[1]: Started sshd@21-10.0.0.33:22-10.0.0.1:56674.service - OpenSSH per-connection server daemon (10.0.0.1:56674).
Mar 6 02:24:17.494996 systemd[1]: sshd@20-10.0.0.33:22-10.0.0.1:56662.service: Deactivated successfully.
Mar 6 02:24:17.501340 systemd[1]: session-21.scope: Deactivated successfully.
Mar 6 02:24:17.501902 systemd[1]: session-21.scope: Consumed 1.327s CPU time, 45.2M memory peak.
Mar 6 02:24:17.510678 systemd-logind[1555]: Session 21 logged out. Waiting for processes to exit.
Mar 6 02:24:17.516447 systemd-logind[1555]: Removed session 21.
Mar 6 02:24:17.710403 sshd[5961]: Accepted publickey for core from 10.0.0.1 port 56674 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM
Mar 6 02:24:17.717495 sshd-session[5961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:24:17.748887 systemd-logind[1555]: New session 22 of user core.
Mar 6 02:24:17.767392 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 6 02:24:18.395352 kubelet[2754]: E0306 02:24:18.394943 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 02:24:19.311287 sshd[5969]: Connection closed by 10.0.0.1 port 56674
Mar 6 02:24:19.312456 sshd-session[5961]: pam_unix(sshd:session): session closed for user core
Mar 6 02:24:19.334656 systemd[1]: sshd@21-10.0.0.33:22-10.0.0.1:56674.service: Deactivated successfully.
Mar 6 02:24:19.343655 systemd[1]: session-22.scope: Deactivated successfully.
Mar 6 02:24:19.344507 systemd[1]: session-22.scope: Consumed 1.165s CPU time, 37.9M memory peak.
Mar 6 02:24:19.351318 systemd-logind[1555]: Session 22 logged out. Waiting for processes to exit.
Mar 6 02:24:19.364540 systemd[1]: Started sshd@22-10.0.0.33:22-10.0.0.1:56686.service - OpenSSH per-connection server daemon (10.0.0.1:56686).
Mar 6 02:24:19.375980 systemd-logind[1555]: Removed session 22.
Mar 6 02:24:19.606503 sshd[5984]: Accepted publickey for core from 10.0.0.1 port 56686 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM
Mar 6 02:24:19.611511 sshd-session[5984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:24:19.632465 systemd-logind[1555]: New session 23 of user core.
Mar 6 02:24:19.643947 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 6 02:24:20.066002 sshd[5990]: Connection closed by 10.0.0.1 port 56686
Mar 6 02:24:20.067021 sshd-session[5984]: pam_unix(sshd:session): session closed for user core
Mar 6 02:24:20.078434 systemd[1]: sshd@22-10.0.0.33:22-10.0.0.1:56686.service: Deactivated successfully.
Mar 6 02:24:20.083041 systemd[1]: session-23.scope: Deactivated successfully.
Mar 6 02:24:20.089497 systemd-logind[1555]: Session 23 logged out. Waiting for processes to exit.
Mar 6 02:24:20.099595 systemd-logind[1555]: Removed session 23.
Mar 6 02:24:25.090572 systemd[1]: Started sshd@23-10.0.0.33:22-10.0.0.1:43992.service - OpenSSH per-connection server daemon (10.0.0.1:43992).
Mar 6 02:24:25.247296 sshd[6066]: Accepted publickey for core from 10.0.0.1 port 43992 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM
Mar 6 02:24:25.249937 sshd-session[6066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:24:25.273008 systemd-logind[1555]: New session 24 of user core.
Mar 6 02:24:25.285615 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 6 02:24:25.679708 sshd[6069]: Connection closed by 10.0.0.1 port 43992
Mar 6 02:24:25.681306 sshd-session[6066]: pam_unix(sshd:session): session closed for user core
Mar 6 02:24:25.695950 systemd[1]: sshd@23-10.0.0.33:22-10.0.0.1:43992.service: Deactivated successfully.
Mar 6 02:24:25.700874 systemd-logind[1555]: Session 24 logged out. Waiting for processes to exit.
Mar 6 02:24:25.708418 systemd[1]: session-24.scope: Deactivated successfully.
Mar 6 02:24:25.719013 systemd-logind[1555]: Removed session 24.
Mar 6 02:24:30.710356 systemd[1]: Started sshd@24-10.0.0.33:22-10.0.0.1:43628.service - OpenSSH per-connection server daemon (10.0.0.1:43628).
Mar 6 02:24:30.839182 sshd[6103]: Accepted publickey for core from 10.0.0.1 port 43628 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM
Mar 6 02:24:30.841609 sshd-session[6103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:24:30.857666 systemd-logind[1555]: New session 25 of user core.
Mar 6 02:24:30.873929 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 6 02:24:31.205477 sshd[6106]: Connection closed by 10.0.0.1 port 43628
Mar 6 02:24:31.206514 sshd-session[6103]: pam_unix(sshd:session): session closed for user core
Mar 6 02:24:31.217492 systemd[1]: sshd@24-10.0.0.33:22-10.0.0.1:43628.service: Deactivated successfully.
Mar 6 02:24:31.222611 systemd[1]: session-25.scope: Deactivated successfully.
Mar 6 02:24:31.228392 systemd-logind[1555]: Session 25 logged out. Waiting for processes to exit.
Mar 6 02:24:31.234627 systemd-logind[1555]: Removed session 25.
Mar 6 02:24:36.230546 systemd[1]: Started sshd@25-10.0.0.33:22-10.0.0.1:43632.service - OpenSSH per-connection server daemon (10.0.0.1:43632).
Mar 6 02:24:36.376909 sshd[6142]: Accepted publickey for core from 10.0.0.1 port 43632 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM
Mar 6 02:24:36.381540 sshd-session[6142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 02:24:36.407570 systemd-logind[1555]: New session 26 of user core.
Mar 6 02:24:36.422863 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 6 02:24:36.814915 sshd[6145]: Connection closed by 10.0.0.1 port 43632
Mar 6 02:24:36.815587 sshd-session[6142]: pam_unix(sshd:session): session closed for user core
Mar 6 02:24:36.826026 systemd[1]: sshd@25-10.0.0.33:22-10.0.0.1:43632.service: Deactivated successfully.
Mar 6 02:24:36.837569 systemd[1]: session-26.scope: Deactivated successfully.
Mar 6 02:24:36.845285 systemd-logind[1555]: Session 26 logged out. Waiting for processes to exit.
Mar 6 02:24:36.851415 systemd-logind[1555]: Removed session 26.
Mar 6 02:24:39.401449 kubelet[2754]: E0306 02:24:39.401405 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 02:24:41.842671 systemd[1]: Started sshd@26-10.0.0.33:22-10.0.0.1:48284.service - OpenSSH per-connection server daemon (10.0.0.1:48284).
Mar 6 02:24:42.137398 sshd[6188]: Accepted publickey for core from 10.0.0.1 port 48284 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM Mar 6 02:24:42.141616 sshd-session[6188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:24:42.169273 systemd-logind[1555]: New session 27 of user core. Mar 6 02:24:42.181710 systemd[1]: Started session-27.scope - Session 27 of User core. Mar 6 02:24:42.719263 sshd[6191]: Connection closed by 10.0.0.1 port 48284 Mar 6 02:24:42.719689 sshd-session[6188]: pam_unix(sshd:session): session closed for user core Mar 6 02:24:42.730999 systemd[1]: sshd@26-10.0.0.33:22-10.0.0.1:48284.service: Deactivated successfully. Mar 6 02:24:42.738936 systemd[1]: session-27.scope: Deactivated successfully. Mar 6 02:24:42.747012 systemd-logind[1555]: Session 27 logged out. Waiting for processes to exit. Mar 6 02:24:42.752743 systemd-logind[1555]: Removed session 27. Mar 6 02:24:43.398957 kubelet[2754]: E0306 02:24:43.398034 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"