Mar 11 01:23:30.693548 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Mar 10 22:50:44 -00 2026
Mar 11 01:23:30.693577 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ff6feea892c44d03e54b3d5ffbb43831b88909b30f2f39fbf5cd79dd05d89672
Mar 11 01:23:30.693592 kernel: BIOS-provided physical RAM map:
Mar 11 01:23:30.693600 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 11 01:23:30.693608 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 11 01:23:30.693616 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 11 01:23:30.693626 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 11 01:23:30.693719 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 11 01:23:30.693728 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 11 01:23:30.693736 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 11 01:23:30.693745 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 11 01:23:30.693757 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 11 01:23:30.693765 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 11 01:23:30.693774 kernel: NX (Execute Disable) protection: active
Mar 11 01:23:30.693785 kernel: APIC: Static calls initialized
Mar 11 01:23:30.693794 kernel: SMBIOS 2.8 present.
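The BIOS-e820 entries above describe inclusive physical address ranges; summing the two "usable" ranges gives the firmware-visible RAM for this guest. A minimal sketch (the two sample lines are copied verbatim from the map above; the regex is an illustrative assumption, not something the kernel provides):

```python
import re

# The two "usable" entries from the BIOS-e820 map above; ranges are inclusive.
e820 = """\
BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
"""

usable = 0
for start, end in re.findall(r"\[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] usable", e820):
    usable += int(end, 16) - int(start, 16) + 1  # +1 because the end address is inclusive

print(f"usable RAM: {usable / 2**20:.1f} MiB")
```

The result (about 2511.5 MiB) lines up with the "2571752K" total the kernel reports later in the `Memory:` line, give or take the few pages the kernel trims off.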
Mar 11 01:23:30.693806 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 11 01:23:30.693814 kernel: DMI: Memory slots populated: 1/1
Mar 11 01:23:30.693823 kernel: Hypervisor detected: KVM
Mar 11 01:23:30.693832 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 11 01:23:30.693841 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 11 01:23:30.693850 kernel: kvm-clock: using sched offset of 21135802396 cycles
Mar 11 01:23:30.693860 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 11 01:23:30.693869 kernel: tsc: Detected 2445.426 MHz processor
Mar 11 01:23:30.693878 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 11 01:23:30.693888 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 11 01:23:30.693901 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 11 01:23:30.693910 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 11 01:23:30.693919 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 11 01:23:30.693928 kernel: Using GB pages for direct mapping
Mar 11 01:23:30.693937 kernel: ACPI: Early table checksum verification disabled
Mar 11 01:23:30.693947 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 11 01:23:30.693956 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 11 01:23:30.693966 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 11 01:23:30.693975 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 11 01:23:30.693988 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 11 01:23:30.693997 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 11 01:23:30.694007 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 11 01:23:30.694016 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 11 01:23:30.694026 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 11 01:23:30.694039 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 11 01:23:30.694052 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 11 01:23:30.694062 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 11 01:23:30.694072 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 11 01:23:30.694082 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 11 01:23:30.694092 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 11 01:23:30.694101 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 11 01:23:30.694112 kernel: No NUMA configuration found
Mar 11 01:23:30.694122 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 11 01:23:30.694233 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Mar 11 01:23:30.694244 kernel: Zone ranges:
Mar 11 01:23:30.694254 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 11 01:23:30.694264 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 11 01:23:30.694273 kernel: Normal empty
Mar 11 01:23:30.694283 kernel: Device empty
Mar 11 01:23:30.694292 kernel: Movable zone start for each node
Mar 11 01:23:30.694302 kernel: Early memory node ranges
Mar 11 01:23:30.694313 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 11 01:23:30.694327 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 11 01:23:30.694337 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 11 01:23:30.694346 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 11 01:23:30.694356 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 11 01:23:30.694366 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 11 01:23:30.694376 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 11 01:23:30.694385 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 11 01:23:30.694395 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 11 01:23:30.694405 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 11 01:23:30.694417 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 11 01:23:30.694427 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 11 01:23:30.694437 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 11 01:23:30.694447 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 11 01:23:30.694457 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 11 01:23:30.694467 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 11 01:23:30.694477 kernel: TSC deadline timer available
Mar 11 01:23:30.694486 kernel: CPU topo: Max. logical packages: 1
Mar 11 01:23:30.694496 kernel: CPU topo: Max. logical dies: 1
Mar 11 01:23:30.694508 kernel: CPU topo: Max. dies per package: 1
Mar 11 01:23:30.694518 kernel: CPU topo: Max. threads per core: 1
Mar 11 01:23:30.694528 kernel: CPU topo: Num. cores per package: 4
Mar 11 01:23:30.694538 kernel: CPU topo: Num. threads per package: 4
Mar 11 01:23:30.694547 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Mar 11 01:23:30.694557 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 11 01:23:30.694567 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 11 01:23:30.694576 kernel: kvm-guest: setup PV sched yield
Mar 11 01:23:30.694586 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 11 01:23:30.694596 kernel: Booting paravirtualized kernel on KVM
Mar 11 01:23:30.694609 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 11 01:23:30.694619 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 11 01:23:30.694629 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Mar 11 01:23:30.694730 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Mar 11 01:23:30.694740 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 11 01:23:30.694750 kernel: kvm-guest: PV spinlocks enabled
Mar 11 01:23:30.694760 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 11 01:23:30.694771 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ff6feea892c44d03e54b3d5ffbb43831b88909b30f2f39fbf5cd79dd05d89672
Mar 11 01:23:30.694785 kernel: random: crng init done
Mar 11 01:23:30.694796 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 11 01:23:30.694806 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 11 01:23:30.694816 kernel: Fallback order for Node 0: 0
Mar 11 01:23:30.694825 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Mar 11 01:23:30.694835 kernel: Policy zone: DMA32
Mar 11 01:23:30.694845 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 11 01:23:30.694855 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 11 01:23:30.694865 kernel: ftrace: allocating 40099 entries in 157 pages
Mar 11 01:23:30.694879 kernel: ftrace: allocated 157 pages with 5 groups
Mar 11 01:23:30.694890 kernel: Dynamic Preempt: voluntary
Mar 11 01:23:30.694901 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 11 01:23:30.694913 kernel: rcu: RCU event tracing is enabled.
Mar 11 01:23:30.694925 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 11 01:23:30.694936 kernel: Trampoline variant of Tasks RCU enabled.
Mar 11 01:23:30.694947 kernel: Rude variant of Tasks RCU enabled.
Mar 11 01:23:30.694958 kernel: Tracing variant of Tasks RCU enabled.
Mar 11 01:23:30.694968 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 11 01:23:30.694984 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 11 01:23:30.694994 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 11 01:23:30.695005 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 11 01:23:30.695016 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 11 01:23:30.695027 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 11 01:23:30.695037 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 11 01:23:30.695058 kernel: Console: colour VGA+ 80x25
Mar 11 01:23:30.695071 kernel: printk: legacy console [ttyS0] enabled
Mar 11 01:23:30.695081 kernel: ACPI: Core revision 20240827
Mar 11 01:23:30.695092 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 11 01:23:30.695102 kernel: APIC: Switch to symmetric I/O mode setup
Mar 11 01:23:30.695112 kernel: x2apic enabled
Mar 11 01:23:30.695125 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 11 01:23:30.695237 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 11 01:23:30.695248 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 11 01:23:30.695258 kernel: kvm-guest: setup PV IPIs
Mar 11 01:23:30.695269 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 11 01:23:30.695284 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Mar 11 01:23:30.695294 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 11 01:23:30.695304 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 11 01:23:30.695315 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 11 01:23:30.695325 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 11 01:23:30.695335 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 11 01:23:30.695345 kernel: Spectre V2 : Mitigation: Retpolines
Mar 11 01:23:30.695355 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 11 01:23:30.695368 kernel: Speculative Store Bypass: Vulnerable
Mar 11 01:23:30.695379 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 11 01:23:30.695390 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 11 01:23:30.695400 kernel: active return thunk: srso_alias_return_thunk
Mar 11 01:23:30.695411 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 11 01:23:30.695421 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 11 01:23:30.695431 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 11 01:23:30.695442 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 11 01:23:30.695452 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 11 01:23:30.695466 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 11 01:23:30.695476 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 11 01:23:30.695486 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 11 01:23:30.695496 kernel: Freeing SMP alternatives memory: 32K
Mar 11 01:23:30.695507 kernel: pid_max: default: 32768 minimum: 301
Mar 11 01:23:30.695517 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 11 01:23:30.695526 kernel: landlock: Up and running.
Mar 11 01:23:30.695537 kernel: SELinux: Initializing.
Mar 11 01:23:30.695547 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 11 01:23:30.695560 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 11 01:23:30.695576 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 11 01:23:30.695587 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 11 01:23:30.695598 kernel: signal: max sigframe size: 1776
Mar 11 01:23:30.695608 kernel: rcu: Hierarchical SRCU implementation.
Mar 11 01:23:30.695619 kernel: rcu: Max phase no-delay instances is 400.
Mar 11 01:23:30.695629 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 11 01:23:30.695719 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 11 01:23:30.695731 kernel: smp: Bringing up secondary CPUs ...
Mar 11 01:23:30.695746 kernel: smpboot: x86: Booting SMP configuration:
Mar 11 01:23:30.695756 kernel: .... node #0, CPUs: #1 #2 #3
Mar 11 01:23:30.695766 kernel: smp: Brought up 1 node, 4 CPUs
Mar 11 01:23:30.695776 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 11 01:23:30.695788 kernel: Memory: 2420720K/2571752K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46196K init, 2564K bss, 145092K reserved, 0K cma-reserved)
Mar 11 01:23:30.695798 kernel: devtmpfs: initialized
Mar 11 01:23:30.695808 kernel: x86/mm: Memory block size: 128MB
Mar 11 01:23:30.695819 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 11 01:23:30.695832 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 11 01:23:30.695843 kernel: pinctrl core: initialized pinctrl subsystem
Mar 11 01:23:30.695854 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 11 01:23:30.695864 kernel: audit: initializing netlink subsys (disabled)
Mar 11 01:23:30.695874 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 11 01:23:30.695885 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 11 01:23:30.695895 kernel: audit: type=2000 audit(1773192196.094:1): state=initialized audit_enabled=0 res=1
Mar 11 01:23:30.695905 kernel: cpuidle: using governor menu
Mar 11 01:23:30.695916 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 11 01:23:30.695929 kernel: dca service started, version 1.12.1
Mar 11 01:23:30.695939 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Mar 11 01:23:30.695949 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
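The per-CPU calibration figure reported earlier ("4890.85 BogoMIPS") and the SMP total reported here ("19563.40 BogoMIPS" for 4 CPUs) are consistent; a trivial check using only values copied from the log:

```python
# Values taken from the log above; the variable names are illustrative.
per_cpu_bogomips = 4890.85  # "Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS"
cpus = 4                    # "smp: Brought up 1 node, 4 CPUs"

total = per_cpu_bogomips * cpus
print(f"{total:.2f} BogoMIPS")  # → 19563.40 BogoMIPS
```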
Mar 11 01:23:30.695960 kernel: PCI: Using configuration type 1 for base access
Mar 11 01:23:30.695971 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 11 01:23:30.695981 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 11 01:23:30.695991 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 11 01:23:30.696001 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 11 01:23:30.696012 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 11 01:23:30.696025 kernel: ACPI: Added _OSI(Module Device)
Mar 11 01:23:30.696035 kernel: ACPI: Added _OSI(Processor Device)
Mar 11 01:23:30.696045 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 11 01:23:30.696055 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 11 01:23:30.696065 kernel: ACPI: Interpreter enabled
Mar 11 01:23:30.696075 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 11 01:23:30.696085 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 11 01:23:30.696095 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 11 01:23:30.696106 kernel: PCI: Using E820 reservations for host bridge windows
Mar 11 01:23:30.696120 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 11 01:23:30.696956 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 11 01:23:30.707465 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 11 01:23:30.707731 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 11 01:23:30.707950 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 11 01:23:30.707972 kernel: PCI host bridge to bus 0000:00
Mar 11 01:23:30.708299 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 11 01:23:30.708482 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 11 01:23:30.708621 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 11 01:23:30.708809 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 11 01:23:30.708964 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 11 01:23:30.709104 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 11 01:23:30.709305 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 11 01:23:30.709572 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Mar 11 01:23:30.709851 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Mar 11 01:23:30.710023 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Mar 11 01:23:30.710242 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Mar 11 01:23:30.710399 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Mar 11 01:23:30.710558 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 11 01:23:30.710857 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Mar 11 01:23:30.711028 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Mar 11 01:23:30.711259 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Mar 11 01:23:30.711438 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 11 01:23:30.711684 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Mar 11 01:23:30.711893 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Mar 11 01:23:30.712110 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Mar 11 01:23:30.712340 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 11 01:23:30.712618 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Mar 11 01:23:30.712905 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Mar 11 01:23:30.713101 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Mar 11 01:23:30.713340 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 11 01:23:30.713498 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Mar 11 01:23:30.713773 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Mar 11 01:23:30.713951 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 11 01:23:30.714107 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 12695 usecs
Mar 11 01:23:30.714514 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Mar 11 01:23:30.714736 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Mar 11 01:23:30.714904 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Mar 11 01:23:30.715216 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Mar 11 01:23:30.715404 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Mar 11 01:23:30.715424 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 11 01:23:30.715441 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 11 01:23:30.715453 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 11 01:23:30.715463 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 11 01:23:30.715474 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 11 01:23:30.715485 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 11 01:23:30.715496 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 11 01:23:30.715507 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 11 01:23:30.715518 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 11 01:23:30.715529 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 11 01:23:30.715544 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 11 01:23:30.715555 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 11 01:23:30.715566 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 11 01:23:30.715577 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 11 01:23:30.715588 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 11 01:23:30.715600 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 11 01:23:30.715611 kernel: iommu: Default domain type: Translated
Mar 11 01:23:30.715622 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 11 01:23:30.715684 kernel: PCI: Using ACPI for IRQ routing
Mar 11 01:23:30.715704 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 11 01:23:30.715716 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 11 01:23:30.715728 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 11 01:23:30.715916 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 11 01:23:30.716089 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 11 01:23:30.716330 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 11 01:23:30.716350 kernel: vgaarb: loaded
Mar 11 01:23:30.716362 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 11 01:23:30.716380 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 11 01:23:30.716394 kernel: clocksource: Switched to clocksource kvm-clock
Mar 11 01:23:30.716405 kernel: VFS: Disk quotas dquot_6.6.0
Mar 11 01:23:30.716417 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 11 01:23:30.716429 kernel: pnp: PnP ACPI init
Mar 11 01:23:30.716914 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 11 01:23:30.716934 kernel: pnp: PnP ACPI: found 6 devices
Mar 11 01:23:30.716945 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 11 01:23:30.716960 kernel: NET: Registered PF_INET protocol family
Mar 11 01:23:30.716970 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 11 01:23:30.716981 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 11 01:23:30.716993 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 11 01:23:30.717007 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 11 01:23:30.717019 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 11 01:23:30.717032 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 11 01:23:30.717043 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 11 01:23:30.717053 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 11 01:23:30.717069 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 11 01:23:30.717081 kernel: NET: Registered PF_XDP protocol family
Mar 11 01:23:30.717309 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 11 01:23:30.717473 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 11 01:23:30.717676 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 11 01:23:30.717839 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 11 01:23:30.717995 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 11 01:23:30.718218 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 11 01:23:30.718242 kernel: PCI: CLS 0 bytes, default 64
Mar 11 01:23:30.718254 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Mar 11 01:23:30.718266 kernel: Initialise system trusted keyrings
Mar 11 01:23:30.718277 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 11 01:23:30.718288 kernel: Key type asymmetric registered
Mar 11 01:23:30.718300 kernel: Asymmetric key parser 'x509' registered
Mar 11 01:23:30.718314 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 11 01:23:30.718325 kernel: io scheduler mq-deadline registered
Mar 11 01:23:30.718335 kernel: io scheduler kyber registered
Mar 11 01:23:30.718351 kernel: io scheduler bfq registered
Mar 11 01:23:30.718362 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 11 01:23:30.718374 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 11 01:23:30.718388 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 11 01:23:30.718402 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 11 01:23:30.718416 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 11 01:23:30.718430 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 11 01:23:30.718444 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 11 01:23:30.718458 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 11 01:23:30.718476 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 11 01:23:30.718778 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 11 01:23:30.718802 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 11 01:23:30.718994 kernel: rtc_cmos 00:04: registered as rtc0
Mar 11 01:23:30.719231 kernel: rtc_cmos 00:04: setting system clock to 2026-03-11T01:23:29 UTC (1773192209)
Mar 11 01:23:30.719392 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 11 01:23:30.719409 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 11 01:23:30.719420 kernel: NET: Registered PF_INET6 protocol family
Mar 11 01:23:30.719435 kernel: Segment Routing with IPv6
Mar 11 01:23:30.719446 kernel: In-situ OAM (IOAM) with IPv6
Mar 11 01:23:30.719457 kernel: NET: Registered PF_PACKET protocol family
Mar 11 01:23:30.719467 kernel: Key type dns_resolver registered
Mar 11 01:23:30.719477 kernel: IPI shorthand broadcast: enabled
Mar 11 01:23:30.719487 kernel: sched_clock: Marking stable (7643071690, 3133802429)->(14305515627, -3528641508)
Mar 11 01:23:30.719498 kernel: registered taskstats version 1
Mar 11 01:23:30.719508 kernel: Loading compiled-in X.509 certificates
Mar 11 01:23:30.719518 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: f12e91812e886b584a72b5c6c0acfeca2f4315f4'
Mar 11 01:23:30.719531 kernel: Demotion targets for Node 0: null
Mar 11 01:23:30.719542 kernel: Key type .fscrypt registered
Mar 11 01:23:30.719553 kernel: Key type fscrypt-provisioning registered
Mar 11 01:23:30.719566 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 11 01:23:30.719578 kernel: ima: Allocated hash algorithm: sha1
Mar 11 01:23:30.719591 kernel: ima: No architecture policies found
Mar 11 01:23:30.719604 kernel: clk: Disabling unused clocks
Mar 11 01:23:30.719618 kernel: Warning: unable to open an initial console.
Mar 11 01:23:30.719661 kernel: Freeing unused kernel image (initmem) memory: 46196K
Mar 11 01:23:30.719680 kernel: Write protecting the kernel read-only data: 40960k
Mar 11 01:23:30.719692 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Mar 11 01:23:30.719703 kernel: Run /init as init process
Mar 11 01:23:30.719713 kernel: with arguments:
Mar 11 01:23:30.719723 kernel: /init
Mar 11 01:23:30.719734 kernel: with environment:
Mar 11 01:23:30.719744 kernel: HOME=/
Mar 11 01:23:30.719755 kernel: TERM=linux
Mar 11 01:23:30.719767 systemd[1]: Successfully made /usr/ read-only.
Mar 11 01:23:30.719786 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 11 01:23:30.719798 systemd[1]: Detected virtualization kvm.
Mar 11 01:23:30.719809 systemd[1]: Detected architecture x86-64.
Mar 11 01:23:30.719819 systemd[1]: Running in initrd.
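The `rtc_cmos` entry earlier pairs an ISO timestamp with its Unix epoch value ("2026-03-11T01:23:29 UTC (1773192209)"); the two agree, which can be checked directly with the standard library (a small sketch, using only numbers copied from the log):

```python
from datetime import datetime, timezone

# Epoch value from "rtc_cmos 00:04: setting system clock to 2026-03-11T01:23:29 UTC (1773192209)"
epoch = 1773192209
iso = datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat()
print(iso)  # → 2026-03-11T01:23:29+00:00
```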
Mar 11 01:23:30.719830 systemd[1]: No hostname configured, using default hostname.
Mar 11 01:23:30.719841 systemd[1]: Hostname set to .
Mar 11 01:23:30.719851 systemd[1]: Initializing machine ID from VM UUID.
Mar 11 01:23:30.719865 systemd[1]: Queued start job for default target initrd.target.
Mar 11 01:23:30.719889 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 11 01:23:30.719903 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 11 01:23:30.719916 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 11 01:23:30.719927 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 11 01:23:30.719939 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 11 01:23:30.719955 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 11 01:23:30.719967 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 11 01:23:30.719978 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 11 01:23:30.719989 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 11 01:23:30.720000 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 11 01:23:30.720011 systemd[1]: Reached target paths.target - Path Units.
Mar 11 01:23:30.720022 systemd[1]: Reached target slices.target - Slice Units.
Mar 11 01:23:30.720035 systemd[1]: Reached target swap.target - Swaps.
Mar 11 01:23:30.720047 systemd[1]: Reached target timers.target - Timer Units.
Mar 11 01:23:30.720059 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 11 01:23:30.720070 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 11 01:23:30.720082 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 11 01:23:30.720093 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Mar 11 01:23:30.720104 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 11 01:23:30.720116 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 11 01:23:30.720198 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 11 01:23:30.720217 systemd[1]: Reached target sockets.target - Socket Units. Mar 11 01:23:30.720229 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 11 01:23:30.720241 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 11 01:23:30.720252 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 11 01:23:30.720264 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Mar 11 01:23:30.720276 systemd[1]: Starting systemd-fsck-usr.service... Mar 11 01:23:30.720288 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 11 01:23:30.720300 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 11 01:23:30.720314 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 11 01:23:30.720326 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 11 01:23:30.720344 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 11 01:23:30.720354 systemd[1]: Finished systemd-fsck-usr.service. Mar 11 01:23:30.720401 systemd-journald[203]: Collecting audit messages is disabled. 
Mar 11 01:23:30.720432 systemd-journald[203]: Journal started Mar 11 01:23:30.720455 systemd-journald[203]: Runtime Journal (/run/log/journal/f3881276884b4c24972249aa9db58310) is 6M, max 48.3M, 42.2M free. Mar 11 01:23:30.689959 systemd-modules-load[204]: Inserted module 'overlay' Mar 11 01:23:30.738572 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 11 01:23:30.738628 systemd[1]: Started systemd-journald.service - Journal Service. Mar 11 01:23:30.761982 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 11 01:23:30.775784 kernel: Bridge firewalling registered Mar 11 01:23:30.775957 systemd-modules-load[204]: Inserted module 'br_netfilter' Mar 11 01:23:30.984231 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 11 01:23:31.018068 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 11 01:23:31.022939 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 11 01:23:31.049316 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 11 01:23:31.060818 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 11 01:23:31.065537 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 11 01:23:31.088405 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 11 01:23:31.096272 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 11 01:23:31.121967 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 11 01:23:31.129719 systemd-tmpfiles[224]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. 
Mar 11 01:23:31.130230 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 11 01:23:31.135277 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 11 01:23:31.149115 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 11 01:23:31.171763 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 11 01:23:31.179418 dracut-cmdline[238]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ff6feea892c44d03e54b3d5ffbb43831b88909b30f2f39fbf5cd79dd05d89672 Mar 11 01:23:31.274403 systemd-resolved[248]: Positive Trust Anchors: Mar 11 01:23:31.274770 systemd-resolved[248]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 11 01:23:31.274811 systemd-resolved[248]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 11 01:23:31.279686 systemd-resolved[248]: Defaulting to hostname 'linux'. Mar 11 01:23:31.281024 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 11 01:23:31.286617 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Mar 11 01:23:31.460225 kernel: SCSI subsystem initialized Mar 11 01:23:31.480665 kernel: Loading iSCSI transport class v2.0-870. Mar 11 01:23:31.518246 kernel: iscsi: registered transport (tcp) Mar 11 01:23:31.553711 kernel: iscsi: registered transport (qla4xxx) Mar 11 01:23:31.553836 kernel: QLogic iSCSI HBA Driver Mar 11 01:23:31.603398 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 11 01:23:31.644032 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 11 01:23:31.650406 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 11 01:23:31.788697 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 11 01:23:31.810991 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 11 01:23:31.921507 kernel: raid6: avx2x4 gen() 20050 MB/s Mar 11 01:23:31.940859 kernel: raid6: avx2x2 gen() 20027 MB/s Mar 11 01:23:31.961687 kernel: raid6: avx2x1 gen() 11768 MB/s Mar 11 01:23:31.961982 kernel: raid6: using algorithm avx2x4 gen() 20050 MB/s Mar 11 01:23:31.984728 kernel: raid6: .... xor() 5037 MB/s, rmw enabled Mar 11 01:23:31.984871 kernel: raid6: using avx2x2 recovery algorithm Mar 11 01:23:32.028229 kernel: xor: automatically using best checksumming function avx Mar 11 01:23:32.779308 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 11 01:23:32.820052 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 11 01:23:32.833115 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 11 01:23:32.898929 systemd-udevd[451]: Using default interface naming scheme 'v255'. Mar 11 01:23:32.907294 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 11 01:23:32.919326 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Mar 11 01:23:33.002757 dracut-pre-trigger[460]: rd.md=0: removing MD RAID activation Mar 11 01:23:33.124416 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 11 01:23:33.136470 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 11 01:23:33.309020 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 11 01:23:33.321923 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 11 01:23:33.394368 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 11 01:23:33.423567 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 11 01:23:33.427175 kernel: cryptd: max_cpu_qlen set to 1000 Mar 11 01:23:33.459208 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 11 01:23:33.459288 kernel: GPT:9289727 != 19775487 Mar 11 01:23:33.459304 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 11 01:23:33.459320 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Mar 11 01:23:33.459336 kernel: GPT:9289727 != 19775487 Mar 11 01:23:33.459350 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 11 01:23:33.459364 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 11 01:23:33.462215 kernel: libata version 3.00 loaded. Mar 11 01:23:33.471069 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 11 01:23:33.471361 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 11 01:23:33.491785 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 11 01:23:33.524468 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 11 01:23:33.539449 kernel: AES CTR mode by8 optimization enabled Mar 11 01:23:33.533368 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Mar 11 01:23:33.553082 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 11 01:23:33.568847 kernel: ahci 0000:00:1f.2: version 3.0 Mar 11 01:23:33.575811 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 11 01:23:33.630727 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Mar 11 01:23:33.632526 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Mar 11 01:23:33.632788 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 11 01:23:33.648830 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 11 01:23:33.665393 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 11 01:23:33.915992 kernel: scsi host0: ahci Mar 11 01:23:33.917790 kernel: scsi host1: ahci Mar 11 01:23:33.929446 kernel: scsi host2: ahci Mar 11 01:23:33.929772 kernel: scsi host3: ahci Mar 11 01:23:33.940984 kernel: scsi host4: ahci Mar 11 01:23:33.941717 kernel: scsi host5: ahci Mar 11 01:23:33.941965 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1 Mar 11 01:23:33.941984 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1 Mar 11 01:23:33.942000 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1 Mar 11 01:23:33.942042 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1 Mar 11 01:23:33.942059 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1 Mar 11 01:23:33.942074 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1 Mar 11 01:23:33.930979 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 11 01:23:33.976440 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Mar 11 01:23:34.051464 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 11 01:23:34.051504 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 11 01:23:34.051557 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 11 01:23:34.051578 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 11 01:23:34.051596 kernel: ata3.00: LPM support broken, forcing max_power Mar 11 01:23:34.051613 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 11 01:23:34.051631 kernel: ata3.00: applying bridge limits Mar 11 01:23:34.051647 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 11 01:23:34.038864 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 11 01:23:34.130553 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 11 01:23:34.130595 kernel: ata3.00: LPM support broken, forcing max_power Mar 11 01:23:34.130614 kernel: ata3.00: configured for UDMA/100 Mar 11 01:23:34.130632 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 11 01:23:34.097042 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 11 01:23:34.215223 disk-uuid[615]: Primary Header is updated. Mar 11 01:23:34.215223 disk-uuid[615]: Secondary Entries is updated. Mar 11 01:23:34.215223 disk-uuid[615]: Secondary Header is updated. Mar 11 01:23:34.246204 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 11 01:23:34.338719 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 11 01:23:34.339071 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 11 01:23:34.364528 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 11 01:23:34.963920 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 11 01:23:34.976899 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Mar 11 01:23:35.006646 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 11 01:23:35.021485 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 11 01:23:35.037004 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 11 01:23:35.129424 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 11 01:23:35.315777 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 11 01:23:35.327922 disk-uuid[616]: The operation has completed successfully. Mar 11 01:23:35.456750 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 11 01:23:35.456956 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 11 01:23:35.538040 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 11 01:23:35.591993 sh[645]: Success Mar 11 01:23:35.666407 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 11 01:23:35.666566 kernel: device-mapper: uevent: version 1.0.3 Mar 11 01:23:35.670014 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Mar 11 01:23:35.752258 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Mar 11 01:23:35.920667 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 11 01:23:35.949235 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 11 01:23:35.963807 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 11 01:23:36.004505 kernel: BTRFS: device fsid c4a8fdff-e73d-4b80-8dd0-34744c0eaa22 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (657) Mar 11 01:23:36.013073 kernel: BTRFS info (device dm-0): first mount of filesystem c4a8fdff-e73d-4b80-8dd0-34744c0eaa22 Mar 11 01:23:36.013646 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 11 01:23:36.076294 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Mar 11 01:23:36.076515 kernel: BTRFS info (device dm-0 state E): enabling free space tree Mar 11 01:23:36.080098 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 11 01:23:36.087253 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Mar 11 01:23:36.102619 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 11 01:23:36.115095 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 11 01:23:36.125412 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 11 01:23:36.282310 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (696) Mar 11 01:23:36.301786 kernel: BTRFS info (device vda6): first mount of filesystem 9d3cb493-f0b4-4e82-903e-5b69b46b4bc5 Mar 11 01:23:36.301929 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 11 01:23:36.337448 kernel: BTRFS info (device vda6): turning on async discard Mar 11 01:23:36.337530 kernel: BTRFS info (device vda6): enabling free space tree Mar 11 01:23:36.352342 kernel: BTRFS info (device vda6): last unmount of filesystem 9d3cb493-f0b4-4e82-903e-5b69b46b4bc5 Mar 11 01:23:36.376631 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 11 01:23:36.404390 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Mar 11 01:23:37.120390 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 11 01:23:37.136592 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 11 01:23:37.152810 ignition[758]: Ignition 2.22.0 Mar 11 01:23:37.152821 ignition[758]: Stage: fetch-offline Mar 11 01:23:37.152906 ignition[758]: no configs at "/usr/lib/ignition/base.d" Mar 11 01:23:37.152922 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 11 01:23:37.153096 ignition[758]: parsed url from cmdline: "" Mar 11 01:23:37.153102 ignition[758]: no config URL provided Mar 11 01:23:37.153111 ignition[758]: reading system config file "/usr/lib/ignition/user.ign" Mar 11 01:23:37.153124 ignition[758]: no config at "/usr/lib/ignition/user.ign" Mar 11 01:23:37.153281 ignition[758]: op(1): [started] loading QEMU firmware config module Mar 11 01:23:37.153290 ignition[758]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 11 01:23:37.216340 ignition[758]: op(1): [finished] loading QEMU firmware config module Mar 11 01:23:37.315908 systemd-networkd[833]: lo: Link UP Mar 11 01:23:37.315939 systemd-networkd[833]: lo: Gained carrier Mar 11 01:23:37.318380 systemd-networkd[833]: Enumeration completed Mar 11 01:23:37.318532 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 11 01:23:37.320127 systemd-networkd[833]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 11 01:23:37.320236 systemd-networkd[833]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 11 01:23:37.324385 systemd-networkd[833]: eth0: Link UP Mar 11 01:23:37.324602 systemd-networkd[833]: eth0: Gained carrier Mar 11 01:23:37.324615 systemd-networkd[833]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Mar 11 01:23:37.325942 systemd[1]: Reached target network.target - Network. Mar 11 01:23:37.362282 systemd-networkd[833]: eth0: DHCPv4 address 10.0.0.26/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 11 01:23:37.473587 systemd-resolved[248]: Detected conflict on linux IN A 10.0.0.26 Mar 11 01:23:37.473628 systemd-resolved[248]: Hostname conflict, changing published hostname from 'linux' to 'linux7'. Mar 11 01:23:37.524114 ignition[758]: parsing config with SHA512: e8855e058cd43e72b3f1a3b12c2301870fa1a58334a0813fd4ba3687bf0058cab1f6d009f6d59fea28d159c2a9b5e936635315fc4340bde6ef55d6fcfb6ffb8c Mar 11 01:23:37.555949 unknown[758]: fetched base config from "system" Mar 11 01:23:37.558919 unknown[758]: fetched user config from "qemu" Mar 11 01:23:37.559796 ignition[758]: fetch-offline: fetch-offline passed Mar 11 01:23:37.569044 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 11 01:23:37.559944 ignition[758]: Ignition finished successfully Mar 11 01:23:37.580839 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 11 01:23:37.583928 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 11 01:23:37.902024 ignition[840]: Ignition 2.22.0 Mar 11 01:23:37.902058 ignition[840]: Stage: kargs Mar 11 01:23:37.902322 ignition[840]: no configs at "/usr/lib/ignition/base.d" Mar 11 01:23:37.902335 ignition[840]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 11 01:23:37.910827 ignition[840]: kargs: kargs passed Mar 11 01:23:37.926874 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 11 01:23:37.914526 ignition[840]: Ignition finished successfully Mar 11 01:23:37.941280 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Mar 11 01:23:38.553247 ignition[848]: Ignition 2.22.0 Mar 11 01:23:38.553294 ignition[848]: Stage: disks Mar 11 01:23:38.553918 ignition[848]: no configs at "/usr/lib/ignition/base.d" Mar 11 01:23:38.553934 ignition[848]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 11 01:23:38.564317 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 11 01:23:38.556947 ignition[848]: disks: disks passed Mar 11 01:23:38.571561 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 11 01:23:38.557019 ignition[848]: Ignition finished successfully Mar 11 01:23:38.578299 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 11 01:23:38.582382 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 11 01:23:38.586383 systemd[1]: Reached target sysinit.target - System Initialization. Mar 11 01:23:38.593804 systemd[1]: Reached target basic.target - Basic System. Mar 11 01:23:38.622990 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 11 01:23:38.705895 systemd-fsck[857]: ROOT: clean, 15/553520 files, 52789/553472 blocks Mar 11 01:23:38.718397 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 11 01:23:38.729590 systemd-networkd[833]: eth0: Gained IPv6LL Mar 11 01:23:38.734910 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 11 01:23:39.374323 kernel: EXT4-fs (vda9): mounted filesystem c1f3cdbf-5b21-4788-be5d-09feabe58a3e r/w with ordered data mode. Quota mode: none. Mar 11 01:23:39.377029 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 11 01:23:39.378011 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 11 01:23:39.403993 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 11 01:23:39.416929 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Mar 11 01:23:39.420745 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 11 01:23:39.420830 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 11 01:23:39.420870 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 11 01:23:39.455409 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 11 01:23:39.468360 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 11 01:23:39.489452 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (865) Mar 11 01:23:39.489532 kernel: BTRFS info (device vda6): first mount of filesystem 9d3cb493-f0b4-4e82-903e-5b69b46b4bc5 Mar 11 01:23:39.489548 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 11 01:23:39.505574 kernel: BTRFS info (device vda6): turning on async discard Mar 11 01:23:39.505656 kernel: BTRFS info (device vda6): enabling free space tree Mar 11 01:23:39.508986 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 11 01:23:39.633352 initrd-setup-root[889]: cut: /sysroot/etc/passwd: No such file or directory Mar 11 01:23:39.647275 initrd-setup-root[896]: cut: /sysroot/etc/group: No such file or directory Mar 11 01:23:39.664574 initrd-setup-root[903]: cut: /sysroot/etc/shadow: No such file or directory Mar 11 01:23:39.675557 initrd-setup-root[910]: cut: /sysroot/etc/gshadow: No such file or directory Mar 11 01:23:40.131655 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 11 01:23:40.145736 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 11 01:23:40.154448 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 11 01:23:40.175791 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Mar 11 01:23:40.187260 kernel: BTRFS info (device vda6): last unmount of filesystem 9d3cb493-f0b4-4e82-903e-5b69b46b4bc5 Mar 11 01:23:40.260790 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 11 01:23:40.373600 ignition[978]: INFO : Ignition 2.22.0 Mar 11 01:23:40.373600 ignition[978]: INFO : Stage: mount Mar 11 01:23:40.384604 ignition[978]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 11 01:23:40.384604 ignition[978]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 11 01:23:40.652548 ignition[978]: INFO : mount: mount passed Mar 11 01:23:40.652548 ignition[978]: INFO : Ignition finished successfully Mar 11 01:23:40.663481 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 11 01:23:40.671319 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 11 01:23:40.732005 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 11 01:23:40.776224 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (991) Mar 11 01:23:40.783222 kernel: BTRFS info (device vda6): first mount of filesystem 9d3cb493-f0b4-4e82-903e-5b69b46b4bc5 Mar 11 01:23:40.783285 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 11 01:23:40.835989 kernel: BTRFS info (device vda6): turning on async discard Mar 11 01:23:40.836617 kernel: BTRFS info (device vda6): enabling free space tree Mar 11 01:23:40.851955 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 11 01:23:41.327321 ignition[1008]: INFO : Ignition 2.22.0 Mar 11 01:23:41.327321 ignition[1008]: INFO : Stage: files Mar 11 01:23:41.338352 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 11 01:23:41.338352 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 11 01:23:41.349290 ignition[1008]: DEBUG : files: compiled without relabeling support, skipping Mar 11 01:23:41.353536 ignition[1008]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 11 01:23:41.353536 ignition[1008]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 11 01:23:41.376827 ignition[1008]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 11 01:23:41.383406 ignition[1008]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 11 01:23:41.409598 unknown[1008]: wrote ssh authorized keys file for user: core Mar 11 01:23:41.416516 ignition[1008]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 11 01:23:41.416516 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 11 01:23:41.416516 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 11 01:23:41.553847 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 11 01:23:43.508408 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 11 01:23:43.508408 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 11 01:23:43.533086 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Mar 11 01:23:43.533086 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 11 01:23:43.533086 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 11 01:23:43.533086 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 11 01:23:43.533086 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 11 01:23:43.533086 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 11 01:23:43.533086 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 11 01:23:43.581718 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 11 01:23:43.581718 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 11 01:23:43.581718 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 11 01:23:43.633032 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 11 01:23:43.633032 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 11 01:23:43.633032 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1 Mar 11 01:23:44.118458 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 11 01:23:45.010689 kernel: hrtimer: interrupt took 6745851 ns Mar 11 01:23:47.479582 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 11 01:23:47.479582 ignition[1008]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 11 01:23:47.506934 ignition[1008]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 11 01:23:47.639251 ignition[1008]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 11 01:23:47.639251 ignition[1008]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 11 01:23:47.639251 ignition[1008]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Mar 11 01:23:47.722321 ignition[1008]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 11 01:23:47.769893 ignition[1008]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 11 01:23:47.802924 ignition[1008]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Mar 11 01:23:47.819032 ignition[1008]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Mar 11 01:23:48.036656 ignition[1008]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 11 01:23:48.063093 ignition[1008]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 11 
01:23:48.080194 ignition[1008]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Mar 11 01:23:48.080194 ignition[1008]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Mar 11 01:23:48.080194 ignition[1008]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Mar 11 01:23:48.080194 ignition[1008]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 11 01:23:48.080194 ignition[1008]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 11 01:23:48.080194 ignition[1008]: INFO : files: files passed Mar 11 01:23:48.080194 ignition[1008]: INFO : Ignition finished successfully Mar 11 01:23:48.072584 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 11 01:23:48.088228 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 11 01:23:48.112330 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 11 01:23:48.167008 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 11 01:23:48.167255 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 11 01:23:48.222439 initrd-setup-root-after-ignition[1037]: grep: /sysroot/oem/oem-release: No such file or directory Mar 11 01:23:48.243427 initrd-setup-root-after-ignition[1039]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 11 01:23:48.243427 initrd-setup-root-after-ignition[1039]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 11 01:23:48.260866 initrd-setup-root-after-ignition[1043]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 11 01:23:48.279697 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
Mar 11 01:23:48.308352 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 11 01:23:48.319970 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 11 01:23:48.426092 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 11 01:23:48.426357 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 11 01:23:48.439876 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 11 01:23:48.443479 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 11 01:23:48.466915 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 11 01:23:48.471410 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 11 01:23:48.584022 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 11 01:23:48.604516 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 11 01:23:48.748075 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 11 01:23:48.761432 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 11 01:23:48.779712 systemd[1]: Stopped target timers.target - Timer Units. Mar 11 01:23:48.811211 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 11 01:23:48.812927 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 11 01:23:48.826808 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 11 01:23:48.831951 systemd[1]: Stopped target basic.target - Basic System. Mar 11 01:23:48.840050 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 11 01:23:48.855024 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 11 01:23:48.868013 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Mar 11 01:23:48.872115 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Mar 11 01:23:48.876951 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 11 01:23:48.915190 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 11 01:23:48.920603 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 11 01:23:48.928977 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 11 01:23:48.936705 systemd[1]: Stopped target swap.target - Swaps. Mar 11 01:23:48.956038 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 11 01:23:48.956376 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 11 01:23:48.963017 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 11 01:23:48.971427 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 11 01:23:48.977470 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 11 01:23:48.980992 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 11 01:23:49.011019 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 11 01:23:49.011301 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 11 01:23:49.026986 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 11 01:23:49.027239 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 11 01:23:49.037079 systemd[1]: Stopped target paths.target - Path Units. Mar 11 01:23:49.047926 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 11 01:23:49.048398 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 11 01:23:49.053348 systemd[1]: Stopped target slices.target - Slice Units. Mar 11 01:23:49.067214 systemd[1]: Stopped target sockets.target - Socket Units. 
Mar 11 01:23:49.084366 systemd[1]: iscsid.socket: Deactivated successfully. Mar 11 01:23:49.084495 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 11 01:23:49.097512 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 11 01:23:49.097651 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 11 01:23:49.103326 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 11 01:23:49.103574 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 11 01:23:49.122063 systemd[1]: ignition-files.service: Deactivated successfully. Mar 11 01:23:49.126465 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 11 01:23:49.144582 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 11 01:23:49.155447 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 11 01:23:49.156807 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 11 01:23:49.208901 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 11 01:23:49.213645 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 11 01:23:49.213917 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 11 01:23:49.222993 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 11 01:23:49.224740 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 11 01:23:49.320389 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 11 01:23:49.327066 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 11 01:23:49.327309 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 11 01:23:49.340808 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 11 01:23:49.341004 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Mar 11 01:23:49.365869 ignition[1063]: INFO : Ignition 2.22.0 Mar 11 01:23:49.365869 ignition[1063]: INFO : Stage: umount Mar 11 01:23:49.370862 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 11 01:23:49.370862 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 11 01:23:49.370862 ignition[1063]: INFO : umount: umount passed Mar 11 01:23:49.370862 ignition[1063]: INFO : Ignition finished successfully Mar 11 01:23:49.381730 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 11 01:23:49.381974 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 11 01:23:49.385097 systemd[1]: Stopped target network.target - Network. Mar 11 01:23:49.401197 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 11 01:23:49.401390 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 11 01:23:49.405866 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 11 01:23:49.405968 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 11 01:23:49.419929 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 11 01:23:49.420025 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 11 01:23:49.428615 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 11 01:23:49.428718 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 11 01:23:49.441045 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 11 01:23:49.441223 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 11 01:23:49.441632 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 11 01:23:49.456858 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 11 01:23:49.472011 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 11 01:23:49.472315 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Mar 11 01:23:49.508904 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 11 01:23:49.509396 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 11 01:23:49.509598 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 11 01:23:49.533356 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 11 01:23:49.534652 systemd[1]: Stopped target network-pre.target - Preparation for Network. Mar 11 01:23:49.548650 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 11 01:23:49.548756 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 11 01:23:49.562588 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 11 01:23:49.566076 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 11 01:23:49.566233 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 11 01:23:49.566461 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 11 01:23:49.567751 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 11 01:23:49.577716 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 11 01:23:49.577821 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 11 01:23:49.600395 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 11 01:23:49.600514 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 11 01:23:49.615676 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 11 01:23:49.619934 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 11 01:23:49.620055 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 11 01:23:49.668305 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Mar 11 01:23:49.671854 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 11 01:23:49.685671 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 11 01:23:49.689510 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 11 01:23:49.706548 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 11 01:23:49.706965 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 11 01:23:49.716332 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 11 01:23:49.716579 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 11 01:23:49.731220 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 11 01:23:49.733020 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 11 01:23:49.748974 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 11 01:23:49.749399 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 11 01:23:49.779614 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 11 01:23:49.786601 systemd[1]: systemd-network-generator.service: Deactivated successfully. Mar 11 01:23:49.792939 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Mar 11 01:23:49.810725 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 11 01:23:49.811323 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 11 01:23:49.834418 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 11 01:23:49.834668 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 11 01:23:49.854529 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. 
Mar 11 01:23:49.854668 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Mar 11 01:23:49.854831 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 11 01:23:49.855997 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 11 01:23:49.856260 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 11 01:23:49.868971 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 11 01:23:49.869561 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 11 01:23:49.911025 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 11 01:23:49.936176 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 11 01:23:50.032639 systemd[1]: Switching root. Mar 11 01:23:50.131978 systemd-journald[203]: Received SIGTERM from PID 1 (systemd). Mar 11 01:23:50.132231 systemd-journald[203]: Journal stopped Mar 11 01:23:52.478426 kernel: SELinux: policy capability network_peer_controls=1 Mar 11 01:23:52.478536 kernel: SELinux: policy capability open_perms=1 Mar 11 01:23:52.478554 kernel: SELinux: policy capability extended_socket_class=1 Mar 11 01:23:52.478569 kernel: SELinux: policy capability always_check_network=0 Mar 11 01:23:52.478583 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 11 01:23:52.478598 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 11 01:23:52.478612 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 11 01:23:52.478634 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 11 01:23:52.478650 kernel: SELinux: policy capability userspace_initial_context=0 Mar 11 01:23:52.478668 kernel: audit: type=1403 audit(1773192230.527:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 11 01:23:52.478688 systemd[1]: Successfully loaded SELinux policy in 112.222ms. 
Mar 11 01:23:52.478714 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.765ms. Mar 11 01:23:52.478732 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 11 01:23:52.478755 systemd[1]: Detected virtualization kvm. Mar 11 01:23:52.478848 systemd[1]: Detected architecture x86-64. Mar 11 01:23:52.478867 systemd[1]: Detected first boot. Mar 11 01:23:52.478885 systemd[1]: Initializing machine ID from VM UUID. Mar 11 01:23:52.478902 zram_generator::config[1109]: No configuration found. Mar 11 01:23:52.478925 kernel: Guest personality initialized and is inactive Mar 11 01:23:52.478942 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Mar 11 01:23:52.478960 kernel: Initialized host personality Mar 11 01:23:52.479002 kernel: NET: Registered PF_VSOCK protocol family Mar 11 01:23:52.479021 systemd[1]: Populated /etc with preset unit settings. Mar 11 01:23:52.479042 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Mar 11 01:23:52.479063 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 11 01:23:52.479081 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 11 01:23:52.479100 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 11 01:23:52.479124 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 11 01:23:52.479203 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 11 01:23:52.479224 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 11 01:23:52.479245 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
Mar 11 01:23:52.479263 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 11 01:23:52.479283 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 11 01:23:52.479302 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 11 01:23:52.479318 systemd[1]: Created slice user.slice - User and Session Slice. Mar 11 01:23:52.479388 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 11 01:23:52.479409 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 11 01:23:52.479425 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 11 01:23:52.479440 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 11 01:23:52.479457 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 11 01:23:52.479478 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 11 01:23:52.479497 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 11 01:23:52.479512 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 11 01:23:52.479565 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 11 01:23:52.479586 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 11 01:23:52.479604 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 11 01:23:52.479622 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 11 01:23:52.479642 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 11 01:23:52.479659 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Mar 11 01:23:52.479678 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 11 01:23:52.479694 systemd[1]: Reached target slices.target - Slice Units. Mar 11 01:23:52.479714 systemd[1]: Reached target swap.target - Swaps. Mar 11 01:23:52.479737 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 11 01:23:52.479756 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 11 01:23:52.479774 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Mar 11 01:23:52.479827 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 11 01:23:52.479882 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 11 01:23:52.479902 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 11 01:23:52.479924 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 11 01:23:52.479942 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 11 01:23:52.479959 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 11 01:23:52.479980 systemd[1]: Mounting media.mount - External Media Directory... Mar 11 01:23:52.480001 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 11 01:23:52.480021 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 11 01:23:52.480038 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 11 01:23:52.480057 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 11 01:23:52.480075 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 11 01:23:52.480095 systemd[1]: Reached target machines.target - Containers. 
Mar 11 01:23:52.480114 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 11 01:23:52.480189 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 11 01:23:52.480212 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 11 01:23:52.480229 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 11 01:23:52.480249 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 11 01:23:52.480269 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 11 01:23:52.480288 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 11 01:23:52.480306 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 11 01:23:52.480326 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 11 01:23:52.480381 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 11 01:23:52.480409 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 11 01:23:52.480427 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 11 01:23:52.480445 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 11 01:23:52.480461 systemd[1]: Stopped systemd-fsck-usr.service. Mar 11 01:23:52.480479 kernel: fuse: init (API version 7.41) Mar 11 01:23:52.480500 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Mar 11 01:23:52.480515 kernel: ACPI: bus type drm_connector registered Mar 11 01:23:52.480532 kernel: loop: module loaded Mar 11 01:23:52.480553 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 11 01:23:52.480574 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 11 01:23:52.480594 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 11 01:23:52.480613 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 11 01:23:52.480630 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 11 01:23:52.480649 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 11 01:23:52.480671 systemd[1]: verity-setup.service: Deactivated successfully. Mar 11 01:23:52.480691 systemd[1]: Stopped verity-setup.service. Mar 11 01:23:52.480710 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 11 01:23:52.480767 systemd-journald[1194]: Collecting audit messages is disabled. Mar 11 01:23:52.480842 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 11 01:23:52.480865 systemd-journald[1194]: Journal started Mar 11 01:23:52.480960 systemd-journald[1194]: Runtime Journal (/run/log/journal/f3881276884b4c24972249aa9db58310) is 6M, max 48.3M, 42.2M free. Mar 11 01:23:51.609463 systemd[1]: Queued start job for default target multi-user.target. Mar 11 01:23:51.634107 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 11 01:23:51.635006 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 11 01:23:51.635698 systemd[1]: systemd-journald.service: Consumed 1.105s CPU time. Mar 11 01:23:52.503909 systemd[1]: Started systemd-journald.service - Journal Service. 
Mar 11 01:23:52.509213 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 11 01:23:52.513738 systemd[1]: Mounted media.mount - External Media Directory. Mar 11 01:23:52.517097 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 11 01:23:52.522524 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 11 01:23:52.528881 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 11 01:23:52.534625 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 11 01:23:52.539934 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 11 01:23:52.548744 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 11 01:23:52.550091 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 11 01:23:52.555181 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 11 01:23:52.555439 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 11 01:23:52.560069 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 11 01:23:52.560555 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 11 01:23:52.564493 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 11 01:23:52.564745 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 11 01:23:52.570318 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 11 01:23:52.570628 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 11 01:23:52.576777 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 11 01:23:52.577196 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 11 01:23:52.581448 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 11 01:23:52.585582 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Mar 11 01:23:52.607990 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 11 01:23:52.613449 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 11 01:23:52.631378 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 11 01:23:52.639344 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 11 01:23:52.645760 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 11 01:23:52.649479 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 11 01:23:52.649552 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 11 01:23:52.653454 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 11 01:23:52.666439 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 11 01:23:52.669918 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 11 01:23:52.671586 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 11 01:23:52.686195 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 11 01:23:52.693634 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 11 01:23:52.697339 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 11 01:23:52.717710 systemd-journald[1194]: Time spent on flushing to /var/log/journal/f3881276884b4c24972249aa9db58310 is 47.526ms for 974 entries. Mar 11 01:23:52.717710 systemd-journald[1194]: System Journal (/var/log/journal/f3881276884b4c24972249aa9db58310) is 8M, max 195.6M, 187.6M free. 
Mar 11 01:23:52.791474 systemd-journald[1194]: Received client request to flush runtime journal. Mar 11 01:23:52.791544 kernel: loop0: detected capacity change from 0 to 219192 Mar 11 01:23:52.713916 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 11 01:23:52.716575 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 11 01:23:52.731842 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 11 01:23:52.738528 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 11 01:23:52.745763 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 11 01:23:52.750670 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 11 01:23:52.759259 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 11 01:23:52.787278 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 11 01:23:52.821334 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 11 01:23:52.809385 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 11 01:23:52.818234 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 11 01:23:52.826567 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 11 01:23:52.834780 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 11 01:23:52.853339 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 11 01:23:52.861341 kernel: loop1: detected capacity change from 0 to 128560 Mar 11 01:23:52.863315 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 11 01:23:52.909653 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Mar 11 01:23:52.911370 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 11 01:23:52.997632 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. Mar 11 01:23:53.008424 kernel: loop2: detected capacity change from 0 to 110984 Mar 11 01:23:53.001620 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. Mar 11 01:23:53.016436 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 11 01:23:53.107878 kernel: loop3: detected capacity change from 0 to 219192 Mar 11 01:23:53.212230 kernel: loop4: detected capacity change from 0 to 128560 Mar 11 01:23:53.267224 kernel: loop5: detected capacity change from 0 to 110984 Mar 11 01:23:54.364656 (sd-merge)[1253]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 11 01:23:54.367438 (sd-merge)[1253]: Merged extensions into '/usr'. Mar 11 01:23:54.384030 systemd[1]: Reload requested from client PID 1228 ('systemd-sysext') (unit systemd-sysext.service)... Mar 11 01:23:54.384057 systemd[1]: Reloading... Mar 11 01:23:54.845392 zram_generator::config[1276]: No configuration found. Mar 11 01:23:55.987281 ldconfig[1223]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 11 01:23:56.348085 systemd[1]: Reloading finished in 1963 ms. Mar 11 01:23:56.407007 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 11 01:23:56.414028 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 11 01:23:56.444499 systemd[1]: Starting ensure-sysext.service... Mar 11 01:23:56.448624 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 11 01:23:56.482864 systemd[1]: Reload requested from client PID 1316 ('systemctl') (unit ensure-sysext.service)... Mar 11 01:23:56.482906 systemd[1]: Reloading... 
Mar 11 01:23:56.736658 systemd-tmpfiles[1317]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Mar 11 01:23:56.738766 systemd-tmpfiles[1317]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Mar 11 01:23:56.739558 systemd-tmpfiles[1317]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 11 01:23:56.744052 systemd-tmpfiles[1317]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 11 01:23:56.746449 systemd-tmpfiles[1317]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 11 01:23:56.746925 systemd-tmpfiles[1317]: ACLs are not supported, ignoring.
Mar 11 01:23:56.747020 systemd-tmpfiles[1317]: ACLs are not supported, ignoring.
Mar 11 01:23:56.756716 systemd-tmpfiles[1317]: Detected autofs mount point /boot during canonicalization of boot.
Mar 11 01:23:56.756766 systemd-tmpfiles[1317]: Skipping /boot
Mar 11 01:23:56.797770 systemd-tmpfiles[1317]: Detected autofs mount point /boot during canonicalization of boot.
Mar 11 01:23:56.799705 systemd-tmpfiles[1317]: Skipping /boot
Mar 11 01:23:57.104284 zram_generator::config[1343]: No configuration found.
Mar 11 01:23:57.508039 systemd[1]: Reloading finished in 1024 ms.
Mar 11 01:23:57.533489 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 11 01:23:57.571607 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 11 01:23:57.588717 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 11 01:23:57.623406 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 11 01:23:57.638190 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 11 01:23:57.646523 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 11 01:23:57.662411 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 11 01:23:57.697445 augenrules[1407]: No rules
Mar 11 01:23:57.987327 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 11 01:23:57.997653 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 11 01:23:57.998169 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 11 01:23:58.004328 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 11 01:23:58.025582 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 11 01:23:58.026039 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 11 01:23:58.030474 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 11 01:23:58.037591 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 11 01:23:58.044765 systemd-udevd[1396]: Using default interface naming scheme 'v255'.
Mar 11 01:23:58.046655 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 11 01:23:58.053515 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 11 01:23:58.053972 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 11 01:23:58.062396 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 11 01:23:58.071112 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 11 01:23:58.075738 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 11 01:23:58.080089 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 11 01:23:58.080679 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 11 01:23:58.100229 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 11 01:23:58.110560 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 11 01:23:58.117985 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 11 01:23:58.125371 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 11 01:23:58.125701 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 11 01:23:58.135602 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 11 01:23:58.143374 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 11 01:23:58.150289 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 11 01:23:58.182449 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 11 01:23:58.196973 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 11 01:23:58.200489 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 11 01:23:58.206414 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 11 01:23:58.216750 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 11 01:23:58.233254 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 11 01:23:58.280728 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 11 01:23:58.287010 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 11 01:23:58.317644 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 11 01:23:58.354069 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 11 01:23:58.359741 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 11 01:23:58.360233 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 11 01:23:58.363192 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 11 01:23:58.416083 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 11 01:23:58.417279 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 11 01:23:58.428009 systemd[1]: Finished ensure-sysext.service.
Mar 11 01:23:58.431935 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 11 01:23:58.438281 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 11 01:23:58.442562 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 11 01:23:58.442825 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 11 01:23:58.447032 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 11 01:23:58.447426 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 11 01:23:58.462273 augenrules[1451]: /sbin/augenrules: No change
Mar 11 01:23:58.463063 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 11 01:23:58.463215 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 11 01:23:58.467061 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 11 01:23:58.485592 augenrules[1487]: No rules
Mar 11 01:23:58.496872 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 11 01:23:58.497311 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 11 01:23:58.515121 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 11 01:23:58.706994 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 11 01:23:58.720825 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 11 01:23:59.820440 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 11 01:23:59.847212 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Mar 11 01:23:59.853446 kernel: mousedev: PS/2 mouse device common for all mice
Mar 11 01:23:59.853302 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 11 01:23:59.855672 systemd-resolved[1394]: Positive Trust Anchors:
Mar 11 01:23:59.855709 systemd-resolved[1394]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 11 01:23:59.855736 systemd-resolved[1394]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 11 01:23:59.878252 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 11 01:23:59.878778 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 11 01:23:59.887348 systemd-resolved[1394]: Defaulting to hostname 'linux'.
Mar 11 01:23:59.906706 kernel: ACPI: button: Power Button [PWRF]
Mar 11 01:23:59.910025 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 11 01:23:59.910439 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 11 01:23:59.975788 systemd-networkd[1466]: lo: Link UP
Mar 11 01:23:59.976395 systemd-networkd[1466]: lo: Gained carrier
Mar 11 01:23:59.987394 systemd-networkd[1466]: Enumeration completed
Mar 11 01:24:00.011618 systemd-networkd[1466]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 11 01:24:00.011899 systemd-networkd[1466]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 11 01:24:00.030968 systemd-networkd[1466]: eth0: Link UP
Mar 11 01:24:00.031296 systemd-networkd[1466]: eth0: Gained carrier
Mar 11 01:24:00.031410 systemd-networkd[1466]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 11 01:24:00.156106 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 11 01:24:00.170932 systemd[1]: Reached target network.target - Network.
Mar 11 01:24:00.180332 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 11 01:24:00.683365 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 11 01:24:00.710773 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 11 01:24:00.730762 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 11 01:24:00.737296 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 11 01:24:00.744126 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 11 01:24:00.747333 systemd-networkd[1466]: eth0: DHCPv4 address 10.0.0.26/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 11 01:24:00.749384 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 11 01:24:00.750432 systemd-timesyncd[1480]: Network configuration changed, trying to establish connection.
Mar 11 01:24:00.755793 systemd-timesyncd[1480]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 11 01:24:00.755902 systemd-timesyncd[1480]: Initial clock synchronization to Wed 2026-03-11 01:24:00.866699 UTC.
Mar 11 01:24:00.757407 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Mar 11 01:24:00.762760 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 11 01:24:00.767238 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 11 01:24:00.767370 systemd[1]: Reached target paths.target - Path Units.
Mar 11 01:24:00.770280 systemd[1]: Reached target time-set.target - System Time Set.
Mar 11 01:24:00.774302 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 11 01:24:00.778993 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 11 01:24:00.783983 systemd[1]: Reached target timers.target - Timer Units.
Mar 11 01:24:00.807567 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 11 01:24:00.817104 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 11 01:24:00.824265 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 11 01:24:00.829107 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 11 01:24:00.836241 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 11 01:24:00.851985 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 11 01:24:00.856564 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 11 01:24:00.865961 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 11 01:24:00.872428 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 11 01:24:00.879097 systemd[1]: Reached target sockets.target - Socket Units.
Mar 11 01:24:00.882374 systemd[1]: Reached target basic.target - Basic System.
Mar 11 01:24:00.885682 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 11 01:24:00.885743 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 11 01:24:00.889363 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 11 01:24:00.914899 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 11 01:24:00.930740 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 11 01:24:00.951007 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 11 01:24:01.114673 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 11 01:24:01.118856 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 11 01:24:01.122298 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Mar 11 01:24:01.132883 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 11 01:24:01.136195 jq[1536]: false
Mar 11 01:24:01.139417 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 11 01:24:01.156283 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 11 01:24:01.162877 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 11 01:24:01.180904 google_oslogin_nss_cache[1538]: oslogin_cache_refresh[1538]: Refreshing passwd entry cache
Mar 11 01:24:01.176355 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 11 01:24:01.174578 oslogin_cache_refresh[1538]: Refreshing passwd entry cache
Mar 11 01:24:01.186492 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 11 01:24:01.189558 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 11 01:24:01.191569 systemd[1]: Starting update-engine.service - Update Engine...
Mar 11 01:24:01.203791 extend-filesystems[1537]: Found /dev/vda6
Mar 11 01:24:01.213120 kernel: kvm_amd: TSC scaling supported
Mar 11 01:24:01.213718 kernel: kvm_amd: Nested Virtualization enabled
Mar 11 01:24:01.213746 kernel: kvm_amd: Nested Paging enabled
Mar 11 01:24:01.213209 oslogin_cache_refresh[1538]: Failure getting users, quitting
Mar 11 01:24:01.214280 google_oslogin_nss_cache[1538]: oslogin_cache_refresh[1538]: Failure getting users, quitting
Mar 11 01:24:01.214280 google_oslogin_nss_cache[1538]: oslogin_cache_refresh[1538]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Mar 11 01:24:01.214280 google_oslogin_nss_cache[1538]: oslogin_cache_refresh[1538]: Refreshing group entry cache
Mar 11 01:24:01.213293 oslogin_cache_refresh[1538]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Mar 11 01:24:01.213486 oslogin_cache_refresh[1538]: Refreshing group entry cache
Mar 11 01:24:01.218380 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 11 01:24:01.226199 extend-filesystems[1537]: Found /dev/vda9
Mar 11 01:24:01.249053 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 11 01:24:01.249123 kernel: kvm_amd: PMU virtualization is disabled
Mar 11 01:24:01.254868 extend-filesystems[1537]: Checking size of /dev/vda9
Mar 11 01:24:01.269690 google_oslogin_nss_cache[1538]: oslogin_cache_refresh[1538]: Failure getting groups, quitting
Mar 11 01:24:01.269690 google_oslogin_nss_cache[1538]: oslogin_cache_refresh[1538]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Mar 11 01:24:01.233354 oslogin_cache_refresh[1538]: Failure getting groups, quitting
Mar 11 01:24:01.241436 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 11 01:24:01.270112 extend-filesystems[1537]: Resized partition /dev/vda9
Mar 11 01:24:01.233373 oslogin_cache_refresh[1538]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Mar 11 01:24:01.247046 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 11 01:24:01.314275 extend-filesystems[1564]: resize2fs 1.47.3 (8-Jul-2025)
Mar 11 01:24:01.247736 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 11 01:24:01.323946 jq[1554]: true
Mar 11 01:24:01.248448 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Mar 11 01:24:01.249437 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Mar 11 01:24:01.254485 systemd[1]: motdgen.service: Deactivated successfully.
Mar 11 01:24:01.258107 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 11 01:24:01.268754 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 11 01:24:01.270423 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 11 01:24:01.333590 jq[1566]: true
Mar 11 01:24:01.364870 tar[1565]: linux-amd64/LICENSE
Mar 11 01:24:01.374511 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 11 01:24:01.366741 (ntainerd)[1579]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 11 01:24:01.375348 tar[1565]: linux-amd64/helm
Mar 11 01:24:01.366756 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 11 01:24:01.386249 update_engine[1549]: I20260311 01:24:01.386071 1549 main.cc:92] Flatcar Update Engine starting
Mar 11 01:24:01.459971 dbus-daemon[1534]: [system] SELinux support is enabled
Mar 11 01:24:01.460812 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 11 01:24:01.554109 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 11 01:24:01.782458 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 11 01:24:01.782592 update_engine[1549]: I20260311 01:24:01.777383 1549 update_check_scheduler.cc:74] Next update check in 9m52s
Mar 11 01:24:01.782674 extend-filesystems[1564]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 11 01:24:01.782674 extend-filesystems[1564]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 11 01:24:01.782674 extend-filesystems[1564]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 11 01:24:01.558825 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 11 01:24:01.827930 sshd_keygen[1558]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 11 01:24:01.828070 extend-filesystems[1537]: Resized filesystem in /dev/vda9
Mar 11 01:24:01.569484 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 11 01:24:01.844559 bash[1597]: Updated "/home/core/.ssh/authorized_keys"
Mar 11 01:24:01.569615 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 11 01:24:01.753251 systemd[1]: Started update-engine.service - Update Engine.
Mar 11 01:24:01.779100 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 11 01:24:01.779652 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 11 01:24:01.784374 systemd-logind[1547]: Watching system buttons on /dev/input/event2 (Power Button)
Mar 11 01:24:01.784412 systemd-logind[1547]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 11 01:24:01.788353 systemd-logind[1547]: New seat seat0.
Mar 11 01:24:01.795258 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 11 01:24:01.827880 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 11 01:24:01.869087 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 11 01:24:01.885694 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 11 01:24:01.896310 systemd-networkd[1466]: eth0: Gained IPv6LL
Mar 11 01:24:01.934268 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 11 01:24:01.941637 systemd[1]: Reached target network-online.target - Network is Online.
Mar 11 01:24:01.952648 locksmithd[1601]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 11 01:24:01.959578 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Mar 11 01:24:01.966682 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 11 01:24:01.974080 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 11 01:24:01.980665 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 11 01:24:01.999851 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 11 01:24:02.036532 systemd[1]: Started sshd@0-10.0.0.26:22-10.0.0.1:52124.service - OpenSSH per-connection server daemon (10.0.0.1:52124).
Mar 11 01:24:02.054042 kernel: EDAC MC: Ver: 3.0.0
Mar 11 01:24:02.079958 systemd[1]: issuegen.service: Deactivated successfully.
Mar 11 01:24:02.082430 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 11 01:24:02.090747 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 11 01:24:02.112543 systemd[1]: coreos-metadata.service: Deactivated successfully.
Mar 11 01:24:02.112985 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Mar 11 01:24:03.005670 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1248500269 wd_nsec: 1248499811
Mar 11 01:24:03.083448 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 11 01:24:03.093069 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 11 01:24:03.145190 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 11 01:24:03.163090 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 11 01:24:03.357427 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 11 01:24:03.365833 systemd[1]: Reached target getty.target - Login Prompts.
Mar 11 01:24:03.450551 sshd[1622]: Accepted publickey for core from 10.0.0.1 port 52124 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI
Mar 11 01:24:03.464333 sshd-session[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:24:03.843264 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 11 01:24:03.852798 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 11 01:24:03.883418 systemd-logind[1547]: New session 1 of user core.
Mar 11 01:24:03.946978 containerd[1579]: time="2026-03-11T01:24:03Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Mar 11 01:24:03.949389 containerd[1579]: time="2026-03-11T01:24:03.949254103Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Mar 11 01:24:03.962392 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 11 01:24:03.982543 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 11 01:24:04.050930 containerd[1579]: time="2026-03-11T01:24:04.050763049Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="115.619µs"
Mar 11 01:24:04.051257 containerd[1579]: time="2026-03-11T01:24:04.051228184Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Mar 11 01:24:04.051400 containerd[1579]: time="2026-03-11T01:24:04.051381559Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Mar 11 01:24:04.051850 containerd[1579]: time="2026-03-11T01:24:04.051828846Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Mar 11 01:24:04.051974 containerd[1579]: time="2026-03-11T01:24:04.051957751Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Mar 11 01:24:04.052065 containerd[1579]: time="2026-03-11T01:24:04.052049435Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 11 01:24:04.052232 containerd[1579]: time="2026-03-11T01:24:04.052213309Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 11 01:24:04.052307 containerd[1579]: time="2026-03-11T01:24:04.052295062Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 11 01:24:04.052795 containerd[1579]: time="2026-03-11T01:24:04.052764544Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 11 01:24:04.052936 containerd[1579]: time="2026-03-11T01:24:04.052912935Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 11 01:24:04.053728 containerd[1579]: time="2026-03-11T01:24:04.053027502Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 11 01:24:04.053728 containerd[1579]: time="2026-03-11T01:24:04.053047321Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Mar 11 01:24:04.053728 containerd[1579]: time="2026-03-11T01:24:04.053251853Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Mar 11 01:24:04.053728 containerd[1579]: time="2026-03-11T01:24:04.053626163Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 11 01:24:04.053728 containerd[1579]: time="2026-03-11T01:24:04.053671140Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 11 01:24:04.053728 containerd[1579]: time="2026-03-11T01:24:04.053690706Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Mar 11 01:24:04.054215 containerd[1579]: time="2026-03-11T01:24:04.054095068Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Mar 11 01:24:04.055759 containerd[1579]: time="2026-03-11T01:24:04.055733114Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Mar 11 01:24:04.055916 containerd[1579]: time="2026-03-11T01:24:04.055895601Z" level=info msg="metadata content store policy set" policy=shared
Mar 11 01:24:04.072218 (systemd)[1654]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 11 01:24:04.084570 systemd-logind[1547]: New session c1 of user core.
Mar 11 01:24:04.106704 containerd[1579]: time="2026-03-11T01:24:04.106520739Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Mar 11 01:24:04.107620 containerd[1579]: time="2026-03-11T01:24:04.107525905Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Mar 11 01:24:04.107620 containerd[1579]: time="2026-03-11T01:24:04.107563714Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Mar 11 01:24:04.107795 containerd[1579]: time="2026-03-11T01:24:04.107773281Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Mar 11 01:24:04.108122 containerd[1579]: time="2026-03-11T01:24:04.108031396Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Mar 11 01:24:04.108361 containerd[1579]: time="2026-03-11T01:24:04.108229881Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Mar 11 01:24:04.108486 containerd[1579]: time="2026-03-11T01:24:04.108465820Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Mar 11 01:24:04.110198 containerd[1579]: time="2026-03-11T01:24:04.108629845Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Mar 11 01:24:04.110198 containerd[1579]: time="2026-03-11T01:24:04.108711588Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Mar 11 01:24:04.110198 containerd[1579]: time="2026-03-11T01:24:04.108728566Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Mar 11 01:24:04.110198 containerd[1579]: time="2026-03-11T01:24:04.108742540Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Mar 11 01:24:04.110198 containerd[1579]: time="2026-03-11T01:24:04.108759114Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Mar 11 01:24:04.110198 containerd[1579]: time="2026-03-11T01:24:04.108973040Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Mar 11 01:24:04.110198 containerd[1579]: time="2026-03-11T01:24:04.109697077Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Mar 11 01:24:04.110198 containerd[1579]: time="2026-03-11T01:24:04.109729020Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Mar 11 01:24:04.110198 containerd[1579]: time="2026-03-11T01:24:04.109744178Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Mar 11 01:24:04.110198 containerd[1579]: time="2026-03-11T01:24:04.109759042Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Mar 11 01:24:04.110198 containerd[1579]: time="2026-03-11T01:24:04.109778598Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Mar 11 01:24:04.110198 containerd[1579]: time="2026-03-11T01:24:04.109796001Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Mar 11 01:24:04.110198 containerd[1579]: time="2026-03-11T01:24:04.109809510Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Mar 11 01:24:04.110198 containerd[1579]: time="2026-03-11T01:24:04.109826053Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Mar 11 01:24:04.110198 containerd[1579]: time="2026-03-11T01:24:04.109889242Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Mar 11 01:24:04.110718 containerd[1579]: time="2026-03-11T01:24:04.109910396Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Mar 11 01:24:04.110718 containerd[1579]: time="2026-03-11T01:24:04.109986286Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Mar 11 01:24:04.110819 containerd[1579]: time="2026-03-11T01:24:04.110798215Z" level=info msg="Start snapshots syncer"
Mar 11 01:24:04.111019 containerd[1579]: time="2026-03-11T01:24:04.110992817Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Mar 11 01:24:04.115087 containerd[1579]: time="2026-03-11T01:24:04.115019024Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Mar 11 01:24:04.115959 containerd[1579]: time="2026-03-11T01:24:04.115933274Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Mar 11 01:24:04.133864 containerd[1579]: time="2026-03-11T01:24:04.133712102Z" level=info
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 11 01:24:04.134744 containerd[1579]: time="2026-03-11T01:24:04.134719320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 11 01:24:04.134848 containerd[1579]: time="2026-03-11T01:24:04.134826819Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 11 01:24:04.135448 containerd[1579]: time="2026-03-11T01:24:04.135422992Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 11 01:24:04.135590 containerd[1579]: time="2026-03-11T01:24:04.135511593Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 11 01:24:04.135738 containerd[1579]: time="2026-03-11T01:24:04.135720301Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 11 01:24:04.135879 containerd[1579]: time="2026-03-11T01:24:04.135858781Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 11 01:24:04.135947 containerd[1579]: time="2026-03-11T01:24:04.135932264Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 11 01:24:04.136524 containerd[1579]: time="2026-03-11T01:24:04.136502066Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 11 01:24:04.136716 containerd[1579]: time="2026-03-11T01:24:04.136694504Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 11 01:24:04.136787 containerd[1579]: time="2026-03-11T01:24:04.136770706Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 11 01:24:04.144060 containerd[1579]: time="2026-03-11T01:24:04.144029088Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 11 01:24:04.145821 containerd[1579]: time="2026-03-11T01:24:04.145797263Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 11 01:24:04.146766 containerd[1579]: time="2026-03-11T01:24:04.146735670Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 11 01:24:04.146871 containerd[1579]: time="2026-03-11T01:24:04.146848842Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 11 01:24:04.146981 containerd[1579]: time="2026-03-11T01:24:04.146956685Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 11 01:24:04.147120 containerd[1579]: time="2026-03-11T01:24:04.147094003Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 11 01:24:04.147328 containerd[1579]: time="2026-03-11T01:24:04.147301438Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 11 01:24:04.147623 containerd[1579]: time="2026-03-11T01:24:04.147600596Z" level=info msg="runtime interface created" Mar 11 01:24:04.147694 containerd[1579]: time="2026-03-11T01:24:04.147678064Z" level=info msg="created NRI interface" Mar 11 01:24:04.147764 containerd[1579]: time="2026-03-11T01:24:04.147745569Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 11 01:24:04.147842 containerd[1579]: time="2026-03-11T01:24:04.147825119Z" level=info msg="Connect containerd service" Mar 11 01:24:04.147962 containerd[1579]: time="2026-03-11T01:24:04.147938290Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 11 01:24:04.149473 
containerd[1579]: time="2026-03-11T01:24:04.149403343Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 11 01:24:04.986938 systemd[1654]: Queued start job for default target default.target. Mar 11 01:24:04.994785 systemd[1654]: Created slice app.slice - User Application Slice. Mar 11 01:24:04.994852 systemd[1654]: Reached target paths.target - Paths. Mar 11 01:24:04.995676 systemd[1654]: Reached target timers.target - Timers. Mar 11 01:24:05.001413 systemd[1654]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 11 01:24:05.027220 containerd[1579]: time="2026-03-11T01:24:05.027011007Z" level=info msg="Start subscribing containerd event" Mar 11 01:24:05.027853 containerd[1579]: time="2026-03-11T01:24:05.027340798Z" level=info msg="Start recovering state" Mar 11 01:24:05.027853 containerd[1579]: time="2026-03-11T01:24:05.027792439Z" level=info msg="Start event monitor" Mar 11 01:24:05.027920 containerd[1579]: time="2026-03-11T01:24:05.027868331Z" level=info msg="Start cni network conf syncer for default" Mar 11 01:24:05.027920 containerd[1579]: time="2026-03-11T01:24:05.027883482Z" level=info msg="Start streaming server" Mar 11 01:24:05.027920 containerd[1579]: time="2026-03-11T01:24:05.027910934Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 11 01:24:05.028035 containerd[1579]: time="2026-03-11T01:24:05.028009361Z" level=info msg="runtime interface starting up..." Mar 11 01:24:05.028132 containerd[1579]: time="2026-03-11T01:24:05.028038540Z" level=info msg="starting plugins..." Mar 11 01:24:05.028132 containerd[1579]: time="2026-03-11T01:24:05.028065537Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 11 01:24:05.033181 containerd[1579]: time="2026-03-11T01:24:05.030515510Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Mar 11 01:24:05.033181 containerd[1579]: time="2026-03-11T01:24:05.030992401Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 11 01:24:05.033341 systemd[1654]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 11 01:24:05.033715 systemd[1654]: Reached target sockets.target - Sockets. Mar 11 01:24:05.033951 systemd[1654]: Reached target basic.target - Basic System. Mar 11 01:24:05.034121 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 11 01:24:05.034694 systemd[1654]: Reached target default.target - Main User Target. Mar 11 01:24:05.034745 systemd[1654]: Startup finished in 928ms. Mar 11 01:24:05.036643 containerd[1579]: time="2026-03-11T01:24:05.036468479Z" level=info msg="containerd successfully booted in 1.114282s" Mar 11 01:24:05.049875 systemd[1]: Started containerd.service - containerd container runtime. Mar 11 01:24:05.063716 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 11 01:24:05.066731 tar[1565]: linux-amd64/README.md Mar 11 01:24:05.103689 systemd[1]: Started sshd@1-10.0.0.26:22-10.0.0.1:57918.service - OpenSSH per-connection server daemon (10.0.0.1:57918). Mar 11 01:24:05.111554 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 11 01:24:05.439082 sshd[1682]: Accepted publickey for core from 10.0.0.1 port 57918 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI Mar 11 01:24:05.441926 sshd-session[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 01:24:05.454297 systemd-logind[1547]: New session 2 of user core. Mar 11 01:24:05.461454 systemd[1]: Started session-2.scope - Session 2 of User core. 
Mar 11 01:24:05.486502 sshd[1686]: Connection closed by 10.0.0.1 port 57918 Mar 11 01:24:05.486775 sshd-session[1682]: pam_unix(sshd:session): session closed for user core Mar 11 01:24:05.497708 systemd[1]: sshd@1-10.0.0.26:22-10.0.0.1:57918.service: Deactivated successfully. Mar 11 01:24:05.500250 systemd[1]: session-2.scope: Deactivated successfully. Mar 11 01:24:05.501707 systemd-logind[1547]: Session 2 logged out. Waiting for processes to exit. Mar 11 01:24:05.505345 systemd[1]: Started sshd@2-10.0.0.26:22-10.0.0.1:57922.service - OpenSSH per-connection server daemon (10.0.0.1:57922). Mar 11 01:24:05.512679 systemd-logind[1547]: Removed session 2. Mar 11 01:24:05.580590 sshd[1692]: Accepted publickey for core from 10.0.0.1 port 57922 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI Mar 11 01:24:05.582405 sshd-session[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 01:24:05.591241 systemd-logind[1547]: New session 3 of user core. Mar 11 01:24:05.600410 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 11 01:24:05.622369 sshd[1696]: Connection closed by 10.0.0.1 port 57922 Mar 11 01:24:05.622762 sshd-session[1692]: pam_unix(sshd:session): session closed for user core Mar 11 01:24:05.628094 systemd[1]: sshd@2-10.0.0.26:22-10.0.0.1:57922.service: Deactivated successfully. Mar 11 01:24:05.630588 systemd[1]: session-3.scope: Deactivated successfully. Mar 11 01:24:05.631823 systemd-logind[1547]: Session 3 logged out. Waiting for processes to exit. Mar 11 01:24:05.633721 systemd-logind[1547]: Removed session 3. Mar 11 01:24:07.427682 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 11 01:24:07.435412 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 11 01:24:07.442847 systemd[1]: Startup finished in 7.858s (kernel) + 20.651s (initrd) + 17.026s (userspace) = 45.535s. 
Mar 11 01:24:07.445025 (kubelet)[1706]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 11 01:24:09.784346 kubelet[1706]: E0311 01:24:09.783745 1706 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 11 01:24:09.791006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 11 01:24:09.791368 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 11 01:24:09.792198 systemd[1]: kubelet.service: Consumed 4.414s CPU time, 259.6M memory peak. Mar 11 01:24:15.713119 systemd[1]: Started sshd@3-10.0.0.26:22-10.0.0.1:36980.service - OpenSSH per-connection server daemon (10.0.0.1:36980). Mar 11 01:24:15.835319 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 36980 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI Mar 11 01:24:15.838499 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 01:24:15.852296 systemd-logind[1547]: New session 4 of user core. Mar 11 01:24:15.863394 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 11 01:24:15.890241 sshd[1722]: Connection closed by 10.0.0.1 port 36980 Mar 11 01:24:15.889374 sshd-session[1719]: pam_unix(sshd:session): session closed for user core Mar 11 01:24:15.903393 systemd[1]: sshd@3-10.0.0.26:22-10.0.0.1:36980.service: Deactivated successfully. Mar 11 01:24:15.906296 systemd[1]: session-4.scope: Deactivated successfully. Mar 11 01:24:15.910216 systemd-logind[1547]: Session 4 logged out. Waiting for processes to exit. 
Mar 11 01:24:15.914171 systemd[1]: Started sshd@4-10.0.0.26:22-10.0.0.1:36982.service - OpenSSH per-connection server daemon (10.0.0.1:36982). Mar 11 01:24:15.917425 systemd-logind[1547]: Removed session 4. Mar 11 01:24:15.989907 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 36982 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI Mar 11 01:24:15.992336 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 01:24:16.001967 systemd-logind[1547]: New session 5 of user core. Mar 11 01:24:16.011506 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 11 01:24:16.034226 sshd[1731]: Connection closed by 10.0.0.1 port 36982 Mar 11 01:24:16.035061 sshd-session[1728]: pam_unix(sshd:session): session closed for user core Mar 11 01:24:16.044523 systemd[1]: sshd@4-10.0.0.26:22-10.0.0.1:36982.service: Deactivated successfully. Mar 11 01:24:16.046886 systemd[1]: session-5.scope: Deactivated successfully. Mar 11 01:24:16.049353 systemd-logind[1547]: Session 5 logged out. Waiting for processes to exit. Mar 11 01:24:16.051989 systemd[1]: Started sshd@5-10.0.0.26:22-10.0.0.1:36998.service - OpenSSH per-connection server daemon (10.0.0.1:36998). Mar 11 01:24:16.055893 systemd-logind[1547]: Removed session 5. Mar 11 01:24:16.128247 sshd[1737]: Accepted publickey for core from 10.0.0.1 port 36998 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI Mar 11 01:24:16.130397 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 01:24:16.137313 systemd-logind[1547]: New session 6 of user core. Mar 11 01:24:16.151636 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 11 01:24:16.172179 sshd[1740]: Connection closed by 10.0.0.1 port 36998 Mar 11 01:24:16.172682 sshd-session[1737]: pam_unix(sshd:session): session closed for user core Mar 11 01:24:16.187413 systemd[1]: sshd@5-10.0.0.26:22-10.0.0.1:36998.service: Deactivated successfully. 
Mar 11 01:24:16.191224 systemd[1]: session-6.scope: Deactivated successfully. Mar 11 01:24:16.193109 systemd-logind[1547]: Session 6 logged out. Waiting for processes to exit. Mar 11 01:24:16.196655 systemd[1]: Started sshd@6-10.0.0.26:22-10.0.0.1:37000.service - OpenSSH per-connection server daemon (10.0.0.1:37000). Mar 11 01:24:16.197985 systemd-logind[1547]: Removed session 6. Mar 11 01:24:16.263788 sshd[1746]: Accepted publickey for core from 10.0.0.1 port 37000 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI Mar 11 01:24:16.269254 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 01:24:16.281304 systemd-logind[1547]: New session 7 of user core. Mar 11 01:24:16.298960 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 11 01:24:16.380581 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 11 01:24:16.381274 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 11 01:24:16.416709 sudo[1750]: pam_unix(sudo:session): session closed for user root Mar 11 01:24:16.420581 sshd[1749]: Connection closed by 10.0.0.1 port 37000 Mar 11 01:24:16.421258 sshd-session[1746]: pam_unix(sshd:session): session closed for user core Mar 11 01:24:16.434791 systemd[1]: sshd@6-10.0.0.26:22-10.0.0.1:37000.service: Deactivated successfully. Mar 11 01:24:16.437513 systemd[1]: session-7.scope: Deactivated successfully. Mar 11 01:24:16.438845 systemd-logind[1547]: Session 7 logged out. Waiting for processes to exit. Mar 11 01:24:16.442883 systemd[1]: Started sshd@7-10.0.0.26:22-10.0.0.1:37004.service - OpenSSH per-connection server daemon (10.0.0.1:37004). Mar 11 01:24:16.443948 systemd-logind[1547]: Removed session 7. 
Mar 11 01:24:16.536034 sshd[1756]: Accepted publickey for core from 10.0.0.1 port 37004 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI Mar 11 01:24:16.538482 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 01:24:16.555040 systemd-logind[1547]: New session 8 of user core. Mar 11 01:24:16.572627 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 11 01:24:16.603020 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 11 01:24:16.603594 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 11 01:24:16.616667 sudo[1761]: pam_unix(sudo:session): session closed for user root Mar 11 01:24:16.627296 sudo[1760]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 11 01:24:16.627789 sudo[1760]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 11 01:24:16.645510 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 11 01:24:17.234066 augenrules[1783]: No rules Mar 11 01:24:17.237593 systemd[1]: audit-rules.service: Deactivated successfully. Mar 11 01:24:17.238283 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 11 01:24:17.243727 sudo[1760]: pam_unix(sudo:session): session closed for user root Mar 11 01:24:17.246580 sshd[1759]: Connection closed by 10.0.0.1 port 37004 Mar 11 01:24:17.247528 sshd-session[1756]: pam_unix(sshd:session): session closed for user core Mar 11 01:24:17.257480 systemd[1]: sshd@7-10.0.0.26:22-10.0.0.1:37004.service: Deactivated successfully. Mar 11 01:24:17.260006 systemd[1]: session-8.scope: Deactivated successfully. Mar 11 01:24:17.261925 systemd-logind[1547]: Session 8 logged out. Waiting for processes to exit. 
Mar 11 01:24:17.265711 systemd[1]: Started sshd@8-10.0.0.26:22-10.0.0.1:37012.service - OpenSSH per-connection server daemon (10.0.0.1:37012). Mar 11 01:24:17.267977 systemd-logind[1547]: Removed session 8. Mar 11 01:24:17.366436 sshd[1792]: Accepted publickey for core from 10.0.0.1 port 37012 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI Mar 11 01:24:17.368685 sshd-session[1792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 01:24:17.376948 systemd-logind[1547]: New session 9 of user core. Mar 11 01:24:17.392710 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 11 01:24:17.437595 sudo[1796]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 11 01:24:17.438090 sudo[1796]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 11 01:24:19.880410 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 11 01:24:19.883857 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 11 01:24:19.885674 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 11 01:24:19.924911 (dockerd)[1816]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 11 01:24:21.586592 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 11 01:24:21.631935 (kubelet)[1830]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 11 01:24:22.249883 kubelet[1830]: E0311 01:24:22.249477 1830 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 11 01:24:22.257351 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 11 01:24:22.257671 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 11 01:24:22.258455 systemd[1]: kubelet.service: Consumed 1.470s CPU time, 110.4M memory peak. Mar 11 01:24:23.658037 dockerd[1816]: time="2026-03-11T01:24:23.656113972Z" level=info msg="Starting up" Mar 11 01:24:23.664799 dockerd[1816]: time="2026-03-11T01:24:23.660098050Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 11 01:24:23.959450 dockerd[1816]: time="2026-03-11T01:24:23.944319909Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Mar 11 01:24:24.418300 dockerd[1816]: time="2026-03-11T01:24:24.414566551Z" level=info msg="Loading containers: start." Mar 11 01:24:24.460535 kernel: Initializing XFRM netlink socket Mar 11 01:24:27.901349 systemd-networkd[1466]: docker0: Link UP Mar 11 01:24:27.918303 dockerd[1816]: time="2026-03-11T01:24:27.917929051Z" level=info msg="Loading containers: done." 
Mar 11 01:24:28.492675 dockerd[1816]: time="2026-03-11T01:24:28.492505228Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 11 01:24:28.493792 dockerd[1816]: time="2026-03-11T01:24:28.493113542Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Mar 11 01:24:28.493792 dockerd[1816]: time="2026-03-11T01:24:28.493606107Z" level=info msg="Initializing buildkit" Mar 11 01:24:28.808848 dockerd[1816]: time="2026-03-11T01:24:28.800464311Z" level=info msg="Completed buildkit initialization" Mar 11 01:24:28.832442 dockerd[1816]: time="2026-03-11T01:24:28.831998340Z" level=info msg="Daemon has completed initialization" Mar 11 01:24:28.848051 dockerd[1816]: time="2026-03-11T01:24:28.832793078Z" level=info msg="API listen on /run/docker.sock" Mar 11 01:24:28.844181 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 11 01:24:32.470818 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 11 01:24:32.479880 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 11 01:24:34.044691 containerd[1579]: time="2026-03-11T01:24:34.042614108Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\"" Mar 11 01:24:34.418311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 11 01:24:34.432843 (kubelet)[2062]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 11 01:24:34.769737 kubelet[2062]: E0311 01:24:34.769560 2062 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 11 01:24:34.777619 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 11 01:24:34.778758 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 11 01:24:34.782103 systemd[1]: kubelet.service: Consumed 1.377s CPU time, 110.5M memory peak. Mar 11 01:24:35.922882 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4231045629.mount: Deactivated successfully. Mar 11 01:24:43.383079 containerd[1579]: time="2026-03-11T01:24:43.382819724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:43.385519 containerd[1579]: time="2026-03-11T01:24:43.384626334Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074497" Mar 11 01:24:43.390826 containerd[1579]: time="2026-03-11T01:24:43.390577934Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:43.400730 containerd[1579]: time="2026-03-11T01:24:43.398363885Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:43.406611 containerd[1579]: 
time="2026-03-11T01:24:43.405372664Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 9.362507831s" Mar 11 01:24:43.406611 containerd[1579]: time="2026-03-11T01:24:43.405508590Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\"" Mar 11 01:24:43.416170 containerd[1579]: time="2026-03-11T01:24:43.415755613Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\"" Mar 11 01:24:44.972924 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 11 01:24:44.980660 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 11 01:24:45.575369 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 11 01:24:45.607395 (kubelet)[2139]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 11 01:24:46.116733 kubelet[2139]: E0311 01:24:46.115810 2139 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 11 01:24:46.170286 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 11 01:24:46.184852 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 11 01:24:46.191670 systemd[1]: kubelet.service: Consumed 816ms CPU time, 109.1M memory peak. 
Mar 11 01:24:46.909710 update_engine[1549]: I20260311 01:24:46.908602 1549 update_attempter.cc:509] Updating boot flags... Mar 11 01:24:48.447708 containerd[1579]: time="2026-03-11T01:24:48.445689486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:48.447708 containerd[1579]: time="2026-03-11T01:24:48.447110464Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165823" Mar 11 01:24:48.449497 containerd[1579]: time="2026-03-11T01:24:48.449201235Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:48.457660 containerd[1579]: time="2026-03-11T01:24:48.455092277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:48.457660 containerd[1579]: time="2026-03-11T01:24:48.456960611Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 5.041147264s" Mar 11 01:24:48.457660 containerd[1579]: time="2026-03-11T01:24:48.457038376Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\"" Mar 11 01:24:48.460274 containerd[1579]: time="2026-03-11T01:24:48.460178455Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\"" Mar 11 
01:24:51.395889 containerd[1579]: time="2026-03-11T01:24:51.395630198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:51.397978 containerd[1579]: time="2026-03-11T01:24:51.397899595Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729824" Mar 11 01:24:51.400867 containerd[1579]: time="2026-03-11T01:24:51.400739976Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:51.407706 containerd[1579]: time="2026-03-11T01:24:51.407599647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:51.409349 containerd[1579]: time="2026-03-11T01:24:51.409252602Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 2.949034079s" Mar 11 01:24:51.409406 containerd[1579]: time="2026-03-11T01:24:51.409382955Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\"" Mar 11 01:24:51.415271 containerd[1579]: time="2026-03-11T01:24:51.415226986Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\"" Mar 11 01:24:56.046024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2453282805.mount: Deactivated successfully. 
Mar 11 01:24:56.224866 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 11 01:24:56.230956 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 11 01:24:57.331217 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 11 01:24:57.383341 (kubelet)[2189]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 11 01:24:57.796611 kubelet[2189]: E0311 01:24:57.796461 2189 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 11 01:24:57.803967 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 11 01:24:57.804534 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 11 01:24:57.806985 systemd[1]: kubelet.service: Consumed 953ms CPU time, 109.8M memory peak. 
Mar 11 01:24:58.278851 containerd[1579]: time="2026-03-11T01:24:58.277773760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:58.284636 containerd[1579]: time="2026-03-11T01:24:58.281007993Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861770" Mar 11 01:24:58.288264 containerd[1579]: time="2026-03-11T01:24:58.284843403Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:58.291803 containerd[1579]: time="2026-03-11T01:24:58.291441184Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:58.296401 containerd[1579]: time="2026-03-11T01:24:58.294509162Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 6.878997295s" Mar 11 01:24:58.296401 containerd[1579]: time="2026-03-11T01:24:58.295685358Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\"" Mar 11 01:24:58.398020 containerd[1579]: time="2026-03-11T01:24:58.397645807Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Mar 11 01:24:59.168312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1709074710.mount: Deactivated successfully. 
Mar 11 01:25:03.892789 containerd[1579]: time="2026-03-11T01:25:03.892590333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:25:03.901615 containerd[1579]: time="2026-03-11T01:25:03.898826683Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Mar 11 01:25:03.904772 containerd[1579]: time="2026-03-11T01:25:03.904671201Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:25:03.912386 containerd[1579]: time="2026-03-11T01:25:03.912268891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:25:03.918358 containerd[1579]: time="2026-03-11T01:25:03.916735874Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 5.518961556s" Mar 11 01:25:03.918358 containerd[1579]: time="2026-03-11T01:25:03.916874819Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Mar 11 01:25:03.922814 containerd[1579]: time="2026-03-11T01:25:03.922528647Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 11 01:25:06.934932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount575495680.mount: Deactivated successfully. 
Mar 11 01:25:06.988627 containerd[1579]: time="2026-03-11T01:25:06.987026937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:25:06.992674 containerd[1579]: time="2026-03-11T01:25:06.992525625Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Mar 11 01:25:06.994798 containerd[1579]: time="2026-03-11T01:25:06.994423767Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:25:07.000119 containerd[1579]: time="2026-03-11T01:25:06.999789776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:25:07.001356 containerd[1579]: time="2026-03-11T01:25:07.001045704Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 3.078485393s" Mar 11 01:25:07.001356 containerd[1579]: time="2026-03-11T01:25:07.001241290Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 11 01:25:07.019724 containerd[1579]: time="2026-03-11T01:25:07.016689335Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Mar 11 01:25:07.999040 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Mar 11 01:25:08.006411 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 11 01:25:08.343564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3520024891.mount: Deactivated successfully. Mar 11 01:25:08.789469 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 11 01:25:08.854579 (kubelet)[2271]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 11 01:25:09.117903 kubelet[2271]: E0311 01:25:09.117491 2271 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 11 01:25:09.132083 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 11 01:25:09.132422 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 11 01:25:09.136647 systemd[1]: kubelet.service: Consumed 560ms CPU time, 108.4M memory peak. 
Mar 11 01:25:17.949036 containerd[1579]: time="2026-03-11T01:25:17.948863318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:25:17.957021 containerd[1579]: time="2026-03-11T01:25:17.956937422Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860674" Mar 11 01:25:17.961059 containerd[1579]: time="2026-03-11T01:25:17.960346907Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:25:17.975682 containerd[1579]: time="2026-03-11T01:25:17.974857638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:25:17.976572 containerd[1579]: time="2026-03-11T01:25:17.976502558Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 10.954630501s" Mar 11 01:25:17.977180 containerd[1579]: time="2026-03-11T01:25:17.976770481Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Mar 11 01:25:19.195872 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Mar 11 01:25:19.205015 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 11 01:25:20.085783 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 11 01:25:20.100739 (kubelet)[2364]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 11 01:25:20.602921 kubelet[2364]: E0311 01:25:20.602041 2364 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 11 01:25:20.608718 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 11 01:25:20.609103 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 11 01:25:20.610336 systemd[1]: kubelet.service: Consumed 648ms CPU time, 110.8M memory peak. Mar 11 01:25:27.434856 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 11 01:25:27.435094 systemd[1]: kubelet.service: Consumed 648ms CPU time, 110.8M memory peak. Mar 11 01:25:27.441881 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 11 01:25:27.520594 systemd[1]: Reload requested from client PID 2382 ('systemctl') (unit session-9.scope)... Mar 11 01:25:27.520634 systemd[1]: Reloading... Mar 11 01:25:27.765288 zram_generator::config[2434]: No configuration found. Mar 11 01:25:28.475092 systemd[1]: Reloading finished in 953 ms. Mar 11 01:25:28.654020 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 11 01:25:28.654648 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 11 01:25:28.660378 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 11 01:25:28.660452 systemd[1]: kubelet.service: Consumed 222ms CPU time, 98.1M memory peak. Mar 11 01:25:28.671525 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 11 01:25:29.131282 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 11 01:25:29.157663 (kubelet)[2472]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 11 01:25:29.536733 kubelet[2472]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 11 01:25:29.536733 kubelet[2472]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 11 01:25:29.536733 kubelet[2472]: I0311 01:25:29.535315 2472 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 11 01:25:30.409504 kubelet[2472]: I0311 01:25:30.401958 2472 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 11 01:25:30.409504 kubelet[2472]: I0311 01:25:30.402123 2472 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 11 01:25:30.413335 kubelet[2472]: I0311 01:25:30.413183 2472 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 11 01:25:30.413335 kubelet[2472]: I0311 01:25:30.413229 2472 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 11 01:25:30.413828 kubelet[2472]: I0311 01:25:30.413728 2472 server.go:956] "Client rotation is on, will bootstrap in background" Mar 11 01:25:30.598024 kubelet[2472]: E0311 01:25:30.597722 2472 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.26:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 11 01:25:30.705733 kubelet[2472]: I0311 01:25:30.702351 2472 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 11 01:25:30.796765 kubelet[2472]: I0311 01:25:30.796373 2472 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 11 01:25:31.045299 kubelet[2472]: I0311 01:25:31.043790 2472 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 11 01:25:31.058062 kubelet[2472]: I0311 01:25:31.055614 2472 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 11 01:25:31.058062 kubelet[2472]: I0311 01:25:31.055735 2472 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 11 01:25:31.058062 kubelet[2472]: I0311 01:25:31.056569 2472 topology_manager.go:138] "Creating topology manager with none policy" Mar 11 01:25:31.058062 
kubelet[2472]: I0311 01:25:31.056587 2472 container_manager_linux.go:306] "Creating device plugin manager" Mar 11 01:25:31.059362 kubelet[2472]: I0311 01:25:31.057410 2472 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 11 01:25:31.090329 kubelet[2472]: I0311 01:25:31.085025 2472 state_mem.go:36] "Initialized new in-memory state store" Mar 11 01:25:31.090329 kubelet[2472]: I0311 01:25:31.086357 2472 kubelet.go:475] "Attempting to sync node with API server" Mar 11 01:25:31.090329 kubelet[2472]: I0311 01:25:31.086485 2472 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 11 01:25:31.090329 kubelet[2472]: I0311 01:25:31.086691 2472 kubelet.go:387] "Adding apiserver pod source" Mar 11 01:25:31.090329 kubelet[2472]: I0311 01:25:31.086807 2472 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 11 01:25:31.090329 kubelet[2472]: E0311 01:25:31.089813 2472 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 11 01:25:31.092054 kubelet[2472]: E0311 01:25:31.091589 2472 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 11 01:25:31.148429 kubelet[2472]: I0311 01:25:31.148249 2472 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 11 01:25:31.150945 kubelet[2472]: I0311 01:25:31.150761 2472 kubelet.go:940] "Not starting ClusterTrustBundle informer because 
we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 11 01:25:31.152214 kubelet[2472]: I0311 01:25:31.151957 2472 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 11 01:25:31.155373 kubelet[2472]: W0311 01:25:31.152386 2472 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 11 01:25:31.187342 kubelet[2472]: I0311 01:25:31.182700 2472 server.go:1262] "Started kubelet" Mar 11 01:25:31.187342 kubelet[2472]: I0311 01:25:31.184183 2472 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 11 01:25:31.187342 kubelet[2472]: I0311 01:25:31.186658 2472 server.go:310] "Adding debug handlers to kubelet server" Mar 11 01:25:31.189458 kubelet[2472]: I0311 01:25:31.188195 2472 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 11 01:25:31.189458 kubelet[2472]: I0311 01:25:31.188276 2472 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 11 01:25:31.189458 kubelet[2472]: I0311 01:25:31.189259 2472 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 11 01:25:31.192044 kubelet[2472]: I0311 01:25:31.191979 2472 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 11 01:25:31.192257 kubelet[2472]: I0311 01:25:31.192196 2472 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 11 01:25:31.226346 kubelet[2472]: E0311 01:25:31.202561 2472 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.26:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.26:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189ba507ca93f5d0 default 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-11 01:25:31.182470608 +0000 UTC m=+2.003770979,LastTimestamp:2026-03-11 01:25:31.182470608 +0000 UTC m=+2.003770979,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 11 01:25:31.226346 kubelet[2472]: I0311 01:25:31.225741 2472 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 11 01:25:31.226346 kubelet[2472]: I0311 01:25:31.225841 2472 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 11 01:25:31.226346 kubelet[2472]: E0311 01:25:31.226900 2472 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 11 01:25:31.237028 kubelet[2472]: E0311 01:25:31.231979 2472 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 11 01:25:31.237028 kubelet[2472]: I0311 01:25:31.231987 2472 reconciler.go:29] "Reconciler: start to sync state" Mar 11 01:25:31.239395 kubelet[2472]: E0311 01:25:31.238991 2472 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="200ms" Mar 11 01:25:31.241690 kubelet[2472]: I0311 01:25:31.241182 2472 factory.go:223] Registration of the systemd container factory successfully Mar 11 01:25:31.241690 kubelet[2472]: 
I0311 01:25:31.241383 2472 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 11 01:25:31.242945 kubelet[2472]: E0311 01:25:31.242874 2472 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 11 01:25:31.248466 kubelet[2472]: I0311 01:25:31.244763 2472 factory.go:223] Registration of the containerd container factory successfully Mar 11 01:25:31.300950 kubelet[2472]: I0311 01:25:31.296344 2472 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 11 01:25:31.300950 kubelet[2472]: I0311 01:25:31.298000 2472 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 11 01:25:31.300950 kubelet[2472]: I0311 01:25:31.298016 2472 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 11 01:25:31.300950 kubelet[2472]: I0311 01:25:31.298088 2472 state_mem.go:36] "Initialized new in-memory state store" Mar 11 01:25:31.321522 kubelet[2472]: I0311 01:25:31.319598 2472 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 11 01:25:31.321522 kubelet[2472]: I0311 01:25:31.319660 2472 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 11 01:25:31.321522 kubelet[2472]: I0311 01:25:31.319747 2472 kubelet.go:2428] "Starting kubelet main sync loop" Mar 11 01:25:31.321522 kubelet[2472]: E0311 01:25:31.319811 2472 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 11 01:25:31.321522 kubelet[2472]: E0311 01:25:31.320873 2472 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 11 01:25:31.324523 kubelet[2472]: I0311 01:25:31.323937 2472 policy_none.go:49] "None policy: Start" Mar 11 01:25:31.325429 kubelet[2472]: I0311 01:25:31.325374 2472 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 11 01:25:31.325429 kubelet[2472]: I0311 01:25:31.325424 2472 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 11 01:25:31.330524 kubelet[2472]: E0311 01:25:31.327212 2472 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 11 01:25:31.334279 kubelet[2472]: I0311 01:25:31.331845 2472 policy_none.go:47] "Start" Mar 11 01:25:31.348187 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 11 01:25:31.371680 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 11 01:25:31.382621 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 11 01:25:31.408044 kubelet[2472]: E0311 01:25:31.407362 2472 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 11 01:25:31.409415 kubelet[2472]: I0311 01:25:31.409302 2472 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 11 01:25:31.410247 kubelet[2472]: I0311 01:25:31.410040 2472 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 11 01:25:31.418073 kubelet[2472]: I0311 01:25:31.417083 2472 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 11 01:25:31.421801 kubelet[2472]: E0311 01:25:31.421737 2472 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 11 01:25:31.421974 kubelet[2472]: E0311 01:25:31.421910 2472 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 11 01:25:31.440847 kubelet[2472]: I0311 01:25:31.438083 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50b1443cf04c17c30567be9af8a53ee2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"50b1443cf04c17c30567be9af8a53ee2\") " pod="kube-system/kube-apiserver-localhost" Mar 11 01:25:31.440847 kubelet[2472]: I0311 01:25:31.439642 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50b1443cf04c17c30567be9af8a53ee2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"50b1443cf04c17c30567be9af8a53ee2\") " pod="kube-system/kube-apiserver-localhost" Mar 11 01:25:31.440847 kubelet[2472]: I0311 01:25:31.439672 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/50b1443cf04c17c30567be9af8a53ee2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"50b1443cf04c17c30567be9af8a53ee2\") " pod="kube-system/kube-apiserver-localhost" Mar 11 01:25:31.440847 kubelet[2472]: E0311 01:25:31.440812 2472 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="400ms" Mar 11 01:25:31.452290 systemd[1]: Created slice kubepods-burstable-pod50b1443cf04c17c30567be9af8a53ee2.slice - libcontainer container kubepods-burstable-pod50b1443cf04c17c30567be9af8a53ee2.slice. Mar 11 01:25:31.479827 kubelet[2472]: E0311 01:25:31.479008 2472 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 11 01:25:31.495678 systemd[1]: Created slice kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice - libcontainer container kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice. Mar 11 01:25:31.516564 kubelet[2472]: I0311 01:25:31.516054 2472 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 11 01:25:31.516564 kubelet[2472]: E0311 01:25:31.516526 2472 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost" Mar 11 01:25:31.524408 kubelet[2472]: E0311 01:25:31.524331 2472 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 11 01:25:31.536641 systemd[1]: Created slice kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice - libcontainer container kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice. 
Mar 11 01:25:31.540204 kubelet[2472]: I0311 01:25:31.540107 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 11 01:25:31.542976 kubelet[2472]: I0311 01:25:31.540717 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 11 01:25:31.542976 kubelet[2472]: I0311 01:25:31.540747 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 11 01:25:31.542976 kubelet[2472]: I0311 01:25:31.540779 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost"
Mar 11 01:25:31.542976 kubelet[2472]: I0311 01:25:31.540799 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 11 01:25:31.543487 kubelet[2472]: I0311 01:25:31.543403 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 11 01:25:31.550337 kubelet[2472]: E0311 01:25:31.549317 2472 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 11 01:25:31.766008 kubelet[2472]: I0311 01:25:31.765596 2472 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 11 01:25:31.771319 kubelet[2472]: E0311 01:25:31.769783 2472 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost"
Mar 11 01:25:31.804889 kubelet[2472]: E0311 01:25:31.804831 2472 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:25:31.810054 containerd[1579]: time="2026-03-11T01:25:31.807440620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:50b1443cf04c17c30567be9af8a53ee2,Namespace:kube-system,Attempt:0,}"
Mar 11 01:25:31.844531 kubelet[2472]: E0311 01:25:31.842554 2472 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:25:31.844699 containerd[1579]: time="2026-03-11T01:25:31.843814692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,}"
Mar 11 01:25:31.846090 kubelet[2472]: E0311 01:25:31.845722 2472 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="800ms"
Mar 11 01:25:31.874331 kubelet[2472]: E0311 01:25:31.872637 2472 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:25:31.874453 containerd[1579]: time="2026-03-11T01:25:31.873532711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,}"
Mar 11 01:25:32.145094 kubelet[2472]: E0311 01:25:32.144890 2472 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 11 01:25:32.173786 kubelet[2472]: I0311 01:25:32.173252 2472 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 11 01:25:32.173786 kubelet[2472]: E0311 01:25:32.173670 2472 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost"
Mar 11 01:25:32.495981 kubelet[2472]: E0311 01:25:32.495693 2472 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 11 01:25:32.559409 kubelet[2472]: E0311 01:25:32.559083 2472 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 11 01:25:32.580348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3255313066.mount: Deactivated successfully.
Mar 11 01:25:32.604590 kubelet[2472]: E0311 01:25:32.602898 2472 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.26:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 11 01:25:32.624502 containerd[1579]: time="2026-03-11T01:25:32.624360687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 11 01:25:32.631000 containerd[1579]: time="2026-03-11T01:25:32.630839540Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Mar 11 01:25:32.640283 containerd[1579]: time="2026-03-11T01:25:32.640197059Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 11 01:25:32.645971 containerd[1579]: time="2026-03-11T01:25:32.645109733Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 11 01:25:32.649194 kubelet[2472]: E0311 01:25:32.647753 2472 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="1.6s"
Mar 11 01:25:32.649352 containerd[1579]: time="2026-03-11T01:25:32.649244039Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Mar 11 01:25:32.657656 containerd[1579]: time="2026-03-11T01:25:32.656969297Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 11 01:25:32.663928 containerd[1579]: time="2026-03-11T01:25:32.661472302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 11 01:25:32.668761 containerd[1579]: time="2026-03-11T01:25:32.665754155Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 849.897003ms"
Mar 11 01:25:32.668761 containerd[1579]: time="2026-03-11T01:25:32.666742733Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Mar 11 01:25:32.671855 containerd[1579]: time="2026-03-11T01:25:32.671810164Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 821.061884ms"
Mar 11 01:25:32.677763 containerd[1579]: time="2026-03-11T01:25:32.676332811Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 793.485599ms"
Mar 11 01:25:32.684589 kubelet[2472]: E0311 01:25:32.684322 2472 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 11 01:25:32.844805 containerd[1579]: time="2026-03-11T01:25:32.841883883Z" level=info msg="connecting to shim ff3875320548351faf941abda22c0934cf998875869cca4b636830c11530233e" address="unix:///run/containerd/s/2061a4d338498375dd197b3c08dcdf25a980872b05f9261026e7b2cf2ce5d63c" namespace=k8s.io protocol=ttrpc version=3
Mar 11 01:25:32.852575 containerd[1579]: time="2026-03-11T01:25:32.852495661Z" level=info msg="connecting to shim bcefa65120f175049337d222d75c5b1acc07a0e5add9a024b9c416761761397f" address="unix:///run/containerd/s/daccbb50cdb2f060bf564f35e4b7ade150c4cf2d433dd7703dfbe007adc18df7" namespace=k8s.io protocol=ttrpc version=3
Mar 11 01:25:32.877917 containerd[1579]: time="2026-03-11T01:25:32.876014350Z" level=info msg="connecting to shim 7929e5b4f9d9fe2c04bce28ad50f83e76c30fb43a524216a5afc9d4f6d451ae4" address="unix:///run/containerd/s/c817e0dc26fe339e947f0fc9c3bcfc8a15acfca1cb7207b59767c9e870260afa" namespace=k8s.io protocol=ttrpc version=3
Mar 11 01:25:33.075489 kubelet[2472]: I0311 01:25:33.068350 2472 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 11 01:25:33.095718 kubelet[2472]: E0311 01:25:33.094681 2472 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost"
Mar 11 01:25:33.161636 systemd[1]: Started cri-containerd-bcefa65120f175049337d222d75c5b1acc07a0e5add9a024b9c416761761397f.scope - libcontainer container bcefa65120f175049337d222d75c5b1acc07a0e5add9a024b9c416761761397f.
Mar 11 01:25:33.164870 systemd[1]: Started cri-containerd-ff3875320548351faf941abda22c0934cf998875869cca4b636830c11530233e.scope - libcontainer container ff3875320548351faf941abda22c0934cf998875869cca4b636830c11530233e.
Mar 11 01:25:33.244981 systemd[1]: Started cri-containerd-7929e5b4f9d9fe2c04bce28ad50f83e76c30fb43a524216a5afc9d4f6d451ae4.scope - libcontainer container 7929e5b4f9d9fe2c04bce28ad50f83e76c30fb43a524216a5afc9d4f6d451ae4.
Mar 11 01:25:34.034248 containerd[1579]: time="2026-03-11T01:25:34.033546037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:50b1443cf04c17c30567be9af8a53ee2,Namespace:kube-system,Attempt:0,} returns sandbox id \"bcefa65120f175049337d222d75c5b1acc07a0e5add9a024b9c416761761397f\""
Mar 11 01:25:34.036583 kubelet[2472]: E0311 01:25:34.036533 2472 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:25:34.048418 containerd[1579]: time="2026-03-11T01:25:34.048338193Z" level=info msg="CreateContainer within sandbox \"bcefa65120f175049337d222d75c5b1acc07a0e5add9a024b9c416761761397f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 11 01:25:34.077879 containerd[1579]: time="2026-03-11T01:25:34.075724833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff3875320548351faf941abda22c0934cf998875869cca4b636830c11530233e\""
Mar 11 01:25:34.079695 kubelet[2472]: E0311 01:25:34.079246 2472 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:25:34.086477 containerd[1579]: time="2026-03-11T01:25:34.086242869Z" level=info msg="CreateContainer within sandbox \"ff3875320548351faf941abda22c0934cf998875869cca4b636830c11530233e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 11 01:25:34.087892 containerd[1579]: time="2026-03-11T01:25:34.087310381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"7929e5b4f9d9fe2c04bce28ad50f83e76c30fb43a524216a5afc9d4f6d451ae4\""
Mar 11 01:25:34.088885 kubelet[2472]: E0311 01:25:34.088369 2472 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:25:34.089348 containerd[1579]: time="2026-03-11T01:25:34.089319931Z" level=info msg="Container 507e839bdc17046406fb9582b6afa1e56b6b9e3dcf264c9bac97ac1e022b7f99: CDI devices from CRI Config.CDIDevices: []"
Mar 11 01:25:34.103336 containerd[1579]: time="2026-03-11T01:25:34.102749681Z" level=info msg="CreateContainer within sandbox \"7929e5b4f9d9fe2c04bce28ad50f83e76c30fb43a524216a5afc9d4f6d451ae4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 11 01:25:34.129972 containerd[1579]: time="2026-03-11T01:25:34.129926821Z" level=info msg="CreateContainer within sandbox \"bcefa65120f175049337d222d75c5b1acc07a0e5add9a024b9c416761761397f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"507e839bdc17046406fb9582b6afa1e56b6b9e3dcf264c9bac97ac1e022b7f99\""
Mar 11 01:25:34.131819 containerd[1579]: time="2026-03-11T01:25:34.130919180Z" level=info msg="Container cecd0ff009ce9a5e64250732ed2f263b87e84da473a55c38b5a7db9991be7f5a: CDI devices from CRI Config.CDIDevices: []"
Mar 11 01:25:34.131819 containerd[1579]: time="2026-03-11T01:25:34.131309410Z" level=info msg="StartContainer for \"507e839bdc17046406fb9582b6afa1e56b6b9e3dcf264c9bac97ac1e022b7f99\""
Mar 11 01:25:34.133469 containerd[1579]: time="2026-03-11T01:25:34.133438572Z" level=info msg="connecting to shim 507e839bdc17046406fb9582b6afa1e56b6b9e3dcf264c9bac97ac1e022b7f99" address="unix:///run/containerd/s/daccbb50cdb2f060bf564f35e4b7ade150c4cf2d433dd7703dfbe007adc18df7" protocol=ttrpc version=3
Mar 11 01:25:34.149439 containerd[1579]: time="2026-03-11T01:25:34.149354553Z" level=info msg="CreateContainer within sandbox \"ff3875320548351faf941abda22c0934cf998875869cca4b636830c11530233e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cecd0ff009ce9a5e64250732ed2f263b87e84da473a55c38b5a7db9991be7f5a\""
Mar 11 01:25:34.152002 containerd[1579]: time="2026-03-11T01:25:34.150558955Z" level=info msg="StartContainer for \"cecd0ff009ce9a5e64250732ed2f263b87e84da473a55c38b5a7db9991be7f5a\""
Mar 11 01:25:34.153710 containerd[1579]: time="2026-03-11T01:25:34.153583702Z" level=info msg="Container 5cc1458252bd908a18bb3adc228f10a5793c389eb9910da542cbd586c898c4bf: CDI devices from CRI Config.CDIDevices: []"
Mar 11 01:25:34.154235 containerd[1579]: time="2026-03-11T01:25:34.154095470Z" level=info msg="connecting to shim cecd0ff009ce9a5e64250732ed2f263b87e84da473a55c38b5a7db9991be7f5a" address="unix:///run/containerd/s/2061a4d338498375dd197b3c08dcdf25a980872b05f9261026e7b2cf2ce5d63c" protocol=ttrpc version=3
Mar 11 01:25:34.180403 systemd[1]: Started cri-containerd-507e839bdc17046406fb9582b6afa1e56b6b9e3dcf264c9bac97ac1e022b7f99.scope - libcontainer container 507e839bdc17046406fb9582b6afa1e56b6b9e3dcf264c9bac97ac1e022b7f99.
Mar 11 01:25:34.186376 containerd[1579]: time="2026-03-11T01:25:34.186312055Z" level=info msg="CreateContainer within sandbox \"7929e5b4f9d9fe2c04bce28ad50f83e76c30fb43a524216a5afc9d4f6d451ae4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5cc1458252bd908a18bb3adc228f10a5793c389eb9910da542cbd586c898c4bf\""
Mar 11 01:25:34.188025 containerd[1579]: time="2026-03-11T01:25:34.187997526Z" level=info msg="StartContainer for \"5cc1458252bd908a18bb3adc228f10a5793c389eb9910da542cbd586c898c4bf\""
Mar 11 01:25:34.194895 containerd[1579]: time="2026-03-11T01:25:34.194648758Z" level=info msg="connecting to shim 5cc1458252bd908a18bb3adc228f10a5793c389eb9910da542cbd586c898c4bf" address="unix:///run/containerd/s/c817e0dc26fe339e947f0fc9c3bcfc8a15acfca1cb7207b59767c9e870260afa" protocol=ttrpc version=3
Mar 11 01:25:34.195189 kubelet[2472]: E0311 01:25:34.194937 2472 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 11 01:25:34.228792 systemd[1]: Started cri-containerd-cecd0ff009ce9a5e64250732ed2f263b87e84da473a55c38b5a7db9991be7f5a.scope - libcontainer container cecd0ff009ce9a5e64250732ed2f263b87e84da473a55c38b5a7db9991be7f5a.
Mar 11 01:25:34.248904 kubelet[2472]: E0311 01:25:34.248864 2472 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="3.2s"
Mar 11 01:25:34.251508 systemd[1]: Started cri-containerd-5cc1458252bd908a18bb3adc228f10a5793c389eb9910da542cbd586c898c4bf.scope - libcontainer container 5cc1458252bd908a18bb3adc228f10a5793c389eb9910da542cbd586c898c4bf.
Mar 11 01:25:34.305078 containerd[1579]: time="2026-03-11T01:25:34.304630517Z" level=info msg="StartContainer for \"507e839bdc17046406fb9582b6afa1e56b6b9e3dcf264c9bac97ac1e022b7f99\" returns successfully"
Mar 11 01:25:34.394791 containerd[1579]: time="2026-03-11T01:25:34.393803983Z" level=info msg="StartContainer for \"cecd0ff009ce9a5e64250732ed2f263b87e84da473a55c38b5a7db9991be7f5a\" returns successfully"
Mar 11 01:25:34.474181 containerd[1579]: time="2026-03-11T01:25:34.473968958Z" level=info msg="StartContainer for \"5cc1458252bd908a18bb3adc228f10a5793c389eb9910da542cbd586c898c4bf\" returns successfully"
Mar 11 01:25:34.748858 kubelet[2472]: I0311 01:25:34.745944 2472 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 11 01:25:35.049198 kubelet[2472]: E0311 01:25:35.044022 2472 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 11 01:25:35.049198 kubelet[2472]: E0311 01:25:35.047261 2472 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:25:35.059777 kubelet[2472]: E0311 01:25:35.059663 2472 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 11 01:25:35.060698 kubelet[2472]: E0311 01:25:35.060618 2472 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:25:35.072741 kubelet[2472]: E0311 01:25:35.072493 2472 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 11 01:25:35.072741 kubelet[2472]: E0311 01:25:35.072683 2472 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:25:36.105560 kubelet[2472]: E0311 01:25:36.104985 2472 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 11 01:25:36.105560 kubelet[2472]: E0311 01:25:36.105114 2472 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 11 01:25:36.105560 kubelet[2472]: E0311 01:25:36.105374 2472 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:25:36.105560 kubelet[2472]: E0311 01:25:36.105451 2472 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:25:39.641906 kubelet[2472]: E0311 01:25:39.641758 2472 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 11 01:25:39.641906 kubelet[2472]: E0311 01:25:39.642623 2472 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:25:40.520918 kubelet[2472]: E0311 01:25:40.520846 2472 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 11 01:25:40.527207 kubelet[2472]: E0311 01:25:40.527054 2472 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:25:41.483692 kubelet[2472]: E0311 01:25:41.480274 2472 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 11 01:25:43.123279 kubelet[2472]: E0311 01:25:43.122940 2472 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Mar 11 01:25:43.205554 kubelet[2472]: I0311 01:25:43.201275 2472 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Mar 11 01:25:43.248247 kubelet[2472]: I0311 01:25:43.244450 2472 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 11 01:25:43.391675 kubelet[2472]: E0311 01:25:43.384116 2472 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.189ba507ca93f5d0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-11 01:25:31.182470608 +0000 UTC m=+2.003770979,LastTimestamp:2026-03-11 01:25:31.182470608 +0000 UTC m=+2.003770979,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 11 01:25:43.391675 kubelet[2472]: I0311 01:25:43.386419 2472 apiserver.go:52] "Watching apiserver"
Mar 11 01:25:43.411308 kubelet[2472]: E0311 01:25:43.408924 2472 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Mar 11 01:25:43.411308 kubelet[2472]: I0311 01:25:43.409111 2472 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 11 01:25:43.417197 kubelet[2472]: E0311 01:25:43.413519 2472 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Mar 11 01:25:43.417197 kubelet[2472]: I0311 01:25:43.413548 2472 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 11 01:25:43.419635 kubelet[2472]: E0311 01:25:43.419536 2472 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Mar 11 01:25:43.427539 kubelet[2472]: I0311 01:25:43.427126 2472 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 11 01:25:44.778348 kubelet[2472]: I0311 01:25:44.774783 2472 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 11 01:25:44.968696 kubelet[2472]: E0311 01:25:44.968570 2472 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:25:46.041915 kubelet[2472]: E0311 01:25:46.041832 2472 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:25:51.834801 kubelet[2472]: I0311 01:25:51.811919 2472 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 11 01:25:51.973706 kubelet[2472]: I0311 01:25:51.969336 2472 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=7.967606483 podStartE2EDuration="7.967606483s" podCreationTimestamp="2026-03-11 01:25:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 01:25:46.343895793 +0000 UTC m=+17.165196174" watchObservedRunningTime="2026-03-11 01:25:51.967606483 +0000 UTC m=+22.788906854"
Mar 11 01:25:51.973706 kubelet[2472]: E0311 01:25:51.971515 2472 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:25:52.853328 kubelet[2472]: E0311 01:25:52.852768 2472 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:25:59.476792 systemd[1]: Reload requested from client PID 2769 ('systemctl') (unit session-9.scope)...
Mar 11 01:25:59.477531 systemd[1]: Reloading...
Mar 11 01:25:59.578280 kubelet[2472]: I0311 01:25:59.577381 2472 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 11 01:25:59.882990 kubelet[2472]: E0311 01:25:59.870716 2472 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:26:00.249236 kubelet[2472]: I0311 01:26:00.244922 2472 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=9.244898441 podStartE2EDuration="9.244898441s" podCreationTimestamp="2026-03-11 01:25:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 01:25:59.871614856 +0000 UTC m=+30.692915247" watchObservedRunningTime="2026-03-11 01:26:00.244898441 +0000 UTC m=+31.066198812"
Mar 11 01:26:00.260228 zram_generator::config[2809]: No configuration found.
Mar 11 01:26:00.297273 kubelet[2472]: I0311 01:26:00.293343 2472 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.293320329 podStartE2EDuration="1.293320329s" podCreationTimestamp="2026-03-11 01:25:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 01:26:00.245741291 +0000 UTC m=+31.067041692" watchObservedRunningTime="2026-03-11 01:26:00.293320329 +0000 UTC m=+31.114620730"
Mar 11 01:26:00.310729 kubelet[2472]: E0311 01:26:00.310700 2472 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:26:02.184837 systemd[1]: Reloading finished in 2706 ms.
Mar 11 01:26:02.337563 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 11 01:26:02.390352 systemd[1]: kubelet.service: Deactivated successfully.
Mar 11 01:26:02.391021 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 11 01:26:02.391089 systemd[1]: kubelet.service: Consumed 7.632s CPU time, 131.8M memory peak.
Mar 11 01:26:02.422193 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 11 01:26:03.219803 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 11 01:26:03.259012 (kubelet)[2857]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 11 01:26:03.508657 kubelet[2857]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 11 01:26:03.508657 kubelet[2857]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 11 01:26:03.508657 kubelet[2857]: I0311 01:26:03.508534 2857 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 11 01:26:03.733399 kubelet[2857]: I0311 01:26:03.729945 2857 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Mar 11 01:26:03.733399 kubelet[2857]: I0311 01:26:03.730491 2857 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 11 01:26:03.733399 kubelet[2857]: I0311 01:26:03.733724 2857 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 11 01:26:03.733399 kubelet[2857]: I0311 01:26:03.733751 2857 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 11 01:26:03.735251 kubelet[2857]: I0311 01:26:03.734229 2857 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 11 01:26:03.736914 kubelet[2857]: I0311 01:26:03.736298 2857 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 11 01:26:03.758323 kubelet[2857]: I0311 01:26:03.755896 2857 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 11 01:26:03.800502 kubelet[2857]: I0311 01:26:03.795072 2857 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 11 01:26:03.874850 kubelet[2857]: I0311 01:26:03.873937 2857 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 11 01:26:03.874850 kubelet[2857]: I0311 01:26:03.874357 2857 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 11 01:26:03.874850 kubelet[2857]: I0311 01:26:03.874405 2857 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 11 01:26:03.874850 kubelet[2857]: I0311 01:26:03.874712 2857 topology_manager.go:138] "Creating topology manager with none policy"
Mar 11 01:26:03.875480 kubelet[2857]: I0311 01:26:03.874733 2857 container_manager_linux.go:306] "Creating device plugin manager"
Mar 11 01:26:03.875480 kubelet[2857]: I0311 01:26:03.874796 2857 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 11 01:26:03.875480 kubelet[2857]: I0311 01:26:03.875077 2857 state_mem.go:36] "Initialized new in-memory state store"
Mar 11 01:26:03.875480 kubelet[2857]: I0311 01:26:03.875323 2857 kubelet.go:475] "Attempting to sync node with API server"
Mar 11 01:26:03.875480 kubelet[2857]: I0311 01:26:03.875345 2857 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 11 01:26:03.875480 kubelet[2857]: I0311 01:26:03.875376 2857 kubelet.go:387] "Adding apiserver pod source"
Mar 11 01:26:03.875480 kubelet[2857]: I0311 01:26:03.875392 2857 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 11 01:26:03.887803 kubelet[2857]: I0311 01:26:03.886026 2857 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 11 01:26:03.891826 kubelet[2857]: I0311 01:26:03.889726 2857 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 11 01:26:03.891826 kubelet[2857]: I0311 01:26:03.889856 2857 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 11 01:26:03.966420 kubelet[2857]: I0311 01:26:03.965216 2857 server.go:1262] "Started kubelet"
Mar 11 01:26:03.975832 kubelet[2857]: I0311 01:26:03.971411 2857 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 11 01:26:03.975832 kubelet[2857]: I0311 01:26:03.971491 2857 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 11 01:26:03.975832 kubelet[2857]: I0311 01:26:03.972028 2857 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 11 01:26:03.976310 kubelet[2857]: I0311 01:26:03.976272 2857 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 11 01:26:03.978016 kubelet[2857]: I0311 01:26:03.977993 2857 server.go:310] "Adding debug handlers to kubelet server"
Mar 11 01:26:03.996351 kubelet[2857]: I0311 01:26:03.992586 2857 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 11 01:26:03.996351 kubelet[2857]: I0311 01:26:03.993225 2857 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 11 01:26:03.996351 kubelet[2857]: I0311 01:26:03.995358 2857 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 11 01:26:03.996351 kubelet[2857]: I0311 01:26:03.995478 2857 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 11 01:26:04.000352 kubelet[2857]: I0311 01:26:03.996817 2857 reconciler.go:29] "Reconciler: start to sync state"
Mar 11 01:26:04.011653 kubelet[2857]: I0311 01:26:04.006979 2857 factory.go:223] Registration of the systemd container factory successfully
Mar 11 01:26:04.011653 kubelet[2857]: I0311 01:26:04.007380 2857 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 11 01:26:04.014198 kubelet[2857]: E0311 01:26:04.012010 2857 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 11 01:26:04.027806 kubelet[2857]: I0311 01:26:04.027319 2857 factory.go:223] Registration of the containerd container factory successfully
Mar 11 01:26:04.176336 kubelet[2857]: I0311 01:26:04.175592 2857 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 11 01:26:04.214376 kubelet[2857]: I0311 01:26:04.214327 2857 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 11 01:26:04.214561 kubelet[2857]: I0311 01:26:04.214546 2857 status_manager.go:244] "Starting to sync pod status with apiserver"
Mar 11 01:26:04.214694 kubelet[2857]: I0311 01:26:04.214680 2857 kubelet.go:2428] "Starting kubelet main sync loop"
Mar 11 01:26:04.222180 kubelet[2857]: E0311 01:26:04.216893 2857 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 11 01:26:04.370268 kubelet[2857]: E0311 01:26:04.359817 2857 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 11 01:26:04.571308 kubelet[2857]: E0311 01:26:04.569757 2857 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 11 01:26:04.731814 kubelet[2857]: I0311 01:26:04.729685 2857 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 11 01:26:04.731814 kubelet[2857]: I0311 01:26:04.729771 2857 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 11 01:26:04.731814 kubelet[2857]: I0311 01:26:04.729859 2857 state_mem.go:36] "Initialized new in-memory state store"
Mar 11 01:26:04.731814 kubelet[2857]: I0311 01:26:04.731724 2857 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 11 01:26:04.731814 kubelet[2857]: I0311 01:26:04.731748 2857 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 11 01:26:04.731814 kubelet[2857]: I0311 01:26:04.731776 2857 policy_none.go:49] "None policy: Start"
Mar 11 01:26:04.731814 kubelet[2857]: I0311 01:26:04.731792 2857 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 11 01:26:04.731814 kubelet[2857]: I0311 01:26:04.731810 2857 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar
11 01:26:04.731814 kubelet[2857]: I0311 01:26:04.732030 2857 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 11 01:26:04.731814 kubelet[2857]: I0311 01:26:04.732044 2857 policy_none.go:47] "Start" Mar 11 01:26:04.879879 kubelet[2857]: E0311 01:26:04.843311 2857 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 11 01:26:04.879879 kubelet[2857]: I0311 01:26:04.859105 2857 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 11 01:26:04.879879 kubelet[2857]: I0311 01:26:04.859183 2857 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 11 01:26:04.880976 kubelet[2857]: I0311 01:26:04.880537 2857 apiserver.go:52] "Watching apiserver" Mar 11 01:26:04.893464 kubelet[2857]: E0311 01:26:04.882595 2857 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 11 01:26:04.936333 kubelet[2857]: I0311 01:26:04.936222 2857 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 11 01:26:04.999052 kubelet[2857]: I0311 01:26:04.996858 2857 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 11 01:26:05.085282 kubelet[2857]: I0311 01:26:05.081269 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50b1443cf04c17c30567be9af8a53ee2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"50b1443cf04c17c30567be9af8a53ee2\") " pod="kube-system/kube-apiserver-localhost" Mar 11 01:26:05.085282 kubelet[2857]: I0311 01:26:05.081361 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 11 01:26:05.085282 kubelet[2857]: I0311 01:26:05.081394 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 11 01:26:05.085282 kubelet[2857]: I0311 01:26:05.081422 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 11 
01:26:05.085282 kubelet[2857]: I0311 01:26:05.081447 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50b1443cf04c17c30567be9af8a53ee2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"50b1443cf04c17c30567be9af8a53ee2\") " pod="kube-system/kube-apiserver-localhost" Mar 11 01:26:05.087215 kubelet[2857]: I0311 01:26:05.081472 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 11 01:26:05.087215 kubelet[2857]: I0311 01:26:05.081495 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 11 01:26:05.087215 kubelet[2857]: I0311 01:26:05.081514 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost" Mar 11 01:26:05.087215 kubelet[2857]: I0311 01:26:05.081536 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50b1443cf04c17c30567be9af8a53ee2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"50b1443cf04c17c30567be9af8a53ee2\") " pod="kube-system/kube-apiserver-localhost" Mar 11 01:26:05.134608 kubelet[2857]: I0311 01:26:05.132092 2857 
kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 11 01:26:05.297500 kubelet[2857]: E0311 01:26:05.296413 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:26:05.301879 kubelet[2857]: E0311 01:26:05.300746 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:26:05.305949 kubelet[2857]: E0311 01:26:05.304935 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:26:05.350814 kubelet[2857]: I0311 01:26:05.349450 2857 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 11 01:26:05.350814 kubelet[2857]: I0311 01:26:05.349792 2857 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 11 01:26:05.746490 kubelet[2857]: E0311 01:26:05.745884 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:26:05.746490 kubelet[2857]: E0311 01:26:05.746354 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:26:07.104423 kubelet[2857]: E0311 01:26:07.098471 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:26:07.104423 kubelet[2857]: E0311 01:26:07.098780 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Mar 11 01:26:10.492723 kubelet[2857]: E0311 01:26:10.471005 2857 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.223s" Mar 11 01:26:11.270210 kubelet[2857]: E0311 01:26:11.243714 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:26:13.064625 kubelet[2857]: I0311 01:26:13.060529 2857 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 11 01:26:13.078186 kubelet[2857]: E0311 01:26:13.062633 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:26:13.088405 containerd[1579]: time="2026-03-11T01:26:13.088270660Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 11 01:26:13.091393 kubelet[2857]: I0311 01:26:13.089853 2857 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 11 01:26:13.108917 kubelet[2857]: E0311 01:26:13.108883 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:26:13.898624 systemd[1]: Created slice kubepods-besteffort-pod045f89c3_d79c_4714_bc7f_3c55f48a5ca8.slice - libcontainer container kubepods-besteffort-pod045f89c3_d79c_4714_bc7f_3c55f48a5ca8.slice. 
Mar 11 01:26:14.014487 kubelet[2857]: I0311 01:26:14.013943 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/045f89c3-d79c-4714-bc7f-3c55f48a5ca8-kube-proxy\") pod \"kube-proxy-8nz4h\" (UID: \"045f89c3-d79c-4714-bc7f-3c55f48a5ca8\") " pod="kube-system/kube-proxy-8nz4h" Mar 11 01:26:14.014487 kubelet[2857]: I0311 01:26:14.014014 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/045f89c3-d79c-4714-bc7f-3c55f48a5ca8-xtables-lock\") pod \"kube-proxy-8nz4h\" (UID: \"045f89c3-d79c-4714-bc7f-3c55f48a5ca8\") " pod="kube-system/kube-proxy-8nz4h" Mar 11 01:26:14.014487 kubelet[2857]: I0311 01:26:14.014041 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/045f89c3-d79c-4714-bc7f-3c55f48a5ca8-lib-modules\") pod \"kube-proxy-8nz4h\" (UID: \"045f89c3-d79c-4714-bc7f-3c55f48a5ca8\") " pod="kube-system/kube-proxy-8nz4h" Mar 11 01:26:14.014487 kubelet[2857]: I0311 01:26:14.014066 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv258\" (UniqueName: \"kubernetes.io/projected/045f89c3-d79c-4714-bc7f-3c55f48a5ca8-kube-api-access-dv258\") pod \"kube-proxy-8nz4h\" (UID: \"045f89c3-d79c-4714-bc7f-3c55f48a5ca8\") " pod="kube-system/kube-proxy-8nz4h" Mar 11 01:26:14.226419 kubelet[2857]: E0311 01:26:14.222355 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:26:14.230370 containerd[1579]: time="2026-03-11T01:26:14.224538445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8nz4h,Uid:045f89c3-d79c-4714-bc7f-3c55f48a5ca8,Namespace:kube-system,Attempt:0,}" Mar 
11 01:26:14.341209 containerd[1579]: time="2026-03-11T01:26:14.339976399Z" level=info msg="connecting to shim 36e2baf3a7f1df8730cb8740fd9c7f36eb78738769f697739d92ac72ea110e34" address="unix:///run/containerd/s/8842d2103a7316f40279cf915f0e16aa489d7e6a3071f1333d86006f4c6959ae" namespace=k8s.io protocol=ttrpc version=3 Mar 11 01:26:14.347048 kubelet[2857]: E0311 01:26:14.345066 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:26:14.712465 systemd[1]: Started cri-containerd-36e2baf3a7f1df8730cb8740fd9c7f36eb78738769f697739d92ac72ea110e34.scope - libcontainer container 36e2baf3a7f1df8730cb8740fd9c7f36eb78738769f697739d92ac72ea110e34. Mar 11 01:26:15.233817 kubelet[2857]: E0311 01:26:15.233519 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:26:15.852305 kubelet[2857]: I0311 01:26:15.850763 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4fdf546b-9706-4c85-89bf-dabf7ea5afd2-var-lib-calico\") pod \"tigera-operator-5588576f44-zj8qk\" (UID: \"4fdf546b-9706-4c85-89bf-dabf7ea5afd2\") " pod="tigera-operator/tigera-operator-5588576f44-zj8qk" Mar 11 01:26:15.852305 kubelet[2857]: I0311 01:26:15.850855 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmgvj\" (UniqueName: \"kubernetes.io/projected/4fdf546b-9706-4c85-89bf-dabf7ea5afd2-kube-api-access-cmgvj\") pod \"tigera-operator-5588576f44-zj8qk\" (UID: \"4fdf546b-9706-4c85-89bf-dabf7ea5afd2\") " pod="tigera-operator/tigera-operator-5588576f44-zj8qk" Mar 11 01:26:16.097426 systemd[1]: Created slice kubepods-besteffort-pod4fdf546b_9706_4c85_89bf_dabf7ea5afd2.slice - libcontainer 
container kubepods-besteffort-pod4fdf546b_9706_4c85_89bf_dabf7ea5afd2.slice. Mar 11 01:26:16.455012 containerd[1579]: time="2026-03-11T01:26:16.452498840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8nz4h,Uid:045f89c3-d79c-4714-bc7f-3c55f48a5ca8,Namespace:kube-system,Attempt:0,} returns sandbox id \"36e2baf3a7f1df8730cb8740fd9c7f36eb78738769f697739d92ac72ea110e34\"" Mar 11 01:26:16.458577 kubelet[2857]: E0311 01:26:16.453909 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:26:16.473400 containerd[1579]: time="2026-03-11T01:26:16.472917870Z" level=info msg="CreateContainer within sandbox \"36e2baf3a7f1df8730cb8740fd9c7f36eb78738769f697739d92ac72ea110e34\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 11 01:26:16.517686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount729398926.mount: Deactivated successfully. 
Mar 11 01:26:16.531260 containerd[1579]: time="2026-03-11T01:26:16.529998972Z" level=info msg="Container 75988794ce73dfd34b9b51596d9b1d51a8ca8c05f2e49509529c6f0efba6393a: CDI devices from CRI Config.CDIDevices: []" Mar 11 01:26:16.692920 containerd[1579]: time="2026-03-11T01:26:16.685099631Z" level=info msg="CreateContainer within sandbox \"36e2baf3a7f1df8730cb8740fd9c7f36eb78738769f697739d92ac72ea110e34\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"75988794ce73dfd34b9b51596d9b1d51a8ca8c05f2e49509529c6f0efba6393a\"" Mar 11 01:26:16.829000 containerd[1579]: time="2026-03-11T01:26:16.828672543Z" level=info msg="StartContainer for \"75988794ce73dfd34b9b51596d9b1d51a8ca8c05f2e49509529c6f0efba6393a\"" Mar 11 01:26:16.835214 containerd[1579]: time="2026-03-11T01:26:16.834232580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-zj8qk,Uid:4fdf546b-9706-4c85-89bf-dabf7ea5afd2,Namespace:tigera-operator,Attempt:0,}" Mar 11 01:26:16.842883 containerd[1579]: time="2026-03-11T01:26:16.840749408Z" level=info msg="connecting to shim 75988794ce73dfd34b9b51596d9b1d51a8ca8c05f2e49509529c6f0efba6393a" address="unix:///run/containerd/s/8842d2103a7316f40279cf915f0e16aa489d7e6a3071f1333d86006f4c6959ae" protocol=ttrpc version=3 Mar 11 01:26:16.941827 systemd[1]: Started cri-containerd-75988794ce73dfd34b9b51596d9b1d51a8ca8c05f2e49509529c6f0efba6393a.scope - libcontainer container 75988794ce73dfd34b9b51596d9b1d51a8ca8c05f2e49509529c6f0efba6393a. 
Mar 11 01:26:16.955968 containerd[1579]: time="2026-03-11T01:26:16.955902322Z" level=info msg="connecting to shim 9b7c807dd91a1da7b18806aeac8361f2cef1d65ffeea889d8db7c2476e93685a" address="unix:///run/containerd/s/390280e0d6e31e5d3bb303cde35f123ea066c76bd014ab5a28ac496c3f39b1a1" namespace=k8s.io protocol=ttrpc version=3 Mar 11 01:26:17.284547 systemd[1]: Started cri-containerd-9b7c807dd91a1da7b18806aeac8361f2cef1d65ffeea889d8db7c2476e93685a.scope - libcontainer container 9b7c807dd91a1da7b18806aeac8361f2cef1d65ffeea889d8db7c2476e93685a. Mar 11 01:26:18.394370 containerd[1579]: time="2026-03-11T01:26:18.393184549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-zj8qk,Uid:4fdf546b-9706-4c85-89bf-dabf7ea5afd2,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9b7c807dd91a1da7b18806aeac8361f2cef1d65ffeea889d8db7c2476e93685a\"" Mar 11 01:26:18.428513 containerd[1579]: time="2026-03-11T01:26:18.416307859Z" level=info msg="StartContainer for \"75988794ce73dfd34b9b51596d9b1d51a8ca8c05f2e49509529c6f0efba6393a\" returns successfully" Mar 11 01:26:18.433951 containerd[1579]: time="2026-03-11T01:26:18.433538143Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 11 01:26:18.449915 kubelet[2857]: E0311 01:26:18.449440 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:26:18.579714 kubelet[2857]: I0311 01:26:18.578965 2857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8nz4h" podStartSLOduration=5.578863768 podStartE2EDuration="5.578863768s" podCreationTimestamp="2026-03-11 01:26:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 01:26:18.577841826 +0000 UTC m=+15.256818458" watchObservedRunningTime="2026-03-11 01:26:18.578863768 +0000 
UTC m=+15.257840420" Mar 11 01:26:19.471232 kubelet[2857]: E0311 01:26:19.470101 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:26:20.665238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount158326984.mount: Deactivated successfully. Mar 11 01:26:21.465572 kubelet[2857]: E0311 01:26:21.463242 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:26:30.370840 kubelet[2857]: E0311 01:26:30.370736 2857 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.023s" Mar 11 01:26:34.206124 containerd[1579]: time="2026-03-11T01:26:34.204016344Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:26:34.212684 containerd[1579]: time="2026-03-11T01:26:34.212119260Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 11 01:26:34.221449 containerd[1579]: time="2026-03-11T01:26:34.220054580Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:26:34.241357 containerd[1579]: time="2026-03-11T01:26:34.241261891Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:26:34.245937 containerd[1579]: time="2026-03-11T01:26:34.244420350Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag 
\"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 15.81080278s" Mar 11 01:26:34.245937 containerd[1579]: time="2026-03-11T01:26:34.244471975Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 11 01:26:34.273778 containerd[1579]: time="2026-03-11T01:26:34.272837584Z" level=info msg="CreateContainer within sandbox \"9b7c807dd91a1da7b18806aeac8361f2cef1d65ffeea889d8db7c2476e93685a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 11 01:26:34.302347 containerd[1579]: time="2026-03-11T01:26:34.300746926Z" level=info msg="Container 3629f50b670711a1be187d87fc7faa2645fce6f40ca9861cf245a7c58f8dd3ff: CDI devices from CRI Config.CDIDevices: []" Mar 11 01:26:34.360607 containerd[1579]: time="2026-03-11T01:26:34.360509380Z" level=info msg="CreateContainer within sandbox \"9b7c807dd91a1da7b18806aeac8361f2cef1d65ffeea889d8db7c2476e93685a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3629f50b670711a1be187d87fc7faa2645fce6f40ca9861cf245a7c58f8dd3ff\"" Mar 11 01:26:34.365428 containerd[1579]: time="2026-03-11T01:26:34.362445134Z" level=info msg="StartContainer for \"3629f50b670711a1be187d87fc7faa2645fce6f40ca9861cf245a7c58f8dd3ff\"" Mar 11 01:26:34.365428 containerd[1579]: time="2026-03-11T01:26:34.363733855Z" level=info msg="connecting to shim 3629f50b670711a1be187d87fc7faa2645fce6f40ca9861cf245a7c58f8dd3ff" address="unix:///run/containerd/s/390280e0d6e31e5d3bb303cde35f123ea066c76bd014ab5a28ac496c3f39b1a1" protocol=ttrpc version=3 Mar 11 01:26:34.465845 systemd[1]: Started cri-containerd-3629f50b670711a1be187d87fc7faa2645fce6f40ca9861cf245a7c58f8dd3ff.scope - libcontainer container 3629f50b670711a1be187d87fc7faa2645fce6f40ca9861cf245a7c58f8dd3ff. 
Mar 11 01:26:34.619339 containerd[1579]: time="2026-03-11T01:26:34.618968241Z" level=info msg="StartContainer for \"3629f50b670711a1be187d87fc7faa2645fce6f40ca9861cf245a7c58f8dd3ff\" returns successfully" Mar 11 01:26:46.285928 sudo[1796]: pam_unix(sudo:session): session closed for user root Mar 11 01:26:46.296747 sshd[1795]: Connection closed by 10.0.0.1 port 37012 Mar 11 01:26:46.297755 sshd-session[1792]: pam_unix(sshd:session): session closed for user core Mar 11 01:26:46.309519 systemd[1]: sshd@8-10.0.0.26:22-10.0.0.1:37012.service: Deactivated successfully. Mar 11 01:26:46.317107 systemd[1]: session-9.scope: Deactivated successfully. Mar 11 01:26:46.320878 systemd[1]: session-9.scope: Consumed 22.574s CPU time, 238M memory peak. Mar 11 01:26:46.337223 systemd-logind[1547]: Session 9 logged out. Waiting for processes to exit. Mar 11 01:26:46.348489 systemd-logind[1547]: Removed session 9. Mar 11 01:26:50.209615 kubelet[2857]: I0311 01:26:50.208428 2857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-zj8qk" podStartSLOduration=20.392163324 podStartE2EDuration="36.208379646s" podCreationTimestamp="2026-03-11 01:26:14 +0000 UTC" firstStartedPulling="2026-03-11 01:26:18.432879433 +0000 UTC m=+15.111856075" lastFinishedPulling="2026-03-11 01:26:34.249095764 +0000 UTC m=+30.928072397" observedRunningTime="2026-03-11 01:26:35.321087328 +0000 UTC m=+32.000064001" watchObservedRunningTime="2026-03-11 01:26:50.208379646 +0000 UTC m=+46.887356278" Mar 11 01:26:50.274374 systemd[1]: Created slice kubepods-besteffort-podb705fd9c_03d5_4bf2_8d68_37795c9779ea.slice - libcontainer container kubepods-besteffort-podb705fd9c_03d5_4bf2_8d68_37795c9779ea.slice. 
Mar 11 01:26:50.293501 kubelet[2857]: I0311 01:26:50.293323 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b705fd9c-03d5-4bf2-8d68-37795c9779ea-tigera-ca-bundle\") pod \"calico-typha-7c9d8bbf4b-cbjdl\" (UID: \"b705fd9c-03d5-4bf2-8d68-37795c9779ea\") " pod="calico-system/calico-typha-7c9d8bbf4b-cbjdl" Mar 11 01:26:50.293795 kubelet[2857]: I0311 01:26:50.293697 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b705fd9c-03d5-4bf2-8d68-37795c9779ea-typha-certs\") pod \"calico-typha-7c9d8bbf4b-cbjdl\" (UID: \"b705fd9c-03d5-4bf2-8d68-37795c9779ea\") " pod="calico-system/calico-typha-7c9d8bbf4b-cbjdl" Mar 11 01:26:50.293795 kubelet[2857]: I0311 01:26:50.293741 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx9j5\" (UniqueName: \"kubernetes.io/projected/b705fd9c-03d5-4bf2-8d68-37795c9779ea-kube-api-access-kx9j5\") pod \"calico-typha-7c9d8bbf4b-cbjdl\" (UID: \"b705fd9c-03d5-4bf2-8d68-37795c9779ea\") " pod="calico-system/calico-typha-7c9d8bbf4b-cbjdl" Mar 11 01:26:50.613728 kubelet[2857]: E0311 01:26:50.610843 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:26:50.614301 containerd[1579]: time="2026-03-11T01:26:50.612615423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c9d8bbf4b-cbjdl,Uid:b705fd9c-03d5-4bf2-8d68-37795c9779ea,Namespace:calico-system,Attempt:0,}" Mar 11 01:26:50.642436 systemd[1]: Created slice kubepods-besteffort-pod4307cd87_cda7_4045_8bd1_36c5af7168c5.slice - libcontainer container kubepods-besteffort-pod4307cd87_cda7_4045_8bd1_36c5af7168c5.slice. 
Mar 11 01:26:50.697831 kubelet[2857]: I0311 01:26:50.697727 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4307cd87-cda7-4045-8bd1-36c5af7168c5-cni-log-dir\") pod \"calico-node-g5psr\" (UID: \"4307cd87-cda7-4045-8bd1-36c5af7168c5\") " pod="calico-system/calico-node-g5psr" Mar 11 01:26:50.698191 kubelet[2857]: I0311 01:26:50.698090 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4307cd87-cda7-4045-8bd1-36c5af7168c5-xtables-lock\") pod \"calico-node-g5psr\" (UID: \"4307cd87-cda7-4045-8bd1-36c5af7168c5\") " pod="calico-system/calico-node-g5psr" Mar 11 01:26:50.698312 kubelet[2857]: I0311 01:26:50.698263 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4307cd87-cda7-4045-8bd1-36c5af7168c5-cni-bin-dir\") pod \"calico-node-g5psr\" (UID: \"4307cd87-cda7-4045-8bd1-36c5af7168c5\") " pod="calico-system/calico-node-g5psr" Mar 11 01:26:50.698463 kubelet[2857]: I0311 01:26:50.698409 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/4307cd87-cda7-4045-8bd1-36c5af7168c5-nodeproc\") pod \"calico-node-g5psr\" (UID: \"4307cd87-cda7-4045-8bd1-36c5af7168c5\") " pod="calico-system/calico-node-g5psr" Mar 11 01:26:50.698606 kubelet[2857]: I0311 01:26:50.698580 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4307cd87-cda7-4045-8bd1-36c5af7168c5-flexvol-driver-host\") pod \"calico-node-g5psr\" (UID: \"4307cd87-cda7-4045-8bd1-36c5af7168c5\") " pod="calico-system/calico-node-g5psr" Mar 11 01:26:50.698714 kubelet[2857]: I0311 01:26:50.698697 2857 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4307cd87-cda7-4045-8bd1-36c5af7168c5-policysync\") pod \"calico-node-g5psr\" (UID: \"4307cd87-cda7-4045-8bd1-36c5af7168c5\") " pod="calico-system/calico-node-g5psr" Mar 11 01:26:50.699034 kubelet[2857]: I0311 01:26:50.698837 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4307cd87-cda7-4045-8bd1-36c5af7168c5-tigera-ca-bundle\") pod \"calico-node-g5psr\" (UID: \"4307cd87-cda7-4045-8bd1-36c5af7168c5\") " pod="calico-system/calico-node-g5psr" Mar 11 01:26:50.699034 kubelet[2857]: I0311 01:26:50.698940 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4307cd87-cda7-4045-8bd1-36c5af7168c5-var-run-calico\") pod \"calico-node-g5psr\" (UID: \"4307cd87-cda7-4045-8bd1-36c5af7168c5\") " pod="calico-system/calico-node-g5psr" Mar 11 01:26:50.699034 kubelet[2857]: I0311 01:26:50.699018 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/4307cd87-cda7-4045-8bd1-36c5af7168c5-bpffs\") pod \"calico-node-g5psr\" (UID: \"4307cd87-cda7-4045-8bd1-36c5af7168c5\") " pod="calico-system/calico-node-g5psr" Mar 11 01:26:50.699333 kubelet[2857]: I0311 01:26:50.699054 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4307cd87-cda7-4045-8bd1-36c5af7168c5-lib-modules\") pod \"calico-node-g5psr\" (UID: \"4307cd87-cda7-4045-8bd1-36c5af7168c5\") " pod="calico-system/calico-node-g5psr" Mar 11 01:26:50.699333 kubelet[2857]: I0311 01:26:50.699081 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/4307cd87-cda7-4045-8bd1-36c5af7168c5-sys-fs\") pod \"calico-node-g5psr\" (UID: \"4307cd87-cda7-4045-8bd1-36c5af7168c5\") " pod="calico-system/calico-node-g5psr" Mar 11 01:26:50.699333 kubelet[2857]: I0311 01:26:50.699120 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4307cd87-cda7-4045-8bd1-36c5af7168c5-cni-net-dir\") pod \"calico-node-g5psr\" (UID: \"4307cd87-cda7-4045-8bd1-36c5af7168c5\") " pod="calico-system/calico-node-g5psr" Mar 11 01:26:50.699333 kubelet[2857]: I0311 01:26:50.699207 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4307cd87-cda7-4045-8bd1-36c5af7168c5-var-lib-calico\") pod \"calico-node-g5psr\" (UID: \"4307cd87-cda7-4045-8bd1-36c5af7168c5\") " pod="calico-system/calico-node-g5psr" Mar 11 01:26:50.699333 kubelet[2857]: I0311 01:26:50.699285 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx8w8\" (UniqueName: \"kubernetes.io/projected/4307cd87-cda7-4045-8bd1-36c5af7168c5-kube-api-access-kx8w8\") pod \"calico-node-g5psr\" (UID: \"4307cd87-cda7-4045-8bd1-36c5af7168c5\") " pod="calico-system/calico-node-g5psr" Mar 11 01:26:50.699489 kubelet[2857]: I0311 01:26:50.699324 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4307cd87-cda7-4045-8bd1-36c5af7168c5-node-certs\") pod \"calico-node-g5psr\" (UID: \"4307cd87-cda7-4045-8bd1-36c5af7168c5\") " pod="calico-system/calico-node-g5psr" Mar 11 01:26:50.804393 kubelet[2857]: E0311 01:26:50.803430 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:50.804393 kubelet[2857]: W0311 
01:26:50.803489 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:50.804393 kubelet[2857]: E0311 01:26:50.803553 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:50.805400 containerd[1579]: time="2026-03-11T01:26:50.805182673Z" level=info msg="connecting to shim 45618bbf7fb641674bc1f0a1def907987f122b826f9a3db66e4620b38a09da3f" address="unix:///run/containerd/s/de9ed693257d2e402672371fbdd6dfb409370beb2837a9ab47de7dd7d3f78753" namespace=k8s.io protocol=ttrpc version=3 Mar 11 01:26:50.810180 kubelet[2857]: E0311 01:26:50.808263 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:50.810287 kubelet[2857]: W0311 01:26:50.810202 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:50.810287 kubelet[2857]: E0311 01:26:50.810235 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:50.812378 kubelet[2857]: E0311 01:26:50.812315 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:50.812586 kubelet[2857]: W0311 01:26:50.812351 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:50.812586 kubelet[2857]: E0311 01:26:50.812568 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:50.814301 kubelet[2857]: E0311 01:26:50.813984 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:50.814301 kubelet[2857]: W0311 01:26:50.814036 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:50.814301 kubelet[2857]: E0311 01:26:50.814056 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:50.826483 kubelet[2857]: E0311 01:26:50.826338 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:50.826483 kubelet[2857]: W0311 01:26:50.826424 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:50.826483 kubelet[2857]: E0311 01:26:50.826460 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:50.827968 kubelet[2857]: E0311 01:26:50.827846 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:50.827968 kubelet[2857]: W0311 01:26:50.827914 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:50.827968 kubelet[2857]: E0311 01:26:50.827931 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:50.831619 kubelet[2857]: E0311 01:26:50.829285 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:50.831619 kubelet[2857]: W0311 01:26:50.829320 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:50.831619 kubelet[2857]: E0311 01:26:50.829335 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:50.831619 kubelet[2857]: E0311 01:26:50.831498 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:50.831619 kubelet[2857]: W0311 01:26:50.831509 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:50.831619 kubelet[2857]: E0311 01:26:50.831523 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:50.833969 kubelet[2857]: E0311 01:26:50.833866 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:50.833969 kubelet[2857]: W0311 01:26:50.833885 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:50.833969 kubelet[2857]: E0311 01:26:50.833899 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:50.834479 kubelet[2857]: E0311 01:26:50.834457 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:50.834602 kubelet[2857]: W0311 01:26:50.834584 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:50.834833 kubelet[2857]: E0311 01:26:50.834815 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:50.842238 kubelet[2857]: E0311 01:26:50.838653 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-45sfz" podUID="bd5d987a-c72c-4abd-9acd-7762cad20217" Mar 11 01:26:50.842922 kubelet[2857]: E0311 01:26:50.842418 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:50.842922 kubelet[2857]: W0311 01:26:50.842458 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:50.842922 kubelet[2857]: E0311 01:26:50.842538 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:50.845737 kubelet[2857]: E0311 01:26:50.843488 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:50.845999 kubelet[2857]: W0311 01:26:50.845894 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:50.846492 kubelet[2857]: E0311 01:26:50.846296 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:50.847846 kubelet[2857]: E0311 01:26:50.847781 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:50.847972 kubelet[2857]: W0311 01:26:50.847928 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:50.848539 kubelet[2857]: E0311 01:26:50.848339 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:50.850208 kubelet[2857]: E0311 01:26:50.850112 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:50.850277 kubelet[2857]: W0311 01:26:50.850232 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:50.850277 kubelet[2857]: E0311 01:26:50.850252 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:50.850968 kubelet[2857]: E0311 01:26:50.850848 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:50.850968 kubelet[2857]: W0311 01:26:50.850885 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:50.850968 kubelet[2857]: E0311 01:26:50.850899 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:50.851775 kubelet[2857]: E0311 01:26:50.851600 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:50.853683 kubelet[2857]: W0311 01:26:50.852329 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:50.853683 kubelet[2857]: E0311 01:26:50.852351 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:50.855478 kubelet[2857]: E0311 01:26:50.855426 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:50.855478 kubelet[2857]: W0311 01:26:50.855463 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:50.855478 kubelet[2857]: E0311 01:26:50.855480 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:50.874256 kubelet[2857]: E0311 01:26:50.864577 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:50.874256 kubelet[2857]: W0311 01:26:50.864619 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:50.874256 kubelet[2857]: E0311 01:26:50.864651 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:50.911277 kubelet[2857]: E0311 01:26:50.911239 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:50.913199 kubelet[2857]: W0311 01:26:50.911483 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:50.913199 kubelet[2857]: E0311 01:26:50.911518 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:50.915585 kubelet[2857]: E0311 01:26:50.915393 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:50.916388 kubelet[2857]: W0311 01:26:50.916365 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:50.916632 kubelet[2857]: E0311 01:26:50.916612 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:50.920118 kubelet[2857]: E0311 01:26:50.919499 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:50.920118 kubelet[2857]: W0311 01:26:50.919518 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:50.921621 kubelet[2857]: E0311 01:26:50.921035 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:50.928233 kubelet[2857]: E0311 01:26:50.928114 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:50.928233 kubelet[2857]: W0311 01:26:50.928202 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:50.928233 kubelet[2857]: E0311 01:26:50.928226 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:50.932033 kubelet[2857]: E0311 01:26:50.929371 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:50.932033 kubelet[2857]: W0311 01:26:50.929402 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:50.932033 kubelet[2857]: E0311 01:26:50.929510 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:50.933565 kubelet[2857]: E0311 01:26:50.933478 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:50.933565 kubelet[2857]: W0311 01:26:50.933514 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:50.933565 kubelet[2857]: E0311 01:26:50.933532 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:50.937026 kubelet[2857]: E0311 01:26:50.936550 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:50.937026 kubelet[2857]: W0311 01:26:50.936563 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:50.937026 kubelet[2857]: E0311 01:26:50.936579 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:50.946830 kubelet[2857]: E0311 01:26:50.946108 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:50.946830 kubelet[2857]: W0311 01:26:50.946197 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:50.946830 kubelet[2857]: E0311 01:26:50.946225 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:50.948195 kubelet[2857]: E0311 01:26:50.947803 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:50.948195 kubelet[2857]: W0311 01:26:50.947834 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:50.948195 kubelet[2857]: E0311 01:26:50.947858 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:50.949734 kubelet[2857]: E0311 01:26:50.948519 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:50.951245 kubelet[2857]: W0311 01:26:50.949888 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:50.954093 kubelet[2857]: E0311 01:26:50.952849 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:50.954093 kubelet[2857]: E0311 01:26:50.953656 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:50.954093 kubelet[2857]: W0311 01:26:50.953669 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:50.954093 kubelet[2857]: E0311 01:26:50.953682 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:50.956522 kubelet[2857]: E0311 01:26:50.956271 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:50.956522 kubelet[2857]: W0311 01:26:50.956286 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:50.956522 kubelet[2857]: E0311 01:26:50.956302 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:50.969420 kubelet[2857]: E0311 01:26:50.960075 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:50.969420 kubelet[2857]: W0311 01:26:50.960093 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:50.969420 kubelet[2857]: E0311 01:26:50.960109 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:50.969420 kubelet[2857]: E0311 01:26:50.962377 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:50.969420 kubelet[2857]: W0311 01:26:50.962393 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:50.969420 kubelet[2857]: E0311 01:26:50.962411 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:50.969420 kubelet[2857]: E0311 01:26:50.968055 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:50.969420 kubelet[2857]: W0311 01:26:50.968077 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:50.969420 kubelet[2857]: E0311 01:26:50.968100 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:50.969420 kubelet[2857]: E0311 01:26:50.969367 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:50.969997 kubelet[2857]: W0311 01:26:50.969380 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:50.969997 kubelet[2857]: E0311 01:26:50.969397 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:50.995825 kubelet[2857]: E0311 01:26:50.995238 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:50.995825 kubelet[2857]: W0311 01:26:50.995676 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:51.031494 kubelet[2857]: E0311 01:26:50.997361 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:51.038027 kubelet[2857]: E0311 01:26:51.033851 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:51.038027 kubelet[2857]: W0311 01:26:51.036677 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:51.038027 kubelet[2857]: E0311 01:26:51.037525 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:51.039067 kubelet[2857]: E0311 01:26:51.039048 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:51.039278 kubelet[2857]: W0311 01:26:51.039257 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:51.039358 kubelet[2857]: E0311 01:26:51.039342 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:51.039809 kubelet[2857]: E0311 01:26:51.039793 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:51.041509 kubelet[2857]: W0311 01:26:51.039870 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:51.041909 kubelet[2857]: E0311 01:26:51.041697 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:51.048001 kubelet[2857]: E0311 01:26:51.046430 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:51.048001 kubelet[2857]: W0311 01:26:51.046472 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:51.048001 kubelet[2857]: E0311 01:26:51.046495 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:51.048001 kubelet[2857]: I0311 01:26:51.047553 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bd5d987a-c72c-4abd-9acd-7762cad20217-kubelet-dir\") pod \"csi-node-driver-45sfz\" (UID: \"bd5d987a-c72c-4abd-9acd-7762cad20217\") " pod="calico-system/csi-node-driver-45sfz" Mar 11 01:26:51.048001 kubelet[2857]: E0311 01:26:51.047747 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:51.048001 kubelet[2857]: W0311 01:26:51.047761 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:51.048001 kubelet[2857]: E0311 01:26:51.047783 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:51.061568 kubelet[2857]: E0311 01:26:51.059325 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:51.061568 kubelet[2857]: W0311 01:26:51.059353 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:51.061568 kubelet[2857]: E0311 01:26:51.059446 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:51.065005 kubelet[2857]: E0311 01:26:51.062688 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:51.065005 kubelet[2857]: W0311 01:26:51.062777 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:51.065005 kubelet[2857]: E0311 01:26:51.062799 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:51.065005 kubelet[2857]: I0311 01:26:51.062909 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnhx7\" (UniqueName: \"kubernetes.io/projected/bd5d987a-c72c-4abd-9acd-7762cad20217-kube-api-access-tnhx7\") pod \"csi-node-driver-45sfz\" (UID: \"bd5d987a-c72c-4abd-9acd-7762cad20217\") " pod="calico-system/csi-node-driver-45sfz" Mar 11 01:26:51.065005 kubelet[2857]: E0311 01:26:51.063567 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:51.065005 kubelet[2857]: W0311 01:26:51.063580 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:51.065005 kubelet[2857]: E0311 01:26:51.063593 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:51.065005 kubelet[2857]: I0311 01:26:51.063754 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/bd5d987a-c72c-4abd-9acd-7762cad20217-varrun\") pod \"csi-node-driver-45sfz\" (UID: \"bd5d987a-c72c-4abd-9acd-7762cad20217\") " pod="calico-system/csi-node-driver-45sfz" Mar 11 01:26:51.065005 kubelet[2857]: E0311 01:26:51.064573 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:51.065395 kubelet[2857]: W0311 01:26:51.064590 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:51.065395 kubelet[2857]: E0311 01:26:51.064608 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:51.065395 kubelet[2857]: I0311 01:26:51.064633 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bd5d987a-c72c-4abd-9acd-7762cad20217-registration-dir\") pod \"csi-node-driver-45sfz\" (UID: \"bd5d987a-c72c-4abd-9acd-7762cad20217\") " pod="calico-system/csi-node-driver-45sfz" Mar 11 01:26:51.067775 containerd[1579]: time="2026-03-11T01:26:51.067733927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g5psr,Uid:4307cd87-cda7-4045-8bd1-36c5af7168c5,Namespace:calico-system,Attempt:0,}" Mar 11 01:26:51.074996 kubelet[2857]: E0311 01:26:51.070055 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:51.074996 kubelet[2857]: W0311 01:26:51.070093 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:51.074996 kubelet[2857]: E0311 01:26:51.070114 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:51.074996 kubelet[2857]: I0311 01:26:51.074220 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bd5d987a-c72c-4abd-9acd-7762cad20217-socket-dir\") pod \"csi-node-driver-45sfz\" (UID: \"bd5d987a-c72c-4abd-9acd-7762cad20217\") " pod="calico-system/csi-node-driver-45sfz" Mar 11 01:26:51.081608 kubelet[2857]: E0311 01:26:51.081533 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:51.081608 kubelet[2857]: W0311 01:26:51.081580 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:51.081608 kubelet[2857]: E0311 01:26:51.081612 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:51.085239 kubelet[2857]: E0311 01:26:51.085093 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:51.085239 kubelet[2857]: W0311 01:26:51.085113 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:51.085239 kubelet[2857]: E0311 01:26:51.085224 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:51.091973 kubelet[2857]: E0311 01:26:51.090600 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:51.091973 kubelet[2857]: W0311 01:26:51.090618 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:51.091973 kubelet[2857]: E0311 01:26:51.090639 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:51.096862 kubelet[2857]: E0311 01:26:51.095627 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:51.096862 kubelet[2857]: W0311 01:26:51.095662 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:51.096862 kubelet[2857]: E0311 01:26:51.095681 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:51.096862 kubelet[2857]: E0311 01:26:51.096250 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:51.096862 kubelet[2857]: W0311 01:26:51.096264 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:51.096862 kubelet[2857]: E0311 01:26:51.096283 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:51.097620 systemd[1]: Started cri-containerd-45618bbf7fb641674bc1f0a1def907987f122b826f9a3db66e4620b38a09da3f.scope - libcontainer container 45618bbf7fb641674bc1f0a1def907987f122b826f9a3db66e4620b38a09da3f. Mar 11 01:26:51.099517 kubelet[2857]: E0311 01:26:51.099485 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:51.099517 kubelet[2857]: W0311 01:26:51.099504 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:51.099720 kubelet[2857]: E0311 01:26:51.099523 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:51.102718 kubelet[2857]: E0311 01:26:51.102240 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:51.102718 kubelet[2857]: W0311 01:26:51.102271 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:51.102718 kubelet[2857]: E0311 01:26:51.102287 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:51.106007 kubelet[2857]: E0311 01:26:51.105807 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:51.106007 kubelet[2857]: W0311 01:26:51.105841 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:51.106007 kubelet[2857]: E0311 01:26:51.105857 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:51.180522 containerd[1579]: time="2026-03-11T01:26:51.179973150Z" level=info msg="connecting to shim b615ea67cb5cfa8ec597d2f8ac0224720bda5e81d089c0e18916885b30f6bbe0" address="unix:///run/containerd/s/a215e501455b421e76609bb9bd3dd31123b7f5ad550a3ad6a4c76388a1cc6fb5" namespace=k8s.io protocol=ttrpc version=3 Mar 11 01:26:51.186004 kubelet[2857]: E0311 01:26:51.183786 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:51.186004 kubelet[2857]: W0311 01:26:51.183826 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:51.186004 kubelet[2857]: E0311 01:26:51.183896 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:51.206007 kubelet[2857]: E0311 01:26:51.200825 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:51.206007 kubelet[2857]: W0311 01:26:51.200891 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:51.206007 kubelet[2857]: E0311 01:26:51.200927 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:51.212448 kubelet[2857]: E0311 01:26:51.210539 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:51.212448 kubelet[2857]: W0311 01:26:51.210564 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:51.212448 kubelet[2857]: E0311 01:26:51.210597 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:51.216933 kubelet[2857]: E0311 01:26:51.216657 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:51.216933 kubelet[2857]: W0311 01:26:51.216678 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:51.216933 kubelet[2857]: E0311 01:26:51.216704 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:51.217460 kubelet[2857]: E0311 01:26:51.217385 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:51.217516 kubelet[2857]: W0311 01:26:51.217403 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:51.217716 kubelet[2857]: E0311 01:26:51.217501 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:51.222199 kubelet[2857]: E0311 01:26:51.219010 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:51.222199 kubelet[2857]: W0311 01:26:51.219253 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:51.222199 kubelet[2857]: E0311 01:26:51.219274 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:51.226107 kubelet[2857]: E0311 01:26:51.225627 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:51.226107 kubelet[2857]: W0311 01:26:51.225704 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:51.226107 kubelet[2857]: E0311 01:26:51.225729 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:51.227752 kubelet[2857]: E0311 01:26:51.226605 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:51.229036 kubelet[2857]: W0311 01:26:51.227916 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:51.231519 kubelet[2857]: E0311 01:26:51.229247 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:51.235972 kubelet[2857]: E0311 01:26:51.235512 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:51.235972 kubelet[2857]: W0311 01:26:51.235554 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:51.235972 kubelet[2857]: E0311 01:26:51.235577 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:51.240870 kubelet[2857]: E0311 01:26:51.236688 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:51.240870 kubelet[2857]: W0311 01:26:51.236703 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:51.240870 kubelet[2857]: E0311 01:26:51.236719 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:51.242590 kubelet[2857]: E0311 01:26:51.242500 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:51.242590 kubelet[2857]: W0311 01:26:51.242590 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:51.243092 kubelet[2857]: E0311 01:26:51.242610 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:51.243461 kubelet[2857]: E0311 01:26:51.243357 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:51.243600 kubelet[2857]: W0311 01:26:51.243464 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:51.243600 kubelet[2857]: E0311 01:26:51.243482 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:51.244203 kubelet[2857]: E0311 01:26:51.244118 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:51.244327 kubelet[2857]: W0311 01:26:51.244206 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:51.244327 kubelet[2857]: E0311 01:26:51.244224 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:51.244922 kubelet[2857]: E0311 01:26:51.244725 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:51.244922 kubelet[2857]: W0311 01:26:51.244736 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:51.244922 kubelet[2857]: E0311 01:26:51.244748 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:51.248202 kubelet[2857]: E0311 01:26:51.248117 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:51.248202 kubelet[2857]: W0311 01:26:51.248193 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:51.248523 kubelet[2857]: E0311 01:26:51.248213 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:51.248974 kubelet[2857]: E0311 01:26:51.248728 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:51.248974 kubelet[2857]: W0311 01:26:51.248744 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:51.248974 kubelet[2857]: E0311 01:26:51.248759 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:51.249356 kubelet[2857]: E0311 01:26:51.249080 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:51.249356 kubelet[2857]: W0311 01:26:51.249091 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:51.249356 kubelet[2857]: E0311 01:26:51.249105 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:51.250069 kubelet[2857]: E0311 01:26:51.249456 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:51.250069 kubelet[2857]: W0311 01:26:51.249466 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:51.250069 kubelet[2857]: E0311 01:26:51.249481 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:51.250069 kubelet[2857]: E0311 01:26:51.249736 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:51.250069 kubelet[2857]: W0311 01:26:51.249751 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:51.250069 kubelet[2857]: E0311 01:26:51.249763 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:51.251207 kubelet[2857]: E0311 01:26:51.250193 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:51.251207 kubelet[2857]: W0311 01:26:51.250204 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:51.251207 kubelet[2857]: E0311 01:26:51.250217 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:51.251207 kubelet[2857]: E0311 01:26:51.250462 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:51.251207 kubelet[2857]: W0311 01:26:51.250473 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:51.251207 kubelet[2857]: E0311 01:26:51.250484 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:51.251207 kubelet[2857]: E0311 01:26:51.250871 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:51.251207 kubelet[2857]: W0311 01:26:51.250881 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:51.251207 kubelet[2857]: E0311 01:26:51.250893 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:51.269975 kubelet[2857]: E0311 01:26:51.269508 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:51.279497 kubelet[2857]: W0311 01:26:51.269707 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:51.282444 kubelet[2857]: E0311 01:26:51.282098 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:51.282554 kubelet[2857]: E0311 01:26:51.282515 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:51.282554 kubelet[2857]: W0311 01:26:51.282531 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:51.282554 kubelet[2857]: E0311 01:26:51.282547 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:51.292480 kubelet[2857]: E0311 01:26:51.292394 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:51.292480 kubelet[2857]: W0311 01:26:51.292426 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:51.292480 kubelet[2857]: E0311 01:26:51.292460 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:51.417927 systemd[1]: Started cri-containerd-b615ea67cb5cfa8ec597d2f8ac0224720bda5e81d089c0e18916885b30f6bbe0.scope - libcontainer container b615ea67cb5cfa8ec597d2f8ac0224720bda5e81d089c0e18916885b30f6bbe0. Mar 11 01:26:51.439846 containerd[1579]: time="2026-03-11T01:26:51.439572409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c9d8bbf4b-cbjdl,Uid:b705fd9c-03d5-4bf2-8d68-37795c9779ea,Namespace:calico-system,Attempt:0,} returns sandbox id \"45618bbf7fb641674bc1f0a1def907987f122b826f9a3db66e4620b38a09da3f\"" Mar 11 01:26:51.448906 kubelet[2857]: E0311 01:26:51.447567 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:26:51.462497 containerd[1579]: time="2026-03-11T01:26:51.462373031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 11 01:26:51.467401 kubelet[2857]: E0311 01:26:51.464122 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:51.467401 kubelet[2857]: W0311 01:26:51.464197 2857 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:51.467401 kubelet[2857]: E0311 01:26:51.464226 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:51.640438 containerd[1579]: time="2026-03-11T01:26:51.639357124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g5psr,Uid:4307cd87-cda7-4045-8bd1-36c5af7168c5,Namespace:calico-system,Attempt:0,} returns sandbox id \"b615ea67cb5cfa8ec597d2f8ac0224720bda5e81d089c0e18916885b30f6bbe0\"" Mar 11 01:26:52.228723 kubelet[2857]: E0311 01:26:52.221478 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-45sfz" podUID="bd5d987a-c72c-4abd-9acd-7762cad20217" Mar 11 01:26:52.802048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3690103012.mount: Deactivated successfully. 
Mar 11 01:26:54.237495 kubelet[2857]: E0311 01:26:54.236793 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-45sfz" podUID="bd5d987a-c72c-4abd-9acd-7762cad20217" Mar 11 01:26:56.235078 kubelet[2857]: E0311 01:26:56.234438 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-45sfz" podUID="bd5d987a-c72c-4abd-9acd-7762cad20217" Mar 11 01:26:57.471995 containerd[1579]: time="2026-03-11T01:26:57.471610481Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:26:57.474107 containerd[1579]: time="2026-03-11T01:26:57.474025840Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Mar 11 01:26:57.477881 containerd[1579]: time="2026-03-11T01:26:57.477841965Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:26:57.484989 containerd[1579]: time="2026-03-11T01:26:57.484922632Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:26:57.486715 containerd[1579]: time="2026-03-11T01:26:57.485615569Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag 
\"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 6.022605123s" Mar 11 01:26:57.486715 containerd[1579]: time="2026-03-11T01:26:57.486014535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 11 01:26:57.488509 containerd[1579]: time="2026-03-11T01:26:57.487806848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 11 01:26:57.517548 containerd[1579]: time="2026-03-11T01:26:57.517456320Z" level=info msg="CreateContainer within sandbox \"45618bbf7fb641674bc1f0a1def907987f122b826f9a3db66e4620b38a09da3f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 11 01:26:57.572795 containerd[1579]: time="2026-03-11T01:26:57.565662308Z" level=info msg="Container a37979dd2451c51beebe5ff3272fc31c9d061fb18b40f78688aae9cdcc96aa9e: CDI devices from CRI Config.CDIDevices: []" Mar 11 01:26:57.615955 containerd[1579]: time="2026-03-11T01:26:57.615489112Z" level=info msg="CreateContainer within sandbox \"45618bbf7fb641674bc1f0a1def907987f122b826f9a3db66e4620b38a09da3f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a37979dd2451c51beebe5ff3272fc31c9d061fb18b40f78688aae9cdcc96aa9e\"" Mar 11 01:26:57.624918 containerd[1579]: time="2026-03-11T01:26:57.622238245Z" level=info msg="StartContainer for \"a37979dd2451c51beebe5ff3272fc31c9d061fb18b40f78688aae9cdcc96aa9e\"" Mar 11 01:26:57.627441 containerd[1579]: time="2026-03-11T01:26:57.626774016Z" level=info msg="connecting to shim a37979dd2451c51beebe5ff3272fc31c9d061fb18b40f78688aae9cdcc96aa9e" address="unix:///run/containerd/s/de9ed693257d2e402672371fbdd6dfb409370beb2837a9ab47de7dd7d3f78753" protocol=ttrpc version=3 Mar 11 01:26:57.708600 systemd[1]: Started 
cri-containerd-a37979dd2451c51beebe5ff3272fc31c9d061fb18b40f78688aae9cdcc96aa9e.scope - libcontainer container a37979dd2451c51beebe5ff3272fc31c9d061fb18b40f78688aae9cdcc96aa9e. Mar 11 01:26:58.009667 containerd[1579]: time="2026-03-11T01:26:58.007452955Z" level=info msg="StartContainer for \"a37979dd2451c51beebe5ff3272fc31c9d061fb18b40f78688aae9cdcc96aa9e\" returns successfully" Mar 11 01:26:58.243677 kubelet[2857]: E0311 01:26:58.241809 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-45sfz" podUID="bd5d987a-c72c-4abd-9acd-7762cad20217" Mar 11 01:26:58.248746 kubelet[2857]: E0311 01:26:58.248646 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:26:58.352633 kubelet[2857]: E0311 01:26:58.351616 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:58.352633 kubelet[2857]: W0311 01:26:58.351643 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:58.352633 kubelet[2857]: E0311 01:26:58.351673 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:58.355690 kubelet[2857]: E0311 01:26:58.353247 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:58.355690 kubelet[2857]: W0311 01:26:58.353264 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:58.355690 kubelet[2857]: E0311 01:26:58.353280 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:58.355690 kubelet[2857]: E0311 01:26:58.353965 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:58.355690 kubelet[2857]: W0311 01:26:58.353979 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:58.355690 kubelet[2857]: E0311 01:26:58.353994 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:58.355690 kubelet[2857]: E0311 01:26:58.354359 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:58.355690 kubelet[2857]: W0311 01:26:58.354370 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:58.355690 kubelet[2857]: E0311 01:26:58.354384 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:58.357735 kubelet[2857]: E0311 01:26:58.357074 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:58.357735 kubelet[2857]: W0311 01:26:58.357103 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:58.357735 kubelet[2857]: E0311 01:26:58.357117 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:58.362988 kubelet[2857]: E0311 01:26:58.361870 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:58.362988 kubelet[2857]: W0311 01:26:58.361893 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:58.362988 kubelet[2857]: E0311 01:26:58.361920 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:58.370702 kubelet[2857]: E0311 01:26:58.370019 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:58.370702 kubelet[2857]: W0311 01:26:58.370480 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:58.372196 kubelet[2857]: E0311 01:26:58.371658 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:58.373782 kubelet[2857]: E0311 01:26:58.373288 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:58.373782 kubelet[2857]: W0311 01:26:58.373308 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:58.373782 kubelet[2857]: E0311 01:26:58.373329 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:58.377956 kubelet[2857]: E0311 01:26:58.377665 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:58.377956 kubelet[2857]: W0311 01:26:58.377684 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:58.377956 kubelet[2857]: E0311 01:26:58.377708 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:58.378836 kubelet[2857]: E0311 01:26:58.378619 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:58.378836 kubelet[2857]: W0311 01:26:58.378634 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:58.378836 kubelet[2857]: E0311 01:26:58.378649 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:58.379652 kubelet[2857]: E0311 01:26:58.379503 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:58.379652 kubelet[2857]: W0311 01:26:58.379513 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:58.379652 kubelet[2857]: E0311 01:26:58.379555 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:58.381639 kubelet[2857]: E0311 01:26:58.381263 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:58.381639 kubelet[2857]: W0311 01:26:58.381297 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:58.381639 kubelet[2857]: E0311 01:26:58.381313 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:58.382857 kubelet[2857]: E0311 01:26:58.382735 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:58.382857 kubelet[2857]: W0311 01:26:58.382764 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:58.382857 kubelet[2857]: E0311 01:26:58.382779 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:58.383441 kubelet[2857]: E0311 01:26:58.383352 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:58.383441 kubelet[2857]: W0311 01:26:58.383385 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:58.383441 kubelet[2857]: E0311 01:26:58.383402 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:58.384799 kubelet[2857]: E0311 01:26:58.384764 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:58.384799 kubelet[2857]: W0311 01:26:58.384793 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:58.384799 kubelet[2857]: E0311 01:26:58.384808 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:58.387317 kubelet[2857]: E0311 01:26:58.385838 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:58.387317 kubelet[2857]: W0311 01:26:58.385853 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:58.387317 kubelet[2857]: E0311 01:26:58.385867 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:58.388397 kubelet[2857]: E0311 01:26:58.387661 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:58.388397 kubelet[2857]: W0311 01:26:58.387677 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:58.388397 kubelet[2857]: E0311 01:26:58.387690 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:58.389791 kubelet[2857]: E0311 01:26:58.389309 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:58.389791 kubelet[2857]: W0311 01:26:58.389322 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:58.394290 kubelet[2857]: E0311 01:26:58.390091 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:58.396925 kubelet[2857]: E0311 01:26:58.394948 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:58.396925 kubelet[2857]: W0311 01:26:58.394971 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:58.396925 kubelet[2857]: E0311 01:26:58.394995 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:58.398186 kubelet[2857]: E0311 01:26:58.398019 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:58.398186 kubelet[2857]: W0311 01:26:58.398080 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:58.398186 kubelet[2857]: E0311 01:26:58.398105 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:58.403722 kubelet[2857]: E0311 01:26:58.402887 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:58.403722 kubelet[2857]: W0311 01:26:58.402998 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:58.403722 kubelet[2857]: E0311 01:26:58.403023 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:58.407932 kubelet[2857]: E0311 01:26:58.407869 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:58.407932 kubelet[2857]: W0311 01:26:58.407907 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:58.407932 kubelet[2857]: E0311 01:26:58.407928 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:58.411736 kubelet[2857]: E0311 01:26:58.410080 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:58.411736 kubelet[2857]: W0311 01:26:58.410096 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:58.411736 kubelet[2857]: E0311 01:26:58.410113 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:58.411736 kubelet[2857]: E0311 01:26:58.411234 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:58.411736 kubelet[2857]: W0311 01:26:58.411248 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:58.411736 kubelet[2857]: E0311 01:26:58.411262 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:58.412306 kubelet[2857]: E0311 01:26:58.412260 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:58.412306 kubelet[2857]: W0311 01:26:58.412275 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:58.412306 kubelet[2857]: E0311 01:26:58.412289 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:58.417605 kubelet[2857]: E0311 01:26:58.417450 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:58.417605 kubelet[2857]: W0311 01:26:58.417493 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:58.417605 kubelet[2857]: E0311 01:26:58.417566 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:58.418753 kubelet[2857]: E0311 01:26:58.418694 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:58.418753 kubelet[2857]: W0311 01:26:58.418710 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:58.418753 kubelet[2857]: E0311 01:26:58.418730 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:58.430606 kubelet[2857]: E0311 01:26:58.430333 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:58.430606 kubelet[2857]: W0311 01:26:58.430349 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:58.430606 kubelet[2857]: E0311 01:26:58.430363 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:58.431783 kubelet[2857]: E0311 01:26:58.431555 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:58.431783 kubelet[2857]: W0311 01:26:58.431578 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:58.431783 kubelet[2857]: E0311 01:26:58.431597 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:58.432374 kubelet[2857]: E0311 01:26:58.432359 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:58.432440 kubelet[2857]: W0311 01:26:58.432429 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:58.432485 kubelet[2857]: E0311 01:26:58.432476 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:58.435587 kubelet[2857]: E0311 01:26:58.435484 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:58.435587 kubelet[2857]: W0311 01:26:58.435559 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:58.435696 kubelet[2857]: E0311 01:26:58.435596 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:58.436875 kubelet[2857]: E0311 01:26:58.436802 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:58.436875 kubelet[2857]: W0311 01:26:58.436843 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:58.436875 kubelet[2857]: E0311 01:26:58.436865 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:58.442702 kubelet[2857]: E0311 01:26:58.440063 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:58.442702 kubelet[2857]: W0311 01:26:58.440111 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:58.442702 kubelet[2857]: E0311 01:26:58.440187 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:58.886195 containerd[1579]: time="2026-03-11T01:26:58.886028925Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:26:58.888115 containerd[1579]: time="2026-03-11T01:26:58.888091275Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Mar 11 01:26:58.891316 containerd[1579]: time="2026-03-11T01:26:58.891089927Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:26:58.898778 containerd[1579]: time="2026-03-11T01:26:58.898418771Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:26:58.899072 containerd[1579]: time="2026-03-11T01:26:58.899024198Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.41118231s" Mar 11 01:26:58.899258 containerd[1579]: time="2026-03-11T01:26:58.899231969Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 11 01:26:58.914439 containerd[1579]: time="2026-03-11T01:26:58.914365819Z" level=info msg="CreateContainer within sandbox \"b615ea67cb5cfa8ec597d2f8ac0224720bda5e81d089c0e18916885b30f6bbe0\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 11 01:26:58.970854 containerd[1579]: time="2026-03-11T01:26:58.967866534Z" level=info msg="Container 001d729be0ca9b178752591f6c877fa14d0155f6519dcde29ac7fea78a40a0bb: CDI devices from CRI Config.CDIDevices: []" Mar 11 01:26:59.010071 containerd[1579]: time="2026-03-11T01:26:59.009961896Z" level=info msg="CreateContainer within sandbox \"b615ea67cb5cfa8ec597d2f8ac0224720bda5e81d089c0e18916885b30f6bbe0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"001d729be0ca9b178752591f6c877fa14d0155f6519dcde29ac7fea78a40a0bb\"" Mar 11 01:26:59.012362 containerd[1579]: time="2026-03-11T01:26:59.011069119Z" level=info msg="StartContainer for \"001d729be0ca9b178752591f6c877fa14d0155f6519dcde29ac7fea78a40a0bb\"" Mar 11 01:26:59.039276 containerd[1579]: time="2026-03-11T01:26:59.015254966Z" level=info msg="connecting to shim 001d729be0ca9b178752591f6c877fa14d0155f6519dcde29ac7fea78a40a0bb" address="unix:///run/containerd/s/a215e501455b421e76609bb9bd3dd31123b7f5ad550a3ad6a4c76388a1cc6fb5" protocol=ttrpc version=3 Mar 11 01:26:59.175920 systemd[1]: Started cri-containerd-001d729be0ca9b178752591f6c877fa14d0155f6519dcde29ac7fea78a40a0bb.scope - libcontainer container 001d729be0ca9b178752591f6c877fa14d0155f6519dcde29ac7fea78a40a0bb. 
Mar 11 01:26:59.278501 kubelet[2857]: E0311 01:26:59.278343 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:26:59.309811 kubelet[2857]: E0311 01:26:59.304497 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:59.309811 kubelet[2857]: W0311 01:26:59.304557 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:59.309811 kubelet[2857]: E0311 01:26:59.304601 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:59.322357 kubelet[2857]: E0311 01:26:59.314576 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:59.322357 kubelet[2857]: W0311 01:26:59.314622 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:59.322357 kubelet[2857]: E0311 01:26:59.314650 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:59.322357 kubelet[2857]: E0311 01:26:59.319050 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:59.322357 kubelet[2857]: W0311 01:26:59.319075 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:59.322357 kubelet[2857]: E0311 01:26:59.319104 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:59.348749 kubelet[2857]: E0311 01:26:59.327619 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:59.348749 kubelet[2857]: W0311 01:26:59.327728 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:59.348749 kubelet[2857]: E0311 01:26:59.327834 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:59.348749 kubelet[2857]: E0311 01:26:59.335773 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:59.348749 kubelet[2857]: W0311 01:26:59.335876 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:59.348749 kubelet[2857]: E0311 01:26:59.336098 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:59.355438 kubelet[2857]: E0311 01:26:59.354038 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:59.355438 kubelet[2857]: W0311 01:26:59.354117 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:59.355438 kubelet[2857]: E0311 01:26:59.354225 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:59.358074 kubelet[2857]: E0311 01:26:59.357654 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:59.358074 kubelet[2857]: W0311 01:26:59.357673 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:59.358074 kubelet[2857]: E0311 01:26:59.357700 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:59.361652 kubelet[2857]: E0311 01:26:59.359223 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:59.361652 kubelet[2857]: W0311 01:26:59.359241 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:59.361652 kubelet[2857]: E0311 01:26:59.359264 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:59.375229 kubelet[2857]: E0311 01:26:59.370439 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:59.375229 kubelet[2857]: W0311 01:26:59.373489 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:59.375229 kubelet[2857]: E0311 01:26:59.373645 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:59.378583 kubelet[2857]: E0311 01:26:59.378393 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:59.378583 kubelet[2857]: W0311 01:26:59.378511 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:59.378707 kubelet[2857]: E0311 01:26:59.378592 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:59.382865 kubelet[2857]: E0311 01:26:59.382525 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:59.382865 kubelet[2857]: W0311 01:26:59.382559 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:59.382865 kubelet[2857]: E0311 01:26:59.382585 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:59.383334 kubelet[2857]: E0311 01:26:59.383276 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:59.383334 kubelet[2857]: W0311 01:26:59.383310 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:59.383334 kubelet[2857]: E0311 01:26:59.383331 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:59.386851 kubelet[2857]: E0311 01:26:59.384228 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:59.386851 kubelet[2857]: W0311 01:26:59.384665 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:59.386851 kubelet[2857]: E0311 01:26:59.385767 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:59.386851 kubelet[2857]: E0311 01:26:59.386230 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:59.386851 kubelet[2857]: W0311 01:26:59.386241 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:59.386851 kubelet[2857]: E0311 01:26:59.386253 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:59.386851 kubelet[2857]: E0311 01:26:59.386672 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:59.386851 kubelet[2857]: W0311 01:26:59.386683 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:59.386851 kubelet[2857]: E0311 01:26:59.386699 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:59.402638 kubelet[2857]: I0311 01:26:59.401244 2857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7c9d8bbf4b-cbjdl" podStartSLOduration=3.375040614 podStartE2EDuration="9.401216962s" podCreationTimestamp="2026-03-11 01:26:50 +0000 UTC" firstStartedPulling="2026-03-11 01:26:51.461433186 +0000 UTC m=+48.140409819" lastFinishedPulling="2026-03-11 01:26:57.487609535 +0000 UTC m=+54.166586167" observedRunningTime="2026-03-11 01:26:58.345223723 +0000 UTC m=+55.024200355" watchObservedRunningTime="2026-03-11 01:26:59.401216962 +0000 UTC m=+56.080193624" Mar 11 01:26:59.402638 kubelet[2857]: E0311 01:26:59.401697 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:59.402638 kubelet[2857]: W0311 01:26:59.401720 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:59.402638 kubelet[2857]: E0311 01:26:59.401743 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:59.407598 kubelet[2857]: E0311 01:26:59.406848 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:59.407598 kubelet[2857]: W0311 01:26:59.406876 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:59.407598 kubelet[2857]: E0311 01:26:59.406902 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:59.434968 kubelet[2857]: E0311 01:26:59.428318 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:59.434968 kubelet[2857]: W0311 01:26:59.428376 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:59.434968 kubelet[2857]: E0311 01:26:59.428548 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:59.434968 kubelet[2857]: E0311 01:26:59.433287 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:59.434968 kubelet[2857]: W0311 01:26:59.433310 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:59.434968 kubelet[2857]: E0311 01:26:59.433336 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:59.445600 kubelet[2857]: E0311 01:26:59.445239 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:59.445600 kubelet[2857]: W0311 01:26:59.445274 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:59.445600 kubelet[2857]: E0311 01:26:59.445309 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:59.445901 kubelet[2857]: E0311 01:26:59.445884 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:59.445999 kubelet[2857]: W0311 01:26:59.445984 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:59.446289 kubelet[2857]: E0311 01:26:59.446268 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:59.446795 kubelet[2857]: E0311 01:26:59.446775 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:59.446875 kubelet[2857]: W0311 01:26:59.446857 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:59.446951 kubelet[2857]: E0311 01:26:59.446932 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:59.447673 kubelet[2857]: E0311 01:26:59.447321 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:59.447880 kubelet[2857]: W0311 01:26:59.447758 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:59.447880 kubelet[2857]: E0311 01:26:59.447782 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:59.452960 kubelet[2857]: E0311 01:26:59.452301 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:59.454542 kubelet[2857]: W0311 01:26:59.453081 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:59.454542 kubelet[2857]: E0311 01:26:59.453195 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:59.462100 kubelet[2857]: E0311 01:26:59.459574 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:59.462100 kubelet[2857]: W0311 01:26:59.459602 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:59.462100 kubelet[2857]: E0311 01:26:59.459630 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:59.464292 kubelet[2857]: E0311 01:26:59.462900 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:59.464292 kubelet[2857]: W0311 01:26:59.462918 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:59.464292 kubelet[2857]: E0311 01:26:59.462939 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:59.464292 kubelet[2857]: E0311 01:26:59.463326 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:59.464292 kubelet[2857]: W0311 01:26:59.463343 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:59.464292 kubelet[2857]: E0311 01:26:59.463356 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:59.470214 kubelet[2857]: E0311 01:26:59.469643 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:59.470214 kubelet[2857]: W0311 01:26:59.469661 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:59.470214 kubelet[2857]: E0311 01:26:59.469681 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:59.473382 kubelet[2857]: E0311 01:26:59.472972 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:59.473382 kubelet[2857]: W0311 01:26:59.472992 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:59.473382 kubelet[2857]: E0311 01:26:59.473010 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:59.473755 kubelet[2857]: E0311 01:26:59.473691 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:59.473755 kubelet[2857]: W0311 01:26:59.473726 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:59.483550 kubelet[2857]: E0311 01:26:59.473742 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:59.483550 kubelet[2857]: E0311 01:26:59.482357 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:59.483550 kubelet[2857]: W0311 01:26:59.482376 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:59.483550 kubelet[2857]: E0311 01:26:59.482437 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:59.490190 kubelet[2857]: E0311 01:26:59.487681 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:59.490190 kubelet[2857]: W0311 01:26:59.487733 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:59.490190 kubelet[2857]: E0311 01:26:59.487770 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:26:59.516056 kubelet[2857]: E0311 01:26:59.515837 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:26:59.516609 kubelet[2857]: W0311 01:26:59.515960 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:26:59.516609 kubelet[2857]: E0311 01:26:59.516233 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:26:59.671572 containerd[1579]: time="2026-03-11T01:26:59.671246566Z" level=info msg="StartContainer for \"001d729be0ca9b178752591f6c877fa14d0155f6519dcde29ac7fea78a40a0bb\" returns successfully" Mar 11 01:26:59.690028 systemd[1]: cri-containerd-001d729be0ca9b178752591f6c877fa14d0155f6519dcde29ac7fea78a40a0bb.scope: Deactivated successfully. Mar 11 01:26:59.710442 containerd[1579]: time="2026-03-11T01:26:59.703558114Z" level=info msg="received container exit event container_id:\"001d729be0ca9b178752591f6c877fa14d0155f6519dcde29ac7fea78a40a0bb\" id:\"001d729be0ca9b178752591f6c877fa14d0155f6519dcde29ac7fea78a40a0bb\" pid:3567 exited_at:{seconds:1773192419 nanos:702279122}" Mar 11 01:26:59.837288 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-001d729be0ca9b178752591f6c877fa14d0155f6519dcde29ac7fea78a40a0bb-rootfs.mount: Deactivated successfully. 
Mar 11 01:27:00.327267 kubelet[2857]: E0311 01:27:00.327057 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-45sfz" podUID="bd5d987a-c72c-4abd-9acd-7762cad20217" Mar 11 01:27:00.354162 kubelet[2857]: E0311 01:27:00.354040 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:27:01.386566 containerd[1579]: time="2026-03-11T01:27:01.385649170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 11 01:27:02.370953 kubelet[2857]: E0311 01:27:02.369653 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-45sfz" podUID="bd5d987a-c72c-4abd-9acd-7762cad20217" Mar 11 01:27:04.242691 kubelet[2857]: E0311 01:27:04.241662 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-45sfz" podUID="bd5d987a-c72c-4abd-9acd-7762cad20217" Mar 11 01:27:06.215952 kubelet[2857]: E0311 01:27:06.215483 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-45sfz" podUID="bd5d987a-c72c-4abd-9acd-7762cad20217" Mar 11 01:27:08.381788 kubelet[2857]: E0311 01:27:08.379683 2857 pod_workers.go:1324] 
"Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-45sfz" podUID="bd5d987a-c72c-4abd-9acd-7762cad20217" Mar 11 01:27:10.227427 kubelet[2857]: E0311 01:27:10.227344 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-45sfz" podUID="bd5d987a-c72c-4abd-9acd-7762cad20217" Mar 11 01:27:12.231226 kubelet[2857]: E0311 01:27:12.224648 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-45sfz" podUID="bd5d987a-c72c-4abd-9acd-7762cad20217" Mar 11 01:27:14.478866 kubelet[2857]: E0311 01:27:14.476820 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-45sfz" podUID="bd5d987a-c72c-4abd-9acd-7762cad20217" Mar 11 01:27:16.302473 kubelet[2857]: E0311 01:27:16.298300 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-45sfz" podUID="bd5d987a-c72c-4abd-9acd-7762cad20217" Mar 11 01:27:18.233221 kubelet[2857]: E0311 01:27:18.232646 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-45sfz" podUID="bd5d987a-c72c-4abd-9acd-7762cad20217" Mar 11 01:27:20.291080 kubelet[2857]: E0311 01:27:20.290311 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-45sfz" podUID="bd5d987a-c72c-4abd-9acd-7762cad20217" Mar 11 01:27:22.256808 kubelet[2857]: E0311 01:27:22.255978 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-45sfz" podUID="bd5d987a-c72c-4abd-9acd-7762cad20217" Mar 11 01:27:23.272656 kubelet[2857]: E0311 01:27:23.266698 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:27:24.220576 kubelet[2857]: E0311 01:27:24.215917 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-45sfz" podUID="bd5d987a-c72c-4abd-9acd-7762cad20217" Mar 11 01:27:26.261437 kubelet[2857]: E0311 01:27:26.261070 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-45sfz" podUID="bd5d987a-c72c-4abd-9acd-7762cad20217" Mar 11 01:27:28.216584 
kubelet[2857]: E0311 01:27:28.216445 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-45sfz" podUID="bd5d987a-c72c-4abd-9acd-7762cad20217" Mar 11 01:27:29.216726 kubelet[2857]: E0311 01:27:29.216655 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:27:30.227245 kubelet[2857]: E0311 01:27:30.226564 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-45sfz" podUID="bd5d987a-c72c-4abd-9acd-7762cad20217" Mar 11 01:27:30.227245 kubelet[2857]: E0311 01:27:30.227045 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:27:32.229736 kubelet[2857]: E0311 01:27:32.229572 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-45sfz" podUID="bd5d987a-c72c-4abd-9acd-7762cad20217" Mar 11 01:27:33.218391 kubelet[2857]: E0311 01:27:33.217077 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:27:34.297496 kubelet[2857]: E0311 01:27:34.297421 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-45sfz" podUID="bd5d987a-c72c-4abd-9acd-7762cad20217" Mar 11 01:27:35.331978 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1352186758.mount: Deactivated successfully. Mar 11 01:27:35.479322 containerd[1579]: time="2026-03-11T01:27:35.479016979Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:27:35.484345 containerd[1579]: time="2026-03-11T01:27:35.483872494Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 11 01:27:35.488733 containerd[1579]: time="2026-03-11T01:27:35.487610638Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:27:35.505117 containerd[1579]: time="2026-03-11T01:27:35.505060794Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:27:35.506985 containerd[1579]: time="2026-03-11T01:27:35.506591702Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 34.120872896s" Mar 11 01:27:35.506985 containerd[1579]: time="2026-03-11T01:27:35.506631113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 11 
01:27:35.541695 containerd[1579]: time="2026-03-11T01:27:35.539391853Z" level=info msg="CreateContainer within sandbox \"b615ea67cb5cfa8ec597d2f8ac0224720bda5e81d089c0e18916885b30f6bbe0\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 11 01:27:35.564307 containerd[1579]: time="2026-03-11T01:27:35.564118463Z" level=info msg="Container e4b4d75354090736396f7a1dc2ff9674953cf53787bd6f7da67bb843e2d258e9: CDI devices from CRI Config.CDIDevices: []" Mar 11 01:27:35.705335 containerd[1579]: time="2026-03-11T01:27:35.704769849Z" level=info msg="CreateContainer within sandbox \"b615ea67cb5cfa8ec597d2f8ac0224720bda5e81d089c0e18916885b30f6bbe0\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"e4b4d75354090736396f7a1dc2ff9674953cf53787bd6f7da67bb843e2d258e9\"" Mar 11 01:27:35.709745 containerd[1579]: time="2026-03-11T01:27:35.709687360Z" level=info msg="StartContainer for \"e4b4d75354090736396f7a1dc2ff9674953cf53787bd6f7da67bb843e2d258e9\"" Mar 11 01:27:35.714632 containerd[1579]: time="2026-03-11T01:27:35.714463308Z" level=info msg="connecting to shim e4b4d75354090736396f7a1dc2ff9674953cf53787bd6f7da67bb843e2d258e9" address="unix:///run/containerd/s/a215e501455b421e76609bb9bd3dd31123b7f5ad550a3ad6a4c76388a1cc6fb5" protocol=ttrpc version=3 Mar 11 01:27:35.830843 systemd[1]: Started cri-containerd-e4b4d75354090736396f7a1dc2ff9674953cf53787bd6f7da67bb843e2d258e9.scope - libcontainer container e4b4d75354090736396f7a1dc2ff9674953cf53787bd6f7da67bb843e2d258e9. Mar 11 01:27:36.033113 containerd[1579]: time="2026-03-11T01:27:36.032931049Z" level=info msg="StartContainer for \"e4b4d75354090736396f7a1dc2ff9674953cf53787bd6f7da67bb843e2d258e9\" returns successfully" Mar 11 01:27:36.169819 systemd[1]: cri-containerd-e4b4d75354090736396f7a1dc2ff9674953cf53787bd6f7da67bb843e2d258e9.scope: Deactivated successfully. 
Mar 11 01:27:36.187292 containerd[1579]: time="2026-03-11T01:27:36.187104155Z" level=info msg="received container exit event container_id:\"e4b4d75354090736396f7a1dc2ff9674953cf53787bd6f7da67bb843e2d258e9\" id:\"e4b4d75354090736396f7a1dc2ff9674953cf53787bd6f7da67bb843e2d258e9\" pid:3666 exited_at:{seconds:1773192456 nanos:174752872}" Mar 11 01:27:36.219260 kubelet[2857]: E0311 01:27:36.219068 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-45sfz" podUID="bd5d987a-c72c-4abd-9acd-7762cad20217" Mar 11 01:27:36.333431 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4b4d75354090736396f7a1dc2ff9674953cf53787bd6f7da67bb843e2d258e9-rootfs.mount: Deactivated successfully. Mar 11 01:27:36.968409 containerd[1579]: time="2026-03-11T01:27:36.967436830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 11 01:27:38.219058 kubelet[2857]: E0311 01:27:38.217373 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-45sfz" podUID="bd5d987a-c72c-4abd-9acd-7762cad20217" Mar 11 01:27:40.216306 kubelet[2857]: E0311 01:27:40.215857 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-45sfz" podUID="bd5d987a-c72c-4abd-9acd-7762cad20217" Mar 11 01:27:42.382675 kubelet[2857]: E0311 01:27:42.380001 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-45sfz" podUID="bd5d987a-c72c-4abd-9acd-7762cad20217" Mar 11 01:27:44.240183 kubelet[2857]: E0311 01:27:44.216354 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-45sfz" podUID="bd5d987a-c72c-4abd-9acd-7762cad20217" Mar 11 01:27:46.242385 kubelet[2857]: E0311 01:27:46.242102 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-45sfz" podUID="bd5d987a-c72c-4abd-9acd-7762cad20217" Mar 11 01:27:48.264292 kubelet[2857]: E0311 01:27:48.257036 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-45sfz" podUID="bd5d987a-c72c-4abd-9acd-7762cad20217" Mar 11 01:27:50.229706 kubelet[2857]: E0311 01:27:50.227622 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-45sfz" podUID="bd5d987a-c72c-4abd-9acd-7762cad20217" Mar 11 01:27:51.065914 containerd[1579]: time="2026-03-11T01:27:51.062514137Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:27:51.100938 containerd[1579]: 
time="2026-03-11T01:27:51.100574358Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 11 01:27:51.120840 containerd[1579]: time="2026-03-11T01:27:51.118211761Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:27:51.188927 containerd[1579]: time="2026-03-11T01:27:51.188644019Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:27:51.190843 containerd[1579]: time="2026-03-11T01:27:51.190303649Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 14.222814532s" Mar 11 01:27:51.190843 containerd[1579]: time="2026-03-11T01:27:51.190340457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 11 01:27:51.221447 containerd[1579]: time="2026-03-11T01:27:51.220854076Z" level=info msg="CreateContainer within sandbox \"b615ea67cb5cfa8ec597d2f8ac0224720bda5e81d089c0e18916885b30f6bbe0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 11 01:27:51.358245 containerd[1579]: time="2026-03-11T01:27:51.355994944Z" level=info msg="Container 9194650cd096ddf07511af825c2a04fb04fb27855fe2750d4bbdd516911cafad: CDI devices from CRI Config.CDIDevices: []" Mar 11 01:27:51.356784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount83138417.mount: Deactivated successfully. 
Mar 11 01:27:51.423548 containerd[1579]: time="2026-03-11T01:27:51.423359759Z" level=info msg="CreateContainer within sandbox \"b615ea67cb5cfa8ec597d2f8ac0224720bda5e81d089c0e18916885b30f6bbe0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9194650cd096ddf07511af825c2a04fb04fb27855fe2750d4bbdd516911cafad\"" Mar 11 01:27:51.444194 containerd[1579]: time="2026-03-11T01:27:51.444034320Z" level=info msg="StartContainer for \"9194650cd096ddf07511af825c2a04fb04fb27855fe2750d4bbdd516911cafad\"" Mar 11 01:27:51.451218 containerd[1579]: time="2026-03-11T01:27:51.451062226Z" level=info msg="connecting to shim 9194650cd096ddf07511af825c2a04fb04fb27855fe2750d4bbdd516911cafad" address="unix:///run/containerd/s/a215e501455b421e76609bb9bd3dd31123b7f5ad550a3ad6a4c76388a1cc6fb5" protocol=ttrpc version=3 Mar 11 01:27:51.632405 systemd[1]: Started cri-containerd-9194650cd096ddf07511af825c2a04fb04fb27855fe2750d4bbdd516911cafad.scope - libcontainer container 9194650cd096ddf07511af825c2a04fb04fb27855fe2750d4bbdd516911cafad. 
Mar 11 01:27:52.279339 kubelet[2857]: E0311 01:27:52.273816 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-45sfz" podUID="bd5d987a-c72c-4abd-9acd-7762cad20217" Mar 11 01:27:52.495756 containerd[1579]: time="2026-03-11T01:27:52.494412868Z" level=info msg="StartContainer for \"9194650cd096ddf07511af825c2a04fb04fb27855fe2750d4bbdd516911cafad\" returns successfully" Mar 11 01:27:54.218240 kubelet[2857]: E0311 01:27:54.217994 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-45sfz" podUID="bd5d987a-c72c-4abd-9acd-7762cad20217" Mar 11 01:27:57.307484 kubelet[2857]: E0311 01:27:57.289936 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-45sfz" podUID="bd5d987a-c72c-4abd-9acd-7762cad20217" Mar 11 01:27:59.216462 kubelet[2857]: E0311 01:27:59.216374 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-45sfz" podUID="bd5d987a-c72c-4abd-9acd-7762cad20217" Mar 11 01:28:00.686421 systemd[1]: cri-containerd-9194650cd096ddf07511af825c2a04fb04fb27855fe2750d4bbdd516911cafad.scope: Deactivated successfully. 
Mar 11 01:28:00.686924 systemd[1]: cri-containerd-9194650cd096ddf07511af825c2a04fb04fb27855fe2750d4bbdd516911cafad.scope: Consumed 1.899s CPU time, 186.6M memory peak, 3.6M read from disk, 177M written to disk. Mar 11 01:28:00.889611 containerd[1579]: time="2026-03-11T01:28:00.881030675Z" level=info msg="received container exit event container_id:\"9194650cd096ddf07511af825c2a04fb04fb27855fe2750d4bbdd516911cafad\" id:\"9194650cd096ddf07511af825c2a04fb04fb27855fe2750d4bbdd516911cafad\" pid:3725 exited_at:{seconds:1773192480 nanos:874360451}" Mar 11 01:28:00.921431 kubelet[2857]: I0311 01:28:00.919076 2857 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Mar 11 01:28:01.294295 systemd[1]: Created slice kubepods-besteffort-podbd5d987a_c72c_4abd_9acd_7762cad20217.slice - libcontainer container kubepods-besteffort-podbd5d987a_c72c_4abd_9acd_7762cad20217.slice. Mar 11 01:28:02.061438 containerd[1579]: time="2026-03-11T01:28:02.059183813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-45sfz,Uid:bd5d987a-c72c-4abd-9acd-7762cad20217,Namespace:calico-system,Attempt:0,}" Mar 11 01:28:02.174618 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9194650cd096ddf07511af825c2a04fb04fb27855fe2750d4bbdd516911cafad-rootfs.mount: Deactivated successfully. 
Mar 11 01:28:02.750380 kubelet[2857]: I0311 01:28:02.714314 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9424aa5f-c89c-4c89-a1c1-04c856e2b5f0-config-volume\") pod \"coredns-66bc5c9577-284rx\" (UID: \"9424aa5f-c89c-4c89-a1c1-04c856e2b5f0\") " pod="kube-system/coredns-66bc5c9577-284rx" Mar 11 01:28:02.750380 kubelet[2857]: I0311 01:28:02.714664 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttfm9\" (UniqueName: \"kubernetes.io/projected/9424aa5f-c89c-4c89-a1c1-04c856e2b5f0-kube-api-access-ttfm9\") pod \"coredns-66bc5c9577-284rx\" (UID: \"9424aa5f-c89c-4c89-a1c1-04c856e2b5f0\") " pod="kube-system/coredns-66bc5c9577-284rx" Mar 11 01:28:02.797271 systemd[1]: Created slice kubepods-burstable-pod9424aa5f_c89c_4c89_a1c1_04c856e2b5f0.slice - libcontainer container kubepods-burstable-pod9424aa5f_c89c_4c89_a1c1_04c856e2b5f0.slice. 
Mar 11 01:28:02.819744 kubelet[2857]: I0311 01:28:02.815015 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cd75dd3d-50be-4617-ad26-c09e377b47a8-calico-apiserver-certs\") pod \"calico-apiserver-7477699c4c-wwhh9\" (UID: \"cd75dd3d-50be-4617-ad26-c09e377b47a8\") " pod="calico-system/calico-apiserver-7477699c4c-wwhh9" Mar 11 01:28:02.820973 kubelet[2857]: I0311 01:28:02.820941 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mblhg\" (UniqueName: \"kubernetes.io/projected/cd75dd3d-50be-4617-ad26-c09e377b47a8-kube-api-access-mblhg\") pod \"calico-apiserver-7477699c4c-wwhh9\" (UID: \"cd75dd3d-50be-4617-ad26-c09e377b47a8\") " pod="calico-system/calico-apiserver-7477699c4c-wwhh9" Mar 11 01:28:02.977051 systemd[1]: Created slice kubepods-besteffort-podcd75dd3d_50be_4617_ad26_c09e377b47a8.slice - libcontainer container kubepods-besteffort-podcd75dd3d_50be_4617_ad26_c09e377b47a8.slice. Mar 11 01:28:03.055334 systemd[1]: Created slice kubepods-besteffort-pod2e364242_f62d_48f3_99f5_39ad23651340.slice - libcontainer container kubepods-besteffort-pod2e364242_f62d_48f3_99f5_39ad23651340.slice. Mar 11 01:28:03.093580 systemd[1]: Created slice kubepods-besteffort-podd6503f2d_95be_4a07_b817_d3b00d921973.slice - libcontainer container kubepods-besteffort-podd6503f2d_95be_4a07_b817_d3b00d921973.slice. 
Mar 11 01:28:03.108834 kubelet[2857]: I0311 01:28:03.098788 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6f97091f-a4dd-48a5-90c6-376038fd2d9a-calico-apiserver-certs\") pod \"calico-apiserver-7477699c4c-n9gg4\" (UID: \"6f97091f-a4dd-48a5-90c6-376038fd2d9a\") " pod="calico-system/calico-apiserver-7477699c4c-n9gg4" Mar 11 01:28:03.108834 kubelet[2857]: I0311 01:28:03.105468 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5p2w\" (UniqueName: \"kubernetes.io/projected/d6503f2d-95be-4a07-b817-d3b00d921973-kube-api-access-b5p2w\") pod \"calico-kube-controllers-5b7f6c864b-jnn8t\" (UID: \"d6503f2d-95be-4a07-b817-d3b00d921973\") " pod="calico-system/calico-kube-controllers-5b7f6c864b-jnn8t" Mar 11 01:28:03.115922 kubelet[2857]: I0311 01:28:03.115595 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4krfd\" (UniqueName: \"kubernetes.io/projected/28bc3cbd-69fd-4a80-92d0-eceef32616bf-kube-api-access-4krfd\") pod \"goldmane-cccfbd5cf-6mzpb\" (UID: \"28bc3cbd-69fd-4a80-92d0-eceef32616bf\") " pod="calico-system/goldmane-cccfbd5cf-6mzpb" Mar 11 01:28:03.118667 kubelet[2857]: I0311 01:28:03.118549 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/487b1154-bb7d-4368-854f-a2c8c373f6d0-config-volume\") pod \"coredns-66bc5c9577-zf2ch\" (UID: \"487b1154-bb7d-4368-854f-a2c8c373f6d0\") " pod="kube-system/coredns-66bc5c9577-zf2ch" Mar 11 01:28:03.118667 kubelet[2857]: I0311 01:28:03.118643 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2e364242-f62d-48f3-99f5-39ad23651340-whisker-backend-key-pair\") pod 
\"whisker-79d999cf8c-82gjz\" (UID: \"2e364242-f62d-48f3-99f5-39ad23651340\") " pod="calico-system/whisker-79d999cf8c-82gjz" Mar 11 01:28:03.118976 kubelet[2857]: I0311 01:28:03.118710 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6503f2d-95be-4a07-b817-d3b00d921973-tigera-ca-bundle\") pod \"calico-kube-controllers-5b7f6c864b-jnn8t\" (UID: \"d6503f2d-95be-4a07-b817-d3b00d921973\") " pod="calico-system/calico-kube-controllers-5b7f6c864b-jnn8t" Mar 11 01:28:03.118976 kubelet[2857]: I0311 01:28:03.118748 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e364242-f62d-48f3-99f5-39ad23651340-whisker-ca-bundle\") pod \"whisker-79d999cf8c-82gjz\" (UID: \"2e364242-f62d-48f3-99f5-39ad23651340\") " pod="calico-system/whisker-79d999cf8c-82gjz" Mar 11 01:28:03.118976 kubelet[2857]: I0311 01:28:03.118771 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-977ng\" (UniqueName: \"kubernetes.io/projected/2e364242-f62d-48f3-99f5-39ad23651340-kube-api-access-977ng\") pod \"whisker-79d999cf8c-82gjz\" (UID: \"2e364242-f62d-48f3-99f5-39ad23651340\") " pod="calico-system/whisker-79d999cf8c-82gjz" Mar 11 01:28:03.118976 kubelet[2857]: I0311 01:28:03.118793 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28bc3cbd-69fd-4a80-92d0-eceef32616bf-config\") pod \"goldmane-cccfbd5cf-6mzpb\" (UID: \"28bc3cbd-69fd-4a80-92d0-eceef32616bf\") " pod="calico-system/goldmane-cccfbd5cf-6mzpb" Mar 11 01:28:03.118976 kubelet[2857]: I0311 01:28:03.118815 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb87m\" (UniqueName: 
\"kubernetes.io/projected/6f97091f-a4dd-48a5-90c6-376038fd2d9a-kube-api-access-pb87m\") pod \"calico-apiserver-7477699c4c-n9gg4\" (UID: \"6f97091f-a4dd-48a5-90c6-376038fd2d9a\") " pod="calico-system/calico-apiserver-7477699c4c-n9gg4" Mar 11 01:28:03.119227 kubelet[2857]: I0311 01:28:03.118840 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/2e364242-f62d-48f3-99f5-39ad23651340-nginx-config\") pod \"whisker-79d999cf8c-82gjz\" (UID: \"2e364242-f62d-48f3-99f5-39ad23651340\") " pod="calico-system/whisker-79d999cf8c-82gjz" Mar 11 01:28:03.119227 kubelet[2857]: I0311 01:28:03.118858 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28bc3cbd-69fd-4a80-92d0-eceef32616bf-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-6mzpb\" (UID: \"28bc3cbd-69fd-4a80-92d0-eceef32616bf\") " pod="calico-system/goldmane-cccfbd5cf-6mzpb" Mar 11 01:28:03.119227 kubelet[2857]: I0311 01:28:03.118951 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/28bc3cbd-69fd-4a80-92d0-eceef32616bf-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-6mzpb\" (UID: \"28bc3cbd-69fd-4a80-92d0-eceef32616bf\") " pod="calico-system/goldmane-cccfbd5cf-6mzpb" Mar 11 01:28:03.119227 kubelet[2857]: I0311 01:28:03.118976 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r72l\" (UniqueName: \"kubernetes.io/projected/487b1154-bb7d-4368-854f-a2c8c373f6d0-kube-api-access-4r72l\") pod \"coredns-66bc5c9577-zf2ch\" (UID: \"487b1154-bb7d-4368-854f-a2c8c373f6d0\") " pod="kube-system/coredns-66bc5c9577-zf2ch" Mar 11 01:28:03.158417 systemd[1]: Created slice kubepods-besteffort-pod6f97091f_a4dd_48a5_90c6_376038fd2d9a.slice - libcontainer container 
kubepods-besteffort-pod6f97091f_a4dd_48a5_90c6_376038fd2d9a.slice. Mar 11 01:28:03.773057 systemd[1]: Created slice kubepods-besteffort-pod28bc3cbd_69fd_4a80_92d0_eceef32616bf.slice - libcontainer container kubepods-besteffort-pod28bc3cbd_69fd_4a80_92d0_eceef32616bf.slice. Mar 11 01:28:03.800317 containerd[1579]: time="2026-03-11T01:28:03.779587605Z" level=info msg="CreateContainer within sandbox \"b615ea67cb5cfa8ec597d2f8ac0224720bda5e81d089c0e18916885b30f6bbe0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 11 01:28:03.804074 systemd[1]: Created slice kubepods-burstable-pod487b1154_bb7d_4368_854f_a2c8c373f6d0.slice - libcontainer container kubepods-burstable-pod487b1154_bb7d_4368_854f_a2c8c373f6d0.slice. Mar 11 01:28:03.826596 containerd[1579]: time="2026-03-11T01:28:03.826049615Z" level=error msg="Failed to destroy network for sandbox \"9754603e5a6881795fe6cf8bb5d9481a4c495bcc618c5be05b233baad142daa8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:28:03.881685 containerd[1579]: time="2026-03-11T01:28:03.881464182Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-45sfz,Uid:bd5d987a-c72c-4abd-9acd-7762cad20217,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9754603e5a6881795fe6cf8bb5d9481a4c495bcc618c5be05b233baad142daa8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:28:03.898634 kubelet[2857]: E0311 01:28:03.896484 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:28:03.906222 containerd[1579]: 
time="2026-03-11T01:28:03.904581951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-284rx,Uid:9424aa5f-c89c-4c89-a1c1-04c856e2b5f0,Namespace:kube-system,Attempt:0,}" Mar 11 01:28:06.122047 systemd[1]: run-netns-cni\x2da27f5e40\x2d5b0b\x2d6312\x2d25a3\x2d8947f2173112.mount: Deactivated successfully. Mar 11 01:28:06.130807 containerd[1579]: time="2026-03-11T01:28:06.130702817Z" level=info msg="Container bc08e59798b87cbee0aaccd90670b72a10b513e31a9874c8d7665662ca0c3eae: CDI devices from CRI Config.CDIDevices: []" Mar 11 01:28:06.151837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3621713022.mount: Deactivated successfully. Mar 11 01:28:06.226595 kubelet[2857]: E0311 01:28:06.195768 2857 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9754603e5a6881795fe6cf8bb5d9481a4c495bcc618c5be05b233baad142daa8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:28:06.226595 kubelet[2857]: E0311 01:28:06.195899 2857 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9754603e5a6881795fe6cf8bb5d9481a4c495bcc618c5be05b233baad142daa8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-45sfz" Mar 11 01:28:06.226595 kubelet[2857]: E0311 01:28:06.195929 2857 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9754603e5a6881795fe6cf8bb5d9481a4c495bcc618c5be05b233baad142daa8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-45sfz" Mar 11 01:28:06.350300 kubelet[2857]: E0311 01:28:06.350055 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-45sfz_calico-system(bd5d987a-c72c-4abd-9acd-7762cad20217)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-45sfz_calico-system(bd5d987a-c72c-4abd-9acd-7762cad20217)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9754603e5a6881795fe6cf8bb5d9481a4c495bcc618c5be05b233baad142daa8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-45sfz" podUID="bd5d987a-c72c-4abd-9acd-7762cad20217" Mar 11 01:28:06.358441 containerd[1579]: time="2026-03-11T01:28:06.356905816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7477699c4c-wwhh9,Uid:cd75dd3d-50be-4617-ad26-c09e377b47a8,Namespace:calico-system,Attempt:0,}" Mar 11 01:28:06.386896 containerd[1579]: time="2026-03-11T01:28:06.386742777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b7f6c864b-jnn8t,Uid:d6503f2d-95be-4a07-b817-d3b00d921973,Namespace:calico-system,Attempt:0,}" Mar 11 01:28:06.500840 containerd[1579]: time="2026-03-11T01:28:06.500795544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79d999cf8c-82gjz,Uid:2e364242-f62d-48f3-99f5-39ad23651340,Namespace:calico-system,Attempt:0,}" Mar 11 01:28:06.561322 containerd[1579]: time="2026-03-11T01:28:06.561220900Z" level=info msg="CreateContainer within sandbox \"b615ea67cb5cfa8ec597d2f8ac0224720bda5e81d089c0e18916885b30f6bbe0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bc08e59798b87cbee0aaccd90670b72a10b513e31a9874c8d7665662ca0c3eae\"" Mar 11 01:28:06.586435 
containerd[1579]: time="2026-03-11T01:28:06.586345991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7477699c4c-n9gg4,Uid:6f97091f-a4dd-48a5-90c6-376038fd2d9a,Namespace:calico-system,Attempt:0,}" Mar 11 01:28:06.601754 containerd[1579]: time="2026-03-11T01:28:06.601704678Z" level=info msg="StartContainer for \"bc08e59798b87cbee0aaccd90670b72a10b513e31a9874c8d7665662ca0c3eae\"" Mar 11 01:28:06.604814 kubelet[2857]: E0311 01:28:06.604693 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:28:06.605669 containerd[1579]: time="2026-03-11T01:28:06.605544392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zf2ch,Uid:487b1154-bb7d-4368-854f-a2c8c373f6d0,Namespace:kube-system,Attempt:0,}" Mar 11 01:28:06.641250 containerd[1579]: time="2026-03-11T01:28:06.641031579Z" level=info msg="connecting to shim bc08e59798b87cbee0aaccd90670b72a10b513e31a9874c8d7665662ca0c3eae" address="unix:///run/containerd/s/a215e501455b421e76609bb9bd3dd31123b7f5ad550a3ad6a4c76388a1cc6fb5" protocol=ttrpc version=3 Mar 11 01:28:06.920975 systemd[1]: Started cri-containerd-bc08e59798b87cbee0aaccd90670b72a10b513e31a9874c8d7665662ca0c3eae.scope - libcontainer container bc08e59798b87cbee0aaccd90670b72a10b513e31a9874c8d7665662ca0c3eae. 
Mar 11 01:28:06.964200 containerd[1579]: time="2026-03-11T01:28:06.964046873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-6mzpb,Uid:28bc3cbd-69fd-4a80-92d0-eceef32616bf,Namespace:calico-system,Attempt:0,}" Mar 11 01:28:07.236587 kubelet[2857]: E0311 01:28:07.221884 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:28:07.540764 containerd[1579]: time="2026-03-11T01:28:07.540312699Z" level=error msg="Failed to destroy network for sandbox \"e9e33cbdc4a558d11c9dca09e7b12598fc22d94224284a794d11c6eead93c150\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:28:07.544081 systemd[1]: run-netns-cni\x2d0029d05d\x2dfa46\x2d6039\x2d065a\x2d039aa81514ed.mount: Deactivated successfully. 
Mar 11 01:28:07.589012 containerd[1579]: time="2026-03-11T01:28:07.588832178Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-284rx,Uid:9424aa5f-c89c-4c89-a1c1-04c856e2b5f0,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9e33cbdc4a558d11c9dca09e7b12598fc22d94224284a794d11c6eead93c150\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:28:07.595638 kubelet[2857]: E0311 01:28:07.595311 2857 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9e33cbdc4a558d11c9dca09e7b12598fc22d94224284a794d11c6eead93c150\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:28:07.595638 kubelet[2857]: E0311 01:28:07.595403 2857 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9e33cbdc4a558d11c9dca09e7b12598fc22d94224284a794d11c6eead93c150\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-284rx" Mar 11 01:28:07.595638 kubelet[2857]: E0311 01:28:07.595435 2857 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9e33cbdc4a558d11c9dca09e7b12598fc22d94224284a794d11c6eead93c150\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-66bc5c9577-284rx" Mar 11 01:28:07.595955 kubelet[2857]: E0311 01:28:07.595543 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-284rx_kube-system(9424aa5f-c89c-4c89-a1c1-04c856e2b5f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-284rx_kube-system(9424aa5f-c89c-4c89-a1c1-04c856e2b5f0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e9e33cbdc4a558d11c9dca09e7b12598fc22d94224284a794d11c6eead93c150\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-284rx" podUID="9424aa5f-c89c-4c89-a1c1-04c856e2b5f0" Mar 11 01:28:07.642809 containerd[1579]: time="2026-03-11T01:28:07.642544774Z" level=error msg="Failed to destroy network for sandbox \"c94bae252e42d4f98f18d207a5896c82e6791837c31189a32b988da8e8ae376a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:28:07.647095 systemd[1]: run-netns-cni\x2dc2d7ef27\x2d9528\x2dc630\x2d8f3c\x2d024a3d42b507.mount: Deactivated successfully. 
Mar 11 01:28:07.672693 containerd[1579]: time="2026-03-11T01:28:07.671928161Z" level=error msg="Failed to destroy network for sandbox \"0fa7eeb5cafafc46a26c6a861d9d72b57d0aa91adc21ea07e01cf58fc0dbd3db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:28:07.679010 containerd[1579]: time="2026-03-11T01:28:07.675619238Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7477699c4c-wwhh9,Uid:cd75dd3d-50be-4617-ad26-c09e377b47a8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c94bae252e42d4f98f18d207a5896c82e6791837c31189a32b988da8e8ae376a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:28:07.678935 systemd[1]: run-netns-cni\x2d1bafabe1\x2d36b8\x2d1d38\x2dafc0\x2df149fc6cf9dd.mount: Deactivated successfully. 
Mar 11 01:28:07.680435 kubelet[2857]: E0311 01:28:07.680340 2857 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c94bae252e42d4f98f18d207a5896c82e6791837c31189a32b988da8e8ae376a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:28:07.681290 kubelet[2857]: E0311 01:28:07.681112 2857 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c94bae252e42d4f98f18d207a5896c82e6791837c31189a32b988da8e8ae376a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7477699c4c-wwhh9" Mar 11 01:28:07.686960 kubelet[2857]: E0311 01:28:07.682830 2857 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c94bae252e42d4f98f18d207a5896c82e6791837c31189a32b988da8e8ae376a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7477699c4c-wwhh9" Mar 11 01:28:07.692362 containerd[1579]: time="2026-03-11T01:28:07.691242242Z" level=error msg="Failed to destroy network for sandbox \"7a11309cb9b645b1d054dd28285c0b2aa572d8333dc29dacaeb4f1410042822e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:28:07.692646 kubelet[2857]: E0311 01:28:07.683123 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-7477699c4c-wwhh9_calico-system(cd75dd3d-50be-4617-ad26-c09e377b47a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7477699c4c-wwhh9_calico-system(cd75dd3d-50be-4617-ad26-c09e377b47a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c94bae252e42d4f98f18d207a5896c82e6791837c31189a32b988da8e8ae376a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7477699c4c-wwhh9" podUID="cd75dd3d-50be-4617-ad26-c09e377b47a8" Mar 11 01:28:07.701061 systemd[1]: run-netns-cni\x2dc765cae5\x2d8c5a\x2d2168\x2d8bdd\x2d3b12a56b6eb6.mount: Deactivated successfully. Mar 11 01:28:07.708196 containerd[1579]: time="2026-03-11T01:28:07.708047284Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79d999cf8c-82gjz,Uid:2e364242-f62d-48f3-99f5-39ad23651340,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a11309cb9b645b1d054dd28285c0b2aa572d8333dc29dacaeb4f1410042822e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:28:07.710310 kubelet[2857]: E0311 01:28:07.709318 2857 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a11309cb9b645b1d054dd28285c0b2aa572d8333dc29dacaeb4f1410042822e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:28:07.710310 kubelet[2857]: E0311 01:28:07.709397 2857 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"7a11309cb9b645b1d054dd28285c0b2aa572d8333dc29dacaeb4f1410042822e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-79d999cf8c-82gjz" Mar 11 01:28:07.710310 kubelet[2857]: E0311 01:28:07.709423 2857 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a11309cb9b645b1d054dd28285c0b2aa572d8333dc29dacaeb4f1410042822e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-79d999cf8c-82gjz" Mar 11 01:28:07.710486 kubelet[2857]: E0311 01:28:07.709487 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-79d999cf8c-82gjz_calico-system(2e364242-f62d-48f3-99f5-39ad23651340)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-79d999cf8c-82gjz_calico-system(2e364242-f62d-48f3-99f5-39ad23651340)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a11309cb9b645b1d054dd28285c0b2aa572d8333dc29dacaeb4f1410042822e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-79d999cf8c-82gjz" podUID="2e364242-f62d-48f3-99f5-39ad23651340" Mar 11 01:28:07.717218 containerd[1579]: time="2026-03-11T01:28:07.713229857Z" level=error msg="Failed to destroy network for sandbox \"d1e9290cd401090e89fd8dd1f0d041b3396642db5603b15354c9cf68a20d2fd9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Mar 11 01:28:07.717218 containerd[1579]: time="2026-03-11T01:28:07.713985082Z" level=info msg="StartContainer for \"bc08e59798b87cbee0aaccd90670b72a10b513e31a9874c8d7665662ca0c3eae\" returns successfully" Mar 11 01:28:07.717519 containerd[1579]: time="2026-03-11T01:28:07.717469992Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zf2ch,Uid:487b1154-bb7d-4368-854f-a2c8c373f6d0,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fa7eeb5cafafc46a26c6a861d9d72b57d0aa91adc21ea07e01cf58fc0dbd3db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:28:07.722071 kubelet[2857]: E0311 01:28:07.718262 2857 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fa7eeb5cafafc46a26c6a861d9d72b57d0aa91adc21ea07e01cf58fc0dbd3db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:28:07.722071 kubelet[2857]: E0311 01:28:07.718484 2857 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fa7eeb5cafafc46a26c6a861d9d72b57d0aa91adc21ea07e01cf58fc0dbd3db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-zf2ch" Mar 11 01:28:07.722071 kubelet[2857]: E0311 01:28:07.718590 2857 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0fa7eeb5cafafc46a26c6a861d9d72b57d0aa91adc21ea07e01cf58fc0dbd3db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-zf2ch" Mar 11 01:28:07.722413 containerd[1579]: time="2026-03-11T01:28:07.719124630Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7477699c4c-n9gg4,Uid:6f97091f-a4dd-48a5-90c6-376038fd2d9a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1e9290cd401090e89fd8dd1f0d041b3396642db5603b15354c9cf68a20d2fd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:28:07.722529 kubelet[2857]: E0311 01:28:07.718853 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-zf2ch_kube-system(487b1154-bb7d-4368-854f-a2c8c373f6d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-zf2ch_kube-system(487b1154-bb7d-4368-854f-a2c8c373f6d0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0fa7eeb5cafafc46a26c6a861d9d72b57d0aa91adc21ea07e01cf58fc0dbd3db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-zf2ch" podUID="487b1154-bb7d-4368-854f-a2c8c373f6d0" Mar 11 01:28:07.722529 kubelet[2857]: E0311 01:28:07.719343 2857 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1e9290cd401090e89fd8dd1f0d041b3396642db5603b15354c9cf68a20d2fd9\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:28:07.722529 kubelet[2857]: E0311 01:28:07.719377 2857 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1e9290cd401090e89fd8dd1f0d041b3396642db5603b15354c9cf68a20d2fd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7477699c4c-n9gg4" Mar 11 01:28:07.722703 kubelet[2857]: E0311 01:28:07.719396 2857 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1e9290cd401090e89fd8dd1f0d041b3396642db5603b15354c9cf68a20d2fd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7477699c4c-n9gg4" Mar 11 01:28:07.722703 kubelet[2857]: E0311 01:28:07.719460 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7477699c4c-n9gg4_calico-system(6f97091f-a4dd-48a5-90c6-376038fd2d9a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7477699c4c-n9gg4_calico-system(6f97091f-a4dd-48a5-90c6-376038fd2d9a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d1e9290cd401090e89fd8dd1f0d041b3396642db5603b15354c9cf68a20d2fd9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7477699c4c-n9gg4" podUID="6f97091f-a4dd-48a5-90c6-376038fd2d9a" Mar 11 01:28:07.776232 
containerd[1579]: time="2026-03-11T01:28:07.770494428Z" level=error msg="Failed to destroy network for sandbox \"49cc6d5db2b8fc299112d116e6d8c78ea691d86bf199e3600082cd7f752c76d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:28:07.782568 containerd[1579]: time="2026-03-11T01:28:07.782237567Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b7f6c864b-jnn8t,Uid:d6503f2d-95be-4a07-b817-d3b00d921973,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"49cc6d5db2b8fc299112d116e6d8c78ea691d86bf199e3600082cd7f752c76d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:28:07.782881 kubelet[2857]: E0311 01:28:07.782736 2857 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49cc6d5db2b8fc299112d116e6d8c78ea691d86bf199e3600082cd7f752c76d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:28:07.782881 kubelet[2857]: E0311 01:28:07.782799 2857 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49cc6d5db2b8fc299112d116e6d8c78ea691d86bf199e3600082cd7f752c76d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b7f6c864b-jnn8t" Mar 11 01:28:07.782881 kubelet[2857]: E0311 01:28:07.782831 2857 
kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49cc6d5db2b8fc299112d116e6d8c78ea691d86bf199e3600082cd7f752c76d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b7f6c864b-jnn8t" Mar 11 01:28:07.783055 kubelet[2857]: E0311 01:28:07.782935 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5b7f6c864b-jnn8t_calico-system(d6503f2d-95be-4a07-b817-d3b00d921973)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5b7f6c864b-jnn8t_calico-system(d6503f2d-95be-4a07-b817-d3b00d921973)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"49cc6d5db2b8fc299112d116e6d8c78ea691d86bf199e3600082cd7f752c76d9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5b7f6c864b-jnn8t" podUID="d6503f2d-95be-4a07-b817-d3b00d921973" Mar 11 01:28:07.785380 containerd[1579]: time="2026-03-11T01:28:07.785282995Z" level=error msg="Failed to destroy network for sandbox \"d9a7fb7e0739ec9753319cb6a52abdd18a174303db363652c1d8b99244511429\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:28:07.788783 containerd[1579]: time="2026-03-11T01:28:07.788721010Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-6mzpb,Uid:28bc3cbd-69fd-4a80-92d0-eceef32616bf,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"d9a7fb7e0739ec9753319cb6a52abdd18a174303db363652c1d8b99244511429\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:28:07.789989 kubelet[2857]: E0311 01:28:07.789893 2857 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9a7fb7e0739ec9753319cb6a52abdd18a174303db363652c1d8b99244511429\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:28:07.790316 kubelet[2857]: E0311 01:28:07.790125 2857 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9a7fb7e0739ec9753319cb6a52abdd18a174303db363652c1d8b99244511429\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-6mzpb" Mar 11 01:28:07.790316 kubelet[2857]: E0311 01:28:07.790256 2857 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9a7fb7e0739ec9753319cb6a52abdd18a174303db363652c1d8b99244511429\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-6mzpb" Mar 11 01:28:07.795331 kubelet[2857]: E0311 01:28:07.790440 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-6mzpb_calico-system(28bc3cbd-69fd-4a80-92d0-eceef32616bf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"goldmane-cccfbd5cf-6mzpb_calico-system(28bc3cbd-69fd-4a80-92d0-eceef32616bf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d9a7fb7e0739ec9753319cb6a52abdd18a174303db363652c1d8b99244511429\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-6mzpb" podUID="28bc3cbd-69fd-4a80-92d0-eceef32616bf" Mar 11 01:28:08.179788 systemd[1]: run-netns-cni\x2db97533b6\x2d7627\x2d2a2c\x2d57b8\x2d5381d6bb90cc.mount: Deactivated successfully. Mar 11 01:28:08.180071 systemd[1]: run-netns-cni\x2d5707d5af\x2db724\x2db541\x2d0ec5\x2d8c2c3e115040.mount: Deactivated successfully. Mar 11 01:28:08.180222 systemd[1]: run-netns-cni\x2de48133c7\x2d96a9\x2d4820\x2d43df\x2d34ca4758f746.mount: Deactivated successfully. Mar 11 01:28:08.812994 kubelet[2857]: I0311 01:28:08.812855 2857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-g5psr" podStartSLOduration=19.270074034 podStartE2EDuration="1m18.812830633s" podCreationTimestamp="2026-03-11 01:26:50 +0000 UTC" firstStartedPulling="2026-03-11 01:26:51.64871801 +0000 UTC m=+48.327694642" lastFinishedPulling="2026-03-11 01:27:51.191474609 +0000 UTC m=+107.870451241" observedRunningTime="2026-03-11 01:28:08.781213926 +0000 UTC m=+125.460190578" watchObservedRunningTime="2026-03-11 01:28:08.812830633 +0000 UTC m=+125.491807265" Mar 11 01:28:10.462760 kubelet[2857]: I0311 01:28:10.451511 2857 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2e364242-f62d-48f3-99f5-39ad23651340-whisker-backend-key-pair\") pod \"2e364242-f62d-48f3-99f5-39ad23651340\" (UID: \"2e364242-f62d-48f3-99f5-39ad23651340\") " Mar 11 01:28:10.462760 kubelet[2857]: I0311 01:28:10.451589 2857 reconciler_common.go:163] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-977ng\" (UniqueName: \"kubernetes.io/projected/2e364242-f62d-48f3-99f5-39ad23651340-kube-api-access-977ng\") pod \"2e364242-f62d-48f3-99f5-39ad23651340\" (UID: \"2e364242-f62d-48f3-99f5-39ad23651340\") " Mar 11 01:28:10.462760 kubelet[2857]: I0311 01:28:10.451666 2857 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e364242-f62d-48f3-99f5-39ad23651340-whisker-ca-bundle\") pod \"2e364242-f62d-48f3-99f5-39ad23651340\" (UID: \"2e364242-f62d-48f3-99f5-39ad23651340\") " Mar 11 01:28:10.462760 kubelet[2857]: I0311 01:28:10.451694 2857 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/2e364242-f62d-48f3-99f5-39ad23651340-nginx-config\") pod \"2e364242-f62d-48f3-99f5-39ad23651340\" (UID: \"2e364242-f62d-48f3-99f5-39ad23651340\") " Mar 11 01:28:10.462760 kubelet[2857]: I0311 01:28:10.452419 2857 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e364242-f62d-48f3-99f5-39ad23651340-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "2e364242-f62d-48f3-99f5-39ad23651340" (UID: "2e364242-f62d-48f3-99f5-39ad23651340"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 11 01:28:10.463459 kubelet[2857]: I0311 01:28:10.458904 2857 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e364242-f62d-48f3-99f5-39ad23651340-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "2e364242-f62d-48f3-99f5-39ad23651340" (UID: "2e364242-f62d-48f3-99f5-39ad23651340"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 11 01:28:10.501785 systemd[1]: var-lib-kubelet-pods-2e364242\x2df62d\x2d48f3\x2d99f5\x2d39ad23651340-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d977ng.mount: Deactivated successfully. Mar 11 01:28:10.504852 kubelet[2857]: I0311 01:28:10.504551 2857 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e364242-f62d-48f3-99f5-39ad23651340-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "2e364242-f62d-48f3-99f5-39ad23651340" (UID: "2e364242-f62d-48f3-99f5-39ad23651340"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 11 01:28:10.505113 systemd[1]: var-lib-kubelet-pods-2e364242\x2df62d\x2d48f3\x2d99f5\x2d39ad23651340-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 11 01:28:10.511031 kubelet[2857]: I0311 01:28:10.510905 2857 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e364242-f62d-48f3-99f5-39ad23651340-kube-api-access-977ng" (OuterVolumeSpecName: "kube-api-access-977ng") pod "2e364242-f62d-48f3-99f5-39ad23651340" (UID: "2e364242-f62d-48f3-99f5-39ad23651340"). InnerVolumeSpecName "kube-api-access-977ng". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 11 01:28:10.560402 kubelet[2857]: I0311 01:28:10.556315 2857 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e364242-f62d-48f3-99f5-39ad23651340-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 11 01:28:10.560402 kubelet[2857]: I0311 01:28:10.556357 2857 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/2e364242-f62d-48f3-99f5-39ad23651340-nginx-config\") on node \"localhost\" DevicePath \"\"" Mar 11 01:28:10.560402 kubelet[2857]: I0311 01:28:10.556370 2857 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2e364242-f62d-48f3-99f5-39ad23651340-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Mar 11 01:28:10.560402 kubelet[2857]: I0311 01:28:10.556382 2857 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-977ng\" (UniqueName: \"kubernetes.io/projected/2e364242-f62d-48f3-99f5-39ad23651340-kube-api-access-977ng\") on node \"localhost\" DevicePath \"\"" Mar 11 01:28:10.713909 systemd[1]: Removed slice kubepods-besteffort-pod2e364242_f62d_48f3_99f5_39ad23651340.slice - libcontainer container kubepods-besteffort-pod2e364242_f62d_48f3_99f5_39ad23651340.slice. Mar 11 01:28:11.527793 systemd[1]: Created slice kubepods-besteffort-pod0e34b03f_9943_4210_9a25_f649e7adb082.slice - libcontainer container kubepods-besteffort-pod0e34b03f_9943_4210_9a25_f649e7adb082.slice. 
Mar 11 01:28:11.598194 kubelet[2857]: I0311 01:28:11.592660 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnf8w\" (UniqueName: \"kubernetes.io/projected/0e34b03f-9943-4210-9a25-f649e7adb082-kube-api-access-hnf8w\") pod \"whisker-659748fb8d-dfrfc\" (UID: \"0e34b03f-9943-4210-9a25-f649e7adb082\") " pod="calico-system/whisker-659748fb8d-dfrfc" Mar 11 01:28:11.598194 kubelet[2857]: I0311 01:28:11.592722 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/0e34b03f-9943-4210-9a25-f649e7adb082-nginx-config\") pod \"whisker-659748fb8d-dfrfc\" (UID: \"0e34b03f-9943-4210-9a25-f649e7adb082\") " pod="calico-system/whisker-659748fb8d-dfrfc" Mar 11 01:28:11.598194 kubelet[2857]: I0311 01:28:11.592745 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0e34b03f-9943-4210-9a25-f649e7adb082-whisker-backend-key-pair\") pod \"whisker-659748fb8d-dfrfc\" (UID: \"0e34b03f-9943-4210-9a25-f649e7adb082\") " pod="calico-system/whisker-659748fb8d-dfrfc" Mar 11 01:28:11.598194 kubelet[2857]: I0311 01:28:11.592764 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e34b03f-9943-4210-9a25-f649e7adb082-whisker-ca-bundle\") pod \"whisker-659748fb8d-dfrfc\" (UID: \"0e34b03f-9943-4210-9a25-f649e7adb082\") " pod="calico-system/whisker-659748fb8d-dfrfc" Mar 11 01:28:11.863520 containerd[1579]: time="2026-03-11T01:28:11.862930994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-659748fb8d-dfrfc,Uid:0e34b03f-9943-4210-9a25-f649e7adb082,Namespace:calico-system,Attempt:0,}" Mar 11 01:28:13.356563 kubelet[2857]: I0311 01:28:13.354913 2857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="2e364242-f62d-48f3-99f5-39ad23651340" path="/var/lib/kubelet/pods/2e364242-f62d-48f3-99f5-39ad23651340/volumes" Mar 11 01:28:13.357441 kubelet[2857]: E0311 01:28:13.356655 2857 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.045s" Mar 11 01:28:14.439925 systemd-networkd[1466]: cali74c57e61452: Link UP Mar 11 01:28:14.440888 systemd-networkd[1466]: cali74c57e61452: Gained carrier Mar 11 01:28:15.501714 containerd[1579]: 2026-03-11 01:28:11.993 [ERROR][4132] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 11 01:28:15.501714 containerd[1579]: 2026-03-11 01:28:13.419 [INFO][4132] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--659748fb8d--dfrfc-eth0 whisker-659748fb8d- calico-system 0e34b03f-9943-4210-9a25-f649e7adb082 1148 0 2026-03-11 01:28:11 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:659748fb8d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-659748fb8d-dfrfc eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali74c57e61452 [] [] }} ContainerID="533f86e540757862329771f978c82e5fa76bd41b9259ca27652d73edc31b2cd6" Namespace="calico-system" Pod="whisker-659748fb8d-dfrfc" WorkloadEndpoint="localhost-k8s-whisker--659748fb8d--dfrfc-" Mar 11 01:28:15.501714 containerd[1579]: 2026-03-11 01:28:13.419 [INFO][4132] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="533f86e540757862329771f978c82e5fa76bd41b9259ca27652d73edc31b2cd6" Namespace="calico-system" Pod="whisker-659748fb8d-dfrfc" WorkloadEndpoint="localhost-k8s-whisker--659748fb8d--dfrfc-eth0" Mar 11 01:28:15.501714 containerd[1579]: 2026-03-11 
01:28:13.793 [INFO][4159] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="533f86e540757862329771f978c82e5fa76bd41b9259ca27652d73edc31b2cd6" HandleID="k8s-pod-network.533f86e540757862329771f978c82e5fa76bd41b9259ca27652d73edc31b2cd6" Workload="localhost-k8s-whisker--659748fb8d--dfrfc-eth0" Mar 11 01:28:15.502503 containerd[1579]: 2026-03-11 01:28:13.863 [INFO][4159] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="533f86e540757862329771f978c82e5fa76bd41b9259ca27652d73edc31b2cd6" HandleID="k8s-pod-network.533f86e540757862329771f978c82e5fa76bd41b9259ca27652d73edc31b2cd6" Workload="localhost-k8s-whisker--659748fb8d--dfrfc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004072b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-659748fb8d-dfrfc", "timestamp":"2026-03-11 01:28:13.793735112 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0005ed080)} Mar 11 01:28:15.502503 containerd[1579]: 2026-03-11 01:28:13.863 [INFO][4159] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:28:15.502503 containerd[1579]: 2026-03-11 01:28:13.863 [INFO][4159] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 11 01:28:15.502503 containerd[1579]: 2026-03-11 01:28:13.863 [INFO][4159] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 11 01:28:15.502503 containerd[1579]: 2026-03-11 01:28:13.888 [INFO][4159] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.533f86e540757862329771f978c82e5fa76bd41b9259ca27652d73edc31b2cd6" host="localhost" Mar 11 01:28:15.502503 containerd[1579]: 2026-03-11 01:28:13.929 [INFO][4159] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 11 01:28:15.502503 containerd[1579]: 2026-03-11 01:28:14.002 [INFO][4159] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 11 01:28:15.502503 containerd[1579]: 2026-03-11 01:28:14.017 [INFO][4159] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 11 01:28:15.502503 containerd[1579]: 2026-03-11 01:28:14.050 [INFO][4159] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 11 01:28:15.502503 containerd[1579]: 2026-03-11 01:28:14.050 [INFO][4159] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.533f86e540757862329771f978c82e5fa76bd41b9259ca27652d73edc31b2cd6" host="localhost" Mar 11 01:28:15.502849 containerd[1579]: 2026-03-11 01:28:14.073 [INFO][4159] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.533f86e540757862329771f978c82e5fa76bd41b9259ca27652d73edc31b2cd6 Mar 11 01:28:15.502849 containerd[1579]: 2026-03-11 01:28:14.150 [INFO][4159] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.533f86e540757862329771f978c82e5fa76bd41b9259ca27652d73edc31b2cd6" host="localhost" Mar 11 01:28:15.502849 containerd[1579]: 2026-03-11 01:28:14.261 [INFO][4159] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.533f86e540757862329771f978c82e5fa76bd41b9259ca27652d73edc31b2cd6" host="localhost" Mar 11 01:28:15.502849 containerd[1579]: 2026-03-11 01:28:14.261 [INFO][4159] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.533f86e540757862329771f978c82e5fa76bd41b9259ca27652d73edc31b2cd6" host="localhost" Mar 11 01:28:15.502849 containerd[1579]: 2026-03-11 01:28:14.262 [INFO][4159] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:28:15.502849 containerd[1579]: 2026-03-11 01:28:14.262 [INFO][4159] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="533f86e540757862329771f978c82e5fa76bd41b9259ca27652d73edc31b2cd6" HandleID="k8s-pod-network.533f86e540757862329771f978c82e5fa76bd41b9259ca27652d73edc31b2cd6" Workload="localhost-k8s-whisker--659748fb8d--dfrfc-eth0" Mar 11 01:28:15.503062 containerd[1579]: 2026-03-11 01:28:14.281 [INFO][4132] cni-plugin/k8s.go 418: Populated endpoint ContainerID="533f86e540757862329771f978c82e5fa76bd41b9259ca27652d73edc31b2cd6" Namespace="calico-system" Pod="whisker-659748fb8d-dfrfc" WorkloadEndpoint="localhost-k8s-whisker--659748fb8d--dfrfc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--659748fb8d--dfrfc-eth0", GenerateName:"whisker-659748fb8d-", Namespace:"calico-system", SelfLink:"", UID:"0e34b03f-9943-4210-9a25-f649e7adb082", ResourceVersion:"1148", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 28, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"659748fb8d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-659748fb8d-dfrfc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali74c57e61452", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:28:15.503062 containerd[1579]: 2026-03-11 01:28:14.281 [INFO][4132] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="533f86e540757862329771f978c82e5fa76bd41b9259ca27652d73edc31b2cd6" Namespace="calico-system" Pod="whisker-659748fb8d-dfrfc" WorkloadEndpoint="localhost-k8s-whisker--659748fb8d--dfrfc-eth0" Mar 11 01:28:15.508493 containerd[1579]: 2026-03-11 01:28:14.281 [INFO][4132] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali74c57e61452 ContainerID="533f86e540757862329771f978c82e5fa76bd41b9259ca27652d73edc31b2cd6" Namespace="calico-system" Pod="whisker-659748fb8d-dfrfc" WorkloadEndpoint="localhost-k8s-whisker--659748fb8d--dfrfc-eth0" Mar 11 01:28:15.508493 containerd[1579]: 2026-03-11 01:28:14.448 [INFO][4132] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="533f86e540757862329771f978c82e5fa76bd41b9259ca27652d73edc31b2cd6" Namespace="calico-system" Pod="whisker-659748fb8d-dfrfc" WorkloadEndpoint="localhost-k8s-whisker--659748fb8d--dfrfc-eth0" Mar 11 01:28:15.508760 containerd[1579]: 2026-03-11 01:28:14.454 [INFO][4132] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="533f86e540757862329771f978c82e5fa76bd41b9259ca27652d73edc31b2cd6" Namespace="calico-system" Pod="whisker-659748fb8d-dfrfc" 
WorkloadEndpoint="localhost-k8s-whisker--659748fb8d--dfrfc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--659748fb8d--dfrfc-eth0", GenerateName:"whisker-659748fb8d-", Namespace:"calico-system", SelfLink:"", UID:"0e34b03f-9943-4210-9a25-f649e7adb082", ResourceVersion:"1148", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 28, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"659748fb8d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"533f86e540757862329771f978c82e5fa76bd41b9259ca27652d73edc31b2cd6", Pod:"whisker-659748fb8d-dfrfc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali74c57e61452", MAC:"2e:a3:e6:a0:39:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:28:15.509119 containerd[1579]: 2026-03-11 01:28:15.429 [INFO][4132] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="533f86e540757862329771f978c82e5fa76bd41b9259ca27652d73edc31b2cd6" Namespace="calico-system" Pod="whisker-659748fb8d-dfrfc" WorkloadEndpoint="localhost-k8s-whisker--659748fb8d--dfrfc-eth0" Mar 11 01:28:15.738996 containerd[1579]: time="2026-03-11T01:28:15.738246231Z" level=info msg="connecting to shim 
533f86e540757862329771f978c82e5fa76bd41b9259ca27652d73edc31b2cd6" address="unix:///run/containerd/s/3a9b57d948f59c223ac832c7e9892263f9ab33b7dfaee618408ae1551526b5ef" namespace=k8s.io protocol=ttrpc version=3 Mar 11 01:28:15.932702 systemd[1]: Started cri-containerd-533f86e540757862329771f978c82e5fa76bd41b9259ca27652d73edc31b2cd6.scope - libcontainer container 533f86e540757862329771f978c82e5fa76bd41b9259ca27652d73edc31b2cd6. Mar 11 01:28:16.369934 systemd-resolved[1394]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 11 01:28:16.401856 systemd-networkd[1466]: cali74c57e61452: Gained IPv6LL Mar 11 01:28:16.791242 containerd[1579]: time="2026-03-11T01:28:16.790063433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-659748fb8d-dfrfc,Uid:0e34b03f-9943-4210-9a25-f649e7adb082,Namespace:calico-system,Attempt:0,} returns sandbox id \"533f86e540757862329771f978c82e5fa76bd41b9259ca27652d73edc31b2cd6\"" Mar 11 01:28:16.798187 containerd[1579]: time="2026-03-11T01:28:16.797119706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 11 01:28:18.988705 containerd[1579]: time="2026-03-11T01:28:18.987798270Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:28:19.007194 containerd[1579]: time="2026-03-11T01:28:19.006265481Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 11 01:28:19.015251 containerd[1579]: time="2026-03-11T01:28:19.015002706Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:28:19.048411 containerd[1579]: time="2026-03-11T01:28:19.048363441Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:28:19.049633 containerd[1579]: time="2026-03-11T01:28:19.049491323Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 2.251016143s" Mar 11 01:28:19.049916 containerd[1579]: time="2026-03-11T01:28:19.049891410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 11 01:28:19.080302 containerd[1579]: time="2026-03-11T01:28:19.078863626Z" level=info msg="CreateContainer within sandbox \"533f86e540757862329771f978c82e5fa76bd41b9259ca27652d73edc31b2cd6\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 11 01:28:19.150068 containerd[1579]: time="2026-03-11T01:28:19.146353881Z" level=info msg="Container 7a3d760309d5b00f0f7963f5cdcb39f319ba53a6a1ab0bba6f81797d98f577f4: CDI devices from CRI Config.CDIDevices: []" Mar 11 01:28:19.219184 containerd[1579]: time="2026-03-11T01:28:19.211350845Z" level=info msg="CreateContainer within sandbox \"533f86e540757862329771f978c82e5fa76bd41b9259ca27652d73edc31b2cd6\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"7a3d760309d5b00f0f7963f5cdcb39f319ba53a6a1ab0bba6f81797d98f577f4\"" Mar 11 01:28:19.219528 containerd[1579]: time="2026-03-11T01:28:19.219387901Z" level=info msg="StartContainer for \"7a3d760309d5b00f0f7963f5cdcb39f319ba53a6a1ab0bba6f81797d98f577f4\"" Mar 11 01:28:19.221116 containerd[1579]: time="2026-03-11T01:28:19.221084911Z" level=info msg="connecting to shim 
7a3d760309d5b00f0f7963f5cdcb39f319ba53a6a1ab0bba6f81797d98f577f4" address="unix:///run/containerd/s/3a9b57d948f59c223ac832c7e9892263f9ab33b7dfaee618408ae1551526b5ef" protocol=ttrpc version=3 Mar 11 01:28:19.247712 containerd[1579]: time="2026-03-11T01:28:19.247590939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7477699c4c-n9gg4,Uid:6f97091f-a4dd-48a5-90c6-376038fd2d9a,Namespace:calico-system,Attempt:0,}" Mar 11 01:28:19.268205 containerd[1579]: time="2026-03-11T01:28:19.268078461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7477699c4c-wwhh9,Uid:cd75dd3d-50be-4617-ad26-c09e377b47a8,Namespace:calico-system,Attempt:0,}" Mar 11 01:28:19.318216 kubelet[2857]: E0311 01:28:19.317895 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:28:19.334598 containerd[1579]: time="2026-03-11T01:28:19.334002292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zf2ch,Uid:487b1154-bb7d-4368-854f-a2c8c373f6d0,Namespace:kube-system,Attempt:0,}" Mar 11 01:28:19.404728 systemd[1]: Started cri-containerd-7a3d760309d5b00f0f7963f5cdcb39f319ba53a6a1ab0bba6f81797d98f577f4.scope - libcontainer container 7a3d760309d5b00f0f7963f5cdcb39f319ba53a6a1ab0bba6f81797d98f577f4. 
Mar 11 01:28:20.374994 containerd[1579]: time="2026-03-11T01:28:20.374872235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-45sfz,Uid:bd5d987a-c72c-4abd-9acd-7762cad20217,Namespace:calico-system,Attempt:0,}" Mar 11 01:28:20.667923 systemd-networkd[1466]: vxlan.calico: Link UP Mar 11 01:28:20.667955 systemd-networkd[1466]: vxlan.calico: Gained carrier Mar 11 01:28:20.996731 containerd[1579]: time="2026-03-11T01:28:20.990889195Z" level=info msg="StartContainer for \"7a3d760309d5b00f0f7963f5cdcb39f319ba53a6a1ab0bba6f81797d98f577f4\" returns successfully" Mar 11 01:28:21.015335 containerd[1579]: time="2026-03-11T01:28:21.015220210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 11 01:28:21.283127 containerd[1579]: time="2026-03-11T01:28:21.283071427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b7f6c864b-jnn8t,Uid:d6503f2d-95be-4a07-b817-d3b00d921973,Namespace:calico-system,Attempt:0,}" Mar 11 01:28:21.355917 containerd[1579]: time="2026-03-11T01:28:21.341715990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-6mzpb,Uid:28bc3cbd-69fd-4a80-92d0-eceef32616bf,Namespace:calico-system,Attempt:0,}" Mar 11 01:28:21.569619 systemd-networkd[1466]: cali1471ba15ac0: Link UP Mar 11 01:28:21.618683 systemd-networkd[1466]: cali1471ba15ac0: Gained carrier Mar 11 01:28:21.688113 containerd[1579]: 2026-03-11 01:28:19.719 [INFO][4428] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--zf2ch-eth0 coredns-66bc5c9577- kube-system 487b1154-bb7d-4368-854f-a2c8c373f6d0 1091 0 2026-03-11 01:26:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-zf2ch eth0 coredns [] [] [kns.kube-system 
ksa.kube-system.coredns] cali1471ba15ac0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="c8257c2ca4c712a8c25336fa0a63a2d5eb8b12ad9171ee2059e1e2d0a7e4d36f" Namespace="kube-system" Pod="coredns-66bc5c9577-zf2ch" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zf2ch-" Mar 11 01:28:21.688113 containerd[1579]: 2026-03-11 01:28:19.720 [INFO][4428] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c8257c2ca4c712a8c25336fa0a63a2d5eb8b12ad9171ee2059e1e2d0a7e4d36f" Namespace="kube-system" Pod="coredns-66bc5c9577-zf2ch" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zf2ch-eth0" Mar 11 01:28:21.688113 containerd[1579]: 2026-03-11 01:28:20.392 [INFO][4458] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c8257c2ca4c712a8c25336fa0a63a2d5eb8b12ad9171ee2059e1e2d0a7e4d36f" HandleID="k8s-pod-network.c8257c2ca4c712a8c25336fa0a63a2d5eb8b12ad9171ee2059e1e2d0a7e4d36f" Workload="localhost-k8s-coredns--66bc5c9577--zf2ch-eth0" Mar 11 01:28:21.690218 containerd[1579]: 2026-03-11 01:28:20.872 [INFO][4458] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c8257c2ca4c712a8c25336fa0a63a2d5eb8b12ad9171ee2059e1e2d0a7e4d36f" HandleID="k8s-pod-network.c8257c2ca4c712a8c25336fa0a63a2d5eb8b12ad9171ee2059e1e2d0a7e4d36f" Workload="localhost-k8s-coredns--66bc5c9577--zf2ch-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f190), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-zf2ch", "timestamp":"2026-03-11 01:28:20.39297529 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002102c0)} Mar 11 01:28:21.690218 containerd[1579]: 2026-03-11 01:28:20.872 
[INFO][4458] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:28:21.690218 containerd[1579]: 2026-03-11 01:28:20.872 [INFO][4458] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 01:28:21.690218 containerd[1579]: 2026-03-11 01:28:20.873 [INFO][4458] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 11 01:28:21.690218 containerd[1579]: 2026-03-11 01:28:20.923 [INFO][4458] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c8257c2ca4c712a8c25336fa0a63a2d5eb8b12ad9171ee2059e1e2d0a7e4d36f" host="localhost" Mar 11 01:28:21.690218 containerd[1579]: 2026-03-11 01:28:21.126 [INFO][4458] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 11 01:28:21.690218 containerd[1579]: 2026-03-11 01:28:21.180 [INFO][4458] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 11 01:28:21.690218 containerd[1579]: 2026-03-11 01:28:21.200 [INFO][4458] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 11 01:28:21.690218 containerd[1579]: 2026-03-11 01:28:21.281 [INFO][4458] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 11 01:28:21.690218 containerd[1579]: 2026-03-11 01:28:21.281 [INFO][4458] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c8257c2ca4c712a8c25336fa0a63a2d5eb8b12ad9171ee2059e1e2d0a7e4d36f" host="localhost" Mar 11 01:28:21.694421 containerd[1579]: 2026-03-11 01:28:21.320 [INFO][4458] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c8257c2ca4c712a8c25336fa0a63a2d5eb8b12ad9171ee2059e1e2d0a7e4d36f Mar 11 01:28:21.694421 containerd[1579]: 2026-03-11 01:28:21.425 [INFO][4458] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c8257c2ca4c712a8c25336fa0a63a2d5eb8b12ad9171ee2059e1e2d0a7e4d36f" host="localhost" Mar 11 
01:28:21.694421 containerd[1579]: 2026-03-11 01:28:21.519 [INFO][4458] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.c8257c2ca4c712a8c25336fa0a63a2d5eb8b12ad9171ee2059e1e2d0a7e4d36f" host="localhost" Mar 11 01:28:21.694421 containerd[1579]: 2026-03-11 01:28:21.522 [INFO][4458] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.c8257c2ca4c712a8c25336fa0a63a2d5eb8b12ad9171ee2059e1e2d0a7e4d36f" host="localhost" Mar 11 01:28:21.694421 containerd[1579]: 2026-03-11 01:28:21.523 [INFO][4458] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:28:21.694421 containerd[1579]: 2026-03-11 01:28:21.523 [INFO][4458] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="c8257c2ca4c712a8c25336fa0a63a2d5eb8b12ad9171ee2059e1e2d0a7e4d36f" HandleID="k8s-pod-network.c8257c2ca4c712a8c25336fa0a63a2d5eb8b12ad9171ee2059e1e2d0a7e4d36f" Workload="localhost-k8s-coredns--66bc5c9577--zf2ch-eth0" Mar 11 01:28:21.694678 containerd[1579]: 2026-03-11 01:28:21.553 [INFO][4428] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c8257c2ca4c712a8c25336fa0a63a2d5eb8b12ad9171ee2059e1e2d0a7e4d36f" Namespace="kube-system" Pod="coredns-66bc5c9577-zf2ch" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zf2ch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--zf2ch-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"487b1154-bb7d-4368-854f-a2c8c373f6d0", ResourceVersion:"1091", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 26, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-zf2ch", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1471ba15ac0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:28:21.694678 containerd[1579]: 2026-03-11 01:28:21.553 [INFO][4428] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="c8257c2ca4c712a8c25336fa0a63a2d5eb8b12ad9171ee2059e1e2d0a7e4d36f" Namespace="kube-system" Pod="coredns-66bc5c9577-zf2ch" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zf2ch-eth0" Mar 11 01:28:21.694678 containerd[1579]: 2026-03-11 01:28:21.554 [INFO][4428] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1471ba15ac0 
ContainerID="c8257c2ca4c712a8c25336fa0a63a2d5eb8b12ad9171ee2059e1e2d0a7e4d36f" Namespace="kube-system" Pod="coredns-66bc5c9577-zf2ch" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zf2ch-eth0" Mar 11 01:28:21.694678 containerd[1579]: 2026-03-11 01:28:21.613 [INFO][4428] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c8257c2ca4c712a8c25336fa0a63a2d5eb8b12ad9171ee2059e1e2d0a7e4d36f" Namespace="kube-system" Pod="coredns-66bc5c9577-zf2ch" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zf2ch-eth0" Mar 11 01:28:21.694678 containerd[1579]: 2026-03-11 01:28:21.643 [INFO][4428] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c8257c2ca4c712a8c25336fa0a63a2d5eb8b12ad9171ee2059e1e2d0a7e4d36f" Namespace="kube-system" Pod="coredns-66bc5c9577-zf2ch" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zf2ch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--zf2ch-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"487b1154-bb7d-4368-854f-a2c8c373f6d0", ResourceVersion:"1091", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 26, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c8257c2ca4c712a8c25336fa0a63a2d5eb8b12ad9171ee2059e1e2d0a7e4d36f", Pod:"coredns-66bc5c9577-zf2ch", Endpoint:"eth0", ServiceAccountName:"coredns", 
IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1471ba15ac0", MAC:"3a:92:ad:1c:6c:e7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:28:21.694678 containerd[1579]: 2026-03-11 01:28:21.677 [INFO][4428] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c8257c2ca4c712a8c25336fa0a63a2d5eb8b12ad9171ee2059e1e2d0a7e4d36f" Namespace="kube-system" Pod="coredns-66bc5c9577-zf2ch" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zf2ch-eth0" Mar 11 01:28:21.819798 containerd[1579]: time="2026-03-11T01:28:21.818782996Z" level=info msg="connecting to shim c8257c2ca4c712a8c25336fa0a63a2d5eb8b12ad9171ee2059e1e2d0a7e4d36f" address="unix:///run/containerd/s/fc5e375581e8b290a8e13fbe518ebf0522b06cbfacbc38eea5e738308d748ed7" namespace=k8s.io protocol=ttrpc version=3 Mar 11 01:28:21.957705 systemd[1]: Started cri-containerd-c8257c2ca4c712a8c25336fa0a63a2d5eb8b12ad9171ee2059e1e2d0a7e4d36f.scope - libcontainer container c8257c2ca4c712a8c25336fa0a63a2d5eb8b12ad9171ee2059e1e2d0a7e4d36f. 
Mar 11 01:28:22.073483 systemd-networkd[1466]: vxlan.calico: Gained IPv6LL Mar 11 01:28:22.118403 systemd-networkd[1466]: cali02841303924: Link UP Mar 11 01:28:22.163349 systemd-networkd[1466]: cali02841303924: Gained carrier Mar 11 01:28:22.203785 systemd-resolved[1394]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 11 01:28:22.262821 kubelet[2857]: E0311 01:28:22.262739 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:28:22.270380 containerd[1579]: time="2026-03-11T01:28:22.270311077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-284rx,Uid:9424aa5f-c89c-4c89-a1c1-04c856e2b5f0,Namespace:kube-system,Attempt:0,}" Mar 11 01:28:22.330862 containerd[1579]: 2026-03-11 01:28:20.328 [INFO][4397] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7477699c4c--n9gg4-eth0 calico-apiserver-7477699c4c- calico-system 6f97091f-a4dd-48a5-90c6-376038fd2d9a 1090 0 2026-03-11 01:26:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7477699c4c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7477699c4c-n9gg4 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali02841303924 [] [] }} ContainerID="04b32efac5491b708bc22eb4ede85eb7ed3ee21025ade3cda0d96001830c53d3" Namespace="calico-system" Pod="calico-apiserver-7477699c4c-n9gg4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7477699c4c--n9gg4-" Mar 11 01:28:22.330862 containerd[1579]: 2026-03-11 01:28:20.329 [INFO][4397] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="04b32efac5491b708bc22eb4ede85eb7ed3ee21025ade3cda0d96001830c53d3" Namespace="calico-system" Pod="calico-apiserver-7477699c4c-n9gg4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7477699c4c--n9gg4-eth0" Mar 11 01:28:22.330862 containerd[1579]: 2026-03-11 01:28:20.966 [INFO][4486] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="04b32efac5491b708bc22eb4ede85eb7ed3ee21025ade3cda0d96001830c53d3" HandleID="k8s-pod-network.04b32efac5491b708bc22eb4ede85eb7ed3ee21025ade3cda0d96001830c53d3" Workload="localhost-k8s-calico--apiserver--7477699c4c--n9gg4-eth0" Mar 11 01:28:22.330862 containerd[1579]: 2026-03-11 01:28:21.015 [INFO][4486] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="04b32efac5491b708bc22eb4ede85eb7ed3ee21025ade3cda0d96001830c53d3" HandleID="k8s-pod-network.04b32efac5491b708bc22eb4ede85eb7ed3ee21025ade3cda0d96001830c53d3" Workload="localhost-k8s-calico--apiserver--7477699c4c--n9gg4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00042bdc0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-7477699c4c-n9gg4", "timestamp":"2026-03-11 01:28:20.966958592 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000490dc0)} Mar 11 01:28:22.330862 containerd[1579]: 2026-03-11 01:28:21.015 [INFO][4486] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:28:22.330862 containerd[1579]: 2026-03-11 01:28:21.529 [INFO][4486] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 11 01:28:22.330862 containerd[1579]: 2026-03-11 01:28:21.530 [INFO][4486] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 11 01:28:22.330862 containerd[1579]: 2026-03-11 01:28:21.582 [INFO][4486] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.04b32efac5491b708bc22eb4ede85eb7ed3ee21025ade3cda0d96001830c53d3" host="localhost" Mar 11 01:28:22.330862 containerd[1579]: 2026-03-11 01:28:21.668 [INFO][4486] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 11 01:28:22.330862 containerd[1579]: 2026-03-11 01:28:21.727 [INFO][4486] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 11 01:28:22.330862 containerd[1579]: 2026-03-11 01:28:21.748 [INFO][4486] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 11 01:28:22.330862 containerd[1579]: 2026-03-11 01:28:21.774 [INFO][4486] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 11 01:28:22.330862 containerd[1579]: 2026-03-11 01:28:21.774 [INFO][4486] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.04b32efac5491b708bc22eb4ede85eb7ed3ee21025ade3cda0d96001830c53d3" host="localhost" Mar 11 01:28:22.330862 containerd[1579]: 2026-03-11 01:28:21.803 [INFO][4486] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.04b32efac5491b708bc22eb4ede85eb7ed3ee21025ade3cda0d96001830c53d3 Mar 11 01:28:22.330862 containerd[1579]: 2026-03-11 01:28:21.845 [INFO][4486] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.04b32efac5491b708bc22eb4ede85eb7ed3ee21025ade3cda0d96001830c53d3" host="localhost" Mar 11 01:28:22.330862 containerd[1579]: 2026-03-11 01:28:21.915 [INFO][4486] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.04b32efac5491b708bc22eb4ede85eb7ed3ee21025ade3cda0d96001830c53d3" host="localhost" Mar 11 01:28:22.330862 containerd[1579]: 2026-03-11 01:28:21.920 [INFO][4486] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.04b32efac5491b708bc22eb4ede85eb7ed3ee21025ade3cda0d96001830c53d3" host="localhost" Mar 11 01:28:22.330862 containerd[1579]: 2026-03-11 01:28:21.920 [INFO][4486] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:28:22.330862 containerd[1579]: 2026-03-11 01:28:21.957 [INFO][4486] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="04b32efac5491b708bc22eb4ede85eb7ed3ee21025ade3cda0d96001830c53d3" HandleID="k8s-pod-network.04b32efac5491b708bc22eb4ede85eb7ed3ee21025ade3cda0d96001830c53d3" Workload="localhost-k8s-calico--apiserver--7477699c4c--n9gg4-eth0" Mar 11 01:28:22.332059 containerd[1579]: 2026-03-11 01:28:21.997 [INFO][4397] cni-plugin/k8s.go 418: Populated endpoint ContainerID="04b32efac5491b708bc22eb4ede85eb7ed3ee21025ade3cda0d96001830c53d3" Namespace="calico-system" Pod="calico-apiserver-7477699c4c-n9gg4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7477699c4c--n9gg4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7477699c4c--n9gg4-eth0", GenerateName:"calico-apiserver-7477699c4c-", Namespace:"calico-system", SelfLink:"", UID:"6f97091f-a4dd-48a5-90c6-376038fd2d9a", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 26, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7477699c4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7477699c4c-n9gg4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali02841303924", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:28:22.332059 containerd[1579]: 2026-03-11 01:28:21.998 [INFO][4397] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="04b32efac5491b708bc22eb4ede85eb7ed3ee21025ade3cda0d96001830c53d3" Namespace="calico-system" Pod="calico-apiserver-7477699c4c-n9gg4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7477699c4c--n9gg4-eth0" Mar 11 01:28:22.332059 containerd[1579]: 2026-03-11 01:28:21.998 [INFO][4397] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali02841303924 ContainerID="04b32efac5491b708bc22eb4ede85eb7ed3ee21025ade3cda0d96001830c53d3" Namespace="calico-system" Pod="calico-apiserver-7477699c4c-n9gg4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7477699c4c--n9gg4-eth0" Mar 11 01:28:22.332059 containerd[1579]: 2026-03-11 01:28:22.217 [INFO][4397] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="04b32efac5491b708bc22eb4ede85eb7ed3ee21025ade3cda0d96001830c53d3" Namespace="calico-system" Pod="calico-apiserver-7477699c4c-n9gg4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7477699c4c--n9gg4-eth0" Mar 11 01:28:22.332059 containerd[1579]: 2026-03-11 01:28:22.242 [INFO][4397] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="04b32efac5491b708bc22eb4ede85eb7ed3ee21025ade3cda0d96001830c53d3" Namespace="calico-system" Pod="calico-apiserver-7477699c4c-n9gg4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7477699c4c--n9gg4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7477699c4c--n9gg4-eth0", GenerateName:"calico-apiserver-7477699c4c-", Namespace:"calico-system", SelfLink:"", UID:"6f97091f-a4dd-48a5-90c6-376038fd2d9a", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 26, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7477699c4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"04b32efac5491b708bc22eb4ede85eb7ed3ee21025ade3cda0d96001830c53d3", Pod:"calico-apiserver-7477699c4c-n9gg4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali02841303924", MAC:"9a:f9:fc:b9:d4:ed", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:28:22.332059 containerd[1579]: 2026-03-11 01:28:22.321 [INFO][4397] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="04b32efac5491b708bc22eb4ede85eb7ed3ee21025ade3cda0d96001830c53d3" Namespace="calico-system" Pod="calico-apiserver-7477699c4c-n9gg4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7477699c4c--n9gg4-eth0" Mar 11 01:28:22.619631 systemd-networkd[1466]: calif464161bccb: Link UP Mar 11 01:28:22.684084 systemd-networkd[1466]: calif464161bccb: Gained carrier Mar 11 01:28:22.693355 systemd-networkd[1466]: cali1471ba15ac0: Gained IPv6LL Mar 11 01:28:22.772598 containerd[1579]: time="2026-03-11T01:28:22.772545021Z" level=info msg="connecting to shim 04b32efac5491b708bc22eb4ede85eb7ed3ee21025ade3cda0d96001830c53d3" address="unix:///run/containerd/s/e005906c7b8ad0cc3c7c900c9d5d872d013736fa0d9f2f4628eff07ee77666cf" namespace=k8s.io protocol=ttrpc version=3 Mar 11 01:28:22.774191 containerd[1579]: 2026-03-11 01:28:20.264 [INFO][4403] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7477699c4c--wwhh9-eth0 calico-apiserver-7477699c4c- calico-system cd75dd3d-50be-4617-ad26-c09e377b47a8 1095 0 2026-03-11 01:26:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7477699c4c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7477699c4c-wwhh9 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calif464161bccb [] [] }} ContainerID="c4dc3b48f6a4aaf2d2d8324c515dd87b879b7669e88cc192076045110f1cc3c2" Namespace="calico-system" Pod="calico-apiserver-7477699c4c-wwhh9" WorkloadEndpoint="localhost-k8s-calico--apiserver--7477699c4c--wwhh9-" Mar 11 01:28:22.774191 containerd[1579]: 2026-03-11 01:28:20.264 [INFO][4403] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c4dc3b48f6a4aaf2d2d8324c515dd87b879b7669e88cc192076045110f1cc3c2" Namespace="calico-system" 
Pod="calico-apiserver-7477699c4c-wwhh9" WorkloadEndpoint="localhost-k8s-calico--apiserver--7477699c4c--wwhh9-eth0" Mar 11 01:28:22.774191 containerd[1579]: 2026-03-11 01:28:20.977 [INFO][4475] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c4dc3b48f6a4aaf2d2d8324c515dd87b879b7669e88cc192076045110f1cc3c2" HandleID="k8s-pod-network.c4dc3b48f6a4aaf2d2d8324c515dd87b879b7669e88cc192076045110f1cc3c2" Workload="localhost-k8s-calico--apiserver--7477699c4c--wwhh9-eth0" Mar 11 01:28:22.774191 containerd[1579]: 2026-03-11 01:28:21.042 [INFO][4475] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c4dc3b48f6a4aaf2d2d8324c515dd87b879b7669e88cc192076045110f1cc3c2" HandleID="k8s-pod-network.c4dc3b48f6a4aaf2d2d8324c515dd87b879b7669e88cc192076045110f1cc3c2" Workload="localhost-k8s-calico--apiserver--7477699c4c--wwhh9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c8640), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-7477699c4c-wwhh9", "timestamp":"2026-03-11 01:28:20.977381452 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003026e0)} Mar 11 01:28:22.774191 containerd[1579]: 2026-03-11 01:28:21.043 [INFO][4475] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:28:22.774191 containerd[1579]: 2026-03-11 01:28:21.920 [INFO][4475] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 11 01:28:22.774191 containerd[1579]: 2026-03-11 01:28:21.965 [INFO][4475] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 11 01:28:22.774191 containerd[1579]: 2026-03-11 01:28:22.103 [INFO][4475] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c4dc3b48f6a4aaf2d2d8324c515dd87b879b7669e88cc192076045110f1cc3c2" host="localhost" Mar 11 01:28:22.774191 containerd[1579]: 2026-03-11 01:28:22.170 [INFO][4475] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 11 01:28:22.774191 containerd[1579]: 2026-03-11 01:28:22.207 [INFO][4475] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 11 01:28:22.774191 containerd[1579]: 2026-03-11 01:28:22.259 [INFO][4475] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 11 01:28:22.774191 containerd[1579]: 2026-03-11 01:28:22.278 [INFO][4475] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 11 01:28:22.774191 containerd[1579]: 2026-03-11 01:28:22.278 [INFO][4475] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c4dc3b48f6a4aaf2d2d8324c515dd87b879b7669e88cc192076045110f1cc3c2" host="localhost" Mar 11 01:28:22.774191 containerd[1579]: 2026-03-11 01:28:22.290 [INFO][4475] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c4dc3b48f6a4aaf2d2d8324c515dd87b879b7669e88cc192076045110f1cc3c2 Mar 11 01:28:22.774191 containerd[1579]: 2026-03-11 01:28:22.347 [INFO][4475] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c4dc3b48f6a4aaf2d2d8324c515dd87b879b7669e88cc192076045110f1cc3c2" host="localhost" Mar 11 01:28:22.774191 containerd[1579]: 2026-03-11 01:28:22.455 [INFO][4475] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.c4dc3b48f6a4aaf2d2d8324c515dd87b879b7669e88cc192076045110f1cc3c2" host="localhost" Mar 11 01:28:22.774191 containerd[1579]: 2026-03-11 01:28:22.455 [INFO][4475] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.c4dc3b48f6a4aaf2d2d8324c515dd87b879b7669e88cc192076045110f1cc3c2" host="localhost" Mar 11 01:28:22.774191 containerd[1579]: 2026-03-11 01:28:22.455 [INFO][4475] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:28:22.774191 containerd[1579]: 2026-03-11 01:28:22.455 [INFO][4475] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="c4dc3b48f6a4aaf2d2d8324c515dd87b879b7669e88cc192076045110f1cc3c2" HandleID="k8s-pod-network.c4dc3b48f6a4aaf2d2d8324c515dd87b879b7669e88cc192076045110f1cc3c2" Workload="localhost-k8s-calico--apiserver--7477699c4c--wwhh9-eth0" Mar 11 01:28:22.779377 containerd[1579]: 2026-03-11 01:28:22.471 [INFO][4403] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c4dc3b48f6a4aaf2d2d8324c515dd87b879b7669e88cc192076045110f1cc3c2" Namespace="calico-system" Pod="calico-apiserver-7477699c4c-wwhh9" WorkloadEndpoint="localhost-k8s-calico--apiserver--7477699c4c--wwhh9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7477699c4c--wwhh9-eth0", GenerateName:"calico-apiserver-7477699c4c-", Namespace:"calico-system", SelfLink:"", UID:"cd75dd3d-50be-4617-ad26-c09e377b47a8", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 26, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7477699c4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7477699c4c-wwhh9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif464161bccb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:28:22.779377 containerd[1579]: 2026-03-11 01:28:22.473 [INFO][4403] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="c4dc3b48f6a4aaf2d2d8324c515dd87b879b7669e88cc192076045110f1cc3c2" Namespace="calico-system" Pod="calico-apiserver-7477699c4c-wwhh9" WorkloadEndpoint="localhost-k8s-calico--apiserver--7477699c4c--wwhh9-eth0" Mar 11 01:28:22.779377 containerd[1579]: 2026-03-11 01:28:22.473 [INFO][4403] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif464161bccb ContainerID="c4dc3b48f6a4aaf2d2d8324c515dd87b879b7669e88cc192076045110f1cc3c2" Namespace="calico-system" Pod="calico-apiserver-7477699c4c-wwhh9" WorkloadEndpoint="localhost-k8s-calico--apiserver--7477699c4c--wwhh9-eth0" Mar 11 01:28:22.779377 containerd[1579]: 2026-03-11 01:28:22.622 [INFO][4403] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c4dc3b48f6a4aaf2d2d8324c515dd87b879b7669e88cc192076045110f1cc3c2" Namespace="calico-system" Pod="calico-apiserver-7477699c4c-wwhh9" WorkloadEndpoint="localhost-k8s-calico--apiserver--7477699c4c--wwhh9-eth0" Mar 11 01:28:22.779377 containerd[1579]: 2026-03-11 01:28:22.673 [INFO][4403] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="c4dc3b48f6a4aaf2d2d8324c515dd87b879b7669e88cc192076045110f1cc3c2" Namespace="calico-system" Pod="calico-apiserver-7477699c4c-wwhh9" WorkloadEndpoint="localhost-k8s-calico--apiserver--7477699c4c--wwhh9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7477699c4c--wwhh9-eth0", GenerateName:"calico-apiserver-7477699c4c-", Namespace:"calico-system", SelfLink:"", UID:"cd75dd3d-50be-4617-ad26-c09e377b47a8", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 26, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7477699c4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c4dc3b48f6a4aaf2d2d8324c515dd87b879b7669e88cc192076045110f1cc3c2", Pod:"calico-apiserver-7477699c4c-wwhh9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif464161bccb", MAC:"4e:e5:9a:5e:66:77", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:28:22.779377 containerd[1579]: 2026-03-11 01:28:22.759 [INFO][4403] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="c4dc3b48f6a4aaf2d2d8324c515dd87b879b7669e88cc192076045110f1cc3c2" Namespace="calico-system" Pod="calico-apiserver-7477699c4c-wwhh9" WorkloadEndpoint="localhost-k8s-calico--apiserver--7477699c4c--wwhh9-eth0" Mar 11 01:28:22.871627 containerd[1579]: time="2026-03-11T01:28:22.871402549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zf2ch,Uid:487b1154-bb7d-4368-854f-a2c8c373f6d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"c8257c2ca4c712a8c25336fa0a63a2d5eb8b12ad9171ee2059e1e2d0a7e4d36f\"" Mar 11 01:28:23.146331 kubelet[2857]: E0311 01:28:23.137034 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:28:23.179711 systemd-networkd[1466]: calie4210bc1dd5: Link UP Mar 11 01:28:23.187319 systemd-networkd[1466]: calie4210bc1dd5: Gained carrier Mar 11 01:28:23.255222 containerd[1579]: time="2026-03-11T01:28:23.252485494Z" level=info msg="CreateContainer within sandbox \"c8257c2ca4c712a8c25336fa0a63a2d5eb8b12ad9171ee2059e1e2d0a7e4d36f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 11 01:28:23.366866 containerd[1579]: 2026-03-11 01:28:21.132 [INFO][4484] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--45sfz-eth0 csi-node-driver- calico-system bd5d987a-c72c-4abd-9acd-7762cad20217 824 0 2026-03-11 01:26:50 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-45sfz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie4210bc1dd5 [] [] }} 
ContainerID="84243771c733e465fed95ee45179b96ef24d9259c56bd27d30b8b496ed40765b" Namespace="calico-system" Pod="csi-node-driver-45sfz" WorkloadEndpoint="localhost-k8s-csi--node--driver--45sfz-" Mar 11 01:28:23.366866 containerd[1579]: 2026-03-11 01:28:21.141 [INFO][4484] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="84243771c733e465fed95ee45179b96ef24d9259c56bd27d30b8b496ed40765b" Namespace="calico-system" Pod="csi-node-driver-45sfz" WorkloadEndpoint="localhost-k8s-csi--node--driver--45sfz-eth0" Mar 11 01:28:23.366866 containerd[1579]: 2026-03-11 01:28:21.712 [INFO][4531] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="84243771c733e465fed95ee45179b96ef24d9259c56bd27d30b8b496ed40765b" HandleID="k8s-pod-network.84243771c733e465fed95ee45179b96ef24d9259c56bd27d30b8b496ed40765b" Workload="localhost-k8s-csi--node--driver--45sfz-eth0" Mar 11 01:28:23.366866 containerd[1579]: 2026-03-11 01:28:21.743 [INFO][4531] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="84243771c733e465fed95ee45179b96ef24d9259c56bd27d30b8b496ed40765b" HandleID="k8s-pod-network.84243771c733e465fed95ee45179b96ef24d9259c56bd27d30b8b496ed40765b" Workload="localhost-k8s-csi--node--driver--45sfz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f450), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-45sfz", "timestamp":"2026-03-11 01:28:21.712200334 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002062c0)} Mar 11 01:28:23.366866 containerd[1579]: 2026-03-11 01:28:21.743 [INFO][4531] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 11 01:28:23.366866 containerd[1579]: 2026-03-11 01:28:22.457 [INFO][4531] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 01:28:23.366866 containerd[1579]: 2026-03-11 01:28:22.457 [INFO][4531] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 11 01:28:23.366866 containerd[1579]: 2026-03-11 01:28:22.533 [INFO][4531] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.84243771c733e465fed95ee45179b96ef24d9259c56bd27d30b8b496ed40765b" host="localhost" Mar 11 01:28:23.366866 containerd[1579]: 2026-03-11 01:28:22.575 [INFO][4531] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 11 01:28:23.366866 containerd[1579]: 2026-03-11 01:28:22.648 [INFO][4531] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 11 01:28:23.366866 containerd[1579]: 2026-03-11 01:28:22.665 [INFO][4531] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 11 01:28:23.366866 containerd[1579]: 2026-03-11 01:28:22.688 [INFO][4531] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 11 01:28:23.366866 containerd[1579]: 2026-03-11 01:28:22.689 [INFO][4531] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.84243771c733e465fed95ee45179b96ef24d9259c56bd27d30b8b496ed40765b" host="localhost" Mar 11 01:28:23.366866 containerd[1579]: 2026-03-11 01:28:22.722 [INFO][4531] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.84243771c733e465fed95ee45179b96ef24d9259c56bd27d30b8b496ed40765b Mar 11 01:28:23.366866 containerd[1579]: 2026-03-11 01:28:22.770 [INFO][4531] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.84243771c733e465fed95ee45179b96ef24d9259c56bd27d30b8b496ed40765b" host="localhost" Mar 11 01:28:23.366866 containerd[1579]: 2026-03-11 01:28:22.830 [INFO][4531] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.84243771c733e465fed95ee45179b96ef24d9259c56bd27d30b8b496ed40765b" host="localhost" Mar 11 01:28:23.366866 containerd[1579]: 2026-03-11 01:28:22.830 [INFO][4531] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.84243771c733e465fed95ee45179b96ef24d9259c56bd27d30b8b496ed40765b" host="localhost" Mar 11 01:28:23.366866 containerd[1579]: 2026-03-11 01:28:22.836 [INFO][4531] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:28:23.366866 containerd[1579]: 2026-03-11 01:28:22.853 [INFO][4531] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="84243771c733e465fed95ee45179b96ef24d9259c56bd27d30b8b496ed40765b" HandleID="k8s-pod-network.84243771c733e465fed95ee45179b96ef24d9259c56bd27d30b8b496ed40765b" Workload="localhost-k8s-csi--node--driver--45sfz-eth0" Mar 11 01:28:23.372900 containerd[1579]: 2026-03-11 01:28:22.904 [INFO][4484] cni-plugin/k8s.go 418: Populated endpoint ContainerID="84243771c733e465fed95ee45179b96ef24d9259c56bd27d30b8b496ed40765b" Namespace="calico-system" Pod="csi-node-driver-45sfz" WorkloadEndpoint="localhost-k8s-csi--node--driver--45sfz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--45sfz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bd5d987a-c72c-4abd-9acd-7762cad20217", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 26, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-45sfz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie4210bc1dd5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:28:23.372900 containerd[1579]: 2026-03-11 01:28:22.904 [INFO][4484] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="84243771c733e465fed95ee45179b96ef24d9259c56bd27d30b8b496ed40765b" Namespace="calico-system" Pod="csi-node-driver-45sfz" WorkloadEndpoint="localhost-k8s-csi--node--driver--45sfz-eth0" Mar 11 01:28:23.372900 containerd[1579]: 2026-03-11 01:28:22.905 [INFO][4484] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie4210bc1dd5 ContainerID="84243771c733e465fed95ee45179b96ef24d9259c56bd27d30b8b496ed40765b" Namespace="calico-system" Pod="csi-node-driver-45sfz" WorkloadEndpoint="localhost-k8s-csi--node--driver--45sfz-eth0" Mar 11 01:28:23.372900 containerd[1579]: 2026-03-11 01:28:23.188 [INFO][4484] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="84243771c733e465fed95ee45179b96ef24d9259c56bd27d30b8b496ed40765b" Namespace="calico-system" Pod="csi-node-driver-45sfz" WorkloadEndpoint="localhost-k8s-csi--node--driver--45sfz-eth0" Mar 11 01:28:23.372900 containerd[1579]: 2026-03-11 01:28:23.190 [INFO][4484] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="84243771c733e465fed95ee45179b96ef24d9259c56bd27d30b8b496ed40765b" Namespace="calico-system" Pod="csi-node-driver-45sfz" WorkloadEndpoint="localhost-k8s-csi--node--driver--45sfz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--45sfz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bd5d987a-c72c-4abd-9acd-7762cad20217", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 26, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"84243771c733e465fed95ee45179b96ef24d9259c56bd27d30b8b496ed40765b", Pod:"csi-node-driver-45sfz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie4210bc1dd5", MAC:"fa:93:ae:28:61:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:28:23.372900 containerd[1579]: 2026-03-11 01:28:23.326 [INFO][4484] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="84243771c733e465fed95ee45179b96ef24d9259c56bd27d30b8b496ed40765b" 
Namespace="calico-system" Pod="csi-node-driver-45sfz" WorkloadEndpoint="localhost-k8s-csi--node--driver--45sfz-eth0" Mar 11 01:28:23.994039 systemd-networkd[1466]: cali02841303924: Gained IPv6LL Mar 11 01:28:24.372205 systemd-networkd[1466]: calif464161bccb: Gained IPv6LL Mar 11 01:28:25.194525 systemd-networkd[1466]: calie4210bc1dd5: Gained IPv6LL Mar 11 01:28:25.767934 systemd[1]: Started cri-containerd-04b32efac5491b708bc22eb4ede85eb7ed3ee21025ade3cda0d96001830c53d3.scope - libcontainer container 04b32efac5491b708bc22eb4ede85eb7ed3ee21025ade3cda0d96001830c53d3. Mar 11 01:28:25.955212 systemd-resolved[1394]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 11 01:28:26.086781 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1716119935.mount: Deactivated successfully. Mar 11 01:28:26.164324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount507105275.mount: Deactivated successfully. Mar 11 01:28:26.185811 containerd[1579]: time="2026-03-11T01:28:26.185719494Z" level=info msg="Container 80f5291d1e98eec24795fbb9c1cecfb7b1c009c3bbbe6074fc8fe5adf5329852: CDI devices from CRI Config.CDIDevices: []" Mar 11 01:28:26.241647 containerd[1579]: time="2026-03-11T01:28:26.240288567Z" level=info msg="CreateContainer within sandbox \"c8257c2ca4c712a8c25336fa0a63a2d5eb8b12ad9171ee2059e1e2d0a7e4d36f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"80f5291d1e98eec24795fbb9c1cecfb7b1c009c3bbbe6074fc8fe5adf5329852\"" Mar 11 01:28:26.260228 containerd[1579]: time="2026-03-11T01:28:26.241952980Z" level=info msg="StartContainer for \"80f5291d1e98eec24795fbb9c1cecfb7b1c009c3bbbe6074fc8fe5adf5329852\"" Mar 11 01:28:26.260228 containerd[1579]: time="2026-03-11T01:28:26.245866218Z" level=info msg="connecting to shim 80f5291d1e98eec24795fbb9c1cecfb7b1c009c3bbbe6074fc8fe5adf5329852" address="unix:///run/containerd/s/fc5e375581e8b290a8e13fbe518ebf0522b06cbfacbc38eea5e738308d748ed7" protocol=ttrpc 
version=3 Mar 11 01:28:26.256816 systemd-networkd[1466]: caliae26d9911d5: Link UP Mar 11 01:28:26.275283 systemd-networkd[1466]: caliae26d9911d5: Gained carrier Mar 11 01:28:26.328308 containerd[1579]: time="2026-03-11T01:28:26.328247705Z" level=info msg="connecting to shim c4dc3b48f6a4aaf2d2d8324c515dd87b879b7669e88cc192076045110f1cc3c2" address="unix:///run/containerd/s/015ae817e32b6adae2050b505a60d3ad230347dce2078f569781917677b2ed38" namespace=k8s.io protocol=ttrpc version=3 Mar 11 01:28:26.448657 containerd[1579]: time="2026-03-11T01:28:26.439715228Z" level=info msg="connecting to shim 84243771c733e465fed95ee45179b96ef24d9259c56bd27d30b8b496ed40765b" address="unix:///run/containerd/s/875b89a4ab089a93f375ecbcd437e7abcb02577bcf5cac4a62a49088746ee935" namespace=k8s.io protocol=ttrpc version=3 Mar 11 01:28:26.475247 containerd[1579]: 2026-03-11 01:28:21.673 [INFO][4553] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--cccfbd5cf--6mzpb-eth0 goldmane-cccfbd5cf- calico-system 28bc3cbd-69fd-4a80-92d0-eceef32616bf 1093 0 2026-03-11 01:26:49 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-cccfbd5cf-6mzpb eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliae26d9911d5 [] [] }} ContainerID="86a6e68cb83eb28bc71f7ccafd9b57eef4a9ffaf53d3b465bfb909748c958cc6" Namespace="calico-system" Pod="goldmane-cccfbd5cf-6mzpb" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--6mzpb-" Mar 11 01:28:26.475247 containerd[1579]: 2026-03-11 01:28:21.686 [INFO][4553] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="86a6e68cb83eb28bc71f7ccafd9b57eef4a9ffaf53d3b465bfb909748c958cc6" Namespace="calico-system" Pod="goldmane-cccfbd5cf-6mzpb" 
WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--6mzpb-eth0" Mar 11 01:28:26.475247 containerd[1579]: 2026-03-11 01:28:21.887 [INFO][4578] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="86a6e68cb83eb28bc71f7ccafd9b57eef4a9ffaf53d3b465bfb909748c958cc6" HandleID="k8s-pod-network.86a6e68cb83eb28bc71f7ccafd9b57eef4a9ffaf53d3b465bfb909748c958cc6" Workload="localhost-k8s-goldmane--cccfbd5cf--6mzpb-eth0" Mar 11 01:28:26.475247 containerd[1579]: 2026-03-11 01:28:22.084 [INFO][4578] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="86a6e68cb83eb28bc71f7ccafd9b57eef4a9ffaf53d3b465bfb909748c958cc6" HandleID="k8s-pod-network.86a6e68cb83eb28bc71f7ccafd9b57eef4a9ffaf53d3b465bfb909748c958cc6" Workload="localhost-k8s-goldmane--cccfbd5cf--6mzpb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032ac10), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-cccfbd5cf-6mzpb", "timestamp":"2026-03-11 01:28:21.887496087 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00027e6e0)} Mar 11 01:28:26.475247 containerd[1579]: 2026-03-11 01:28:22.085 [INFO][4578] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:28:26.475247 containerd[1579]: 2026-03-11 01:28:22.831 [INFO][4578] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 11 01:28:26.475247 containerd[1579]: 2026-03-11 01:28:22.831 [INFO][4578] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 11 01:28:26.475247 containerd[1579]: 2026-03-11 01:28:22.890 [INFO][4578] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.86a6e68cb83eb28bc71f7ccafd9b57eef4a9ffaf53d3b465bfb909748c958cc6" host="localhost" Mar 11 01:28:26.475247 containerd[1579]: 2026-03-11 01:28:23.313 [INFO][4578] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 11 01:28:26.475247 containerd[1579]: 2026-03-11 01:28:25.712 [INFO][4578] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 11 01:28:26.475247 containerd[1579]: 2026-03-11 01:28:25.779 [INFO][4578] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 11 01:28:26.475247 containerd[1579]: 2026-03-11 01:28:25.827 [INFO][4578] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 11 01:28:26.475247 containerd[1579]: 2026-03-11 01:28:25.832 [INFO][4578] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.86a6e68cb83eb28bc71f7ccafd9b57eef4a9ffaf53d3b465bfb909748c958cc6" host="localhost" Mar 11 01:28:26.475247 containerd[1579]: 2026-03-11 01:28:25.915 [INFO][4578] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.86a6e68cb83eb28bc71f7ccafd9b57eef4a9ffaf53d3b465bfb909748c958cc6 Mar 11 01:28:26.475247 containerd[1579]: 2026-03-11 01:28:26.029 [INFO][4578] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.86a6e68cb83eb28bc71f7ccafd9b57eef4a9ffaf53d3b465bfb909748c958cc6" host="localhost" Mar 11 01:28:26.475247 containerd[1579]: 2026-03-11 01:28:26.122 [INFO][4578] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.86a6e68cb83eb28bc71f7ccafd9b57eef4a9ffaf53d3b465bfb909748c958cc6" host="localhost" Mar 11 01:28:26.475247 containerd[1579]: 2026-03-11 01:28:26.123 [INFO][4578] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.86a6e68cb83eb28bc71f7ccafd9b57eef4a9ffaf53d3b465bfb909748c958cc6" host="localhost" Mar 11 01:28:26.475247 containerd[1579]: 2026-03-11 01:28:26.124 [INFO][4578] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:28:26.475247 containerd[1579]: 2026-03-11 01:28:26.124 [INFO][4578] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="86a6e68cb83eb28bc71f7ccafd9b57eef4a9ffaf53d3b465bfb909748c958cc6" HandleID="k8s-pod-network.86a6e68cb83eb28bc71f7ccafd9b57eef4a9ffaf53d3b465bfb909748c958cc6" Workload="localhost-k8s-goldmane--cccfbd5cf--6mzpb-eth0" Mar 11 01:28:26.476272 containerd[1579]: 2026-03-11 01:28:26.166 [INFO][4553] cni-plugin/k8s.go 418: Populated endpoint ContainerID="86a6e68cb83eb28bc71f7ccafd9b57eef4a9ffaf53d3b465bfb909748c958cc6" Namespace="calico-system" Pod="goldmane-cccfbd5cf-6mzpb" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--6mzpb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--6mzpb-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"28bc3cbd-69fd-4a80-92d0-eceef32616bf", ResourceVersion:"1093", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 26, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-cccfbd5cf-6mzpb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliae26d9911d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:28:26.476272 containerd[1579]: 2026-03-11 01:28:26.167 [INFO][4553] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="86a6e68cb83eb28bc71f7ccafd9b57eef4a9ffaf53d3b465bfb909748c958cc6" Namespace="calico-system" Pod="goldmane-cccfbd5cf-6mzpb" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--6mzpb-eth0" Mar 11 01:28:26.476272 containerd[1579]: 2026-03-11 01:28:26.167 [INFO][4553] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliae26d9911d5 ContainerID="86a6e68cb83eb28bc71f7ccafd9b57eef4a9ffaf53d3b465bfb909748c958cc6" Namespace="calico-system" Pod="goldmane-cccfbd5cf-6mzpb" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--6mzpb-eth0" Mar 11 01:28:26.476272 containerd[1579]: 2026-03-11 01:28:26.274 [INFO][4553] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="86a6e68cb83eb28bc71f7ccafd9b57eef4a9ffaf53d3b465bfb909748c958cc6" Namespace="calico-system" Pod="goldmane-cccfbd5cf-6mzpb" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--6mzpb-eth0" Mar 11 01:28:26.476272 containerd[1579]: 2026-03-11 01:28:26.342 [INFO][4553] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="86a6e68cb83eb28bc71f7ccafd9b57eef4a9ffaf53d3b465bfb909748c958cc6" Namespace="calico-system" Pod="goldmane-cccfbd5cf-6mzpb" 
WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--6mzpb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--6mzpb-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"28bc3cbd-69fd-4a80-92d0-eceef32616bf", ResourceVersion:"1093", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 26, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"86a6e68cb83eb28bc71f7ccafd9b57eef4a9ffaf53d3b465bfb909748c958cc6", Pod:"goldmane-cccfbd5cf-6mzpb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliae26d9911d5", MAC:"c6:f0:a8:81:22:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:28:26.476272 containerd[1579]: 2026-03-11 01:28:26.446 [INFO][4553] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="86a6e68cb83eb28bc71f7ccafd9b57eef4a9ffaf53d3b465bfb909748c958cc6" Namespace="calico-system" Pod="goldmane-cccfbd5cf-6mzpb" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--6mzpb-eth0" Mar 11 01:28:26.530538 systemd[1]: Started 
cri-containerd-80f5291d1e98eec24795fbb9c1cecfb7b1c009c3bbbe6074fc8fe5adf5329852.scope - libcontainer container 80f5291d1e98eec24795fbb9c1cecfb7b1c009c3bbbe6074fc8fe5adf5329852. Mar 11 01:28:26.579432 containerd[1579]: time="2026-03-11T01:28:26.578952714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7477699c4c-n9gg4,Uid:6f97091f-a4dd-48a5-90c6-376038fd2d9a,Namespace:calico-system,Attempt:0,} returns sandbox id \"04b32efac5491b708bc22eb4ede85eb7ed3ee21025ade3cda0d96001830c53d3\"" Mar 11 01:28:26.588400 systemd[1]: Started cri-containerd-c4dc3b48f6a4aaf2d2d8324c515dd87b879b7669e88cc192076045110f1cc3c2.scope - libcontainer container c4dc3b48f6a4aaf2d2d8324c515dd87b879b7669e88cc192076045110f1cc3c2. Mar 11 01:28:26.644099 systemd-networkd[1466]: calie9d43b910de: Link UP Mar 11 01:28:26.644445 systemd-networkd[1466]: calie9d43b910de: Gained carrier Mar 11 01:28:26.748707 containerd[1579]: 2026-03-11 01:28:21.800 [INFO][4544] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5b7f6c864b--jnn8t-eth0 calico-kube-controllers-5b7f6c864b- calico-system d6503f2d-95be-4a07-b817-d3b00d921973 1087 0 2026-03-11 01:26:51 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5b7f6c864b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5b7f6c864b-jnn8t eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie9d43b910de [] [] }} ContainerID="4bee9baf24b2eb9778b3ff22d5d65bcbd02bcb112a76ea2e537234ff869fa2b6" Namespace="calico-system" Pod="calico-kube-controllers-5b7f6c864b-jnn8t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b7f6c864b--jnn8t-" Mar 11 01:28:26.748707 containerd[1579]: 2026-03-11 01:28:21.800 
[INFO][4544] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4bee9baf24b2eb9778b3ff22d5d65bcbd02bcb112a76ea2e537234ff869fa2b6" Namespace="calico-system" Pod="calico-kube-controllers-5b7f6c864b-jnn8t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b7f6c864b--jnn8t-eth0" Mar 11 01:28:26.748707 containerd[1579]: 2026-03-11 01:28:22.327 [INFO][4621] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4bee9baf24b2eb9778b3ff22d5d65bcbd02bcb112a76ea2e537234ff869fa2b6" HandleID="k8s-pod-network.4bee9baf24b2eb9778b3ff22d5d65bcbd02bcb112a76ea2e537234ff869fa2b6" Workload="localhost-k8s-calico--kube--controllers--5b7f6c864b--jnn8t-eth0" Mar 11 01:28:26.748707 containerd[1579]: 2026-03-11 01:28:22.406 [INFO][4621] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4bee9baf24b2eb9778b3ff22d5d65bcbd02bcb112a76ea2e537234ff869fa2b6" HandleID="k8s-pod-network.4bee9baf24b2eb9778b3ff22d5d65bcbd02bcb112a76ea2e537234ff869fa2b6" Workload="localhost-k8s-calico--kube--controllers--5b7f6c864b--jnn8t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000131db0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5b7f6c864b-jnn8t", "timestamp":"2026-03-11 01:28:22.327676486 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004c49a0)} Mar 11 01:28:26.748707 containerd[1579]: 2026-03-11 01:28:22.406 [INFO][4621] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:28:26.748707 containerd[1579]: 2026-03-11 01:28:26.123 [INFO][4621] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 11 01:28:26.748707 containerd[1579]: 2026-03-11 01:28:26.129 [INFO][4621] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 11 01:28:26.748707 containerd[1579]: 2026-03-11 01:28:26.180 [INFO][4621] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4bee9baf24b2eb9778b3ff22d5d65bcbd02bcb112a76ea2e537234ff869fa2b6" host="localhost" Mar 11 01:28:26.748707 containerd[1579]: 2026-03-11 01:28:26.263 [INFO][4621] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 11 01:28:26.748707 containerd[1579]: 2026-03-11 01:28:26.405 [INFO][4621] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 11 01:28:26.748707 containerd[1579]: 2026-03-11 01:28:26.459 [INFO][4621] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 11 01:28:26.748707 containerd[1579]: 2026-03-11 01:28:26.519 [INFO][4621] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 11 01:28:26.748707 containerd[1579]: 2026-03-11 01:28:26.519 [INFO][4621] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4bee9baf24b2eb9778b3ff22d5d65bcbd02bcb112a76ea2e537234ff869fa2b6" host="localhost" Mar 11 01:28:26.748707 containerd[1579]: 2026-03-11 01:28:26.536 [INFO][4621] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4bee9baf24b2eb9778b3ff22d5d65bcbd02bcb112a76ea2e537234ff869fa2b6 Mar 11 01:28:26.748707 containerd[1579]: 2026-03-11 01:28:26.554 [INFO][4621] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4bee9baf24b2eb9778b3ff22d5d65bcbd02bcb112a76ea2e537234ff869fa2b6" host="localhost" Mar 11 01:28:26.748707 containerd[1579]: 2026-03-11 01:28:26.605 [INFO][4621] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.4bee9baf24b2eb9778b3ff22d5d65bcbd02bcb112a76ea2e537234ff869fa2b6" host="localhost" Mar 11 01:28:26.748707 containerd[1579]: 2026-03-11 01:28:26.605 [INFO][4621] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.4bee9baf24b2eb9778b3ff22d5d65bcbd02bcb112a76ea2e537234ff869fa2b6" host="localhost" Mar 11 01:28:26.748707 containerd[1579]: 2026-03-11 01:28:26.605 [INFO][4621] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:28:26.748707 containerd[1579]: 2026-03-11 01:28:26.605 [INFO][4621] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="4bee9baf24b2eb9778b3ff22d5d65bcbd02bcb112a76ea2e537234ff869fa2b6" HandleID="k8s-pod-network.4bee9baf24b2eb9778b3ff22d5d65bcbd02bcb112a76ea2e537234ff869fa2b6" Workload="localhost-k8s-calico--kube--controllers--5b7f6c864b--jnn8t-eth0" Mar 11 01:28:26.749834 containerd[1579]: 2026-03-11 01:28:26.623 [INFO][4544] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4bee9baf24b2eb9778b3ff22d5d65bcbd02bcb112a76ea2e537234ff869fa2b6" Namespace="calico-system" Pod="calico-kube-controllers-5b7f6c864b-jnn8t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b7f6c864b--jnn8t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5b7f6c864b--jnn8t-eth0", GenerateName:"calico-kube-controllers-5b7f6c864b-", Namespace:"calico-system", SelfLink:"", UID:"d6503f2d-95be-4a07-b817-d3b00d921973", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 26, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b7f6c864b", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5b7f6c864b-jnn8t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie9d43b910de", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:28:26.749834 containerd[1579]: 2026-03-11 01:28:26.624 [INFO][4544] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="4bee9baf24b2eb9778b3ff22d5d65bcbd02bcb112a76ea2e537234ff869fa2b6" Namespace="calico-system" Pod="calico-kube-controllers-5b7f6c864b-jnn8t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b7f6c864b--jnn8t-eth0" Mar 11 01:28:26.749834 containerd[1579]: 2026-03-11 01:28:26.625 [INFO][4544] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie9d43b910de ContainerID="4bee9baf24b2eb9778b3ff22d5d65bcbd02bcb112a76ea2e537234ff869fa2b6" Namespace="calico-system" Pod="calico-kube-controllers-5b7f6c864b-jnn8t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b7f6c864b--jnn8t-eth0" Mar 11 01:28:26.749834 containerd[1579]: 2026-03-11 01:28:26.642 [INFO][4544] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4bee9baf24b2eb9778b3ff22d5d65bcbd02bcb112a76ea2e537234ff869fa2b6" Namespace="calico-system" Pod="calico-kube-controllers-5b7f6c864b-jnn8t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b7f6c864b--jnn8t-eth0" Mar 11 01:28:26.749834 containerd[1579]: 
2026-03-11 01:28:26.643 [INFO][4544] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4bee9baf24b2eb9778b3ff22d5d65bcbd02bcb112a76ea2e537234ff869fa2b6" Namespace="calico-system" Pod="calico-kube-controllers-5b7f6c864b-jnn8t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b7f6c864b--jnn8t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5b7f6c864b--jnn8t-eth0", GenerateName:"calico-kube-controllers-5b7f6c864b-", Namespace:"calico-system", SelfLink:"", UID:"d6503f2d-95be-4a07-b817-d3b00d921973", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 26, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b7f6c864b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4bee9baf24b2eb9778b3ff22d5d65bcbd02bcb112a76ea2e537234ff869fa2b6", Pod:"calico-kube-controllers-5b7f6c864b-jnn8t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie9d43b910de", MAC:"9a:9d:d2:c6:b0:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:28:26.749834 containerd[1579]: 
2026-03-11 01:28:26.725 [INFO][4544] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4bee9baf24b2eb9778b3ff22d5d65bcbd02bcb112a76ea2e537234ff869fa2b6" Namespace="calico-system" Pod="calico-kube-controllers-5b7f6c864b-jnn8t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b7f6c864b--jnn8t-eth0" Mar 11 01:28:26.767506 containerd[1579]: time="2026-03-11T01:28:26.767449266Z" level=info msg="connecting to shim 86a6e68cb83eb28bc71f7ccafd9b57eef4a9ffaf53d3b465bfb909748c958cc6" address="unix:///run/containerd/s/363e33e57c82d06dc909b6de01aa1a335a8a12f933d5589592772c525c8715ff" namespace=k8s.io protocol=ttrpc version=3 Mar 11 01:28:26.790880 systemd[1]: Started cri-containerd-84243771c733e465fed95ee45179b96ef24d9259c56bd27d30b8b496ed40765b.scope - libcontainer container 84243771c733e465fed95ee45179b96ef24d9259c56bd27d30b8b496ed40765b. Mar 11 01:28:27.005595 containerd[1579]: time="2026-03-11T01:28:27.002094235Z" level=info msg="StartContainer for \"80f5291d1e98eec24795fbb9c1cecfb7b1c009c3bbbe6074fc8fe5adf5329852\" returns successfully" Mar 11 01:28:27.006466 systemd[1]: Started cri-containerd-86a6e68cb83eb28bc71f7ccafd9b57eef4a9ffaf53d3b465bfb909748c958cc6.scope - libcontainer container 86a6e68cb83eb28bc71f7ccafd9b57eef4a9ffaf53d3b465bfb909748c958cc6. 
Mar 11 01:28:27.009770 containerd[1579]: time="2026-03-11T01:28:27.009494062Z" level=info msg="connecting to shim 4bee9baf24b2eb9778b3ff22d5d65bcbd02bcb112a76ea2e537234ff869fa2b6" address="unix:///run/containerd/s/a7bfe8ecc3ac41ab7808ffc652ec90e4c231f052f565f95b5d7c05d4a995c97d" namespace=k8s.io protocol=ttrpc version=3 Mar 11 01:28:27.157049 systemd-resolved[1394]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 11 01:28:27.182204 systemd-resolved[1394]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 11 01:28:27.332979 systemd-networkd[1466]: cali140eb5ecb68: Link UP Mar 11 01:28:27.378995 systemd-networkd[1466]: cali140eb5ecb68: Gained carrier Mar 11 01:28:27.443026 systemd-resolved[1394]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 11 01:28:27.490112 systemd[1]: Started cri-containerd-4bee9baf24b2eb9778b3ff22d5d65bcbd02bcb112a76ea2e537234ff869fa2b6.scope - libcontainer container 4bee9baf24b2eb9778b3ff22d5d65bcbd02bcb112a76ea2e537234ff869fa2b6. 
Mar 11 01:28:27.567016 containerd[1579]: 2026-03-11 01:28:25.696 [INFO][4650] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--284rx-eth0 coredns-66bc5c9577- kube-system 9424aa5f-c89c-4c89-a1c1-04c856e2b5f0 1075 0 2026-03-11 01:26:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-284rx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali140eb5ecb68 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="a4823b144750c4ba395dd006b4facd97bc8200b876b5a4fcd29ca3acd7b6f4ce" Namespace="kube-system" Pod="coredns-66bc5c9577-284rx" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--284rx-" Mar 11 01:28:27.567016 containerd[1579]: 2026-03-11 01:28:25.726 [INFO][4650] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a4823b144750c4ba395dd006b4facd97bc8200b876b5a4fcd29ca3acd7b6f4ce" Namespace="kube-system" Pod="coredns-66bc5c9577-284rx" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--284rx-eth0" Mar 11 01:28:27.567016 containerd[1579]: 2026-03-11 01:28:26.252 [INFO][4741] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a4823b144750c4ba395dd006b4facd97bc8200b876b5a4fcd29ca3acd7b6f4ce" HandleID="k8s-pod-network.a4823b144750c4ba395dd006b4facd97bc8200b876b5a4fcd29ca3acd7b6f4ce" Workload="localhost-k8s-coredns--66bc5c9577--284rx-eth0" Mar 11 01:28:27.567016 containerd[1579]: 2026-03-11 01:28:26.488 [INFO][4741] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a4823b144750c4ba395dd006b4facd97bc8200b876b5a4fcd29ca3acd7b6f4ce" HandleID="k8s-pod-network.a4823b144750c4ba395dd006b4facd97bc8200b876b5a4fcd29ca3acd7b6f4ce" 
Workload="localhost-k8s-coredns--66bc5c9577--284rx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ee0f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-284rx", "timestamp":"2026-03-11 01:28:26.25250916 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00060e000)} Mar 11 01:28:27.567016 containerd[1579]: 2026-03-11 01:28:26.488 [INFO][4741] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:28:27.567016 containerd[1579]: 2026-03-11 01:28:26.606 [INFO][4741] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 01:28:27.567016 containerd[1579]: 2026-03-11 01:28:26.606 [INFO][4741] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 11 01:28:27.567016 containerd[1579]: 2026-03-11 01:28:26.638 [INFO][4741] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a4823b144750c4ba395dd006b4facd97bc8200b876b5a4fcd29ca3acd7b6f4ce" host="localhost" Mar 11 01:28:27.567016 containerd[1579]: 2026-03-11 01:28:26.722 [INFO][4741] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 11 01:28:27.567016 containerd[1579]: 2026-03-11 01:28:26.798 [INFO][4741] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 11 01:28:27.567016 containerd[1579]: 2026-03-11 01:28:26.844 [INFO][4741] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 11 01:28:27.567016 containerd[1579]: 2026-03-11 01:28:26.923 [INFO][4741] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 11 01:28:27.567016 containerd[1579]: 2026-03-11 01:28:26.923 [INFO][4741] ipam/ipam.go 1245: Attempting to assign 1 
addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a4823b144750c4ba395dd006b4facd97bc8200b876b5a4fcd29ca3acd7b6f4ce" host="localhost" Mar 11 01:28:27.567016 containerd[1579]: 2026-03-11 01:28:26.971 [INFO][4741] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a4823b144750c4ba395dd006b4facd97bc8200b876b5a4fcd29ca3acd7b6f4ce Mar 11 01:28:27.567016 containerd[1579]: 2026-03-11 01:28:26.998 [INFO][4741] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a4823b144750c4ba395dd006b4facd97bc8200b876b5a4fcd29ca3acd7b6f4ce" host="localhost" Mar 11 01:28:27.567016 containerd[1579]: 2026-03-11 01:28:27.128 [INFO][4741] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.a4823b144750c4ba395dd006b4facd97bc8200b876b5a4fcd29ca3acd7b6f4ce" host="localhost" Mar 11 01:28:27.567016 containerd[1579]: 2026-03-11 01:28:27.129 [INFO][4741] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.a4823b144750c4ba395dd006b4facd97bc8200b876b5a4fcd29ca3acd7b6f4ce" host="localhost" Mar 11 01:28:27.567016 containerd[1579]: 2026-03-11 01:28:27.130 [INFO][4741] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 11 01:28:27.567016 containerd[1579]: 2026-03-11 01:28:27.131 [INFO][4741] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="a4823b144750c4ba395dd006b4facd97bc8200b876b5a4fcd29ca3acd7b6f4ce" HandleID="k8s-pod-network.a4823b144750c4ba395dd006b4facd97bc8200b876b5a4fcd29ca3acd7b6f4ce" Workload="localhost-k8s-coredns--66bc5c9577--284rx-eth0" Mar 11 01:28:27.568829 containerd[1579]: 2026-03-11 01:28:27.191 [INFO][4650] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a4823b144750c4ba395dd006b4facd97bc8200b876b5a4fcd29ca3acd7b6f4ce" Namespace="kube-system" Pod="coredns-66bc5c9577-284rx" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--284rx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--284rx-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"9424aa5f-c89c-4c89-a1c1-04c856e2b5f0", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 26, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-284rx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali140eb5ecb68", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:28:27.568829 containerd[1579]: 2026-03-11 01:28:27.197 [INFO][4650] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="a4823b144750c4ba395dd006b4facd97bc8200b876b5a4fcd29ca3acd7b6f4ce" Namespace="kube-system" Pod="coredns-66bc5c9577-284rx" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--284rx-eth0" Mar 11 01:28:27.568829 containerd[1579]: 2026-03-11 01:28:27.223 [INFO][4650] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali140eb5ecb68 ContainerID="a4823b144750c4ba395dd006b4facd97bc8200b876b5a4fcd29ca3acd7b6f4ce" Namespace="kube-system" Pod="coredns-66bc5c9577-284rx" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--284rx-eth0" Mar 11 01:28:27.568829 containerd[1579]: 2026-03-11 01:28:27.394 [INFO][4650] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a4823b144750c4ba395dd006b4facd97bc8200b876b5a4fcd29ca3acd7b6f4ce" Namespace="kube-system" Pod="coredns-66bc5c9577-284rx" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--284rx-eth0" Mar 11 01:28:27.568829 containerd[1579]: 2026-03-11 01:28:27.460 [INFO][4650] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="a4823b144750c4ba395dd006b4facd97bc8200b876b5a4fcd29ca3acd7b6f4ce" Namespace="kube-system" Pod="coredns-66bc5c9577-284rx" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--284rx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--284rx-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"9424aa5f-c89c-4c89-a1c1-04c856e2b5f0", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 26, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a4823b144750c4ba395dd006b4facd97bc8200b876b5a4fcd29ca3acd7b6f4ce", Pod:"coredns-66bc5c9577-284rx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali140eb5ecb68", MAC:"d2:90:5c:21:87:39", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:28:27.568829 containerd[1579]: 2026-03-11 01:28:27.531 [INFO][4650] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a4823b144750c4ba395dd006b4facd97bc8200b876b5a4fcd29ca3acd7b6f4ce" Namespace="kube-system" Pod="coredns-66bc5c9577-284rx" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--284rx-eth0" Mar 11 01:28:27.913768 containerd[1579]: time="2026-03-11T01:28:27.911570376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-45sfz,Uid:bd5d987a-c72c-4abd-9acd-7762cad20217,Namespace:calico-system,Attempt:0,} returns sandbox id \"84243771c733e465fed95ee45179b96ef24d9259c56bd27d30b8b496ed40765b\"" Mar 11 01:28:27.943826 systemd-networkd[1466]: caliae26d9911d5: Gained IPv6LL Mar 11 01:28:28.001924 systemd-resolved[1394]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 11 01:28:28.073111 containerd[1579]: time="2026-03-11T01:28:28.072476412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7477699c4c-wwhh9,Uid:cd75dd3d-50be-4617-ad26-c09e377b47a8,Namespace:calico-system,Attempt:0,} returns sandbox id \"c4dc3b48f6a4aaf2d2d8324c515dd87b879b7669e88cc192076045110f1cc3c2\"" Mar 11 01:28:28.098832 kubelet[2857]: E0311 01:28:28.098220 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:28:28.136695 containerd[1579]: time="2026-03-11T01:28:28.133915062Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-cccfbd5cf-6mzpb,Uid:28bc3cbd-69fd-4a80-92d0-eceef32616bf,Namespace:calico-system,Attempt:0,} returns sandbox id \"86a6e68cb83eb28bc71f7ccafd9b57eef4a9ffaf53d3b465bfb909748c958cc6\"" Mar 11 01:28:28.227098 kubelet[2857]: I0311 01:28:28.226571 2857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-zf2ch" podStartSLOduration=135.226546228 podStartE2EDuration="2m15.226546228s" podCreationTimestamp="2026-03-11 01:26:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 01:28:28.221701082 +0000 UTC m=+144.900677714" watchObservedRunningTime="2026-03-11 01:28:28.226546228 +0000 UTC m=+144.905522860" Mar 11 01:28:28.264578 containerd[1579]: time="2026-03-11T01:28:28.264235874Z" level=info msg="connecting to shim a4823b144750c4ba395dd006b4facd97bc8200b876b5a4fcd29ca3acd7b6f4ce" address="unix:///run/containerd/s/eb831f9f2095d1c11cdd157772cd37910d24632dcf5e27f0b370ab7058766570" namespace=k8s.io protocol=ttrpc version=3 Mar 11 01:28:28.397597 containerd[1579]: time="2026-03-11T01:28:28.395490899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b7f6c864b-jnn8t,Uid:d6503f2d-95be-4a07-b817-d3b00d921973,Namespace:calico-system,Attempt:0,} returns sandbox id \"4bee9baf24b2eb9778b3ff22d5d65bcbd02bcb112a76ea2e537234ff869fa2b6\"" Mar 11 01:28:28.451632 systemd[1]: Started cri-containerd-a4823b144750c4ba395dd006b4facd97bc8200b876b5a4fcd29ca3acd7b6f4ce.scope - libcontainer container a4823b144750c4ba395dd006b4facd97bc8200b876b5a4fcd29ca3acd7b6f4ce. 
Mar 11 01:28:28.539810 systemd-resolved[1394]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 11 01:28:28.643109 systemd-networkd[1466]: calie9d43b910de: Gained IPv6LL Mar 11 01:28:28.830426 containerd[1579]: time="2026-03-11T01:28:28.828571292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-284rx,Uid:9424aa5f-c89c-4c89-a1c1-04c856e2b5f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"a4823b144750c4ba395dd006b4facd97bc8200b876b5a4fcd29ca3acd7b6f4ce\"" Mar 11 01:28:28.840795 kubelet[2857]: E0311 01:28:28.840711 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:28:28.889569 containerd[1579]: time="2026-03-11T01:28:28.887378336Z" level=info msg="CreateContainer within sandbox \"a4823b144750c4ba395dd006b4facd97bc8200b876b5a4fcd29ca3acd7b6f4ce\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 11 01:28:28.977492 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1843267268.mount: Deactivated successfully. 
Mar 11 01:28:29.055277 containerd[1579]: time="2026-03-11T01:28:29.053645814Z" level=info msg="Container a9bd04c69b5e36cfe3432e9b22aafd37a5f9e823ccf5db2860da892d5eb5777e: CDI devices from CRI Config.CDIDevices: []" Mar 11 01:28:29.093464 containerd[1579]: time="2026-03-11T01:28:29.093334618Z" level=info msg="CreateContainer within sandbox \"a4823b144750c4ba395dd006b4facd97bc8200b876b5a4fcd29ca3acd7b6f4ce\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a9bd04c69b5e36cfe3432e9b22aafd37a5f9e823ccf5db2860da892d5eb5777e\"" Mar 11 01:28:29.113620 containerd[1579]: time="2026-03-11T01:28:29.109764239Z" level=info msg="StartContainer for \"a9bd04c69b5e36cfe3432e9b22aafd37a5f9e823ccf5db2860da892d5eb5777e\"" Mar 11 01:28:29.113620 containerd[1579]: time="2026-03-11T01:28:29.111256174Z" level=info msg="connecting to shim a9bd04c69b5e36cfe3432e9b22aafd37a5f9e823ccf5db2860da892d5eb5777e" address="unix:///run/containerd/s/eb831f9f2095d1c11cdd157772cd37910d24632dcf5e27f0b370ab7058766570" protocol=ttrpc version=3 Mar 11 01:28:29.181202 systemd-networkd[1466]: cali140eb5ecb68: Gained IPv6LL Mar 11 01:28:29.186070 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3475630929.mount: Deactivated successfully. Mar 11 01:28:29.281460 kubelet[2857]: E0311 01:28:29.281423 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:28:29.364614 systemd[1]: Started cri-containerd-a9bd04c69b5e36cfe3432e9b22aafd37a5f9e823ccf5db2860da892d5eb5777e.scope - libcontainer container a9bd04c69b5e36cfe3432e9b22aafd37a5f9e823ccf5db2860da892d5eb5777e. 
Mar 11 01:28:29.693286 containerd[1579]: time="2026-03-11T01:28:29.691716007Z" level=info msg="StartContainer for \"a9bd04c69b5e36cfe3432e9b22aafd37a5f9e823ccf5db2860da892d5eb5777e\" returns successfully" Mar 11 01:28:30.383978 kubelet[2857]: E0311 01:28:30.383706 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:28:30.411032 kubelet[2857]: E0311 01:28:30.407817 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:28:30.671943 kubelet[2857]: I0311 01:28:30.665097 2857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-284rx" podStartSLOduration=137.665076182 podStartE2EDuration="2m17.665076182s" podCreationTimestamp="2026-03-11 01:26:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 01:28:30.500216932 +0000 UTC m=+147.179193563" watchObservedRunningTime="2026-03-11 01:28:30.665076182 +0000 UTC m=+147.344052815" Mar 11 01:28:31.187696 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2298621035.mount: Deactivated successfully. 
Mar 11 01:28:31.389885 kubelet[2857]: E0311 01:28:31.388942 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:28:31.428740 containerd[1579]: time="2026-03-11T01:28:31.428490748Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:28:31.433059 containerd[1579]: time="2026-03-11T01:28:31.432898846Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 11 01:28:31.474091 containerd[1579]: time="2026-03-11T01:28:31.473961572Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:28:31.514453 containerd[1579]: time="2026-03-11T01:28:31.509000332Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:28:31.514453 containerd[1579]: time="2026-03-11T01:28:31.510319108Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 10.495044161s" Mar 11 01:28:31.514453 containerd[1579]: time="2026-03-11T01:28:31.510358542Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 11 01:28:31.518846 containerd[1579]: 
time="2026-03-11T01:28:31.517672328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 11 01:28:31.557720 containerd[1579]: time="2026-03-11T01:28:31.557441106Z" level=info msg="CreateContainer within sandbox \"533f86e540757862329771f978c82e5fa76bd41b9259ca27652d73edc31b2cd6\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 11 01:28:31.635895 containerd[1579]: time="2026-03-11T01:28:31.634257499Z" level=info msg="Container 16bf87d272dbfc838d7a197ff51d8ebf93e2fbc7094b15cb32e34d3d6fb38347: CDI devices from CRI Config.CDIDevices: []" Mar 11 01:28:31.680363 containerd[1579]: time="2026-03-11T01:28:31.679678510Z" level=info msg="CreateContainer within sandbox \"533f86e540757862329771f978c82e5fa76bd41b9259ca27652d73edc31b2cd6\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"16bf87d272dbfc838d7a197ff51d8ebf93e2fbc7094b15cb32e34d3d6fb38347\"" Mar 11 01:28:31.692676 containerd[1579]: time="2026-03-11T01:28:31.690021789Z" level=info msg="StartContainer for \"16bf87d272dbfc838d7a197ff51d8ebf93e2fbc7094b15cb32e34d3d6fb38347\"" Mar 11 01:28:31.692676 containerd[1579]: time="2026-03-11T01:28:31.692218333Z" level=info msg="connecting to shim 16bf87d272dbfc838d7a197ff51d8ebf93e2fbc7094b15cb32e34d3d6fb38347" address="unix:///run/containerd/s/3a9b57d948f59c223ac832c7e9892263f9ab33b7dfaee618408ae1551526b5ef" protocol=ttrpc version=3 Mar 11 01:28:31.813810 systemd[1]: Started cri-containerd-16bf87d272dbfc838d7a197ff51d8ebf93e2fbc7094b15cb32e34d3d6fb38347.scope - libcontainer container 16bf87d272dbfc838d7a197ff51d8ebf93e2fbc7094b15cb32e34d3d6fb38347. 
Mar 11 01:28:32.267464 containerd[1579]: time="2026-03-11T01:28:32.267221471Z" level=info msg="StartContainer for \"16bf87d272dbfc838d7a197ff51d8ebf93e2fbc7094b15cb32e34d3d6fb38347\" returns successfully" Mar 11 01:28:32.401391 kubelet[2857]: E0311 01:28:32.400995 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:28:34.269530 kubelet[2857]: E0311 01:28:34.266799 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:28:40.850694 kubelet[2857]: I0311 01:28:40.847783 2857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-659748fb8d-dfrfc" podStartSLOduration=15.126737576 podStartE2EDuration="29.847758985s" podCreationTimestamp="2026-03-11 01:28:11 +0000 UTC" firstStartedPulling="2026-03-11 01:28:16.795855157 +0000 UTC m=+133.474831790" lastFinishedPulling="2026-03-11 01:28:31.516876567 +0000 UTC m=+148.195853199" observedRunningTime="2026-03-11 01:28:32.474476374 +0000 UTC m=+149.153453006" watchObservedRunningTime="2026-03-11 01:28:40.847758985 +0000 UTC m=+157.526735617" Mar 11 01:28:41.881865 containerd[1579]: time="2026-03-11T01:28:41.877213386Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:28:41.893891 containerd[1579]: time="2026-03-11T01:28:41.893779260Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 11 01:28:41.900959 containerd[1579]: time="2026-03-11T01:28:41.900646930Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:28:41.912427 
containerd[1579]: time="2026-03-11T01:28:41.912317997Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:28:41.915710 containerd[1579]: time="2026-03-11T01:28:41.915049406Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 10.397331431s" Mar 11 01:28:41.915710 containerd[1579]: time="2026-03-11T01:28:41.915329431Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 11 01:28:41.924753 containerd[1579]: time="2026-03-11T01:28:41.922260546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 11 01:28:41.961797 containerd[1579]: time="2026-03-11T01:28:41.958507931Z" level=info msg="CreateContainer within sandbox \"04b32efac5491b708bc22eb4ede85eb7ed3ee21025ade3cda0d96001830c53d3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 11 01:28:42.031275 containerd[1579]: time="2026-03-11T01:28:42.031199065Z" level=info msg="Container 3515624770fefb9492ba0ac350a52a32cab534639a1be3871fb9ac09c6a1d285: CDI devices from CRI Config.CDIDevices: []" Mar 11 01:28:42.091554 containerd[1579]: time="2026-03-11T01:28:42.080618076Z" level=info msg="CreateContainer within sandbox \"04b32efac5491b708bc22eb4ede85eb7ed3ee21025ade3cda0d96001830c53d3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3515624770fefb9492ba0ac350a52a32cab534639a1be3871fb9ac09c6a1d285\"" Mar 11 01:28:42.101289 containerd[1579]: 
time="2026-03-11T01:28:42.100788224Z" level=info msg="StartContainer for \"3515624770fefb9492ba0ac350a52a32cab534639a1be3871fb9ac09c6a1d285\"" Mar 11 01:28:42.103839 containerd[1579]: time="2026-03-11T01:28:42.103807055Z" level=info msg="connecting to shim 3515624770fefb9492ba0ac350a52a32cab534639a1be3871fb9ac09c6a1d285" address="unix:///run/containerd/s/e005906c7b8ad0cc3c7c900c9d5d872d013736fa0d9f2f4628eff07ee77666cf" protocol=ttrpc version=3 Mar 11 01:28:42.209946 systemd[1]: Started cri-containerd-3515624770fefb9492ba0ac350a52a32cab534639a1be3871fb9ac09c6a1d285.scope - libcontainer container 3515624770fefb9492ba0ac350a52a32cab534639a1be3871fb9ac09c6a1d285. Mar 11 01:28:42.937216 containerd[1579]: time="2026-03-11T01:28:42.933521933Z" level=info msg="StartContainer for \"3515624770fefb9492ba0ac350a52a32cab534639a1be3871fb9ac09c6a1d285\" returns successfully" Mar 11 01:28:44.637190 containerd[1579]: time="2026-03-11T01:28:44.632343776Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:28:44.677534 containerd[1579]: time="2026-03-11T01:28:44.676273723Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 11 01:28:44.684665 containerd[1579]: time="2026-03-11T01:28:44.684526682Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:28:44.697051 containerd[1579]: time="2026-03-11T01:28:44.696802493Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:28:44.699843 containerd[1579]: time="2026-03-11T01:28:44.699633141Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id 
\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 2.777328731s" Mar 11 01:28:44.699843 containerd[1579]: time="2026-03-11T01:28:44.699715259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 11 01:28:44.712816 containerd[1579]: time="2026-03-11T01:28:44.712740787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 11 01:28:44.759708 containerd[1579]: time="2026-03-11T01:28:44.759540426Z" level=info msg="CreateContainer within sandbox \"84243771c733e465fed95ee45179b96ef24d9259c56bd27d30b8b496ed40765b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 11 01:28:44.851056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2056284001.mount: Deactivated successfully. 
Mar 11 01:28:44.862185 containerd[1579]: time="2026-03-11T01:28:44.860717458Z" level=info msg="Container 123adc9479a617f0d16492796d763b6aabdda90aea726ef3b3fcee93292fedcb: CDI devices from CRI Config.CDIDevices: []" Mar 11 01:28:44.958623 containerd[1579]: time="2026-03-11T01:28:44.957720549Z" level=info msg="CreateContainer within sandbox \"84243771c733e465fed95ee45179b96ef24d9259c56bd27d30b8b496ed40765b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"123adc9479a617f0d16492796d763b6aabdda90aea726ef3b3fcee93292fedcb\"" Mar 11 01:28:44.966795 containerd[1579]: time="2026-03-11T01:28:44.963301297Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:28:44.967416 containerd[1579]: time="2026-03-11T01:28:44.967382524Z" level=info msg="StartContainer for \"123adc9479a617f0d16492796d763b6aabdda90aea726ef3b3fcee93292fedcb\"" Mar 11 01:28:44.975772 containerd[1579]: time="2026-03-11T01:28:44.975721238Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 11 01:28:44.979823 containerd[1579]: time="2026-03-11T01:28:44.979784025Z" level=info msg="connecting to shim 123adc9479a617f0d16492796d763b6aabdda90aea726ef3b3fcee93292fedcb" address="unix:///run/containerd/s/875b89a4ab089a93f375ecbcd437e7abcb02577bcf5cac4a62a49088746ee935" protocol=ttrpc version=3 Mar 11 01:28:44.991697 containerd[1579]: time="2026-03-11T01:28:44.990034274Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 276.954446ms" Mar 11 01:28:44.991697 containerd[1579]: time="2026-03-11T01:28:44.991466604Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 11 01:28:45.008011 containerd[1579]: time="2026-03-11T01:28:45.007013596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 11 01:28:45.026508 containerd[1579]: time="2026-03-11T01:28:45.026447491Z" level=info msg="CreateContainer within sandbox \"c4dc3b48f6a4aaf2d2d8324c515dd87b879b7669e88cc192076045110f1cc3c2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 11 01:28:45.323065 containerd[1579]: time="2026-03-11T01:28:45.323003936Z" level=info msg="Container dcfa30ea315e886a7b86b6af8a65d38451afc7907035906a5333f85f17b2e41d: CDI devices from CRI Config.CDIDevices: []" Mar 11 01:28:45.475196 containerd[1579]: time="2026-03-11T01:28:45.474381084Z" level=info msg="CreateContainer within sandbox \"c4dc3b48f6a4aaf2d2d8324c515dd87b879b7669e88cc192076045110f1cc3c2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"dcfa30ea315e886a7b86b6af8a65d38451afc7907035906a5333f85f17b2e41d\"" Mar 11 01:28:45.476983 containerd[1579]: time="2026-03-11T01:28:45.476954267Z" level=info msg="StartContainer for \"dcfa30ea315e886a7b86b6af8a65d38451afc7907035906a5333f85f17b2e41d\"" Mar 11 01:28:45.480893 containerd[1579]: time="2026-03-11T01:28:45.480860900Z" level=info msg="connecting to shim dcfa30ea315e886a7b86b6af8a65d38451afc7907035906a5333f85f17b2e41d" address="unix:///run/containerd/s/015ae817e32b6adae2050b505a60d3ad230347dce2078f569781917677b2ed38" protocol=ttrpc version=3 Mar 11 01:28:45.704343 systemd[1]: Started cri-containerd-123adc9479a617f0d16492796d763b6aabdda90aea726ef3b3fcee93292fedcb.scope - libcontainer container 123adc9479a617f0d16492796d763b6aabdda90aea726ef3b3fcee93292fedcb. 
Mar 11 01:28:45.782509 systemd[1]: Started cri-containerd-dcfa30ea315e886a7b86b6af8a65d38451afc7907035906a5333f85f17b2e41d.scope - libcontainer container dcfa30ea315e886a7b86b6af8a65d38451afc7907035906a5333f85f17b2e41d. Mar 11 01:28:46.234099 containerd[1579]: time="2026-03-11T01:28:46.234026353Z" level=info msg="StartContainer for \"dcfa30ea315e886a7b86b6af8a65d38451afc7907035906a5333f85f17b2e41d\" returns successfully" Mar 11 01:28:46.303090 containerd[1579]: time="2026-03-11T01:28:46.295842519Z" level=info msg="StartContainer for \"123adc9479a617f0d16492796d763b6aabdda90aea726ef3b3fcee93292fedcb\" returns successfully" Mar 11 01:28:46.406467 kubelet[2857]: I0311 01:28:46.406358 2857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7477699c4c-n9gg4" podStartSLOduration=103.082397906 podStartE2EDuration="1m58.406334039s" podCreationTimestamp="2026-03-11 01:26:48 +0000 UTC" firstStartedPulling="2026-03-11 01:28:26.598009654 +0000 UTC m=+143.276986286" lastFinishedPulling="2026-03-11 01:28:41.921945787 +0000 UTC m=+158.600922419" observedRunningTime="2026-03-11 01:28:43.138359544 +0000 UTC m=+159.817336206" watchObservedRunningTime="2026-03-11 01:28:46.406334039 +0000 UTC m=+163.085310671" Mar 11 01:28:52.179593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4117023195.mount: Deactivated successfully. 
Mar 11 01:28:52.221198 kubelet[2857]: E0311 01:28:52.220906 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:28:53.190549 kubelet[2857]: I0311 01:28:53.187363 2857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7477699c4c-wwhh9" podStartSLOduration=108.268962103 podStartE2EDuration="2m5.187337064s" podCreationTimestamp="2026-03-11 01:26:48 +0000 UTC" firstStartedPulling="2026-03-11 01:28:28.086860426 +0000 UTC m=+144.765837059" lastFinishedPulling="2026-03-11 01:28:45.005235388 +0000 UTC m=+161.684212020" observedRunningTime="2026-03-11 01:28:46.409587178 +0000 UTC m=+163.088563810" watchObservedRunningTime="2026-03-11 01:28:53.187337064 +0000 UTC m=+169.866313706" Mar 11 01:28:56.514703 containerd[1579]: time="2026-03-11T01:28:56.514590128Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:28:56.523717 containerd[1579]: time="2026-03-11T01:28:56.523637620Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 11 01:28:56.531352 containerd[1579]: time="2026-03-11T01:28:56.531218096Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:28:56.538258 containerd[1579]: time="2026-03-11T01:28:56.538080667Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:28:56.548848 containerd[1579]: time="2026-03-11T01:28:56.538638260Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id 
\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 11.531541794s" Mar 11 01:28:56.548848 containerd[1579]: time="2026-03-11T01:28:56.538683599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 11 01:28:56.553451 containerd[1579]: time="2026-03-11T01:28:56.553347280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 11 01:28:56.580477 containerd[1579]: time="2026-03-11T01:28:56.580426692Z" level=info msg="CreateContainer within sandbox \"86a6e68cb83eb28bc71f7ccafd9b57eef4a9ffaf53d3b465bfb909748c958cc6\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 11 01:28:56.631883 containerd[1579]: time="2026-03-11T01:28:56.631383788Z" level=info msg="Container b3e6c489993dd5c698260fffac7aa1773e1d3f5a942c7ce7e6965d89e1f56b58: CDI devices from CRI Config.CDIDevices: []" Mar 11 01:28:56.698374 containerd[1579]: time="2026-03-11T01:28:56.698049960Z" level=info msg="CreateContainer within sandbox \"86a6e68cb83eb28bc71f7ccafd9b57eef4a9ffaf53d3b465bfb909748c958cc6\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"b3e6c489993dd5c698260fffac7aa1773e1d3f5a942c7ce7e6965d89e1f56b58\"" Mar 11 01:28:56.699776 containerd[1579]: time="2026-03-11T01:28:56.699616986Z" level=info msg="StartContainer for \"b3e6c489993dd5c698260fffac7aa1773e1d3f5a942c7ce7e6965d89e1f56b58\"" Mar 11 01:28:56.705567 containerd[1579]: time="2026-03-11T01:28:56.705425139Z" level=info msg="connecting to shim b3e6c489993dd5c698260fffac7aa1773e1d3f5a942c7ce7e6965d89e1f56b58" address="unix:///run/containerd/s/363e33e57c82d06dc909b6de01aa1a335a8a12f933d5589592772c525c8715ff" protocol=ttrpc version=3 Mar 11 
01:28:56.809809 systemd[1]: Started cri-containerd-b3e6c489993dd5c698260fffac7aa1773e1d3f5a942c7ce7e6965d89e1f56b58.scope - libcontainer container b3e6c489993dd5c698260fffac7aa1773e1d3f5a942c7ce7e6965d89e1f56b58. Mar 11 01:28:57.093026 containerd[1579]: time="2026-03-11T01:28:57.092732875Z" level=info msg="StartContainer for \"b3e6c489993dd5c698260fffac7aa1773e1d3f5a942c7ce7e6965d89e1f56b58\" returns successfully" Mar 11 01:28:58.013755 kubelet[2857]: I0311 01:28:58.003587 2857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-6mzpb" podStartSLOduration=100.746326614 podStartE2EDuration="2m9.003565382s" podCreationTimestamp="2026-03-11 01:26:49 +0000 UTC" firstStartedPulling="2026-03-11 01:28:28.29407076 +0000 UTC m=+144.973047392" lastFinishedPulling="2026-03-11 01:28:56.551309528 +0000 UTC m=+173.230286160" observedRunningTime="2026-03-11 01:28:58.002696157 +0000 UTC m=+174.681672808" watchObservedRunningTime="2026-03-11 01:28:58.003565382 +0000 UTC m=+174.682542043" Mar 11 01:28:59.220367 kubelet[2857]: E0311 01:28:59.218656 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:28:59.226535 kubelet[2857]: E0311 01:28:59.226365 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:29:06.975596 containerd[1579]: time="2026-03-11T01:29:06.973260000Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:29:06.980250 containerd[1579]: time="2026-03-11T01:29:06.980209899Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 11 01:29:06.996770 containerd[1579]: 
time="2026-03-11T01:29:06.996713772Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:29:07.015095 containerd[1579]: time="2026-03-11T01:29:07.014362688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:29:07.024548 containerd[1579]: time="2026-03-11T01:29:07.023857293Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 10.470276621s" Mar 11 01:29:07.024548 containerd[1579]: time="2026-03-11T01:29:07.023948970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 11 01:29:07.043345 containerd[1579]: time="2026-03-11T01:29:07.042288409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 11 01:29:07.123704 containerd[1579]: time="2026-03-11T01:29:07.122047601Z" level=info msg="CreateContainer within sandbox \"4bee9baf24b2eb9778b3ff22d5d65bcbd02bcb112a76ea2e537234ff869fa2b6\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 11 01:29:07.295053 containerd[1579]: time="2026-03-11T01:29:07.293782138Z" level=info msg="Container bc1c64a492a89f0fd147d439b08edf33ed13118ff083a084fc4f3b248e05157a: CDI devices from CRI Config.CDIDevices: []" Mar 11 01:29:07.421304 containerd[1579]: time="2026-03-11T01:29:07.421256441Z" level=info 
msg="CreateContainer within sandbox \"4bee9baf24b2eb9778b3ff22d5d65bcbd02bcb112a76ea2e537234ff869fa2b6\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"bc1c64a492a89f0fd147d439b08edf33ed13118ff083a084fc4f3b248e05157a\"" Mar 11 01:29:07.423654 containerd[1579]: time="2026-03-11T01:29:07.423622250Z" level=info msg="StartContainer for \"bc1c64a492a89f0fd147d439b08edf33ed13118ff083a084fc4f3b248e05157a\"" Mar 11 01:29:07.426517 containerd[1579]: time="2026-03-11T01:29:07.426400938Z" level=info msg="connecting to shim bc1c64a492a89f0fd147d439b08edf33ed13118ff083a084fc4f3b248e05157a" address="unix:///run/containerd/s/a7bfe8ecc3ac41ab7808ffc652ec90e4c231f052f565f95b5d7c05d4a995c97d" protocol=ttrpc version=3 Mar 11 01:29:07.627294 systemd[1]: Started cri-containerd-bc1c64a492a89f0fd147d439b08edf33ed13118ff083a084fc4f3b248e05157a.scope - libcontainer container bc1c64a492a89f0fd147d439b08edf33ed13118ff083a084fc4f3b248e05157a. Mar 11 01:29:07.962937 containerd[1579]: time="2026-03-11T01:29:07.959048741Z" level=info msg="StartContainer for \"bc1c64a492a89f0fd147d439b08edf33ed13118ff083a084fc4f3b248e05157a\" returns successfully" Mar 11 01:29:10.821021 kubelet[2857]: I0311 01:29:10.817325 2857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5b7f6c864b-jnn8t" podStartSLOduration=101.186229439 podStartE2EDuration="2m19.817298255s" podCreationTimestamp="2026-03-11 01:26:51 +0000 UTC" firstStartedPulling="2026-03-11 01:28:28.409362096 +0000 UTC m=+145.088338728" lastFinishedPulling="2026-03-11 01:29:07.040430912 +0000 UTC m=+183.719407544" observedRunningTime="2026-03-11 01:29:08.427940247 +0000 UTC m=+185.106916879" watchObservedRunningTime="2026-03-11 01:29:10.817298255 +0000 UTC m=+187.496274897" Mar 11 01:29:11.019063 containerd[1579]: time="2026-03-11T01:29:11.016105583Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 11 01:29:11.053115 containerd[1579]: time="2026-03-11T01:29:11.051481918Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317"
Mar 11 01:29:11.074120 containerd[1579]: time="2026-03-11T01:29:11.072904778Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 11 01:29:11.083743 containerd[1579]: time="2026-03-11T01:29:11.083604642Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 11 01:29:11.085610 containerd[1579]: time="2026-03-11T01:29:11.085571723Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 4.043234918s"
Mar 11 01:29:11.085857 containerd[1579]: time="2026-03-11T01:29:11.085828953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\""
Mar 11 01:29:11.117574 containerd[1579]: time="2026-03-11T01:29:11.117516827Z" level=info msg="CreateContainer within sandbox \"84243771c733e465fed95ee45179b96ef24d9259c56bd27d30b8b496ed40765b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Mar 11 01:29:11.195040 containerd[1579]: time="2026-03-11T01:29:11.194983261Z" level=info msg="Container b0ee2f22c56cfa143c030f532955d1d43f63af75f62e2011b73b1b706201e9e7: CDI devices from CRI Config.CDIDevices: []"
Mar 11 01:29:11.232938 containerd[1579]: time="2026-03-11T01:29:11.232881887Z" level=info msg="CreateContainer within sandbox \"84243771c733e465fed95ee45179b96ef24d9259c56bd27d30b8b496ed40765b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b0ee2f22c56cfa143c030f532955d1d43f63af75f62e2011b73b1b706201e9e7\""
Mar 11 01:29:11.255505 containerd[1579]: time="2026-03-11T01:29:11.251892580Z" level=info msg="StartContainer for \"b0ee2f22c56cfa143c030f532955d1d43f63af75f62e2011b73b1b706201e9e7\""
Mar 11 01:29:11.269216 containerd[1579]: time="2026-03-11T01:29:11.262914900Z" level=info msg="connecting to shim b0ee2f22c56cfa143c030f532955d1d43f63af75f62e2011b73b1b706201e9e7" address="unix:///run/containerd/s/875b89a4ab089a93f375ecbcd437e7abcb02577bcf5cac4a62a49088746ee935" protocol=ttrpc version=3
Mar 11 01:29:11.445447 systemd[1]: Started cri-containerd-b0ee2f22c56cfa143c030f532955d1d43f63af75f62e2011b73b1b706201e9e7.scope - libcontainer container b0ee2f22c56cfa143c030f532955d1d43f63af75f62e2011b73b1b706201e9e7.
Mar 11 01:29:12.327252 containerd[1579]: time="2026-03-11T01:29:12.326730160Z" level=info msg="StartContainer for \"b0ee2f22c56cfa143c030f532955d1d43f63af75f62e2011b73b1b706201e9e7\" returns successfully"
Mar 11 01:29:13.177615 kubelet[2857]: I0311 01:29:13.177365 2857 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Mar 11 01:29:13.182706 kubelet[2857]: I0311 01:29:13.181096 2857 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Mar 11 01:29:14.216035 kubelet[2857]: E0311 01:29:14.215960 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:29:32.017610 kubelet[2857]: E0311 01:29:32.011221 2857 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.781s"
Mar 11 01:29:33.069187 kubelet[2857]: I0311 01:29:33.068953 2857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-45sfz" podStartSLOduration=119.953799649 podStartE2EDuration="2m43.068926287s" podCreationTimestamp="2026-03-11 01:26:50 +0000 UTC" firstStartedPulling="2026-03-11 01:28:27.976311003 +0000 UTC m=+144.655287635" lastFinishedPulling="2026-03-11 01:29:11.091437641 +0000 UTC m=+187.770414273" observedRunningTime="2026-03-11 01:29:12.624275358 +0000 UTC m=+189.303252000" watchObservedRunningTime="2026-03-11 01:29:33.068926287 +0000 UTC m=+209.747902949"
Mar 11 01:29:39.200230 kubelet[2857]: E0311 01:29:39.199780 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:29:43.558202 kubelet[2857]: E0311 01:29:43.550707 2857 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.414s"
Mar 11 01:29:47.222558 kubelet[2857]: E0311 01:29:47.218334 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:29:55.111694 kubelet[2857]: E0311 01:29:55.102233 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:30:03.225855 kubelet[2857]: E0311 01:30:03.223890 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:30:09.240254 kubelet[2857]: E0311 01:30:09.238962 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:30:17.261640 kubelet[2857]: E0311 01:30:17.219673 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:30:34.301870 containerd[1579]: time="2026-03-11T01:30:34.115758515Z" level=warning msg="container event discarded" container=bcefa65120f175049337d222d75c5b1acc07a0e5add9a024b9c416761761397f type=CONTAINER_CREATED_EVENT
Mar 11 01:30:34.301870 containerd[1579]: time="2026-03-11T01:30:34.301692913Z" level=warning msg="container event discarded" container=bcefa65120f175049337d222d75c5b1acc07a0e5add9a024b9c416761761397f type=CONTAINER_STARTED_EVENT
Mar 11 01:30:34.563348 containerd[1579]: time="2026-03-11T01:30:34.562774621Z" level=warning msg="container event discarded" container=ff3875320548351faf941abda22c0934cf998875869cca4b636830c11530233e type=CONTAINER_CREATED_EVENT
Mar 11 01:30:34.563348 containerd[1579]: time="2026-03-11T01:30:34.562865950Z" level=warning msg="container event discarded" container=ff3875320548351faf941abda22c0934cf998875869cca4b636830c11530233e type=CONTAINER_STARTED_EVENT
Mar 11 01:30:34.563348 containerd[1579]: time="2026-03-11T01:30:34.562886190Z" level=warning msg="container event discarded" container=7929e5b4f9d9fe2c04bce28ad50f83e76c30fb43a524216a5afc9d4f6d451ae4 type=CONTAINER_CREATED_EVENT
Mar 11 01:30:34.563348 containerd[1579]: time="2026-03-11T01:30:34.562898654Z" level=warning msg="container event discarded" container=7929e5b4f9d9fe2c04bce28ad50f83e76c30fb43a524216a5afc9d4f6d451ae4 type=CONTAINER_STARTED_EVENT
Mar 11 01:30:34.563348 containerd[1579]: time="2026-03-11T01:30:34.562907752Z" level=warning msg="container event discarded" container=507e839bdc17046406fb9582b6afa1e56b6b9e3dcf264c9bac97ac1e022b7f99 type=CONTAINER_CREATED_EVENT
Mar 11 01:30:34.563348 containerd[1579]: time="2026-03-11T01:30:34.562921370Z" level=warning msg="container event discarded" container=cecd0ff009ce9a5e64250732ed2f263b87e84da473a55c38b5a7db9991be7f5a type=CONTAINER_CREATED_EVENT
Mar 11 01:30:34.563348 containerd[1579]: time="2026-03-11T01:30:34.562934175Z" level=warning msg="container event discarded" container=5cc1458252bd908a18bb3adc228f10a5793c389eb9910da542cbd586c898c4bf type=CONTAINER_CREATED_EVENT
Mar 11 01:30:34.563348 containerd[1579]: time="2026-03-11T01:30:34.562944686Z" level=warning msg="container event discarded" container=507e839bdc17046406fb9582b6afa1e56b6b9e3dcf264c9bac97ac1e022b7f99 type=CONTAINER_STARTED_EVENT
Mar 11 01:30:34.563348 containerd[1579]: time="2026-03-11T01:30:34.562954114Z" level=warning msg="container event discarded" container=cecd0ff009ce9a5e64250732ed2f263b87e84da473a55c38b5a7db9991be7f5a type=CONTAINER_STARTED_EVENT
Mar 11 01:30:34.563348 containerd[1579]: time="2026-03-11T01:30:34.562963483Z" level=warning msg="container event discarded" container=5cc1458252bd908a18bb3adc228f10a5793c389eb9910da542cbd586c898c4bf type=CONTAINER_STARTED_EVENT
Mar 11 01:30:36.227467 kubelet[2857]: E0311 01:30:36.226980 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:30:46.218557 kubelet[2857]: E0311 01:30:46.218293 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:30:55.218382 kubelet[2857]: E0311 01:30:55.204905 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:31:03.598890 kubelet[2857]: E0311 01:31:03.597648 2857 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.226s"
Mar 11 01:31:03.611030 kubelet[2857]: E0311 01:31:03.611001 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:31:16.472931 containerd[1579]: time="2026-03-11T01:31:16.472576370Z" level=warning msg="container event discarded" container=36e2baf3a7f1df8730cb8740fd9c7f36eb78738769f697739d92ac72ea110e34 type=CONTAINER_CREATED_EVENT
Mar 11 01:31:16.472931 containerd[1579]: time="2026-03-11T01:31:16.472730342Z" level=warning msg="container event discarded" container=36e2baf3a7f1df8730cb8740fd9c7f36eb78738769f697739d92ac72ea110e34 type=CONTAINER_STARTED_EVENT
Mar 11 01:31:16.596640 containerd[1579]: time="2026-03-11T01:31:16.596493633Z" level=warning msg="container event discarded" container=75988794ce73dfd34b9b51596d9b1d51a8ca8c05f2e49509529c6f0efba6393a type=CONTAINER_CREATED_EVENT
Mar 11 01:31:18.395902 containerd[1579]: time="2026-03-11T01:31:18.395661561Z" level=warning msg="container event discarded" container=75988794ce73dfd34b9b51596d9b1d51a8ca8c05f2e49509529c6f0efba6393a type=CONTAINER_STARTED_EVENT
Mar 11 01:31:18.395902 containerd[1579]: time="2026-03-11T01:31:18.395731002Z" level=warning msg="container event discarded" container=9b7c807dd91a1da7b18806aeac8361f2cef1d65ffeea889d8db7c2476e93685a type=CONTAINER_CREATED_EVENT
Mar 11 01:31:18.395902 containerd[1579]: time="2026-03-11T01:31:18.395749218Z" level=warning msg="container event discarded" container=9b7c807dd91a1da7b18806aeac8361f2cef1d65ffeea889d8db7c2476e93685a type=CONTAINER_STARTED_EVENT
Mar 11 01:31:26.238034 kubelet[2857]: E0311 01:31:26.237900 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:31:29.217969 kubelet[2857]: E0311 01:31:29.216633 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:31:34.364424 containerd[1579]: time="2026-03-11T01:31:34.364052364Z" level=warning msg="container event discarded" container=3629f50b670711a1be187d87fc7faa2645fce6f40ca9861cf245a7c58f8dd3ff type=CONTAINER_CREATED_EVENT
Mar 11 01:31:34.610571 containerd[1579]: time="2026-03-11T01:31:34.609858383Z" level=warning msg="container event discarded" container=3629f50b670711a1be187d87fc7faa2645fce6f40ca9861cf245a7c58f8dd3ff type=CONTAINER_STARTED_EVENT
Mar 11 01:31:37.065395 systemd[1]: Started sshd@9-10.0.0.26:22-10.0.0.1:34436.service - OpenSSH per-connection server daemon (10.0.0.1:34436).
Mar 11 01:31:37.727383 sshd[6272]: Accepted publickey for core from 10.0.0.1 port 34436 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI
Mar 11 01:31:37.765241 sshd-session[6272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:31:37.829732 systemd-logind[1547]: New session 10 of user core.
Mar 11 01:31:37.894778 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 11 01:31:40.611917 sshd[6286]: Connection closed by 10.0.0.1 port 34436
Mar 11 01:31:40.616353 sshd-session[6272]: pam_unix(sshd:session): session closed for user core
Mar 11 01:31:40.630726 systemd[1]: sshd@9-10.0.0.26:22-10.0.0.1:34436.service: Deactivated successfully.
Mar 11 01:31:40.671702 systemd[1]: session-10.scope: Deactivated successfully.
Mar 11 01:31:40.686251 systemd-logind[1547]: Session 10 logged out. Waiting for processes to exit.
Mar 11 01:31:40.690227 systemd-logind[1547]: Removed session 10.
Mar 11 01:31:45.666893 systemd[1]: Started sshd@10-10.0.0.26:22-10.0.0.1:55330.service - OpenSSH per-connection server daemon (10.0.0.1:55330).
Mar 11 01:31:45.820182 sshd[6351]: Accepted publickey for core from 10.0.0.1 port 55330 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI
Mar 11 01:31:45.830452 sshd-session[6351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:31:45.867538 systemd-logind[1547]: New session 11 of user core.
Mar 11 01:31:45.890588 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 11 01:31:46.245831 sshd[6354]: Connection closed by 10.0.0.1 port 55330
Mar 11 01:31:46.253999 sshd-session[6351]: pam_unix(sshd:session): session closed for user core
Mar 11 01:31:46.267241 kubelet[2857]: E0311 01:31:46.265080 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:31:46.270390 systemd-logind[1547]: Session 11 logged out. Waiting for processes to exit.
Mar 11 01:31:46.277535 systemd[1]: sshd@10-10.0.0.26:22-10.0.0.1:55330.service: Deactivated successfully.
Mar 11 01:31:46.291957 systemd[1]: session-11.scope: Deactivated successfully.
Mar 11 01:31:46.320912 systemd-logind[1547]: Removed session 11.
Mar 11 01:31:51.293877 systemd[1]: Started sshd@11-10.0.0.26:22-10.0.0.1:39300.service - OpenSSH per-connection server daemon (10.0.0.1:39300).
Mar 11 01:31:51.451392 containerd[1579]: time="2026-03-11T01:31:51.450959066Z" level=warning msg="container event discarded" container=45618bbf7fb641674bc1f0a1def907987f122b826f9a3db66e4620b38a09da3f type=CONTAINER_CREATED_EVENT
Mar 11 01:31:51.451392 containerd[1579]: time="2026-03-11T01:31:51.451343657Z" level=warning msg="container event discarded" container=45618bbf7fb641674bc1f0a1def907987f122b826f9a3db66e4620b38a09da3f type=CONTAINER_STARTED_EVENT
Mar 11 01:31:51.475397 sshd[6372]: Accepted publickey for core from 10.0.0.1 port 39300 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI
Mar 11 01:31:51.478402 sshd-session[6372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:31:51.498391 systemd-logind[1547]: New session 12 of user core.
Mar 11 01:31:51.512492 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 11 01:31:51.650547 containerd[1579]: time="2026-03-11T01:31:51.650315538Z" level=warning msg="container event discarded" container=b615ea67cb5cfa8ec597d2f8ac0224720bda5e81d089c0e18916885b30f6bbe0 type=CONTAINER_CREATED_EVENT
Mar 11 01:31:51.650547 containerd[1579]: time="2026-03-11T01:31:51.650400764Z" level=warning msg="container event discarded" container=b615ea67cb5cfa8ec597d2f8ac0224720bda5e81d089c0e18916885b30f6bbe0 type=CONTAINER_STARTED_EVENT
Mar 11 01:31:51.867815 sshd[6375]: Connection closed by 10.0.0.1 port 39300
Mar 11 01:31:51.871249 sshd-session[6372]: pam_unix(sshd:session): session closed for user core
Mar 11 01:31:51.887806 systemd-logind[1547]: Session 12 logged out. Waiting for processes to exit.
Mar 11 01:31:51.891062 systemd[1]: sshd@11-10.0.0.26:22-10.0.0.1:39300.service: Deactivated successfully.
Mar 11 01:31:51.900864 systemd[1]: session-12.scope: Deactivated successfully.
Mar 11 01:31:51.907561 systemd-logind[1547]: Removed session 12.
Mar 11 01:31:56.904988 systemd[1]: Started sshd@12-10.0.0.26:22-10.0.0.1:39308.service - OpenSSH per-connection server daemon (10.0.0.1:39308).
Mar 11 01:31:57.196591 sshd[6389]: Accepted publickey for core from 10.0.0.1 port 39308 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI
Mar 11 01:31:57.209431 sshd-session[6389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:31:57.248640 systemd-logind[1547]: New session 13 of user core.
Mar 11 01:31:57.282339 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 11 01:31:57.630590 containerd[1579]: time="2026-03-11T01:31:57.629650313Z" level=warning msg="container event discarded" container=a37979dd2451c51beebe5ff3272fc31c9d061fb18b40f78688aae9cdcc96aa9e type=CONTAINER_CREATED_EVENT
Mar 11 01:31:57.844907 sshd[6392]: Connection closed by 10.0.0.1 port 39308
Mar 11 01:31:57.848747 sshd-session[6389]: pam_unix(sshd:session): session closed for user core
Mar 11 01:31:57.871912 systemd[1]: sshd@12-10.0.0.26:22-10.0.0.1:39308.service: Deactivated successfully.
Mar 11 01:31:57.880988 systemd[1]: session-13.scope: Deactivated successfully.
Mar 11 01:31:57.888924 systemd-logind[1547]: Session 13 logged out. Waiting for processes to exit.
Mar 11 01:31:57.897566 systemd-logind[1547]: Removed session 13.
Mar 11 01:31:58.012186 containerd[1579]: time="2026-03-11T01:31:58.011526488Z" level=warning msg="container event discarded" container=a37979dd2451c51beebe5ff3272fc31c9d061fb18b40f78688aae9cdcc96aa9e type=CONTAINER_STARTED_EVENT
Mar 11 01:31:59.062814 containerd[1579]: time="2026-03-11T01:31:59.047019456Z" level=warning msg="container event discarded" container=001d729be0ca9b178752591f6c877fa14d0155f6519dcde29ac7fea78a40a0bb type=CONTAINER_CREATED_EVENT
Mar 11 01:31:59.217491 kubelet[2857]: E0311 01:31:59.217375 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:31:59.695058 containerd[1579]: time="2026-03-11T01:31:59.691425028Z" level=warning msg="container event discarded" container=001d729be0ca9b178752591f6c877fa14d0155f6519dcde29ac7fea78a40a0bb type=CONTAINER_STARTED_EVENT
Mar 11 01:32:00.435411 containerd[1579]: time="2026-03-11T01:32:00.429887462Z" level=warning msg="container event discarded" container=001d729be0ca9b178752591f6c877fa14d0155f6519dcde29ac7fea78a40a0bb type=CONTAINER_STOPPED_EVENT
Mar 11 01:32:02.901975 systemd[1]: Started sshd@13-10.0.0.26:22-10.0.0.1:58560.service - OpenSSH per-connection server daemon (10.0.0.1:58560).
Mar 11 01:32:03.024460 sshd[6430]: Accepted publickey for core from 10.0.0.1 port 58560 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI
Mar 11 01:32:03.025547 sshd-session[6430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:32:03.046644 systemd-logind[1547]: New session 14 of user core.
Mar 11 01:32:03.075452 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 11 01:32:03.397594 sshd[6433]: Connection closed by 10.0.0.1 port 58560
Mar 11 01:32:03.400363 sshd-session[6430]: pam_unix(sshd:session): session closed for user core
Mar 11 01:32:03.429624 systemd[1]: sshd@13-10.0.0.26:22-10.0.0.1:58560.service: Deactivated successfully.
Mar 11 01:32:03.447080 systemd[1]: session-14.scope: Deactivated successfully.
Mar 11 01:32:03.489127 systemd-logind[1547]: Session 14 logged out. Waiting for processes to exit.
Mar 11 01:32:03.499470 systemd-logind[1547]: Removed session 14.
Mar 11 01:32:08.454725 systemd[1]: Started sshd@14-10.0.0.26:22-10.0.0.1:58562.service - OpenSSH per-connection server daemon (10.0.0.1:58562).
Mar 11 01:32:08.616730 sshd[6493]: Accepted publickey for core from 10.0.0.1 port 58562 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI
Mar 11 01:32:08.626661 sshd-session[6493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:32:08.667821 systemd-logind[1547]: New session 15 of user core.
Mar 11 01:32:08.681711 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 11 01:32:09.031280 sshd[6496]: Connection closed by 10.0.0.1 port 58562
Mar 11 01:32:09.033493 sshd-session[6493]: pam_unix(sshd:session): session closed for user core
Mar 11 01:32:09.045452 systemd[1]: sshd@14-10.0.0.26:22-10.0.0.1:58562.service: Deactivated successfully.
Mar 11 01:32:09.073678 systemd[1]: session-15.scope: Deactivated successfully.
Mar 11 01:32:09.075726 systemd-logind[1547]: Session 15 logged out. Waiting for processes to exit.
Mar 11 01:32:09.079075 systemd-logind[1547]: Removed session 15.
Mar 11 01:32:12.217842 kubelet[2857]: E0311 01:32:12.216904 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:32:14.115060 systemd[1]: Started sshd@15-10.0.0.26:22-10.0.0.1:45932.service - OpenSSH per-connection server daemon (10.0.0.1:45932).
Mar 11 01:32:14.489740 sshd[6559]: Accepted publickey for core from 10.0.0.1 port 45932 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI
Mar 11 01:32:14.496727 sshd-session[6559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:32:14.527858 systemd-logind[1547]: New session 16 of user core.
Mar 11 01:32:14.584969 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 11 01:32:15.184957 sshd[6562]: Connection closed by 10.0.0.1 port 45932
Mar 11 01:32:15.183838 sshd-session[6559]: pam_unix(sshd:session): session closed for user core
Mar 11 01:32:15.216901 systemd[1]: sshd@15-10.0.0.26:22-10.0.0.1:45932.service: Deactivated successfully.
Mar 11 01:32:15.227308 systemd[1]: session-16.scope: Deactivated successfully.
Mar 11 01:32:15.246887 systemd-logind[1547]: Session 16 logged out. Waiting for processes to exit.
Mar 11 01:32:15.264279 systemd-logind[1547]: Removed session 16.
Mar 11 01:32:19.229271 kubelet[2857]: E0311 01:32:19.228408 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:32:20.224341 systemd[1]: Started sshd@16-10.0.0.26:22-10.0.0.1:47496.service - OpenSSH per-connection server daemon (10.0.0.1:47496).
Mar 11 01:32:20.808239 sshd[6579]: Accepted publickey for core from 10.0.0.1 port 47496 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI
Mar 11 01:32:20.830623 sshd-session[6579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:32:20.921633 systemd-logind[1547]: New session 17 of user core.
Mar 11 01:32:20.949564 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 11 01:32:21.243708 sshd[6582]: Connection closed by 10.0.0.1 port 47496
Mar 11 01:32:21.244442 sshd-session[6579]: pam_unix(sshd:session): session closed for user core
Mar 11 01:32:21.257507 systemd[1]: sshd@16-10.0.0.26:22-10.0.0.1:47496.service: Deactivated successfully.
Mar 11 01:32:21.277405 systemd[1]: session-17.scope: Deactivated successfully.
Mar 11 01:32:21.280913 systemd-logind[1547]: Session 17 logged out. Waiting for processes to exit.
Mar 11 01:32:21.287509 systemd-logind[1547]: Removed session 17.
Mar 11 01:32:26.354057 systemd[1]: Started sshd@17-10.0.0.26:22-10.0.0.1:47506.service - OpenSSH per-connection server daemon (10.0.0.1:47506).
Mar 11 01:32:26.523264 sshd[6597]: Accepted publickey for core from 10.0.0.1 port 47506 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI
Mar 11 01:32:26.530617 sshd-session[6597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:32:26.550408 systemd-logind[1547]: New session 18 of user core.
Mar 11 01:32:26.588914 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 11 01:32:26.840772 sshd[6600]: Connection closed by 10.0.0.1 port 47506
Mar 11 01:32:26.842101 sshd-session[6597]: pam_unix(sshd:session): session closed for user core
Mar 11 01:32:26.852294 systemd[1]: sshd@17-10.0.0.26:22-10.0.0.1:47506.service: Deactivated successfully.
Mar 11 01:32:26.866236 systemd[1]: session-18.scope: Deactivated successfully.
Mar 11 01:32:26.871236 systemd-logind[1547]: Session 18 logged out. Waiting for processes to exit.
Mar 11 01:32:26.874284 systemd-logind[1547]: Removed session 18.
Mar 11 01:32:27.216796 kubelet[2857]: E0311 01:32:27.216717 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:32:32.398566 systemd[1]: Started sshd@18-10.0.0.26:22-10.0.0.1:55596.service - OpenSSH per-connection server daemon (10.0.0.1:55596).
Mar 11 01:32:32.694431 sshd[6640]: Accepted publickey for core from 10.0.0.1 port 55596 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI
Mar 11 01:32:32.707618 sshd-session[6640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:32:32.740262 systemd-logind[1547]: New session 19 of user core.
Mar 11 01:32:32.757747 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 11 01:32:33.300105 sshd[6643]: Connection closed by 10.0.0.1 port 55596
Mar 11 01:32:33.305116 sshd-session[6640]: pam_unix(sshd:session): session closed for user core
Mar 11 01:32:33.329849 systemd[1]: sshd@18-10.0.0.26:22-10.0.0.1:55596.service: Deactivated successfully.
Mar 11 01:32:33.341637 systemd[1]: session-19.scope: Deactivated successfully.
Mar 11 01:32:33.344797 systemd-logind[1547]: Session 19 logged out. Waiting for processes to exit.
Mar 11 01:32:33.350029 systemd-logind[1547]: Removed session 19.
Mar 11 01:32:35.717053 containerd[1579]: time="2026-03-11T01:32:35.714362620Z" level=warning msg="container event discarded" container=e4b4d75354090736396f7a1dc2ff9674953cf53787bd6f7da67bb843e2d258e9 type=CONTAINER_CREATED_EVENT
Mar 11 01:32:36.031675 containerd[1579]: time="2026-03-11T01:32:36.029709079Z" level=warning msg="container event discarded" container=e4b4d75354090736396f7a1dc2ff9674953cf53787bd6f7da67bb843e2d258e9 type=CONTAINER_STARTED_EVENT
Mar 11 01:32:36.412878 containerd[1579]: time="2026-03-11T01:32:36.412522542Z" level=warning msg="container event discarded" container=e4b4d75354090736396f7a1dc2ff9674953cf53787bd6f7da67bb843e2d258e9 type=CONTAINER_STOPPED_EVENT
Mar 11 01:32:38.347027 systemd[1]: Started sshd@19-10.0.0.26:22-10.0.0.1:55600.service - OpenSSH per-connection server daemon (10.0.0.1:55600).
Mar 11 01:32:38.591647 sshd[6657]: Accepted publickey for core from 10.0.0.1 port 55600 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI
Mar 11 01:32:38.599083 sshd-session[6657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:32:38.648027 systemd-logind[1547]: New session 20 of user core.
Mar 11 01:32:38.708939 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 11 01:32:39.805492 sshd[6660]: Connection closed by 10.0.0.1 port 55600
Mar 11 01:32:39.805948 sshd-session[6657]: pam_unix(sshd:session): session closed for user core
Mar 11 01:32:39.830357 systemd[1]: sshd@19-10.0.0.26:22-10.0.0.1:55600.service: Deactivated successfully.
Mar 11 01:32:39.850951 systemd[1]: session-20.scope: Deactivated successfully.
Mar 11 01:32:39.889767 systemd-logind[1547]: Session 20 logged out. Waiting for processes to exit.
Mar 11 01:32:39.908773 systemd-logind[1547]: Removed session 20.
Mar 11 01:32:45.232790 systemd[1]: Started sshd@20-10.0.0.26:22-10.0.0.1:42178.service - OpenSSH per-connection server daemon (10.0.0.1:42178).
Mar 11 01:32:52.275258 containerd[1579]: time="2026-03-11T01:32:52.238278509Z" level=warning msg="container event discarded" container=9194650cd096ddf07511af825c2a04fb04fb27855fe2750d4bbdd516911cafad type=CONTAINER_CREATED_EVENT
Mar 11 01:32:54.520917 containerd[1579]: time="2026-03-11T01:32:52.838946369Z" level=warning msg="container event discarded" container=9194650cd096ddf07511af825c2a04fb04fb27855fe2750d4bbdd516911cafad type=CONTAINER_STARTED_EVENT
Mar 11 01:32:54.621097 sshd[6709]: Accepted publickey for core from 10.0.0.1 port 42178 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI
Mar 11 01:32:54.630322 sshd-session[6709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:32:54.671060 systemd-logind[1547]: New session 21 of user core.
Mar 11 01:32:54.683008 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 11 01:32:54.756343 kubelet[2857]: E0311 01:32:54.755806 2857 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="9.666s"
Mar 11 01:32:55.306864 kubelet[2857]: E0311 01:32:55.306741 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:32:55.310656 kubelet[2857]: E0311 01:32:55.310614 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:32:55.411650 containerd[1579]: time="2026-03-11T01:32:55.343811774Z" level=error msg="ExecSync for \"bc08e59798b87cbee0aaccd90670b72a10b513e31a9874c8d7665662ca0c3eae\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded"
Mar 11 01:32:55.452899 kubelet[2857]: E0311 01:32:55.450661 2857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" containerID="bc08e59798b87cbee0aaccd90670b72a10b513e31a9874c8d7665662ca0c3eae" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Mar 11 01:32:56.008197 sshd[6721]: Connection closed by 10.0.0.1 port 42178
Mar 11 01:32:56.009180 sshd-session[6709]: pam_unix(sshd:session): session closed for user core
Mar 11 01:32:56.018882 systemd-logind[1547]: Session 21 logged out. Waiting for processes to exit.
Mar 11 01:32:56.027626 systemd[1]: sshd@20-10.0.0.26:22-10.0.0.1:42178.service: Deactivated successfully.
Mar 11 01:32:56.028119 systemd[1]: sshd@20-10.0.0.26:22-10.0.0.1:42178.service: Consumed 1.018s CPU time, 3.2M memory peak.
Mar 11 01:32:56.037456 systemd[1]: session-21.scope: Deactivated successfully.
Mar 11 01:32:56.043522 systemd-logind[1547]: Removed session 21.
Mar 11 01:33:01.048632 systemd[1]: Started sshd@21-10.0.0.26:22-10.0.0.1:56772.service - OpenSSH per-connection server daemon (10.0.0.1:56772).
Mar 11 01:33:01.217346 sshd[6762]: Accepted publickey for core from 10.0.0.1 port 56772 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI
Mar 11 01:33:01.224118 sshd-session[6762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:33:01.259635 systemd-logind[1547]: New session 22 of user core.
Mar 11 01:33:01.287561 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 11 01:33:01.665753 sshd[6776]: Connection closed by 10.0.0.1 port 56772
Mar 11 01:33:01.671619 sshd-session[6762]: pam_unix(sshd:session): session closed for user core
Mar 11 01:33:01.689822 systemd[1]: sshd@21-10.0.0.26:22-10.0.0.1:56772.service: Deactivated successfully.
Mar 11 01:33:01.702771 systemd[1]: session-22.scope: Deactivated successfully.
Mar 11 01:33:01.710064 systemd-logind[1547]: Session 22 logged out. Waiting for processes to exit.
Mar 11 01:33:01.715509 systemd-logind[1547]: Removed session 22.
Mar 11 01:33:02.339343 containerd[1579]: time="2026-03-11T01:33:02.339047716Z" level=warning msg="container event discarded" container=9194650cd096ddf07511af825c2a04fb04fb27855fe2750d4bbdd516911cafad type=CONTAINER_STOPPED_EVENT
Mar 11 01:33:06.222072 kubelet[2857]: E0311 01:33:06.219794 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:33:06.535995 containerd[1579]: time="2026-03-11T01:33:06.533898309Z" level=warning msg="container event discarded" container=bc08e59798b87cbee0aaccd90670b72a10b513e31a9874c8d7665662ca0c3eae type=CONTAINER_CREATED_EVENT
Mar 11 01:33:06.715570 systemd[1]: Started sshd@22-10.0.0.26:22-10.0.0.1:56776.service - OpenSSH per-connection server daemon (10.0.0.1:56776).
Mar 11 01:33:06.849603 sshd[6814]: Accepted publickey for core from 10.0.0.1 port 56776 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI
Mar 11 01:33:06.853025 sshd-session[6814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:33:06.876744 systemd-logind[1547]: New session 23 of user core.
Mar 11 01:33:06.896532 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 11 01:33:07.163907 sshd[6817]: Connection closed by 10.0.0.1 port 56776
Mar 11 01:33:07.160462 sshd-session[6814]: pam_unix(sshd:session): session closed for user core
Mar 11 01:33:07.190858 systemd[1]: sshd@22-10.0.0.26:22-10.0.0.1:56776.service: Deactivated successfully.
Mar 11 01:33:07.197429 systemd[1]: session-23.scope: Deactivated successfully.
Mar 11 01:33:07.206037 systemd-logind[1547]: Session 23 logged out. Waiting for processes to exit.
Mar 11 01:33:07.208481 systemd-logind[1547]: Removed session 23.
Mar 11 01:33:07.715110 containerd[1579]: time="2026-03-11T01:33:07.714277694Z" level=warning msg="container event discarded" container=bc08e59798b87cbee0aaccd90670b72a10b513e31a9874c8d7665662ca0c3eae type=CONTAINER_STARTED_EVENT
Mar 11 01:33:08.923794 kubelet[2857]: E0311 01:33:08.922719 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:33:12.198311 systemd[1]: Started sshd@23-10.0.0.26:22-10.0.0.1:60036.service - OpenSSH per-connection server daemon (10.0.0.1:60036).
Mar 11 01:33:12.382511 sshd[6919]: Accepted publickey for core from 10.0.0.1 port 60036 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI
Mar 11 01:33:12.384298 sshd-session[6919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:33:12.398452 systemd-logind[1547]: New session 24 of user core.
Mar 11 01:33:12.419840 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 11 01:33:12.710397 sshd[6922]: Connection closed by 10.0.0.1 port 60036
Mar 11 01:33:12.712475 sshd-session[6919]: pam_unix(sshd:session): session closed for user core
Mar 11 01:33:12.732711 systemd[1]: sshd@23-10.0.0.26:22-10.0.0.1:60036.service: Deactivated successfully.
Mar 11 01:33:12.738021 systemd[1]: session-24.scope: Deactivated successfully.
Mar 11 01:33:12.752966 systemd-logind[1547]: Session 24 logged out. Waiting for processes to exit.
Mar 11 01:33:12.760626 systemd-logind[1547]: Removed session 24.
Mar 11 01:33:16.801698 containerd[1579]: time="2026-03-11T01:33:16.801585523Z" level=warning msg="container event discarded" container=533f86e540757862329771f978c82e5fa76bd41b9259ca27652d73edc31b2cd6 type=CONTAINER_CREATED_EVENT Mar 11 01:33:16.801698 containerd[1579]: time="2026-03-11T01:33:16.801659147Z" level=warning msg="container event discarded" container=533f86e540757862329771f978c82e5fa76bd41b9259ca27652d73edc31b2cd6 type=CONTAINER_STARTED_EVENT Mar 11 01:33:17.747864 systemd[1]: Started sshd@24-10.0.0.26:22-10.0.0.1:60044.service - OpenSSH per-connection server daemon (10.0.0.1:60044). Mar 11 01:33:17.919071 sshd[6937]: Accepted publickey for core from 10.0.0.1 port 60044 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI Mar 11 01:33:17.922974 sshd-session[6937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 01:33:17.952384 systemd-logind[1547]: New session 25 of user core. Mar 11 01:33:17.990366 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 11 01:33:18.383935 sshd[6940]: Connection closed by 10.0.0.1 port 60044 Mar 11 01:33:18.383272 sshd-session[6937]: pam_unix(sshd:session): session closed for user core Mar 11 01:33:18.403031 systemd[1]: sshd@24-10.0.0.26:22-10.0.0.1:60044.service: Deactivated successfully. Mar 11 01:33:18.413062 systemd[1]: session-25.scope: Deactivated successfully. Mar 11 01:33:18.422943 systemd-logind[1547]: Session 25 logged out. Waiting for processes to exit. Mar 11 01:33:18.429993 systemd-logind[1547]: Removed session 25. 
Mar 11 01:33:19.220018 containerd[1579]: time="2026-03-11T01:33:19.219901909Z" level=warning msg="container event discarded" container=7a3d760309d5b00f0f7963f5cdcb39f319ba53a6a1ab0bba6f81797d98f577f4 type=CONTAINER_CREATED_EVENT Mar 11 01:33:20.987711 containerd[1579]: time="2026-03-11T01:33:20.987495651Z" level=warning msg="container event discarded" container=7a3d760309d5b00f0f7963f5cdcb39f319ba53a6a1ab0bba6f81797d98f577f4 type=CONTAINER_STARTED_EVENT Mar 11 01:33:22.883669 containerd[1579]: time="2026-03-11T01:33:22.883431553Z" level=warning msg="container event discarded" container=c8257c2ca4c712a8c25336fa0a63a2d5eb8b12ad9171ee2059e1e2d0a7e4d36f type=CONTAINER_CREATED_EVENT Mar 11 01:33:22.883669 containerd[1579]: time="2026-03-11T01:33:22.883526339Z" level=warning msg="container event discarded" container=c8257c2ca4c712a8c25336fa0a63a2d5eb8b12ad9171ee2059e1e2d0a7e4d36f type=CONTAINER_STARTED_EVENT Mar 11 01:33:23.217713 kubelet[2857]: E0311 01:33:23.217455 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:33:23.409091 systemd[1]: Started sshd@25-10.0.0.26:22-10.0.0.1:59168.service - OpenSSH per-connection server daemon (10.0.0.1:59168). Mar 11 01:33:23.587824 sshd[6957]: Accepted publickey for core from 10.0.0.1 port 59168 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI Mar 11 01:33:23.599309 sshd-session[6957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 01:33:23.617378 systemd-logind[1547]: New session 26 of user core. Mar 11 01:33:23.628596 systemd[1]: Started session-26.scope - Session 26 of User core. 
Mar 11 01:33:23.839938 sshd[6960]: Connection closed by 10.0.0.1 port 59168 Mar 11 01:33:23.839805 sshd-session[6957]: pam_unix(sshd:session): session closed for user core Mar 11 01:33:23.848775 systemd[1]: sshd@25-10.0.0.26:22-10.0.0.1:59168.service: Deactivated successfully. Mar 11 01:33:23.849453 systemd-logind[1547]: Session 26 logged out. Waiting for processes to exit. Mar 11 01:33:23.864423 systemd[1]: session-26.scope: Deactivated successfully. Mar 11 01:33:23.872238 systemd-logind[1547]: Removed session 26. Mar 11 01:33:26.247820 containerd[1579]: time="2026-03-11T01:33:26.247450582Z" level=warning msg="container event discarded" container=80f5291d1e98eec24795fbb9c1cecfb7b1c009c3bbbe6074fc8fe5adf5329852 type=CONTAINER_CREATED_EVENT Mar 11 01:33:26.590524 containerd[1579]: time="2026-03-11T01:33:26.590278065Z" level=warning msg="container event discarded" container=04b32efac5491b708bc22eb4ede85eb7ed3ee21025ade3cda0d96001830c53d3 type=CONTAINER_CREATED_EVENT Mar 11 01:33:26.590524 containerd[1579]: time="2026-03-11T01:33:26.590374927Z" level=warning msg="container event discarded" container=04b32efac5491b708bc22eb4ede85eb7ed3ee21025ade3cda0d96001830c53d3 type=CONTAINER_STARTED_EVENT Mar 11 01:33:27.000414 containerd[1579]: time="2026-03-11T01:33:27.000208683Z" level=warning msg="container event discarded" container=80f5291d1e98eec24795fbb9c1cecfb7b1c009c3bbbe6074fc8fe5adf5329852 type=CONTAINER_STARTED_EVENT Mar 11 01:33:27.922766 containerd[1579]: time="2026-03-11T01:33:27.922562188Z" level=warning msg="container event discarded" container=84243771c733e465fed95ee45179b96ef24d9259c56bd27d30b8b496ed40765b type=CONTAINER_CREATED_EVENT Mar 11 01:33:27.922766 containerd[1579]: time="2026-03-11T01:33:27.922647998Z" level=warning msg="container event discarded" container=84243771c733e465fed95ee45179b96ef24d9259c56bd27d30b8b496ed40765b type=CONTAINER_STARTED_EVENT
Mar 11 01:33:28.083543 containerd[1579]: time="2026-03-11T01:33:28.083355509Z" level=warning msg="container event discarded" container=c4dc3b48f6a4aaf2d2d8324c515dd87b879b7669e88cc192076045110f1cc3c2 type=CONTAINER_CREATED_EVENT Mar 11 01:33:28.083543 containerd[1579]: time="2026-03-11T01:33:28.083436093Z" level=warning msg="container event discarded" container=c4dc3b48f6a4aaf2d2d8324c515dd87b879b7669e88cc192076045110f1cc3c2 type=CONTAINER_STARTED_EVENT Mar 11 01:33:28.146909 containerd[1579]: time="2026-03-11T01:33:28.146763849Z" level=warning msg="container event discarded" container=86a6e68cb83eb28bc71f7ccafd9b57eef4a9ffaf53d3b465bfb909748c958cc6 type=CONTAINER_CREATED_EVENT Mar 11 01:33:28.146909 containerd[1579]: time="2026-03-11T01:33:28.146863997Z" level=warning msg="container event discarded" container=86a6e68cb83eb28bc71f7ccafd9b57eef4a9ffaf53d3b465bfb909748c958cc6 type=CONTAINER_STARTED_EVENT Mar 11 01:33:28.220657 kubelet[2857]: E0311 01:33:28.217801 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:33:28.411329 containerd[1579]: time="2026-03-11T01:33:28.409123795Z" level=warning msg="container event discarded" container=4bee9baf24b2eb9778b3ff22d5d65bcbd02bcb112a76ea2e537234ff869fa2b6 type=CONTAINER_CREATED_EVENT Mar 11 01:33:28.411329 containerd[1579]: time="2026-03-11T01:33:28.409261091Z" level=warning msg="container event discarded" container=4bee9baf24b2eb9778b3ff22d5d65bcbd02bcb112a76ea2e537234ff869fa2b6 type=CONTAINER_STARTED_EVENT Mar 11 01:33:28.839400 containerd[1579]: time="2026-03-11T01:33:28.839002045Z" level=warning msg="container event discarded" container=a4823b144750c4ba395dd006b4facd97bc8200b876b5a4fcd29ca3acd7b6f4ce type=CONTAINER_CREATED_EVENT Mar 11 01:33:28.839400 containerd[1579]: time="2026-03-11T01:33:28.839076861Z" level=warning msg="container event discarded" container=a4823b144750c4ba395dd006b4facd97bc8200b876b5a4fcd29ca3acd7b6f4ce type=CONTAINER_STARTED_EVENT
Mar 11 01:33:28.873601 systemd[1]: Started sshd@26-10.0.0.26:22-10.0.0.1:54336.service - OpenSSH per-connection server daemon (10.0.0.1:54336). Mar 11 01:33:29.056706 sshd[6975]: Accepted publickey for core from 10.0.0.1 port 54336 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI Mar 11 01:33:29.085272 sshd-session[6975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 01:33:29.103843 containerd[1579]: time="2026-03-11T01:33:29.103678960Z" level=warning msg="container event discarded" container=a9bd04c69b5e36cfe3432e9b22aafd37a5f9e823ccf5db2860da892d5eb5777e type=CONTAINER_CREATED_EVENT Mar 11 01:33:29.104474 systemd-logind[1547]: New session 27 of user core. Mar 11 01:33:29.121550 systemd[1]: Started session-27.scope - Session 27 of User core. Mar 11 01:33:29.478520 sshd[7001]: Connection closed by 10.0.0.1 port 54336 Mar 11 01:33:29.476238 sshd-session[6975]: pam_unix(sshd:session): session closed for user core Mar 11 01:33:29.514044 systemd[1]: sshd@26-10.0.0.26:22-10.0.0.1:54336.service: Deactivated successfully. Mar 11 01:33:29.529997 systemd[1]: session-27.scope: Deactivated successfully. Mar 11 01:33:29.545377 systemd-logind[1547]: Session 27 logged out. Waiting for processes to exit. Mar 11 01:33:29.562463 systemd-logind[1547]: Removed session 27.
Mar 11 01:33:29.678856 containerd[1579]: time="2026-03-11T01:33:29.678740351Z" level=warning msg="container event discarded" container=a9bd04c69b5e36cfe3432e9b22aafd37a5f9e823ccf5db2860da892d5eb5777e type=CONTAINER_STARTED_EVENT Mar 11 01:33:30.227948 kubelet[2857]: E0311 01:33:30.225964 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:33:31.695705 containerd[1579]: time="2026-03-11T01:33:31.693553792Z" level=warning msg="container event discarded" container=16bf87d272dbfc838d7a197ff51d8ebf93e2fbc7094b15cb32e34d3d6fb38347 type=CONTAINER_CREATED_EVENT Mar 11 01:33:32.259209 containerd[1579]: time="2026-03-11T01:33:32.258562586Z" level=warning msg="container event discarded" container=16bf87d272dbfc838d7a197ff51d8ebf93e2fbc7094b15cb32e34d3d6fb38347 type=CONTAINER_STARTED_EVENT Mar 11 01:33:34.501312 systemd[1]: Started sshd@27-10.0.0.26:22-10.0.0.1:54346.service - OpenSSH per-connection server daemon (10.0.0.1:54346). Mar 11 01:33:34.687817 sshd[7015]: Accepted publickey for core from 10.0.0.1 port 54346 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI Mar 11 01:33:34.691612 sshd-session[7015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 01:33:34.723742 systemd-logind[1547]: New session 28 of user core. Mar 11 01:33:34.737074 systemd[1]: Started session-28.scope - Session 28 of User core. Mar 11 01:33:34.958621 sshd[7018]: Connection closed by 10.0.0.1 port 54346 Mar 11 01:33:34.956459 sshd-session[7015]: pam_unix(sshd:session): session closed for user core Mar 11 01:33:34.974697 systemd[1]: sshd@27-10.0.0.26:22-10.0.0.1:54346.service: Deactivated successfully. Mar 11 01:33:34.987854 systemd[1]: session-28.scope: Deactivated successfully. Mar 11 01:33:34.994278 systemd-logind[1547]: Session 28 logged out. Waiting for processes to exit. 
Mar 11 01:33:35.003533 systemd-logind[1547]: Removed session 28. Mar 11 01:33:40.032855 systemd[1]: Started sshd@28-10.0.0.26:22-10.0.0.1:41504.service - OpenSSH per-connection server daemon (10.0.0.1:41504). Mar 11 01:33:40.263277 sshd[7057]: Accepted publickey for core from 10.0.0.1 port 41504 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI Mar 11 01:33:40.268991 sshd-session[7057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 01:33:40.309231 systemd-logind[1547]: New session 29 of user core. Mar 11 01:33:40.315935 systemd[1]: Started session-29.scope - Session 29 of User core. Mar 11 01:33:40.777000 sshd[7072]: Connection closed by 10.0.0.1 port 41504 Mar 11 01:33:40.779616 sshd-session[7057]: pam_unix(sshd:session): session closed for user core Mar 11 01:33:40.793384 systemd[1]: sshd@28-10.0.0.26:22-10.0.0.1:41504.service: Deactivated successfully. Mar 11 01:33:40.796744 systemd-logind[1547]: Session 29 logged out. Waiting for processes to exit. Mar 11 01:33:40.800503 systemd[1]: session-29.scope: Deactivated successfully. Mar 11 01:33:46.502601 containerd[1579]: time="2026-03-11T01:33:46.502278791Z" level=warning msg="container event discarded" container=3515624770fefb9492ba0ac350a52a32cab534639a1be3871fb9ac09c6a1d285 type=CONTAINER_CREATED_EVENT Mar 11 01:33:46.530064 systemd[1]: Started sshd@29-10.0.0.26:22-10.0.0.1:41510.service - OpenSSH per-connection server daemon (10.0.0.1:41510). Mar 11 01:33:46.555752 systemd-logind[1547]: Removed session 29. 
Mar 11 01:33:46.721765 kubelet[2857]: E0311 01:33:46.710715 2857 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.779s" Mar 11 01:33:46.916929 sshd[7097]: Accepted publickey for core from 10.0.0.1 port 41510 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI Mar 11 01:33:46.920800 sshd-session[7097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 01:33:46.936362 systemd-logind[1547]: New session 30 of user core. Mar 11 01:33:46.955723 systemd[1]: Started session-30.scope - Session 30 of User core. Mar 11 01:33:47.210829 sshd[7110]: Connection closed by 10.0.0.1 port 41510 Mar 11 01:33:47.212263 sshd-session[7097]: pam_unix(sshd:session): session closed for user core Mar 11 01:33:47.227582 systemd[1]: sshd@29-10.0.0.26:22-10.0.0.1:41510.service: Deactivated successfully. Mar 11 01:33:47.234500 systemd[1]: session-30.scope: Deactivated successfully. Mar 11 01:33:47.253462 systemd-logind[1547]: Session 30 logged out. Waiting for processes to exit. Mar 11 01:33:47.268456 systemd-logind[1547]: Removed session 30. 
Mar 11 01:33:50.089492 containerd[1579]: time="2026-03-11T01:33:50.088987994Z" level=warning msg="container event discarded" container=3515624770fefb9492ba0ac350a52a32cab534639a1be3871fb9ac09c6a1d285 type=CONTAINER_STARTED_EVENT Mar 11 01:33:50.089492 containerd[1579]: time="2026-03-11T01:33:50.089067953Z" level=warning msg="container event discarded" container=123adc9479a617f0d16492796d763b6aabdda90aea726ef3b3fcee93292fedcb type=CONTAINER_CREATED_EVENT Mar 11 01:33:50.089492 containerd[1579]: time="2026-03-11T01:33:50.089083757Z" level=warning msg="container event discarded" container=dcfa30ea315e886a7b86b6af8a65d38451afc7907035906a5333f85f17b2e41d type=CONTAINER_CREATED_EVENT Mar 11 01:33:50.089492 containerd[1579]: time="2026-03-11T01:33:50.089096015Z" level=warning msg="container event discarded" container=dcfa30ea315e886a7b86b6af8a65d38451afc7907035906a5333f85f17b2e41d type=CONTAINER_STARTED_EVENT Mar 11 01:33:50.089492 containerd[1579]: time="2026-03-11T01:33:50.089106650Z" level=warning msg="container event discarded" container=123adc9479a617f0d16492796d763b6aabdda90aea726ef3b3fcee93292fedcb type=CONTAINER_STARTED_EVENT Mar 11 01:33:52.309825 systemd[1]: Started sshd@30-10.0.0.26:22-10.0.0.1:48236.service - OpenSSH per-connection server daemon (10.0.0.1:48236). Mar 11 01:33:52.912550 sshd[7132]: Accepted publickey for core from 10.0.0.1 port 48236 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI Mar 11 01:33:52.916853 sshd-session[7132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 01:33:52.990256 systemd-logind[1547]: New session 31 of user core. Mar 11 01:33:53.013246 systemd[1]: Started session-31.scope - Session 31 of User core. Mar 11 01:33:53.367219 sshd[7135]: Connection closed by 10.0.0.1 port 48236 Mar 11 01:33:53.366455 sshd-session[7132]: pam_unix(sshd:session): session closed for user core Mar 11 01:33:53.378310 systemd[1]: sshd@30-10.0.0.26:22-10.0.0.1:48236.service: Deactivated successfully. 
Mar 11 01:33:53.383112 systemd[1]: session-31.scope: Deactivated successfully. Mar 11 01:33:53.388967 systemd-logind[1547]: Session 31 logged out. Waiting for processes to exit. Mar 11 01:33:53.406881 systemd-logind[1547]: Removed session 31. Mar 11 01:33:54.069684 update_engine[1549]: I20260311 01:33:54.066747 1549 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Mar 11 01:33:54.069684 update_engine[1549]: I20260311 01:33:54.068053 1549 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Mar 11 01:33:54.083946 update_engine[1549]: I20260311 01:33:54.083847 1549 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Mar 11 01:33:54.086201 update_engine[1549]: I20260311 01:33:54.085028 1549 omaha_request_params.cc:62] Current group set to stable Mar 11 01:33:54.086340 update_engine[1549]: I20260311 01:33:54.086311 1549 update_attempter.cc:499] Already updated boot flags. Skipping. Mar 11 01:33:54.088928 update_engine[1549]: I20260311 01:33:54.088713 1549 update_attempter.cc:643] Scheduling an action processor start. 
Mar 11 01:33:54.088928 update_engine[1549]: I20260311 01:33:54.088807 1549 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 11 01:33:54.090197 update_engine[1549]: I20260311 01:33:54.088935 1549 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Mar 11 01:33:54.090197 update_engine[1549]: I20260311 01:33:54.089053 1549 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 11 01:33:54.090197 update_engine[1549]: I20260311 01:33:54.089071 1549 omaha_request_action.cc:272] Request: Mar 11 01:33:54.090197 update_engine[1549]: Mar 11 01:33:54.090197 update_engine[1549]: Mar 11 01:33:54.090197 update_engine[1549]: Mar 11 01:33:54.090197 update_engine[1549]: Mar 11 01:33:54.090197 update_engine[1549]: Mar 11 01:33:54.090197 update_engine[1549]: Mar 11 01:33:54.090197 update_engine[1549]: Mar 11 01:33:54.090197 update_engine[1549]: Mar 11 01:33:54.090197 update_engine[1549]: I20260311 01:33:54.089082 1549 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 11 01:33:54.106679 update_engine[1549]: I20260311 01:33:54.105389 1549 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 11 01:33:54.106679 update_engine[1549]: I20260311 01:33:54.106462 1549 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 11 01:33:54.132107 update_engine[1549]: E20260311 01:33:54.131033 1549 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 11 01:33:54.132477 update_engine[1549]: I20260311 01:33:54.132440 1549 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Mar 11 01:33:54.146963 locksmithd[1601]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Mar 11 01:33:56.704299 containerd[1579]: time="2026-03-11T01:33:56.704213857Z" level=warning msg="container event discarded" container=b3e6c489993dd5c698260fffac7aa1773e1d3f5a942c7ce7e6965d89e1f56b58 type=CONTAINER_CREATED_EVENT Mar 11 01:33:57.096628 containerd[1579]: time="2026-03-11T01:33:57.096494094Z" level=warning msg="container event discarded" container=b3e6c489993dd5c698260fffac7aa1773e1d3f5a942c7ce7e6965d89e1f56b58 type=CONTAINER_STARTED_EVENT Mar 11 01:33:58.402054 systemd[1]: Started sshd@31-10.0.0.26:22-10.0.0.1:48246.service - OpenSSH per-connection server daemon (10.0.0.1:48246). Mar 11 01:33:58.565940 sshd[7152]: Accepted publickey for core from 10.0.0.1 port 48246 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI Mar 11 01:33:58.571436 sshd-session[7152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 01:33:58.581621 systemd-logind[1547]: New session 32 of user core. Mar 11 01:33:58.591328 systemd[1]: Started session-32.scope - Session 32 of User core. Mar 11 01:33:59.071500 sshd[7155]: Connection closed by 10.0.0.1 port 48246 Mar 11 01:33:59.072615 sshd-session[7152]: pam_unix(sshd:session): session closed for user core Mar 11 01:33:59.095982 systemd[1]: sshd@31-10.0.0.26:22-10.0.0.1:48246.service: Deactivated successfully. Mar 11 01:33:59.105056 systemd[1]: session-32.scope: Deactivated successfully. Mar 11 01:33:59.111723 systemd-logind[1547]: Session 32 logged out. Waiting for processes to exit. 
Mar 11 01:33:59.123921 systemd-logind[1547]: Removed session 32. Mar 11 01:34:02.225366 kubelet[2857]: E0311 01:34:02.223701 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:34:03.912044 update_engine[1549]: I20260311 01:34:03.911049 1549 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 11 01:34:03.912044 update_engine[1549]: I20260311 01:34:03.911299 1549 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 11 01:34:03.912044 update_engine[1549]: I20260311 01:34:03.911883 1549 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 11 01:34:03.929360 update_engine[1549]: E20260311 01:34:03.928633 1549 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 11 01:34:03.929360 update_engine[1549]: I20260311 01:34:03.928849 1549 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Mar 11 01:34:04.156752 systemd[1]: Started sshd@32-10.0.0.26:22-10.0.0.1:47396.service - OpenSSH per-connection server daemon (10.0.0.1:47396). Mar 11 01:34:04.321805 sshd[7193]: Accepted publickey for core from 10.0.0.1 port 47396 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI Mar 11 01:34:04.331764 sshd-session[7193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 01:34:04.371617 systemd-logind[1547]: New session 33 of user core. Mar 11 01:34:04.392596 systemd[1]: Started session-33.scope - Session 33 of User core. Mar 11 01:34:04.909121 sshd[7199]: Connection closed by 10.0.0.1 port 47396 Mar 11 01:34:04.906899 sshd-session[7193]: pam_unix(sshd:session): session closed for user core Mar 11 01:34:04.915858 systemd[1]: sshd@32-10.0.0.26:22-10.0.0.1:47396.service: Deactivated successfully. Mar 11 01:34:04.923725 systemd[1]: session-33.scope: Deactivated successfully. 
Mar 11 01:34:04.929842 systemd-logind[1547]: Session 33 logged out. Waiting for processes to exit. Mar 11 01:34:04.947641 systemd-logind[1547]: Removed session 33. Mar 11 01:34:07.429397 containerd[1579]: time="2026-03-11T01:34:07.429229122Z" level=warning msg="container event discarded" container=bc1c64a492a89f0fd147d439b08edf33ed13118ff083a084fc4f3b248e05157a type=CONTAINER_CREATED_EVENT Mar 11 01:34:07.966585 containerd[1579]: time="2026-03-11T01:34:07.966491838Z" level=warning msg="container event discarded" container=bc1c64a492a89f0fd147d439b08edf33ed13118ff083a084fc4f3b248e05157a type=CONTAINER_STARTED_EVENT Mar 11 01:34:09.217002 kubelet[2857]: E0311 01:34:09.216402 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:34:09.944743 systemd[1]: Started sshd@33-10.0.0.26:22-10.0.0.1:44244.service - OpenSSH per-connection server daemon (10.0.0.1:44244). Mar 11 01:34:10.155314 sshd[7287]: Accepted publickey for core from 10.0.0.1 port 44244 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI Mar 11 01:34:10.164482 sshd-session[7287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 01:34:10.177826 systemd-logind[1547]: New session 34 of user core. Mar 11 01:34:10.219739 systemd[1]: Started session-34.scope - Session 34 of User core. Mar 11 01:34:10.431823 sshd[7291]: Connection closed by 10.0.0.1 port 44244 Mar 11 01:34:10.431694 sshd-session[7287]: pam_unix(sshd:session): session closed for user core Mar 11 01:34:10.437533 systemd[1]: sshd@33-10.0.0.26:22-10.0.0.1:44244.service: Deactivated successfully. Mar 11 01:34:10.441226 systemd[1]: session-34.scope: Deactivated successfully. Mar 11 01:34:10.445688 systemd-logind[1547]: Session 34 logged out. Waiting for processes to exit. Mar 11 01:34:10.448451 systemd-logind[1547]: Removed session 34. 
Mar 11 01:34:11.243111 containerd[1579]: time="2026-03-11T01:34:11.242673692Z" level=warning msg="container event discarded" container=b0ee2f22c56cfa143c030f532955d1d43f63af75f62e2011b73b1b706201e9e7 type=CONTAINER_CREATED_EVENT Mar 11 01:34:12.285807 containerd[1579]: time="2026-03-11T01:34:12.280240060Z" level=warning msg="container event discarded" container=b0ee2f22c56cfa143c030f532955d1d43f63af75f62e2011b73b1b706201e9e7 type=CONTAINER_STARTED_EVENT Mar 11 01:34:13.224354 kubelet[2857]: E0311 01:34:13.224219 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:34:15.128688 update_engine[1549]: I20260311 01:34:14.467389 1549 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 11 01:34:17.121780 update_engine[1549]: I20260311 01:34:15.624543 1549 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 11 01:34:17.121780 update_engine[1549]: I20260311 01:34:17.098233 1549 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 11 01:34:17.121780 update_engine[1549]: E20260311 01:34:17.117533 1549 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 11 01:34:17.121780 update_engine[1549]: I20260311 01:34:17.117882 1549 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Mar 11 01:34:17.176090 systemd[1]: Started sshd@34-10.0.0.26:22-10.0.0.1:44248.service - OpenSSH per-connection server daemon (10.0.0.1:44248). 
Mar 11 01:34:17.291217 kubelet[2857]: E0311 01:34:17.290410 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:34:17.469380 sshd[7326]: Accepted publickey for core from 10.0.0.1 port 44248 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI Mar 11 01:34:19.240908 sshd-session[7326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 01:34:19.277326 systemd-logind[1547]: New session 35 of user core. Mar 11 01:34:19.286485 systemd[1]: Started session-35.scope - Session 35 of User core. Mar 11 01:34:19.579835 sshd[7330]: Connection closed by 10.0.0.1 port 44248 Mar 11 01:34:19.580125 sshd-session[7326]: pam_unix(sshd:session): session closed for user core Mar 11 01:34:19.590892 systemd[1]: sshd@34-10.0.0.26:22-10.0.0.1:44248.service: Deactivated successfully. Mar 11 01:34:19.595247 systemd[1]: session-35.scope: Deactivated successfully. Mar 11 01:34:19.600462 systemd-logind[1547]: Session 35 logged out. Waiting for processes to exit. Mar 11 01:34:19.604915 systemd-logind[1547]: Removed session 35. Mar 11 01:34:24.625702 systemd[1]: Started sshd@35-10.0.0.26:22-10.0.0.1:44960.service - OpenSSH per-connection server daemon (10.0.0.1:44960). Mar 11 01:34:24.795510 sshd[7348]: Accepted publickey for core from 10.0.0.1 port 44960 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI Mar 11 01:34:24.802602 sshd-session[7348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 01:34:24.858286 systemd-logind[1547]: New session 36 of user core. Mar 11 01:34:24.878086 systemd[1]: Started session-36.scope - Session 36 of User core. 
Mar 11 01:34:25.387852 sshd[7351]: Connection closed by 10.0.0.1 port 44960 Mar 11 01:34:25.387456 sshd-session[7348]: pam_unix(sshd:session): session closed for user core Mar 11 01:34:25.409900 systemd[1]: sshd@35-10.0.0.26:22-10.0.0.1:44960.service: Deactivated successfully. Mar 11 01:34:25.413610 systemd[1]: session-36.scope: Deactivated successfully. Mar 11 01:34:25.416975 systemd-logind[1547]: Session 36 logged out. Waiting for processes to exit. Mar 11 01:34:25.422777 systemd-logind[1547]: Removed session 36. Mar 11 01:34:26.911423 update_engine[1549]: I20260311 01:34:26.910725 1549 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 11 01:34:26.911423 update_engine[1549]: I20260311 01:34:26.910897 1549 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 11 01:34:26.911974 update_engine[1549]: I20260311 01:34:26.911792 1549 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 11 01:34:26.941227 update_engine[1549]: E20260311 01:34:26.937191 1549 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 11 01:34:26.941227 update_engine[1549]: I20260311 01:34:26.937690 1549 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 11 01:34:26.941227 update_engine[1549]: I20260311 01:34:26.937728 1549 omaha_request_action.cc:617] Omaha request response: Mar 11 01:34:26.941227 update_engine[1549]: E20260311 01:34:26.937867 1549 omaha_request_action.cc:636] Omaha request network transfer failed. Mar 11 01:34:26.941227 update_engine[1549]: I20260311 01:34:26.937976 1549 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. 
Mar 11 01:34:26.941227 update_engine[1549]: I20260311 01:34:26.937990 1549 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 11 01:34:26.941227 update_engine[1549]: I20260311 01:34:26.938040 1549 update_attempter.cc:306] Processing Done. Mar 11 01:34:26.941227 update_engine[1549]: E20260311 01:34:26.938061 1549 update_attempter.cc:619] Update failed. Mar 11 01:34:26.941227 update_engine[1549]: I20260311 01:34:26.938102 1549 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Mar 11 01:34:26.941227 update_engine[1549]: I20260311 01:34:26.938112 1549 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Mar 11 01:34:26.941227 update_engine[1549]: I20260311 01:34:26.938120 1549 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Mar 11 01:34:26.941227 update_engine[1549]: I20260311 01:34:26.938264 1549 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 11 01:34:26.941227 update_engine[1549]: I20260311 01:34:26.938338 1549 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 11 01:34:26.941227 update_engine[1549]: I20260311 01:34:26.938354 1549 omaha_request_action.cc:272] Request: Mar 11 01:34:26.941227 update_engine[1549]: Mar 11 01:34:26.941227 update_engine[1549]: Mar 11 01:34:26.941902 locksmithd[1601]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Mar 11 01:34:26.942667 update_engine[1549]: Mar 11 01:34:26.942667 update_engine[1549]: Mar 11 01:34:26.942667 update_engine[1549]: Mar 11 01:34:26.942667 update_engine[1549]: Mar 11 01:34:26.942667 update_engine[1549]: I20260311 01:34:26.938365 1549 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 11 01:34:26.942667 update_engine[1549]: I20260311 01:34:26.938400 1549 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 11 01:34:26.942667 update_engine[1549]: I20260311 01:34:26.941109 1549 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 11 01:34:26.970014 update_engine[1549]: E20260311 01:34:26.969579 1549 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 11 01:34:26.970014 update_engine[1549]: I20260311 01:34:26.969734 1549 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 11 01:34:26.970014 update_engine[1549]: I20260311 01:34:26.969758 1549 omaha_request_action.cc:617] Omaha request response: Mar 11 01:34:26.970014 update_engine[1549]: I20260311 01:34:26.969771 1549 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 11 01:34:26.970014 update_engine[1549]: I20260311 01:34:26.969782 1549 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 11 01:34:26.970014 update_engine[1549]: I20260311 01:34:26.969794 1549 update_attempter.cc:306] Processing Done. Mar 11 01:34:26.970014 update_engine[1549]: I20260311 01:34:26.969805 1549 update_attempter.cc:310] Error event sent. Mar 11 01:34:26.970014 update_engine[1549]: I20260311 01:34:26.969822 1549 update_check_scheduler.cc:74] Next update check in 43m23s Mar 11 01:34:26.980299 locksmithd[1601]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Mar 11 01:34:29.218374 kubelet[2857]: E0311 01:34:29.217687 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 01:34:30.449421 systemd[1]: Started sshd@36-10.0.0.26:22-10.0.0.1:59638.service - OpenSSH per-connection server daemon (10.0.0.1:59638).
Mar 11 01:34:30.600963 sshd[7409]: Accepted publickey for core from 10.0.0.1 port 59638 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI
Mar 11 01:34:30.605062 sshd-session[7409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:34:30.623932 systemd-logind[1547]: New session 37 of user core.
Mar 11 01:34:30.640496 systemd[1]: Started session-37.scope - Session 37 of User core.
Mar 11 01:34:31.194820 sshd[7412]: Connection closed by 10.0.0.1 port 59638
Mar 11 01:34:31.192444 sshd-session[7409]: pam_unix(sshd:session): session closed for user core
Mar 11 01:34:31.245409 systemd[1]: sshd@36-10.0.0.26:22-10.0.0.1:59638.service: Deactivated successfully.
Mar 11 01:34:31.265463 systemd[1]: session-37.scope: Deactivated successfully.
Mar 11 01:34:31.283014 systemd-logind[1547]: Session 37 logged out. Waiting for processes to exit.
Mar 11 01:34:31.317963 systemd[1]: Started sshd@37-10.0.0.26:22-10.0.0.1:59642.service - OpenSSH per-connection server daemon (10.0.0.1:59642).
Mar 11 01:34:31.327404 systemd-logind[1547]: Removed session 37.
Mar 11 01:34:31.503172 sshd[7426]: Accepted publickey for core from 10.0.0.1 port 59642 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI
Mar 11 01:34:31.512245 sshd-session[7426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:34:31.545899 systemd-logind[1547]: New session 38 of user core.
Mar 11 01:34:31.549514 systemd[1]: Started session-38.scope - Session 38 of User core.
Mar 11 01:34:31.969006 sshd[7429]: Connection closed by 10.0.0.1 port 59642
Mar 11 01:34:31.972471 sshd-session[7426]: pam_unix(sshd:session): session closed for user core
Mar 11 01:34:31.992741 systemd[1]: sshd@37-10.0.0.26:22-10.0.0.1:59642.service: Deactivated successfully.
Mar 11 01:34:32.003970 systemd[1]: session-38.scope: Deactivated successfully.
Mar 11 01:34:32.012758 systemd-logind[1547]: Session 38 logged out. Waiting for processes to exit.
Mar 11 01:34:32.058313 systemd[1]: Started sshd@38-10.0.0.26:22-10.0.0.1:59652.service - OpenSSH per-connection server daemon (10.0.0.1:59652).
Mar 11 01:34:32.063125 systemd-logind[1547]: Removed session 38.
Mar 11 01:34:32.303796 sshd[7441]: Accepted publickey for core from 10.0.0.1 port 59652 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI
Mar 11 01:34:32.308906 sshd-session[7441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:34:32.317205 systemd-logind[1547]: New session 39 of user core.
Mar 11 01:34:32.334045 systemd[1]: Started session-39.scope - Session 39 of User core.
Mar 11 01:34:32.604275 sshd[7444]: Connection closed by 10.0.0.1 port 59652
Mar 11 01:34:32.605407 sshd-session[7441]: pam_unix(sshd:session): session closed for user core
Mar 11 01:34:32.614017 systemd[1]: sshd@38-10.0.0.26:22-10.0.0.1:59652.service: Deactivated successfully.
Mar 11 01:34:32.626793 systemd[1]: session-39.scope: Deactivated successfully.
Mar 11 01:34:32.632956 systemd-logind[1547]: Session 39 logged out. Waiting for processes to exit.
Mar 11 01:34:32.654749 systemd-logind[1547]: Removed session 39.
Mar 11 01:34:37.669803 systemd[1]: Started sshd@39-10.0.0.26:22-10.0.0.1:59664.service - OpenSSH per-connection server daemon (10.0.0.1:59664).
Mar 11 01:34:37.876182 sshd[7469]: Accepted publickey for core from 10.0.0.1 port 59664 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI
Mar 11 01:34:37.883353 sshd-session[7469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:34:37.902354 systemd-logind[1547]: New session 40 of user core.
Mar 11 01:34:37.916892 systemd[1]: Started session-40.scope - Session 40 of User core.
Mar 11 01:34:38.250933 sshd[7472]: Connection closed by 10.0.0.1 port 59664
Mar 11 01:34:38.253400 sshd-session[7469]: pam_unix(sshd:session): session closed for user core
Mar 11 01:34:38.264345 systemd[1]: sshd@39-10.0.0.26:22-10.0.0.1:59664.service: Deactivated successfully.
Mar 11 01:34:38.267839 systemd-logind[1547]: Session 40 logged out. Waiting for processes to exit.
Mar 11 01:34:38.270031 systemd[1]: session-40.scope: Deactivated successfully.
Mar 11 01:34:38.280840 systemd-logind[1547]: Removed session 40.
Mar 11 01:34:43.318625 systemd[1]: Started sshd@40-10.0.0.26:22-10.0.0.1:57358.service - OpenSSH per-connection server daemon (10.0.0.1:57358).
Mar 11 01:34:43.656261 sshd[7554]: Accepted publickey for core from 10.0.0.1 port 57358 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI
Mar 11 01:34:43.663115 sshd-session[7554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:34:43.705616 systemd-logind[1547]: New session 41 of user core.
Mar 11 01:34:43.737900 systemd[1]: Started session-41.scope - Session 41 of User core.
Mar 11 01:34:44.211231 sshd[7557]: Connection closed by 10.0.0.1 port 57358
Mar 11 01:34:44.204254 sshd-session[7554]: pam_unix(sshd:session): session closed for user core
Mar 11 01:34:44.245826 systemd-logind[1547]: Session 41 logged out. Waiting for processes to exit.
Mar 11 01:34:44.254835 systemd[1]: sshd@40-10.0.0.26:22-10.0.0.1:57358.service: Deactivated successfully.
Mar 11 01:34:44.268033 systemd[1]: session-41.scope: Deactivated successfully.
Mar 11 01:34:44.286970 systemd-logind[1547]: Removed session 41.
Mar 11 01:34:46.223256 kubelet[2857]: E0311 01:34:46.223103 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:34:49.239501 systemd[1]: Started sshd@41-10.0.0.26:22-10.0.0.1:35568.service - OpenSSH per-connection server daemon (10.0.0.1:35568).
Mar 11 01:34:49.358385 sshd[7570]: Accepted publickey for core from 10.0.0.1 port 35568 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI
Mar 11 01:34:49.362586 sshd-session[7570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:34:49.389002 systemd-logind[1547]: New session 42 of user core.
Mar 11 01:34:49.402290 systemd[1]: Started session-42.scope - Session 42 of User core.
Mar 11 01:34:49.709758 sshd[7573]: Connection closed by 10.0.0.1 port 35568
Mar 11 01:34:49.710485 sshd-session[7570]: pam_unix(sshd:session): session closed for user core
Mar 11 01:34:49.721950 systemd[1]: sshd@41-10.0.0.26:22-10.0.0.1:35568.service: Deactivated successfully.
Mar 11 01:34:49.726502 systemd[1]: session-42.scope: Deactivated successfully.
Mar 11 01:34:49.733375 systemd-logind[1547]: Session 42 logged out. Waiting for processes to exit.
Mar 11 01:34:49.737365 systemd-logind[1547]: Removed session 42.
Mar 11 01:34:51.218957 kubelet[2857]: E0311 01:34:51.217962 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:34:54.752772 systemd[1]: Started sshd@42-10.0.0.26:22-10.0.0.1:35576.service - OpenSSH per-connection server daemon (10.0.0.1:35576).
Mar 11 01:34:54.869986 sshd[7588]: Accepted publickey for core from 10.0.0.1 port 35576 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI
Mar 11 01:34:54.875106 sshd-session[7588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:34:54.887645 systemd-logind[1547]: New session 43 of user core.
Mar 11 01:34:54.898830 systemd[1]: Started session-43.scope - Session 43 of User core.
Mar 11 01:34:55.076896 sshd[7591]: Connection closed by 10.0.0.1 port 35576
Mar 11 01:34:55.077691 sshd-session[7588]: pam_unix(sshd:session): session closed for user core
Mar 11 01:34:55.089009 systemd[1]: sshd@42-10.0.0.26:22-10.0.0.1:35576.service: Deactivated successfully.
Mar 11 01:34:55.093699 systemd[1]: session-43.scope: Deactivated successfully.
Mar 11 01:34:55.099883 systemd-logind[1547]: Session 43 logged out. Waiting for processes to exit.
Mar 11 01:34:55.103661 systemd-logind[1547]: Removed session 43.
Mar 11 01:35:00.111366 systemd[1]: Started sshd@43-10.0.0.26:22-10.0.0.1:53818.service - OpenSSH per-connection server daemon (10.0.0.1:53818).
Mar 11 01:35:00.228057 sshd[7628]: Accepted publickey for core from 10.0.0.1 port 53818 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI
Mar 11 01:35:00.230893 sshd-session[7628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:35:00.254492 systemd-logind[1547]: New session 44 of user core.
Mar 11 01:35:00.260292 systemd[1]: Started session-44.scope - Session 44 of User core.
Mar 11 01:35:00.523665 sshd[7631]: Connection closed by 10.0.0.1 port 53818
Mar 11 01:35:00.527706 sshd-session[7628]: pam_unix(sshd:session): session closed for user core
Mar 11 01:35:00.556569 systemd[1]: sshd@43-10.0.0.26:22-10.0.0.1:53818.service: Deactivated successfully.
Mar 11 01:35:00.563126 systemd[1]: session-44.scope: Deactivated successfully.
Mar 11 01:35:00.567217 systemd-logind[1547]: Session 44 logged out. Waiting for processes to exit.
Mar 11 01:35:00.575596 systemd[1]: Started sshd@44-10.0.0.26:22-10.0.0.1:53834.service - OpenSSH per-connection server daemon (10.0.0.1:53834).
Mar 11 01:35:00.578829 systemd-logind[1547]: Removed session 44.
Mar 11 01:35:00.703241 sshd[7645]: Accepted publickey for core from 10.0.0.1 port 53834 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI
Mar 11 01:35:00.708104 sshd-session[7645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:35:00.722236 systemd-logind[1547]: New session 45 of user core.
Mar 11 01:35:00.757698 systemd[1]: Started session-45.scope - Session 45 of User core.
Mar 11 01:35:01.690102 sshd[7649]: Connection closed by 10.0.0.1 port 53834
Mar 11 01:35:01.699412 sshd-session[7645]: pam_unix(sshd:session): session closed for user core
Mar 11 01:35:01.712689 systemd[1]: Started sshd@45-10.0.0.26:22-10.0.0.1:53842.service - OpenSSH per-connection server daemon (10.0.0.1:53842).
Mar 11 01:35:01.716343 systemd[1]: sshd@44-10.0.0.26:22-10.0.0.1:53834.service: Deactivated successfully.
Mar 11 01:35:01.721434 systemd[1]: session-45.scope: Deactivated successfully.
Mar 11 01:35:01.724726 systemd-logind[1547]: Session 45 logged out. Waiting for processes to exit.
Mar 11 01:35:01.727537 systemd-logind[1547]: Removed session 45.
Mar 11 01:35:01.965759 sshd[7658]: Accepted publickey for core from 10.0.0.1 port 53842 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI
Mar 11 01:35:01.972121 sshd-session[7658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:35:01.991248 systemd-logind[1547]: New session 46 of user core.
Mar 11 01:35:02.011636 systemd[1]: Started session-46.scope - Session 46 of User core.
Mar 11 01:35:03.463195 sshd[7665]: Connection closed by 10.0.0.1 port 53842
Mar 11 01:35:03.459650 sshd-session[7658]: pam_unix(sshd:session): session closed for user core
Mar 11 01:35:03.480974 systemd[1]: sshd@45-10.0.0.26:22-10.0.0.1:53842.service: Deactivated successfully.
Mar 11 01:35:03.485248 systemd[1]: session-46.scope: Deactivated successfully.
Mar 11 01:35:03.487089 systemd-logind[1547]: Session 46 logged out. Waiting for processes to exit.
Mar 11 01:35:03.492845 systemd[1]: Started sshd@46-10.0.0.26:22-10.0.0.1:53846.service - OpenSSH per-connection server daemon (10.0.0.1:53846).
Mar 11 01:35:03.496100 systemd-logind[1547]: Removed session 46.
Mar 11 01:35:03.613692 sshd[7690]: Accepted publickey for core from 10.0.0.1 port 53846 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI
Mar 11 01:35:03.620393 sshd-session[7690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:35:03.641730 systemd-logind[1547]: New session 47 of user core.
Mar 11 01:35:03.653907 systemd[1]: Started session-47.scope - Session 47 of User core.
Mar 11 01:35:04.356186 sshd[7693]: Connection closed by 10.0.0.1 port 53846
Mar 11 01:35:04.354984 sshd-session[7690]: pam_unix(sshd:session): session closed for user core
Mar 11 01:35:04.388902 systemd[1]: sshd@46-10.0.0.26:22-10.0.0.1:53846.service: Deactivated successfully.
Mar 11 01:35:04.403600 systemd[1]: session-47.scope: Deactivated successfully.
Mar 11 01:35:04.411447 systemd-logind[1547]: Session 47 logged out. Waiting for processes to exit.
Mar 11 01:35:04.428373 systemd[1]: Started sshd@47-10.0.0.26:22-10.0.0.1:53858.service - OpenSSH per-connection server daemon (10.0.0.1:53858).
Mar 11 01:35:04.440110 systemd-logind[1547]: Removed session 47.
Mar 11 01:35:04.559798 sshd[7708]: Accepted publickey for core from 10.0.0.1 port 53858 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI
Mar 11 01:35:04.559342 sshd-session[7708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:35:04.581401 systemd-logind[1547]: New session 48 of user core.
Mar 11 01:35:04.594194 systemd[1]: Started session-48.scope - Session 48 of User core.
Mar 11 01:35:04.942795 sshd[7712]: Connection closed by 10.0.0.1 port 53858
Mar 11 01:35:04.944368 sshd-session[7708]: pam_unix(sshd:session): session closed for user core
Mar 11 01:35:04.961383 systemd[1]: sshd@47-10.0.0.26:22-10.0.0.1:53858.service: Deactivated successfully.
Mar 11 01:35:04.970579 systemd[1]: session-48.scope: Deactivated successfully.
Mar 11 01:35:04.976386 systemd-logind[1547]: Session 48 logged out. Waiting for processes to exit.
Mar 11 01:35:04.978742 systemd-logind[1547]: Removed session 48.
Mar 11 01:35:08.110857 kubelet[2857]: E0311 01:35:08.110616 2857 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.888s"
Mar 11 01:35:09.998786 systemd[1]: Started sshd@48-10.0.0.26:22-10.0.0.1:40522.service - OpenSSH per-connection server daemon (10.0.0.1:40522).
Mar 11 01:35:10.273617 sshd[7796]: Accepted publickey for core from 10.0.0.1 port 40522 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI
Mar 11 01:35:10.278569 sshd-session[7796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:35:10.292201 systemd-logind[1547]: New session 49 of user core.
Mar 11 01:35:10.310625 systemd[1]: Started session-49.scope - Session 49 of User core.
Mar 11 01:35:10.517028 sshd[7817]: Connection closed by 10.0.0.1 port 40522
Mar 11 01:35:10.518353 sshd-session[7796]: pam_unix(sshd:session): session closed for user core
Mar 11 01:35:10.541480 systemd[1]: sshd@48-10.0.0.26:22-10.0.0.1:40522.service: Deactivated successfully.
Mar 11 01:35:10.546360 systemd[1]: session-49.scope: Deactivated successfully.
Mar 11 01:35:10.553722 systemd-logind[1547]: Session 49 logged out. Waiting for processes to exit.
Mar 11 01:35:10.564290 systemd-logind[1547]: Removed session 49.
Mar 11 01:35:13.216685 kubelet[2857]: E0311 01:35:13.216231 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:35:15.562293 systemd[1]: Started sshd@49-10.0.0.26:22-10.0.0.1:40524.service - OpenSSH per-connection server daemon (10.0.0.1:40524).
Mar 11 01:35:15.690310 sshd[7834]: Accepted publickey for core from 10.0.0.1 port 40524 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI
Mar 11 01:35:15.692261 sshd-session[7834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:35:15.708738 systemd-logind[1547]: New session 50 of user core.
Mar 11 01:35:15.726816 systemd[1]: Started session-50.scope - Session 50 of User core.
Mar 11 01:35:16.006898 sshd[7837]: Connection closed by 10.0.0.1 port 40524
Mar 11 01:35:16.005064 sshd-session[7834]: pam_unix(sshd:session): session closed for user core
Mar 11 01:35:16.015436 systemd[1]: sshd@49-10.0.0.26:22-10.0.0.1:40524.service: Deactivated successfully.
Mar 11 01:35:16.021976 systemd[1]: session-50.scope: Deactivated successfully.
Mar 11 01:35:16.033326 systemd-logind[1547]: Session 50 logged out. Waiting for processes to exit.
Mar 11 01:35:16.038092 systemd-logind[1547]: Removed session 50.
Mar 11 01:35:20.221415 kubelet[2857]: E0311 01:35:20.218989 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:35:21.042550 systemd[1]: Started sshd@50-10.0.0.26:22-10.0.0.1:54664.service - OpenSSH per-connection server daemon (10.0.0.1:54664).
Mar 11 01:35:21.219434 sshd[7852]: Accepted publickey for core from 10.0.0.1 port 54664 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI
Mar 11 01:35:21.222095 sshd-session[7852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:35:21.268051 systemd-logind[1547]: New session 51 of user core.
Mar 11 01:35:21.279077 systemd[1]: Started session-51.scope - Session 51 of User core.
Mar 11 01:35:21.481991 sshd[7855]: Connection closed by 10.0.0.1 port 54664
Mar 11 01:35:21.482972 sshd-session[7852]: pam_unix(sshd:session): session closed for user core
Mar 11 01:35:21.490227 systemd[1]: sshd@50-10.0.0.26:22-10.0.0.1:54664.service: Deactivated successfully.
Mar 11 01:35:21.495445 systemd[1]: session-51.scope: Deactivated successfully.
Mar 11 01:35:21.502678 systemd-logind[1547]: Session 51 logged out. Waiting for processes to exit.
Mar 11 01:35:21.507089 systemd-logind[1547]: Removed session 51.
Mar 11 01:35:26.506230 systemd[1]: Started sshd@51-10.0.0.26:22-10.0.0.1:54668.service - OpenSSH per-connection server daemon (10.0.0.1:54668).
Mar 11 01:35:26.687476 sshd[7868]: Accepted publickey for core from 10.0.0.1 port 54668 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI
Mar 11 01:35:26.691329 sshd-session[7868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:35:26.718788 systemd-logind[1547]: New session 52 of user core.
Mar 11 01:35:26.745567 systemd[1]: Started session-52.scope - Session 52 of User core.
Mar 11 01:35:27.101015 sshd[7871]: Connection closed by 10.0.0.1 port 54668
Mar 11 01:35:27.101896 sshd-session[7868]: pam_unix(sshd:session): session closed for user core
Mar 11 01:35:27.113294 systemd[1]: sshd@51-10.0.0.26:22-10.0.0.1:54668.service: Deactivated successfully.
Mar 11 01:35:27.126952 systemd[1]: session-52.scope: Deactivated successfully.
Mar 11 01:35:27.138104 systemd-logind[1547]: Session 52 logged out. Waiting for processes to exit.
Mar 11 01:35:27.141957 systemd-logind[1547]: Removed session 52.
Mar 11 01:35:32.166328 systemd[1]: Started sshd@52-10.0.0.26:22-10.0.0.1:34108.service - OpenSSH per-connection server daemon (10.0.0.1:34108).
Mar 11 01:35:32.411910 sshd[7907]: Accepted publickey for core from 10.0.0.1 port 34108 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI
Mar 11 01:35:32.416751 sshd-session[7907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:35:32.477752 systemd-logind[1547]: New session 53 of user core.
Mar 11 01:35:32.505957 systemd[1]: Started session-53.scope - Session 53 of User core.
Mar 11 01:35:32.895065 sshd[7910]: Connection closed by 10.0.0.1 port 34108
Mar 11 01:35:32.894793 sshd-session[7907]: pam_unix(sshd:session): session closed for user core
Mar 11 01:35:32.902062 systemd[1]: sshd@52-10.0.0.26:22-10.0.0.1:34108.service: Deactivated successfully.
Mar 11 01:35:32.907330 systemd[1]: session-53.scope: Deactivated successfully.
Mar 11 01:35:32.914952 systemd-logind[1547]: Session 53 logged out. Waiting for processes to exit.
Mar 11 01:35:32.919454 systemd-logind[1547]: Removed session 53.
Mar 11 01:35:37.953251 systemd[1]: Started sshd@53-10.0.0.26:22-10.0.0.1:34112.service - OpenSSH per-connection server daemon (10.0.0.1:34112).
Mar 11 01:35:38.074100 sshd[7923]: Accepted publickey for core from 10.0.0.1 port 34112 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI
Mar 11 01:35:38.081374 sshd-session[7923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:35:38.096930 systemd-logind[1547]: New session 54 of user core.
Mar 11 01:35:38.114514 systemd[1]: Started session-54.scope - Session 54 of User core.
Mar 11 01:35:38.352475 sshd[7926]: Connection closed by 10.0.0.1 port 34112
Mar 11 01:35:38.352242 sshd-session[7923]: pam_unix(sshd:session): session closed for user core
Mar 11 01:35:38.374369 systemd[1]: sshd@53-10.0.0.26:22-10.0.0.1:34112.service: Deactivated successfully.
Mar 11 01:35:38.382048 systemd[1]: session-54.scope: Deactivated successfully.
Mar 11 01:35:38.399895 systemd-logind[1547]: Session 54 logged out. Waiting for processes to exit.
Mar 11 01:35:38.405215 systemd-logind[1547]: Removed session 54.
Mar 11 01:35:39.217054 kubelet[2857]: E0311 01:35:39.216376 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:35:39.218438 kubelet[2857]: E0311 01:35:39.217352 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:35:43.381057 systemd[1]: Started sshd@54-10.0.0.26:22-10.0.0.1:46640.service - OpenSSH per-connection server daemon (10.0.0.1:46640).
Mar 11 01:35:43.750358 sshd[7988]: Accepted publickey for core from 10.0.0.1 port 46640 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI
Mar 11 01:35:43.753870 sshd-session[7988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:35:43.780958 systemd-logind[1547]: New session 55 of user core.
Mar 11 01:35:43.794533 systemd[1]: Started session-55.scope - Session 55 of User core.
Mar 11 01:35:44.086622 sshd[7991]: Connection closed by 10.0.0.1 port 46640
Mar 11 01:35:44.088480 sshd-session[7988]: pam_unix(sshd:session): session closed for user core
Mar 11 01:35:44.096951 systemd[1]: sshd@54-10.0.0.26:22-10.0.0.1:46640.service: Deactivated successfully.
Mar 11 01:35:44.102705 systemd[1]: session-55.scope: Deactivated successfully.
Mar 11 01:35:44.106060 systemd-logind[1547]: Session 55 logged out. Waiting for processes to exit.
Mar 11 01:35:44.114353 systemd-logind[1547]: Removed session 55.
Mar 11 01:35:49.141774 systemd[1]: Started sshd@55-10.0.0.26:22-10.0.0.1:53592.service - OpenSSH per-connection server daemon (10.0.0.1:53592).
Mar 11 01:35:49.295524 sshd[8004]: Accepted publickey for core from 10.0.0.1 port 53592 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI
Mar 11 01:35:49.301950 sshd-session[8004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:35:49.333342 systemd-logind[1547]: New session 56 of user core.
Mar 11 01:35:49.342556 systemd[1]: Started session-56.scope - Session 56 of User core.
Mar 11 01:35:49.691707 sshd[8007]: Connection closed by 10.0.0.1 port 53592
Mar 11 01:35:49.692248 sshd-session[8004]: pam_unix(sshd:session): session closed for user core
Mar 11 01:35:49.706638 systemd[1]: sshd@55-10.0.0.26:22-10.0.0.1:53592.service: Deactivated successfully.
Mar 11 01:35:49.715683 systemd[1]: session-56.scope: Deactivated successfully.
Mar 11 01:35:49.734424 systemd-logind[1547]: Session 56 logged out. Waiting for processes to exit.
Mar 11 01:35:49.738448 systemd-logind[1547]: Removed session 56.
Mar 11 01:35:51.234060 kubelet[2857]: E0311 01:35:51.233521 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 01:35:54.744976 systemd[1]: Started sshd@56-10.0.0.26:22-10.0.0.1:53600.service - OpenSSH per-connection server daemon (10.0.0.1:53600).
Mar 11 01:35:54.908468 sshd[8022]: Accepted publickey for core from 10.0.0.1 port 53600 ssh2: RSA SHA256:ov5Rj6fEl6UjPOqNhoR9JzHhu6CIWP/x84fpjn8AOgI
Mar 11 01:35:54.909504 sshd-session[8022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:35:54.960256 systemd-logind[1547]: New session 57 of user core.
Mar 11 01:35:54.976895 systemd[1]: Started session-57.scope - Session 57 of User core.
Mar 11 01:35:55.381019 sshd[8025]: Connection closed by 10.0.0.1 port 53600
Mar 11 01:35:55.382751 sshd-session[8022]: pam_unix(sshd:session): session closed for user core
Mar 11 01:35:55.402872 systemd[1]: sshd@56-10.0.0.26:22-10.0.0.1:53600.service: Deactivated successfully.
Mar 11 01:35:55.410771 systemd[1]: session-57.scope: Deactivated successfully.
Mar 11 01:35:55.429629 systemd-logind[1547]: Session 57 logged out. Waiting for processes to exit.
Mar 11 01:35:55.442451 systemd-logind[1547]: Removed session 57.