Mar 7 01:49:58.495081 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 6 22:58:19 -00 2026 Mar 7 01:49:58.495113 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5 Mar 7 01:49:58.495129 kernel: BIOS-provided physical RAM map: Mar 7 01:49:58.495138 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Mar 7 01:49:58.495146 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Mar 7 01:49:58.495154 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Mar 7 01:49:58.495164 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Mar 7 01:49:58.495173 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Mar 7 01:49:58.495181 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Mar 7 01:49:58.495193 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Mar 7 01:49:58.495202 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Mar 7 01:49:58.495210 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Mar 7 01:49:58.495260 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Mar 7 01:49:58.495271 kernel: NX (Execute Disable) protection: active Mar 7 01:49:58.495281 kernel: APIC: Static calls initialized Mar 7 01:49:58.495324 kernel: SMBIOS 2.8 present. 
Mar 7 01:49:58.495333 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Mar 7 01:49:58.495342 kernel: Hypervisor detected: KVM Mar 7 01:49:58.495351 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 7 01:49:58.495360 kernel: kvm-clock: using sched offset of 18999801239 cycles Mar 7 01:49:58.495370 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 7 01:49:58.495380 kernel: tsc: Detected 2445.426 MHz processor Mar 7 01:49:58.495389 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 7 01:49:58.495399 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 7 01:49:58.495412 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Mar 7 01:49:58.495422 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Mar 7 01:49:58.495432 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 7 01:49:58.495441 kernel: Using GB pages for direct mapping Mar 7 01:49:58.495450 kernel: ACPI: Early table checksum verification disabled Mar 7 01:49:58.495459 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Mar 7 01:49:58.495469 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 01:49:58.495478 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 01:49:58.495488 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 01:49:58.495500 kernel: ACPI: FACS 0x000000009CFE0000 000040 Mar 7 01:49:58.495510 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 01:49:58.495519 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 01:49:58.495528 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 01:49:58.495538 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 01:49:58.495547 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Mar 7 01:49:58.495556 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Mar 7 01:49:58.495571 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Mar 7 01:49:58.495584 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Mar 7 01:49:58.495595 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Mar 7 01:49:58.495653 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Mar 7 01:49:58.495672 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Mar 7 01:49:58.495756 kernel: No NUMA configuration found Mar 7 01:49:58.495769 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Mar 7 01:49:58.495786 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Mar 7 01:49:58.495798 kernel: Zone ranges: Mar 7 01:49:58.495810 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 7 01:49:58.495821 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Mar 7 01:49:58.495831 kernel: Normal empty Mar 7 01:49:58.495843 kernel: Movable zone start for each node Mar 7 01:49:58.495853 kernel: Early memory node ranges Mar 7 01:49:58.495864 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Mar 7 01:49:58.495875 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Mar 7 01:49:58.495889 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Mar 7 01:49:58.495901 kernel: On 
node 0, zone DMA: 1 pages in unavailable ranges Mar 7 01:49:58.495956 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Mar 7 01:49:58.495969 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Mar 7 01:49:58.495980 kernel: ACPI: PM-Timer IO Port: 0x608 Mar 7 01:49:58.495992 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 7 01:49:58.496002 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Mar 7 01:49:58.496013 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Mar 7 01:49:58.496025 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 7 01:49:58.496040 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 7 01:49:58.496052 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 7 01:49:58.496063 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 7 01:49:58.496074 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 7 01:49:58.496086 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Mar 7 01:49:58.496096 kernel: TSC deadline timer available Mar 7 01:49:58.496107 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Mar 7 01:49:58.496119 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Mar 7 01:49:58.496130 kernel: kvm-guest: KVM setup pv remote TLB flush Mar 7 01:49:58.496184 kernel: kvm-guest: setup PV sched yield Mar 7 01:49:58.496196 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Mar 7 01:49:58.496208 kernel: Booting paravirtualized kernel on KVM Mar 7 01:49:58.496219 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 7 01:49:58.496230 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Mar 7 01:49:58.496243 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Mar 7 01:49:58.496254 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Mar 7 01:49:58.496264 kernel: pcpu-alloc: [0] 0 1 2 3 Mar 7 01:49:58.496275 kernel: kvm-guest: PV spinlocks enabled Mar 7 01:49:58.496290 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 7 01:49:58.496303 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5 Mar 7 01:49:58.496314 kernel: random: crng init done Mar 7 01:49:58.496325 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 7 01:49:58.496337 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 7 01:49:58.496348 kernel: Fallback order for Node 0: 0 Mar 7 01:49:58.496358 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Mar 7 01:49:58.496371 kernel: Policy zone: DMA32 Mar 7 01:49:58.499767 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 7 01:49:58.499781 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 136884K reserved, 0K cma-reserved) Mar 7 01:49:58.499792 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Mar 7 01:49:58.499802 kernel: ftrace: allocating 37996 entries in 149 pages Mar 7 01:49:58.499813 kernel: ftrace: allocated 149 pages with 4 groups Mar 7 01:49:58.499823 kernel: Dynamic Preempt: voluntary Mar 7 01:49:58.499833 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 7 01:49:58.499844 kernel: rcu: RCU event tracing is enabled. Mar 7 01:49:58.499855 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Mar 7 01:49:58.499865 kernel: Trampoline variant of Tasks RCU enabled. Mar 7 01:49:58.499888 kernel: Rude variant of Tasks RCU enabled. Mar 7 01:49:58.499901 kernel: Tracing variant of Tasks RCU enabled. Mar 7 01:49:58.499910 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 7 01:49:58.499920 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Mar 7 01:49:58.499970 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Mar 7 01:49:58.499985 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 7 01:49:58.499998 kernel: Console: colour VGA+ 80x25 Mar 7 01:49:58.500010 kernel: printk: console [ttyS0] enabled Mar 7 01:49:58.500020 kernel: ACPI: Core revision 20230628 Mar 7 01:49:58.500035 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Mar 7 01:49:58.500045 kernel: APIC: Switch to symmetric I/O mode setup Mar 7 01:49:58.500055 kernel: x2apic enabled Mar 7 01:49:58.500314 kernel: APIC: Switched APIC routing to: physical x2apic Mar 7 01:49:58.500325 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Mar 7 01:49:58.500335 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Mar 7 01:49:58.500345 kernel: kvm-guest: setup PV IPIs Mar 7 01:49:58.500356 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Mar 7 01:49:58.500385 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Mar 7 01:49:58.500397 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426) Mar 7 01:49:58.500407 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Mar 7 01:49:58.500422 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Mar 7 01:49:58.500433 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Mar 7 01:49:58.500444 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 7 01:49:58.500456 kernel: Spectre V2 : Mitigation: Retpolines Mar 7 01:49:58.500467 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Mar 7 01:49:58.500481 kernel: Speculative Store Bypass: Vulnerable Mar 7 01:49:58.500492 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Mar 7 01:49:58.500547 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. 
Mar 7 01:49:58.500562 kernel: active return thunk: srso_alias_return_thunk Mar 7 01:49:58.500573 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Mar 7 01:49:58.500584 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Mar 7 01:49:58.500595 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Mar 7 01:49:58.503737 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 7 01:49:58.503766 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 7 01:49:58.503779 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 7 01:49:58.503791 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 7 01:49:58.503803 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Mar 7 01:49:58.503816 kernel: Freeing SMP alternatives memory: 32K Mar 7 01:49:58.503827 kernel: pid_max: default: 32768 minimum: 301 Mar 7 01:49:58.503840 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 7 01:49:58.503853 kernel: landlock: Up and running. Mar 7 01:49:58.503897 kernel: SELinux: Initializing. Mar 7 01:49:58.503915 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 7 01:49:58.503926 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 7 01:49:58.503939 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Mar 7 01:49:58.503951 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 7 01:49:58.503964 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 7 01:49:58.503977 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 7 01:49:58.503989 kernel: Performance Events: PMU not available due to virtualization, using software events only. Mar 7 01:49:58.503999 kernel: signal: max sigframe size: 1776 Mar 7 01:49:58.504042 kernel: rcu: Hierarchical SRCU implementation. Mar 7 01:49:58.504060 kernel: rcu: Max phase no-delay instances is 400. Mar 7 01:49:58.504070 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 7 01:49:58.504081 kernel: smp: Bringing up secondary CPUs ... Mar 7 01:49:58.504091 kernel: smpboot: x86: Booting SMP configuration: Mar 7 01:49:58.504101 kernel: .... 
node #0, CPUs: #1 #2 #3 Mar 7 01:49:58.504112 kernel: smp: Brought up 1 node, 4 CPUs Mar 7 01:49:58.504123 kernel: smpboot: Max logical packages: 1 Mar 7 01:49:58.504135 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS) Mar 7 01:49:58.504146 kernel: devtmpfs: initialized Mar 7 01:49:58.504160 kernel: x86/mm: Memory block size: 128MB Mar 7 01:49:58.504171 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 7 01:49:58.504181 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Mar 7 01:49:58.504191 kernel: pinctrl core: initialized pinctrl subsystem Mar 7 01:49:58.504202 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 7 01:49:58.504213 kernel: audit: initializing netlink subsys (disabled) Mar 7 01:49:58.504224 kernel: audit: type=2000 audit(1772848189.065:1): state=initialized audit_enabled=0 res=1 Mar 7 01:49:58.504233 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 7 01:49:58.504244 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 7 01:49:58.504257 kernel: cpuidle: using governor menu Mar 7 01:49:58.504267 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 7 01:49:58.504277 kernel: dca service started, version 1.12.1 Mar 7 01:49:58.504288 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Mar 7 01:49:58.504298 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Mar 7 01:49:58.504309 kernel: PCI: Using configuration type 1 for base access Mar 7 01:49:58.504319 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Mar 7 01:49:58.504330 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 7 01:49:58.504341 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 7 01:49:58.504355 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 7 01:49:58.504365 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 7 01:49:58.504376 kernel: ACPI: Added _OSI(Module Device) Mar 7 01:49:58.504386 kernel: ACPI: Added _OSI(Processor Device) Mar 7 01:49:58.504396 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 7 01:49:58.504407 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 7 01:49:58.504417 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 7 01:49:58.504427 kernel: ACPI: Interpreter enabled Mar 7 01:49:58.504437 kernel: ACPI: PM: (supports S0 S3 S5) Mar 7 01:49:58.504451 kernel: ACPI: Using IOAPIC for interrupt routing Mar 7 01:49:58.504461 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 7 01:49:58.504471 kernel: PCI: Using E820 reservations for host bridge windows Mar 7 01:49:58.504482 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Mar 7 01:49:58.504492 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 7 01:49:58.507413 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 7 01:49:58.507664 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Mar 7 01:49:58.507936 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Mar 7 01:49:58.507963 kernel: PCI host bridge to bus 0000:00 Mar 7 01:49:58.508394 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 7 01:49:58.508564 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Mar 7 
01:49:58.511923 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 7 01:49:58.512110 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Mar 7 01:49:58.512278 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Mar 7 01:49:58.512445 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Mar 7 01:49:58.513122 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 7 01:49:58.514887 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Mar 7 01:49:58.515239 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Mar 7 01:49:58.515421 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Mar 7 01:49:58.515588 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Mar 7 01:49:58.515893 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Mar 7 01:49:58.516093 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 7 01:49:58.516379 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Mar 7 01:49:58.516561 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Mar 7 01:49:58.520947 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Mar 7 01:49:58.521202 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Mar 7 01:49:58.523733 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Mar 7 01:49:58.523945 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Mar 7 01:49:58.524182 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Mar 7 01:49:58.524365 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Mar 7 01:49:58.524814 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Mar 7 01:49:58.525017 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Mar 7 01:49:58.525196 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Mar 7 01:49:58.525373 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Mar 7 01:49:58.525544 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Mar 7 01:49:58.525937 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Mar 7 01:49:58.526119 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Mar 7 01:49:58.526284 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0x180 took 29296 usecs Mar 7 01:49:58.532064 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Mar 7 01:49:58.532258 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Mar 7 01:49:58.532433 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Mar 7 01:49:58.532833 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Mar 7 01:49:58.533042 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Mar 7 01:49:58.533062 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 7 01:49:58.533075 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 7 01:49:58.533087 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 7 01:49:58.533099 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 7 01:49:58.533113 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Mar 7 01:49:58.533124 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Mar 7 01:49:58.533135 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Mar 7 01:49:58.533154 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Mar 7 01:49:58.533166 kernel: ACPI: PCI: Interrupt link GSIA 
configured for IRQ 16 Mar 7 01:49:58.533177 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Mar 7 01:49:58.533189 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Mar 7 01:49:58.533202 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Mar 7 01:49:58.533213 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Mar 7 01:49:58.533224 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Mar 7 01:49:58.533236 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Mar 7 01:49:58.533247 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Mar 7 01:49:58.533263 kernel: iommu: Default domain type: Translated Mar 7 01:49:58.533275 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 7 01:49:58.533287 kernel: PCI: Using ACPI for IRQ routing Mar 7 01:49:58.533298 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 7 01:49:58.533310 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Mar 7 01:49:58.533323 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Mar 7 01:49:58.533527 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Mar 7 01:49:58.537898 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Mar 7 01:49:58.538125 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 7 01:49:58.538147 kernel: vgaarb: loaded Mar 7 01:49:58.538159 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Mar 7 01:49:58.538171 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Mar 7 01:49:58.538184 kernel: clocksource: Switched to clocksource kvm-clock Mar 7 01:49:58.538196 kernel: VFS: Disk quotas dquot_6.6.0 Mar 7 01:49:58.538208 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 7 01:49:58.538220 kernel: pnp: PnP ACPI init Mar 7 01:49:58.538583 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Mar 7 01:49:58.538660 kernel: pnp: PnP ACPI: found 6 devices Mar 7 01:49:58.538674 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 7 01:49:58.538749 kernel: NET: Registered PF_INET protocol family Mar 7 01:49:58.538761 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 7 01:49:58.538772 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 7 01:49:58.538783 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 7 01:49:58.538793 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 7 01:49:58.538803 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 7 01:49:58.538818 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 7 01:49:58.538828 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 7 01:49:58.538839 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 7 01:49:58.538849 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 7 01:49:58.538859 kernel: NET: Registered PF_XDP protocol family Mar 7 01:49:58.539077 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 7 01:49:58.539240 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 7 01:49:58.539396 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 7 01:49:58.539552 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Mar 7 01:49:58.543893 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Mar 7 
01:49:58.544082 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Mar 7 01:49:58.544103 kernel: PCI: CLS 0 bytes, default 64 Mar 7 01:49:58.544117 kernel: Initialise system trusted keyrings Mar 7 01:49:58.544129 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 7 01:49:58.544141 kernel: Key type asymmetric registered Mar 7 01:49:58.544151 kernel: Asymmetric key parser 'x509' registered Mar 7 01:49:58.544164 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 7 01:49:58.544183 kernel: io scheduler mq-deadline registered Mar 7 01:49:58.544195 kernel: io scheduler kyber registered Mar 7 01:49:58.544207 kernel: io scheduler bfq registered Mar 7 01:49:58.544218 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 7 01:49:58.544232 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Mar 7 01:49:58.544243 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 7 01:49:58.544255 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 7 01:49:58.544266 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 7 01:49:58.544277 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 7 01:49:58.544289 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 7 01:49:58.544306 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 7 01:49:58.544317 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 7 01:49:58.547756 kernel: rtc_cmos 00:04: RTC can wake from S4 Mar 7 01:49:58.548008 kernel: rtc_cmos 00:04: registered as rtc0 Mar 7 01:49:58.548194 kernel: rtc_cmos 00:04: setting system clock to 2026-03-07T01:49:55 UTC (1772848195) Mar 7 01:49:58.548362 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Mar 7 01:49:58.548378 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Mar 7 01:49:58.548398 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Mar 7 01:49:58.548410 kernel: NET: Registered PF_INET6 protocol family Mar 7 01:49:58.548422 kernel: Segment Routing with IPv6 Mar 7 01:49:58.548434 kernel: In-situ OAM (IOAM) with IPv6 Mar 7 01:49:58.548446 kernel: NET: Registered PF_PACKET protocol family Mar 7 01:49:58.548457 kernel: Key type dns_resolver registered Mar 7 01:49:58.548469 kernel: IPI shorthand broadcast: enabled Mar 7 01:49:58.548480 kernel: sched_clock: Marking stable (5293026392, 1368409512)->(7931388603, -1269952699) Mar 7 01:49:58.548492 kernel: registered taskstats version 1 Mar 7 01:49:58.548504 kernel: Loading compiled-in X.509 certificates Mar 7 01:49:58.550448 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: da286e6f6c247ee6f65a875c513de7da57782e90' Mar 7 01:49:58.550461 kernel: Key type .fscrypt registered Mar 7 01:49:58.550471 kernel: Key type fscrypt-provisioning registered Mar 7 01:49:58.550481 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 7 01:49:58.550492 kernel: ima: Allocated hash algorithm: sha1 Mar 7 01:49:58.550502 kernel: ima: No architecture policies found Mar 7 01:49:58.550514 kernel: clk: Disabling unused clocks Mar 7 01:49:58.550525 kernel: Freeing unused kernel image (initmem) memory: 42892K Mar 7 01:49:58.550540 kernel: Write protecting the kernel read-only data: 36864k Mar 7 01:49:58.550550 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Mar 7 01:49:58.550561 kernel: Run /init as init process Mar 7 01:49:58.550572 kernel: with arguments: Mar 7 01:49:58.550583 kernel: /init Mar 7 01:49:58.550593 kernel: with environment: Mar 7 01:49:58.550603 kernel: HOME=/ Mar 7 01:49:58.550673 kernel: TERM=linux Mar 7 01:49:58.550748 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 7 01:49:58.550769 systemd[1]: Detected virtualization kvm. Mar 7 01:49:58.550780 systemd[1]: Detected architecture x86-64. Mar 7 01:49:58.550791 systemd[1]: Running in initrd. Mar 7 01:49:58.550802 systemd[1]: No hostname configured, using default hostname. Mar 7 01:49:58.550813 systemd[1]: Hostname set to . Mar 7 01:49:58.550823 systemd[1]: Initializing machine ID from VM UUID. Mar 7 01:49:58.550834 systemd[1]: Queued start job for default target initrd.target. Mar 7 01:49:58.550849 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 01:49:58.550860 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 01:49:58.550872 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 7 01:49:58.550883 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 7 01:49:58.550894 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 7 01:49:58.550906 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 7 01:49:58.550918 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 7 01:49:58.550934 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 7 01:49:58.550944 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 01:49:58.550957 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 7 01:49:58.551011 systemd[1]: Reached target paths.target - Path Units. Mar 7 01:49:58.551024 systemd[1]: Reached target slices.target - Slice Units. Mar 7 01:49:58.551056 systemd[1]: Reached target swap.target - Swaps. Mar 7 01:49:58.551074 systemd[1]: Reached target timers.target - Timer Units. Mar 7 01:49:58.551085 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 7 01:49:58.551096 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 7 01:49:58.551108 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 7 01:49:58.551119 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 7 01:49:58.551130 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Mar 7 01:49:58.551141 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 7 01:49:58.551152 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 01:49:58.551164 systemd[1]: Reached target sockets.target - Socket Units. Mar 7 01:49:58.551178 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 7 01:49:58.551190 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 7 01:49:58.551201 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 7 01:49:58.551212 systemd[1]: Starting systemd-fsck-usr.service... Mar 7 01:49:58.551223 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 7 01:49:58.551234 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 7 01:49:58.551281 systemd-journald[194]: Collecting audit messages is disabled. Mar 7 01:49:58.551314 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:49:58.551328 systemd-journald[194]: Journal started Mar 7 01:49:58.551352 systemd-journald[194]: Runtime Journal (/run/log/journal/b4f6cbaffa43423fa30fd2af46e80936) is 6.0M, max 48.4M, 42.3M free. Mar 7 01:49:58.685768 systemd[1]: Started systemd-journald.service - Journal Service. Mar 7 01:49:58.722862 systemd-modules-load[195]: Inserted module 'overlay' Mar 7 01:49:58.739517 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 7 01:49:58.781595 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 01:49:58.809308 systemd[1]: Finished systemd-fsck-usr.service. Mar 7 01:49:58.983167 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 7 01:49:59.535453 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 7 01:49:59.535556 kernel: Bridge firewalling registered Mar 7 01:49:59.437123 systemd-modules-load[195]: Inserted module 'br_netfilter' Mar 7 01:49:59.608080 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 7 01:49:59.662894 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 7 01:49:59.681774 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:49:59.703849 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 7 01:49:59.708942 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 01:49:59.723122 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 01:49:59.730261 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 7 01:49:59.738152 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 7 01:49:59.878184 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:49:59.895517 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 7 01:49:59.948230 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 01:50:00.008221 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Mar 7 01:50:00.037106 dracut-cmdline[227]: dracut-dracut-053 Mar 7 01:50:00.037106 dracut-cmdline[227]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5 Mar 7 01:50:00.047114 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 7 01:50:00.213607 systemd-resolved[244]: Positive Trust Anchors: Mar 7 01:50:00.213997 systemd-resolved[244]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 7 01:50:00.217538 systemd-resolved[244]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 7 01:50:00.225356 systemd-resolved[244]: Defaulting to hostname 'linux'. Mar 7 01:50:00.234235 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 7 01:50:00.240020 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 7 01:50:00.553028 kernel: SCSI subsystem initialized Mar 7 01:50:00.592424 kernel: Loading iSCSI transport class v2.0-870. Mar 7 01:50:00.629087 kernel: iscsi: registered transport (tcp) Mar 7 01:50:00.693031 kernel: iscsi: registered transport (qla4xxx) Mar 7 01:50:00.693119 kernel: QLogic iSCSI HBA Driver Mar 7 01:50:00.900261 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 7 01:50:00.926939 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 7 01:50:01.017603 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 7 01:50:01.017798 kernel: device-mapper: uevent: version 1.0.3 Mar 7 01:50:01.018465 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 7 01:50:01.138482 kernel: raid6: avx2x4 gen() 14964 MB/s Mar 7 01:50:01.157194 kernel: raid6: avx2x2 gen() 19640 MB/s Mar 7 01:50:01.181235 kernel: raid6: avx2x1 gen() 5118 MB/s Mar 7 01:50:01.181845 kernel: raid6: using algorithm avx2x2 gen() 19640 MB/s Mar 7 01:50:01.206055 kernel: raid6: .... xor() 15560 MB/s, rmw enabled Mar 7 01:50:01.206167 kernel: raid6: using avx2x2 recovery algorithm Mar 7 01:50:01.277589 kernel: xor: automatically using best checksumming function avx Mar 7 01:50:01.915464 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 7 01:50:01.978169 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 7 01:50:02.013889 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 01:50:02.094550 systemd-udevd[416]: Using default interface naming scheme 'v255'. Mar 7 01:50:02.106555 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 01:50:02.145813 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Mar 7 01:50:02.209321 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation Mar 7 01:50:02.392773 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 7 01:50:02.424064 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 7 01:50:02.631250 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 01:50:02.705090 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 7 01:50:02.783322 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 7 01:50:02.840453 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 7 01:50:02.873565 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 01:50:02.887566 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 7 01:50:02.948237 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 7 01:50:03.033320 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 7 01:50:03.060420 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 7 01:50:03.064105 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:50:03.094158 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 01:50:03.094237 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 01:50:03.094473 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:50:03.094565 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:50:03.248949 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:50:03.353763 kernel: cryptd: max_cpu_qlen set to 1000 Mar 7 01:50:03.405604 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 7 01:50:03.482735 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 7 01:50:03.501780 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 7 01:50:03.501903 kernel: GPT:9289727 != 19775487 Mar 7 01:50:03.501929 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 7 01:50:03.518796 kernel: GPT:9289727 != 19775487 Mar 7 01:50:03.518883 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 7 01:50:03.529132 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 7 01:50:03.861255 kernel: libata version 3.00 loaded. Mar 7 01:50:04.052870 kernel: BTRFS: device fsid 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (471) Mar 7 01:50:04.059528 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 7 01:50:04.124542 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (464) Mar 7 01:50:04.138760 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:50:04.188754 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 7 01:50:04.254964 kernel: AVX2 version of gcm_enc/dec engaged. 
Mar 7 01:50:04.293673 kernel: ahci 0000:00:1f.2: version 3.0 Mar 7 01:50:04.294340 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 7 01:50:04.294374 kernel: AES CTR mode by8 optimization enabled Mar 7 01:50:04.294611 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 7 01:50:04.352441 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 7 01:50:04.352935 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 7 01:50:04.353182 kernel: scsi host0: ahci Mar 7 01:50:04.306246 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 7 01:50:04.381600 kernel: scsi host1: ahci Mar 7 01:50:04.382005 kernel: scsi host2: ahci Mar 7 01:50:04.332344 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 7 01:50:04.400332 kernel: scsi host3: ahci Mar 7 01:50:04.407101 kernel: scsi host4: ahci Mar 7 01:50:04.407383 kernel: scsi host5: ahci Mar 7 01:50:04.407840 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Mar 7 01:50:04.370026 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 7 01:50:04.537158 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Mar 7 01:50:04.537193 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Mar 7 01:50:04.537221 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Mar 7 01:50:04.537238 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Mar 7 01:50:04.537254 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Mar 7 01:50:04.537269 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 7 01:50:04.537285 disk-uuid[524]: Primary Header is updated. Mar 7 01:50:04.537285 disk-uuid[524]: Secondary Entries is updated. Mar 7 01:50:04.537285 disk-uuid[524]: Secondary Header is updated. Mar 7 01:50:04.442434 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 01:50:04.599300 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 7 01:50:04.718086 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 7 01:50:04.746142 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 7 01:50:04.746179 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 7 01:50:04.754878 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 7 01:50:04.787851 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 7 01:50:04.792796 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 7 01:50:04.793074 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 7 01:50:04.801125 kernel: ata3.00: applying bridge limits Mar 7 01:50:04.803423 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 7 01:50:04.809142 kernel: ata3.00: configured for UDMA/100 Mar 7 01:50:04.825346 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 7 01:50:05.134921 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 7 01:50:05.135598 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 7 01:50:05.187886 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 7 01:50:05.544415 kernel: hrtimer: interrupt took 13824339 ns Mar 7 01:50:05.638024 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 7 01:50:05.646319 disk-uuid[534]: The operation has completed successfully. Mar 7 01:50:06.148033 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 7 01:50:06.148286 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 7 01:50:06.202053 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 7 01:50:06.243346 sh[597]: Success Mar 7 01:50:06.585788 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Mar 7 01:50:06.763108 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 7 01:50:06.798822 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 7 01:50:06.854234 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 7 01:50:07.131853 kernel: BTRFS info (device dm-0): first mount of filesystem 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 Mar 7 01:50:07.132029 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:50:07.132049 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 7 01:50:07.134538 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 7 01:50:07.138855 kernel: BTRFS info (device dm-0): using free space tree Mar 7 01:50:07.216556 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 7 01:50:07.234063 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 7 01:50:07.254079 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 7 01:50:07.278759 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 7 01:50:07.449145 kernel: BTRFS info (device vda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:50:07.451256 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:50:07.451283 kernel: BTRFS info (device vda6): using free space tree Mar 7 01:50:07.519830 kernel: BTRFS info (device vda6): auto enabling async discard Mar 7 01:50:07.571861 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 7 01:50:07.591596 kernel: BTRFS info (device vda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:50:07.618772 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Mar 7 01:50:07.644369 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 7 01:50:08.808172 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 7 01:50:09.326973 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 7 01:50:09.385193 ignition[701]: Ignition 2.19.0 Mar 7 01:50:09.385239 ignition[701]: Stage: fetch-offline Mar 7 01:50:09.385443 ignition[701]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:50:09.385492 ignition[701]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 7 01:50:09.386191 ignition[701]: parsed url from cmdline: "" Mar 7 01:50:09.386199 ignition[701]: no config URL provided Mar 7 01:50:09.386209 ignition[701]: reading system config file "/usr/lib/ignition/user.ign" Mar 7 01:50:09.386227 ignition[701]: no config at "/usr/lib/ignition/user.ign" Mar 7 01:50:09.386355 ignition[701]: op(1): [started] loading QEMU firmware config module Mar 7 01:50:09.386364 ignition[701]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 7 01:50:09.554464 systemd-networkd[783]: lo: Link UP Mar 7 01:50:09.554472 systemd-networkd[783]: lo: Gained carrier Mar 7 01:50:09.682099 ignition[701]: op(1): [finished] loading QEMU firmware config module Mar 7 01:50:09.725570 systemd-networkd[783]: Enumeration completed Mar 7 01:50:09.728542 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 7 01:50:09.743562 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:50:09.743571 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 7 01:50:09.758117 systemd[1]: Reached target network.target - Network. Mar 7 01:50:09.758239 systemd-networkd[783]: eth0: Link UP Mar 7 01:50:09.758247 systemd-networkd[783]: eth0: Gained carrier Mar 7 01:50:09.758269 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:50:09.904115 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.118/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 7 01:50:10.506105 ignition[701]: parsing config with SHA512: 31da82572769caabc7062d4e171a8c70b9aa5ab873dcb27c06f2e8139859238e95a53603051085e8dbe152c5cafc4ddf68f8e86a282529e3b592a12638119681 Mar 7 01:50:10.543146 unknown[701]: fetched base config from "system" Mar 7 01:50:10.543192 unknown[701]: fetched user config from "qemu" Mar 7 01:50:10.579882 ignition[701]: fetch-offline: fetch-offline passed Mar 7 01:50:10.580045 ignition[701]: Ignition finished successfully Mar 7 01:50:10.602011 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 7 01:50:10.622841 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 7 01:50:10.659213 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 7 01:50:10.834550 ignition[790]: Ignition 2.19.0 Mar 7 01:50:10.835986 ignition[790]: Stage: kargs Mar 7 01:50:10.842512 ignition[790]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:50:10.842581 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 7 01:50:10.845623 ignition[790]: kargs: kargs passed Mar 7 01:50:10.866968 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Mar 7 01:50:10.846000 ignition[790]: Ignition finished successfully Mar 7 01:50:10.918036 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 7 01:50:11.186591 ignition[798]: Ignition 2.19.0 Mar 7 01:50:11.188780 ignition[798]: Stage: disks Mar 7 01:50:11.189341 ignition[798]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:50:11.189360 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 7 01:50:11.200284 ignition[798]: disks: disks passed Mar 7 01:50:11.238579 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 7 01:50:11.200418 ignition[798]: Ignition finished successfully Mar 7 01:50:11.274484 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 7 01:50:11.287303 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 7 01:50:11.294125 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 7 01:50:11.333353 systemd[1]: Reached target sysinit.target - System Initialization. Mar 7 01:50:11.351292 systemd[1]: Reached target basic.target - Basic System. Mar 7 01:50:11.393489 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 7 01:50:11.451091 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 7 01:50:11.531148 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 7 01:50:11.627403 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 7 01:50:11.729156 systemd-networkd[783]: eth0: Gained IPv6LL Mar 7 01:50:12.254858 kernel: EXT4-fs (vda9): mounted filesystem aab0506b-de72-4dd2-9393-24d7958f49a5 r/w with ordered data mode. Quota mode: none. Mar 7 01:50:12.257845 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 7 01:50:12.275921 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 7 01:50:12.374527 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 7 01:50:12.421617 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 7 01:50:12.439964 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 7 01:50:12.440131 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 7 01:50:12.440249 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 7 01:50:12.551363 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (817) Mar 7 01:50:12.573636 kernel: BTRFS info (device vda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:50:12.573829 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:50:12.577516 kernel: BTRFS info (device vda6): using free space tree Mar 7 01:50:12.591790 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 7 01:50:12.623233 kernel: BTRFS info (device vda6): auto enabling async discard Mar 7 01:50:12.627528 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 7 01:50:12.703460 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Mar 7 01:50:13.053468 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory Mar 7 01:50:13.112634 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory Mar 7 01:50:13.176521 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory Mar 7 01:50:13.206939 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory Mar 7 01:50:13.846550 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 7 01:50:13.906109 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 7 01:50:13.959072 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 7 01:50:14.035008 kernel: BTRFS info (device vda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:50:14.012539 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 7 01:50:14.208808 ignition[930]: INFO : Ignition 2.19.0 Mar 7 01:50:14.216985 ignition[930]: INFO : Stage: mount Mar 7 01:50:14.216985 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 01:50:14.216985 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 7 01:50:14.216985 ignition[930]: INFO : mount: mount passed Mar 7 01:50:14.216985 ignition[930]: INFO : Ignition finished successfully Mar 7 01:50:14.230932 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 7 01:50:14.349054 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 7 01:50:14.477417 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 7 01:50:14.604798 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 7 01:50:14.716616 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (943) Mar 7 01:50:14.731078 kernel: BTRFS info (device vda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:50:14.731160 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:50:14.736630 kernel: BTRFS info (device vda6): using free space tree Mar 7 01:50:14.807035 kernel: BTRFS info (device vda6): auto enabling async discard Mar 7 01:50:14.824197 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 7 01:50:15.129143 ignition[960]: INFO : Ignition 2.19.0 Mar 7 01:50:15.146367 ignition[960]: INFO : Stage: files Mar 7 01:50:15.146367 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 01:50:15.146367 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 7 01:50:15.146367 ignition[960]: DEBUG : files: compiled without relabeling support, skipping Mar 7 01:50:15.146367 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 7 01:50:15.146367 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 7 01:50:15.238083 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 7 01:50:15.238083 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 7 01:50:15.238083 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 7 01:50:15.238083 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 7 01:50:15.238083 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 7 01:50:15.238083 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 7 01:50:15.238083 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 7 01:50:15.215196 unknown[960]: wrote ssh authorized keys file for user: core Mar 7 01:50:15.621467 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 7 01:50:17.054860 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 7 01:50:17.054860 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 7 01:50:17.205474 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 7 01:50:17.205474 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 7 01:50:17.301306 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 7 01:50:17.301306 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 7 01:50:17.301306 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 7 01:50:17.301306 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 7 01:50:17.301306 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 7 01:50:17.301306 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 7 01:50:17.301306 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 7 01:50:17.301306 ignition[960]: INFO : files: 
createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 7 01:50:17.301306 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 7 01:50:17.301306 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 7 01:50:17.301306 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Mar 7 01:50:18.193400 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 7 01:50:23.376966 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 7 01:50:23.376966 ignition[960]: INFO : files: op(c): [started] processing unit "containerd.service" Mar 7 01:50:23.444916 ignition[960]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 7 01:50:23.444916 ignition[960]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 7 01:50:23.444916 ignition[960]: INFO : files: op(c): [finished] processing unit "containerd.service" Mar 7 01:50:23.444916 ignition[960]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Mar 7 01:50:23.444916 ignition[960]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 7 01:50:23.444916 ignition[960]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 7 01:50:23.444916 ignition[960]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Mar 7 01:50:23.444916 ignition[960]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Mar 7 01:50:23.444916 ignition[960]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 7 01:50:23.444916 ignition[960]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 7 01:50:23.444916 ignition[960]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Mar 7 01:50:23.444916 ignition[960]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Mar 7 01:50:24.082052 ignition[960]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 7 01:50:24.154007 ignition[960]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 7 01:50:24.197589 ignition[960]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Mar 7 01:50:24.197589 ignition[960]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" Mar 7 01:50:24.197589 ignition[960]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service" 
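The ops logged above (user "core", file writes, the kubernetes.raw link, unit drop-ins, and presets) correspond to a provisioning config consumed by Ignition 2.19.0. The actual config is not shown in the log; a minimal Butane sketch (Flatcar variant) that would produce roughly these operations, with placeholder contents:

    variant: flatcar
    version: 1.0.0
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - ssh-rsa AAAA...   # placeholder key
    storage:
      files:
        - path: /opt/helm-v3.17.3-linux-amd64.tar.gz
          contents:
            source: https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz
        - path: /opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw
          contents:
            source: https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw
      links:
        - path: /etc/extensions/kubernetes.raw
          target: /opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw
    systemd:
      units:
        - name: containerd.service
          dropins:
            - name: 10-use-cgroupfs.conf
              contents: |
                # drop-in body not shown in the log
        - name: coreos-metadata.service
          enabled: false
        - name: prepare-helm.service
          enabled: true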
Mar 7 01:50:24.197589 ignition[960]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 7 01:50:24.197589 ignition[960]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 7 01:50:24.197589 ignition[960]: INFO : files: files passed Mar 7 01:50:24.197589 ignition[960]: INFO : Ignition finished successfully Mar 7 01:50:24.194606 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 7 01:50:24.308111 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 7 01:50:24.449961 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 7 01:50:24.450812 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 7 01:50:24.451024 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 7 01:50:24.579632 initrd-setup-root-after-ignition[989]: grep: /sysroot/oem/oem-release: No such file or directory Mar 7 01:50:24.637990 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:50:24.637990 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:50:24.696103 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:50:24.751863 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 7 01:50:24.792174 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 7 01:50:24.880350 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 7 01:50:25.102421 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 7 01:50:25.102668 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 7 01:50:25.137259 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 7 01:50:25.146327 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 7 01:50:25.155440 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 7 01:50:25.198046 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 7 01:50:25.258438 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 7 01:50:25.306000 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 7 01:50:25.386409 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 7 01:50:25.396981 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 01:50:25.405673 systemd[1]: Stopped target timers.target - Timer Units. Mar 7 01:50:25.419191 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 7 01:50:25.419378 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 7 01:50:25.445127 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 7 01:50:25.504253 systemd[1]: Stopped target basic.target - Basic System. Mar 7 01:50:25.514012 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 7 01:50:25.522923 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 7 01:50:25.536124 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Mar 7 01:50:25.579480 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 7 01:50:25.629051 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 7 01:50:25.644281 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 7 01:50:25.666093 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 7 01:50:25.683904 systemd[1]: Stopped target swap.target - Swaps. Mar 7 01:50:25.701550 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 7 01:50:25.701986 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 7 01:50:25.771838 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 7 01:50:25.792302 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 01:50:25.798842 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 7 01:50:25.800488 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 01:50:25.852785 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 7 01:50:25.853037 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 7 01:50:25.871179 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 7 01:50:25.871438 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 7 01:50:25.899301 systemd[1]: Stopped target paths.target - Path Units. Mar 7 01:50:25.909080 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 7 01:50:25.935414 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 01:50:26.005426 systemd[1]: Stopped target slices.target - Slice Units. Mar 7 01:50:26.005581 systemd[1]: Stopped target sockets.target - Socket Units. Mar 7 01:50:26.007046 systemd[1]: iscsid.socket: Deactivated successfully. Mar 7 01:50:26.007291 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 7 01:50:26.091640 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 7 01:50:26.092331 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 7 01:50:26.094255 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 7 01:50:26.094471 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 7 01:50:26.095153 systemd[1]: ignition-files.service: Deactivated successfully. Mar 7 01:50:26.097941 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 7 01:50:26.149946 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 7 01:50:26.197118 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 7 01:50:26.205782 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 01:50:26.292085 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 7 01:50:26.302170 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 7 01:50:26.303094 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 01:50:26.323382 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 7 01:50:26.323559 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 7 01:50:26.380248 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 7 01:50:26.380429 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Mar 7 01:50:26.490859 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 7 01:50:26.830464 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 7 01:50:26.830866 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 7 01:50:26.904330 ignition[1015]: INFO : Ignition 2.19.0 Mar 7 01:50:26.904330 ignition[1015]: INFO : Stage: umount Mar 7 01:50:26.904330 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 01:50:26.904330 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 7 01:50:26.904330 ignition[1015]: INFO : umount: umount passed Mar 7 01:50:26.904330 ignition[1015]: INFO : Ignition finished successfully Mar 7 01:50:26.891363 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 7 01:50:26.891996 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 7 01:50:26.926401 systemd[1]: Stopped target network.target - Network. Mar 7 01:50:26.951310 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 7 01:50:26.951450 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 7 01:50:27.029024 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 7 01:50:27.029144 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 7 01:50:27.086357 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 7 01:50:27.086460 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 7 01:50:27.092544 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 7 01:50:27.092668 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 7 01:50:27.140666 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 7 01:50:27.141650 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 7 01:50:27.187338 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 7 01:50:27.203645 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 7 01:50:27.222424 systemd-networkd[783]: eth0: DHCPv6 lease lost Mar 7 01:50:27.274132 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 7 01:50:27.288451 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 7 01:50:27.348041 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 7 01:50:27.348431 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 7 01:50:27.398264 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 7 01:50:27.398374 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 7 01:50:27.422364 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 7 01:50:27.447627 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 7 01:50:27.447886 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 7 01:50:27.455508 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 7 01:50:27.455633 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 7 01:50:27.473143 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 7 01:50:27.473253 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 7 01:50:27.485354 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 7 01:50:27.485454 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
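The umount stage logged above is the last Ignition stage to appear before switch root. To replay only Ignition's output from a boot like this one, journalctl's standard syslog-identifier filter works:

    journalctl -o cat -t ignition    # only the ignition[...] lines seen above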
Mar 7 01:50:27.497155 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 01:50:27.547931 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 7 01:50:27.548175 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 01:50:27.582459 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 7 01:50:27.582796 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 7 01:50:27.597962 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 7 01:50:27.598050 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 7 01:50:27.614518 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 7 01:50:27.614605 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 01:50:27.625050 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 7 01:50:27.625152 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 7 01:50:27.637900 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 7 01:50:27.638015 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 7 01:50:27.641580 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 7 01:50:27.641658 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:50:27.684372 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 7 01:50:27.701979 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 7 01:50:27.702118 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 01:50:27.729992 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 01:50:27.730112 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:50:27.745004 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 7 01:50:27.745836 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 7 01:50:27.777002 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 7 01:50:27.779175 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 7 01:50:27.830631 systemd[1]: Switching root. Mar 7 01:50:27.872883 systemd-journald[194]: Journal stopped Mar 7 01:50:35.905168 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). Mar 7 01:50:35.905323 kernel: SELinux: policy capability network_peer_controls=1 Mar 7 01:50:35.905362 kernel: SELinux: policy capability open_perms=1 Mar 7 01:50:35.905386 kernel: SELinux: policy capability extended_socket_class=1 Mar 7 01:50:35.905405 kernel: SELinux: policy capability always_check_network=0 Mar 7 01:50:35.905432 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 7 01:50:35.905450 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 7 01:50:35.905468 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 7 01:50:35.905485 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 7 01:50:35.905509 kernel: audit: type=1403 audit(1772848228.806:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 7 01:50:35.905528 systemd[1]: Successfully loaded SELinux policy in 223.560ms. Mar 7 01:50:35.905559 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 35.428ms. 
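The SELinux policy capability lines printed at policy load are also readable at runtime from selinuxfs; a small sketch, assuming /sys/fs/selinux is mounted:

    # Each capability file holds 0 or 1, matching the kernel lines above:
    for c in /sys/fs/selinux/policy_capabilities/*; do
      printf '%s=%s\n' "${c##*/}" "$(cat "$c")"
    done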
Mar 7 01:50:35.905578 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 7 01:50:35.905600 systemd[1]: Detected virtualization kvm. Mar 7 01:50:35.905617 systemd[1]: Detected architecture x86-64. Mar 7 01:50:35.905635 systemd[1]: Detected first boot. Mar 7 01:50:35.905652 systemd[1]: Initializing machine ID from VM UUID. Mar 7 01:50:35.905669 zram_generator::config[1079]: No configuration found. Mar 7 01:50:35.905821 systemd[1]: Populated /etc with preset unit settings. Mar 7 01:50:35.905875 systemd[1]: Queued start job for default target multi-user.target. Mar 7 01:50:35.905904 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 7 01:50:35.905926 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 7 01:50:35.905945 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 7 01:50:35.905963 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 7 01:50:35.905980 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 7 01:50:35.905998 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 7 01:50:35.906015 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 7 01:50:35.906032 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 7 01:50:35.906050 systemd[1]: Created slice user.slice - User and Session Slice. Mar 7 01:50:35.906072 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 01:50:35.906090 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 01:50:35.906110 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 7 01:50:35.906127 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 7 01:50:35.906145 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 7 01:50:35.906165 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 7 01:50:35.906184 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 7 01:50:35.906203 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 01:50:35.906222 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 7 01:50:35.906245 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 01:50:35.906262 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 7 01:50:35.906280 systemd[1]: Reached target slices.target - Slice Units. Mar 7 01:50:35.906296 systemd[1]: Reached target swap.target - Swaps. Mar 7 01:50:35.906313 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 7 01:50:35.906330 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 7 01:50:35.906346 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 7 01:50:35.906363 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
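The feature string, virtualization, and first-boot detection above can be reproduced interactively with standard systemd tools:

    systemd-detect-virt    # prints "kvm" on this host, per "Detected virtualization kvm"
    systemctl --version    # prints "systemd 255" plus the same +PAM +AUDIT ... feature flags
    cat /etc/machine-id    # per the log, initialized from the VM UUID on first boot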
Mar 7 01:50:35.906385 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 7 01:50:35.906402 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 7 01:50:35.906419 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 01:50:35.906437 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 7 01:50:35.906456 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 7 01:50:35.906473 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 7 01:50:35.906496 systemd[1]: Mounting media.mount - External Media Directory... Mar 7 01:50:35.906516 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:50:35.906536 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 7 01:50:35.906561 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 7 01:50:35.906579 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 7 01:50:35.906596 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 7 01:50:35.906615 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 01:50:35.906633 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 7 01:50:35.906649 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 7 01:50:35.906667 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 7 01:50:35.909838 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 7 01:50:35.909893 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 7 01:50:35.909917 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 7 01:50:35.909940 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 7 01:50:35.909960 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 7 01:50:35.909979 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Mar 7 01:50:35.910006 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Mar 7 01:50:35.910024 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 7 01:50:35.910045 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 7 01:50:35.910063 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 7 01:50:35.910088 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 7 01:50:35.910108 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 7 01:50:35.910126 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:50:35.910143 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 7 01:50:35.910162 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 7 01:50:35.910183 systemd[1]: Mounted media.mount - External Media Directory. 
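The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop jobs above all instantiate one template unit. A sketch of what systemd's modprobe@.service template roughly contains (recalled shape, not copied from this system, so details may differ):

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no
    Before=sysinit.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=-/sbin/modprobe -abq %I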
Mar 7 01:50:35.910236 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 7 01:50:35.910259 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 7 01:50:35.910285 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 7 01:50:35.910306 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 01:50:35.910326 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 7 01:50:35.910345 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 7 01:50:35.910363 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 7 01:50:35.910382 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 7 01:50:35.910399 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 7 01:50:35.910416 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 7 01:50:35.910432 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 7 01:50:35.910456 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 7 01:50:35.910523 systemd-journald[1139]: Collecting audit messages is disabled. Mar 7 01:50:35.910556 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 7 01:50:35.910575 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 7 01:50:35.910599 kernel: loop: module loaded Mar 7 01:50:35.910619 kernel: fuse: init (API version 7.39) Mar 7 01:50:35.910636 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 7 01:50:35.910654 systemd-journald[1139]: Journal started Mar 7 01:50:35.914823 systemd-journald[1139]: Runtime Journal (/run/log/journal/b4f6cbaffa43423fa30fd2af46e80936) is 6.0M, max 48.4M, 42.3M free. Mar 7 01:50:36.032813 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 7 01:50:36.082983 systemd[1]: Started systemd-journald.service - Journal Service. Mar 7 01:50:36.135841 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 7 01:50:36.222412 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 7 01:50:36.222878 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 7 01:50:36.272042 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 7 01:50:36.272644 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 7 01:50:36.315565 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 7 01:50:36.367460 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 7 01:50:36.412792 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 7 01:50:36.453990 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 7 01:50:36.617327 kernel: ACPI: bus type drm_connector registered Mar 7 01:50:36.620789 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 7 01:50:36.621146 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 7 01:50:36.714339 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 01:50:36.781510 systemd-tmpfiles[1189]: ACLs are not supported, ignoring. 
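The runtime journal size reported above (6.0M used, 48.4M max) comes from journald's size heuristics rather than explicit configuration; pinning it would look like this journald.conf sketch (values are illustrative, not taken from this system):

    [Journal]
    RuntimeMaxUse=48M     # cap for /run/log/journal
    SystemMaxUse=196M     # cap for /var/log/journal after the flush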
Mar 7 01:50:36.782403 systemd-tmpfiles[1189]: ACLs are not supported, ignoring. Mar 7 01:50:36.796598 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 7 01:50:36.870053 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 7 01:50:36.884170 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 7 01:50:36.905662 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 7 01:50:36.920304 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 7 01:50:36.948047 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 7 01:50:37.091145 systemd-journald[1139]: Time spent on flushing to /var/log/journal/b4f6cbaffa43423fa30fd2af46e80936 is 168.976ms for 936 entries. Mar 7 01:50:37.091145 systemd-journald[1139]: System Journal (/var/log/journal/b4f6cbaffa43423fa30fd2af46e80936) is 8.0M, max 195.6M, 187.6M free. Mar 7 01:50:37.294256 systemd-journald[1139]: Received client request to flush runtime journal. Mar 7 01:50:37.082977 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 7 01:50:37.234439 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 7 01:50:37.258247 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 7 01:50:37.280368 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 7 01:50:37.300886 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 7 01:50:37.345106 udevadm[1221]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 7 01:50:37.368127 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 7 01:50:37.377043 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 7 01:50:37.756335 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 7 01:50:37.788532 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 7 01:50:37.869296 systemd-tmpfiles[1237]: ACLs are not supported, ignoring. Mar 7 01:50:37.870072 systemd-tmpfiles[1237]: ACLs are not supported, ignoring. Mar 7 01:50:37.900040 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 01:50:41.537606 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 7 01:50:41.596432 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 01:50:41.845488 systemd-udevd[1243]: Using default interface naming scheme 'v255'. Mar 7 01:50:42.582822 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 01:50:42.637215 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 7 01:50:42.706258 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 7 01:50:42.748543 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. 
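systemd-networkd starts above and, further down, matches eth0 against zz-default.network. That file acts as a catch-all; a minimal sketch of such a unit (the exact shipped contents are not in the log):

    [Match]
    Name=*

    [Network]
    DHCP=yes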
Mar 7 01:50:43.529184 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1246) Mar 7 01:50:43.804946 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Mar 7 01:50:43.845284 kernel: ACPI: button: Power Button [PWRF] Mar 7 01:50:43.861573 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 7 01:50:43.977465 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 7 01:50:43.984126 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 7 01:50:43.984453 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 7 01:50:43.984674 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Mar 7 01:50:43.940084 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 7 01:50:45.130833 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:50:45.220789 kernel: mousedev: PS/2 mouse device common for all mice Mar 7 01:50:46.060477 systemd-networkd[1251]: lo: Link UP Mar 7 01:50:46.060495 systemd-networkd[1251]: lo: Gained carrier Mar 7 01:50:46.086598 systemd-networkd[1251]: Enumeration completed Mar 7 01:50:46.087936 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 7 01:50:46.097283 systemd-networkd[1251]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:50:46.097290 systemd-networkd[1251]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 7 01:50:46.117330 systemd-networkd[1251]: eth0: Link UP Mar 7 01:50:46.117604 systemd-networkd[1251]: eth0: Gained carrier Mar 7 01:50:46.117888 systemd-networkd[1251]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:50:46.190155 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 7 01:50:46.337977 systemd-networkd[1251]: eth0: DHCPv4 address 10.0.0.118/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 7 01:50:46.775187 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:50:47.338867 kernel: kvm_amd: TSC scaling supported Mar 7 01:50:47.338985 kernel: kvm_amd: Nested Virtualization enabled Mar 7 01:50:47.339012 kernel: kvm_amd: Nested Paging enabled Mar 7 01:50:47.343089 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 7 01:50:47.348028 kernel: kvm_amd: PMU virtualization is disabled Mar 7 01:50:47.993491 systemd-networkd[1251]: eth0: Gained IPv6LL Mar 7 01:50:48.038326 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 7 01:50:48.500993 kernel: EDAC MC: Ver: 3.0.0 Mar 7 01:50:48.654769 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 7 01:50:48.703904 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 7 01:50:48.787836 lvm[1292]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 7 01:50:48.880499 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 7 01:50:48.957444 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 7 01:50:49.114578 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 7 01:50:49.358848 lvm[1295]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. Mar 7 01:50:49.658645 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 7 01:50:49.741140 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 7 01:50:49.766121 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 7 01:50:49.766231 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 7 01:50:49.782972 systemd[1]: Reached target machines.target - Containers. Mar 7 01:50:49.800220 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 7 01:50:49.855046 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 7 01:50:49.887162 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 7 01:50:49.915009 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 01:50:50.007064 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 7 01:50:50.032397 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 7 01:50:50.094169 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 7 01:50:50.100112 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 7 01:50:50.168249 kernel: loop0: detected capacity change from 0 to 228704 Mar 7 01:50:50.177140 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 7 01:50:50.276304 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 7 01:50:50.309151 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 7 01:50:50.458605 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 7 01:50:50.603817 kernel: loop1: detected capacity change from 0 to 140768 Mar 7 01:50:50.900146 kernel: loop2: detected capacity change from 0 to 142488 Mar 7 01:50:51.635451 kernel: loop3: detected capacity change from 0 to 228704 Mar 7 01:50:52.430415 kernel: loop4: detected capacity change from 0 to 140768 Mar 7 01:50:53.592538 kernel: loop5: detected capacity change from 0 to 142488 Mar 7 01:50:53.839853 (sd-merge)[1316]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 7 01:50:53.840998 (sd-merge)[1316]: Merged extensions into '/usr'. Mar 7 01:50:53.880339 systemd[1]: Reloading requested from client PID 1303 ('systemd-sysext') (unit systemd-sysext.service)... Mar 7 01:50:53.880537 systemd[1]: Reloading... Mar 7 01:50:54.330641 zram_generator::config[1353]: No configuration found. Mar 7 01:50:55.896874 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:50:56.474049 systemd[1]: Reloading finished in 2583 ms. Mar 7 01:50:56.530349 ldconfig[1299]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 7 01:50:56.530083 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 7 01:50:56.538634 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
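sd-merge reports three sysexts (containerd-flatcar, docker-flatcar, kubernetes) merged into /usr. Their state can be inspected after boot with standard commands:

    systemd-sysext status      # lists the merged extensions and hierarchy state
    ls -l /etc/extensions/     # kubernetes.raw -> /opt/extensions/kubernetes/... per Ignition op(a)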
Mar 7 01:50:56.824556 systemd[1]: Starting ensure-sysext.service... Mar 7 01:50:56.881214 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 7 01:50:56.924086 systemd[1]: Reloading requested from client PID 1388 ('systemctl') (unit ensure-sysext.service)... Mar 7 01:50:56.924143 systemd[1]: Reloading... Mar 7 01:50:57.494370 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 7 01:50:57.536859 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 7 01:50:57.539647 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 7 01:50:57.544316 systemd-tmpfiles[1389]: ACLs are not supported, ignoring. Mar 7 01:50:57.545810 systemd-tmpfiles[1389]: ACLs are not supported, ignoring. Mar 7 01:50:57.584542 systemd-tmpfiles[1389]: Detected autofs mount point /boot during canonicalization of boot. Mar 7 01:50:57.589004 systemd-tmpfiles[1389]: Skipping /boot Mar 7 01:50:57.725970 zram_generator::config[1420]: No configuration found. Mar 7 01:50:57.785399 systemd-tmpfiles[1389]: Detected autofs mount point /boot during canonicalization of boot. Mar 7 01:50:57.785632 systemd-tmpfiles[1389]: Skipping /boot Mar 7 01:50:58.923294 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:50:59.184664 systemd[1]: Reloading finished in 2259 ms. Mar 7 01:50:59.235194 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 01:50:59.291273 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 7 01:50:59.495324 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 7 01:50:59.802449 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 7 01:50:59.970429 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 7 01:51:00.044286 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 7 01:51:00.120048 augenrules[1482]: No rules Mar 7 01:51:00.123189 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 7 01:51:00.157307 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:51:00.157667 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 01:51:00.202271 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 7 01:51:00.227507 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 7 01:51:00.276576 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 7 01:51:00.311061 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 01:51:00.311610 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:51:00.326923 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 7 01:51:00.346529 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
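The "Duplicate line for path ..." warnings above mean two tmpfiles.d fragments declare the same path, and the first declaration wins. For reference, the line format systemd-tmpfiles parses (a generic sketch, not the shipped fragments):

    # Type Path              Mode User Group           Age Argument
    d      /var/log/journal  2755 root systemd-journal -   -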
Mar 7 01:51:00.349050 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 7 01:51:00.372679 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 7 01:51:00.374420 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 7 01:51:00.397618 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 7 01:51:00.399610 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 7 01:51:00.436052 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 7 01:51:00.505027 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:51:00.505516 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 01:51:00.581829 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 7 01:51:00.652361 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 7 01:51:00.697381 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 7 01:51:00.733345 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 7 01:51:00.750457 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 01:51:00.776334 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 7 01:51:00.808454 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:51:00.847546 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 7 01:51:00.900331 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 7 01:51:00.925679 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 7 01:51:00.926178 systemd-resolved[1474]: Positive Trust Anchors: Mar 7 01:51:00.926195 systemd-resolved[1474]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 7 01:51:00.926241 systemd-resolved[1474]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 7 01:51:01.006961 systemd-resolved[1474]: Defaulting to hostname 'linux'. Mar 7 01:51:01.009389 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 7 01:51:01.021296 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 7 01:51:01.043743 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 7 01:51:01.057747 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 7 01:51:01.069498 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 7 01:51:01.100631 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 7 01:51:01.105067 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
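The trust-anchor dump above shows resolved's built-in root DS record plus the default negative anchors. Extra negative anchors can be declared in dnssec-trust-anchors.d; a sketch with a hypothetical file name and domain:

    # /etc/dnssec-trust-anchors.d/local.negative
    # one domain per line; DNSSEC validation is skipped beneath these
    example.internal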
Mar 7 01:51:01.124257 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 7 01:51:01.176044 systemd[1]: Finished ensure-sysext.service. Mar 7 01:51:01.241446 systemd[1]: Reached target network.target - Network. Mar 7 01:51:01.256750 systemd[1]: Reached target network-online.target - Network is Online. Mar 7 01:51:01.273752 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 7 01:51:01.285971 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 7 01:51:01.287321 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 7 01:51:01.316164 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 7 01:51:01.325755 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 7 01:51:01.663591 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 7 01:51:02.103289 systemd-timesyncd[1524]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 7 01:51:02.103359 systemd-timesyncd[1524]: Initial clock synchronization to Sat 2026-03-07 01:51:02.103074 UTC. Mar 7 01:51:02.103642 systemd-resolved[1474]: Clock change detected. Flushing caches. Mar 7 01:51:02.114418 systemd[1]: Reached target sysinit.target - System Initialization. Mar 7 01:51:02.127376 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 7 01:51:02.148693 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 7 01:51:02.170365 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 7 01:51:02.186263 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 7 01:51:02.186354 systemd[1]: Reached target paths.target - Path Units. Mar 7 01:51:02.206151 systemd[1]: Reached target time-set.target - System Time Set. Mar 7 01:51:02.227599 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 7 01:51:02.239390 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 7 01:51:02.271231 systemd[1]: Reached target timers.target - Timer Units. Mar 7 01:51:02.381178 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 7 01:51:02.429169 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 7 01:51:02.463431 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 7 01:51:02.476427 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 7 01:51:02.483335 systemd[1]: Reached target sockets.target - Socket Units. Mar 7 01:51:02.514995 systemd[1]: Reached target basic.target - Basic System. Mar 7 01:51:02.532645 systemd[1]: System is tainted: cgroupsv1 Mar 7 01:51:02.532764 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 7 01:51:02.532940 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 7 01:51:02.553063 systemd[1]: Starting containerd.service - containerd container runtime... 
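timesyncd synchronized against 10.0.0.1:123, most likely the NTP server handed out with the DHCP lease logged earlier. Pinning servers statically is a one-stanza timesyncd.conf change (illustrative values):

    [Time]
    NTP=10.0.0.1
    FallbackNTP=pool.ntp.org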
Mar 7 01:51:02.569309 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 7 01:51:02.602189 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 7 01:51:02.642948 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 7 01:51:02.726209 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 7 01:51:02.753231 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 7 01:51:02.951677 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:51:03.044280 jq[1531]: false Mar 7 01:51:03.044481 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 7 01:51:03.174630 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 7 01:51:03.228502 extend-filesystems[1534]: Found loop3 Mar 7 01:51:03.228502 extend-filesystems[1534]: Found loop4 Mar 7 01:51:03.228502 extend-filesystems[1534]: Found loop5 Mar 7 01:51:03.228502 extend-filesystems[1534]: Found sr0 Mar 7 01:51:03.228502 extend-filesystems[1534]: Found vda Mar 7 01:51:03.228502 extend-filesystems[1534]: Found vda1 Mar 7 01:51:03.228502 extend-filesystems[1534]: Found vda2 Mar 7 01:51:03.228502 extend-filesystems[1534]: Found vda3 Mar 7 01:51:03.228502 extend-filesystems[1534]: Found usr Mar 7 01:51:03.228502 extend-filesystems[1534]: Found vda4 Mar 7 01:51:03.228502 extend-filesystems[1534]: Found vda6 Mar 7 01:51:03.228502 extend-filesystems[1534]: Found vda7 Mar 7 01:51:03.228502 extend-filesystems[1534]: Found vda9 Mar 7 01:51:03.228502 extend-filesystems[1534]: Checking size of /dev/vda9 Mar 7 01:51:03.566140 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 7 01:51:03.238252 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 7 01:51:03.231374 dbus-daemon[1530]: [system] SELinux support is enabled Mar 7 01:51:03.570776 extend-filesystems[1534]: Resized partition /dev/vda9 Mar 7 01:51:03.272160 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 7 01:51:03.593409 extend-filesystems[1560]: resize2fs 1.47.1 (20-May-2024) Mar 7 01:51:03.661553 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1568) Mar 7 01:51:03.334179 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 7 01:51:03.440036 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 7 01:51:03.550651 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 7 01:51:03.573111 systemd[1]: Starting update-engine.service - Update Engine... Mar 7 01:51:03.592600 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 7 01:51:03.664126 jq[1575]: true Mar 7 01:51:03.633126 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 7 01:51:03.672755 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 7 01:51:03.673395 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 7 01:51:03.675127 systemd[1]: motdgen.service: Deactivated successfully. Mar 7 01:51:03.675559 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
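extend-filesystems grows the root filesystem to fill its partition; the EXT4 kernel line above shows vda9 being resized from 553472 to 1864699 blocks. The manual equivalent is a single command, since ext4 supports online growth of a mounted filesystem:

    resize2fs /dev/vda9    # grow the mounted root fs to the partition size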
Mar 7 01:51:03.731026 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 7 01:51:03.739629 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 7 01:51:03.764859 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 7 01:51:03.765567 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 7 01:51:03.864607 extend-filesystems[1560]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 7 01:51:03.864607 extend-filesystems[1560]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 7 01:51:03.864607 extend-filesystems[1560]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 7 01:51:03.926206 extend-filesystems[1534]: Resized filesystem in /dev/vda9 Mar 7 01:51:03.969042 update_engine[1574]: I20260307 01:51:03.966619 1574 main.cc:92] Flatcar Update Engine starting Mar 7 01:51:03.975438 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 7 01:51:03.985137 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 7 01:51:04.006182 update_engine[1574]: I20260307 01:51:04.005550 1574 update_check_scheduler.cc:74] Next update check in 2m32s Mar 7 01:51:04.017669 (ntainerd)[1587]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 7 01:51:04.023179 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 7 01:51:04.023648 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 7 01:51:04.037099 sshd_keygen[1573]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 7 01:51:04.054609 jq[1583]: true Mar 7 01:51:04.293759 systemd-logind[1563]: Watching system buttons on /dev/input/event1 (Power Button) Mar 7 01:51:04.306758 systemd-logind[1563]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 7 01:51:04.319397 systemd-logind[1563]: New seat seat0. Mar 7 01:51:04.333524 tar[1582]: linux-amd64/LICENSE Mar 7 01:51:04.333524 tar[1582]: linux-amd64/helm Mar 7 01:51:04.379613 systemd[1]: Started systemd-logind.service - User Login Management. Mar 7 01:51:04.511188 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 7 01:51:04.577969 systemd[1]: Started update-engine.service - Update Engine. Mar 7 01:51:04.624263 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 7 01:51:04.636416 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 7 01:51:04.637221 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 7 01:51:04.637536 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 7 01:51:04.653200 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 7 01:51:04.653605 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 7 01:51:04.679683 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 7 01:51:04.979555 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
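update-engine and locksmithd together handle Flatcar's update/reboot cycle; the engine above schedules its next check in 2m32s and locksmithd (below) starts with the "reboot" strategy. Both ship status commands:

    update_engine_client -status    # current update state and next check
    locksmithctl status             # reboot strategy and any held locks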
Mar 7 01:51:05.048435 systemd[1]: issuegen.service: Deactivated successfully. Mar 7 01:51:05.072292 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 7 01:51:05.421474 bash[1637]: Updated "/home/core/.ssh/authorized_keys" Mar 7 01:51:05.424377 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 7 01:51:05.456081 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 7 01:51:05.811696 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 7 01:51:07.727290 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 7 01:51:07.819499 systemd[1]: Started sshd@0-10.0.0.118:22-10.0.0.1:48256.service - OpenSSH per-connection server daemon (10.0.0.1:48256). Mar 7 01:51:07.909542 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 7 01:51:08.049433 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 7 01:51:08.199883 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 7 01:51:08.256244 systemd[1]: Reached target getty.target - Login Prompts. Mar 7 01:51:08.817616 locksmithd[1639]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 7 01:51:09.334497 sshd[1645]: Accepted publickey for core from 10.0.0.1 port 48256 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:51:09.355117 sshd[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:51:09.833051 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 7 01:51:09.894440 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 7 01:51:10.027409 systemd-logind[1563]: New session 1 of user core. Mar 7 01:51:11.041109 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 7 01:51:11.620477 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 7 01:51:11.950553 (systemd)[1664]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 7 01:51:12.935487 containerd[1587]: time="2026-03-07T01:51:12.933582174Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 7 01:51:13.255696 containerd[1587]: time="2026-03-07T01:51:13.255319625Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 7 01:51:13.264475 containerd[1587]: time="2026-03-07T01:51:13.262723608Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:51:13.264475 containerd[1587]: time="2026-03-07T01:51:13.262785304Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 7 01:51:13.264475 containerd[1587]: time="2026-03-07T01:51:13.262876003Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 7 01:51:13.264475 containerd[1587]: time="2026-03-07T01:51:13.264093636Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 7 01:51:13.264475 containerd[1587]: time="2026-03-07T01:51:13.264131597Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Mar 7 01:51:13.264475 containerd[1587]: time="2026-03-07T01:51:13.264303167Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:51:13.264475 containerd[1587]: time="2026-03-07T01:51:13.264325529Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 7 01:51:13.265233 containerd[1587]: time="2026-03-07T01:51:13.265203568Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:51:13.265367 containerd[1587]: time="2026-03-07T01:51:13.265343480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 7 01:51:13.265521 containerd[1587]: time="2026-03-07T01:51:13.265494862Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:51:13.265605 containerd[1587]: time="2026-03-07T01:51:13.265588547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 7 01:51:13.265784 containerd[1587]: time="2026-03-07T01:51:13.265767201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 7 01:51:13.274330 containerd[1587]: time="2026-03-07T01:51:13.273883122Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 7 01:51:13.277009 containerd[1587]: time="2026-03-07T01:51:13.276146458Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:51:13.277009 containerd[1587]: time="2026-03-07T01:51:13.276181203Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 7 01:51:13.277009 containerd[1587]: time="2026-03-07T01:51:13.276404099Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 7 01:51:13.277009 containerd[1587]: time="2026-03-07T01:51:13.276531247Z" level=info msg="metadata content store policy set" policy=shared Mar 7 01:51:13.391210 containerd[1587]: time="2026-03-07T01:51:13.389706400Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 7 01:51:13.391210 containerd[1587]: time="2026-03-07T01:51:13.390422617Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 7 01:51:13.391210 containerd[1587]: time="2026-03-07T01:51:13.390589889Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 7 01:51:13.391210 containerd[1587]: time="2026-03-07T01:51:13.390626016Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 7 01:51:13.391210 containerd[1587]: time="2026-03-07T01:51:13.390726394Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Mar 7 01:51:13.401552 containerd[1587]: time="2026-03-07T01:51:13.398731138Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 7 01:51:13.961693 containerd[1587]: time="2026-03-07T01:51:13.961601977Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 7 01:51:14.077492 containerd[1587]: time="2026-03-07T01:51:13.963532080Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 7 01:51:14.077492 containerd[1587]: time="2026-03-07T01:51:13.963563589Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 7 01:51:14.077492 containerd[1587]: time="2026-03-07T01:51:13.963581693Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 7 01:51:14.077492 containerd[1587]: time="2026-03-07T01:51:13.963626276Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 7 01:51:14.077492 containerd[1587]: time="2026-03-07T01:51:13.963715632Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 7 01:51:14.077492 containerd[1587]: time="2026-03-07T01:51:13.963787477Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 7 01:51:14.077492 containerd[1587]: time="2026-03-07T01:51:13.964009742Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 7 01:51:14.077492 containerd[1587]: time="2026-03-07T01:51:13.964090633Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 7 01:51:14.077492 containerd[1587]: time="2026-03-07T01:51:13.965337621Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 7 01:51:14.077492 containerd[1587]: time="2026-03-07T01:51:13.965372496Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 7 01:51:14.077492 containerd[1587]: time="2026-03-07T01:51:13.965394607Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 7 01:51:14.077492 containerd[1587]: time="2026-03-07T01:51:13.965631369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 7 01:51:14.077492 containerd[1587]: time="2026-03-07T01:51:13.965663930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 7 01:51:14.077492 containerd[1587]: time="2026-03-07T01:51:13.965738599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 7 01:51:13.967542 systemd[1664]: Queued start job for default target default.target. Mar 7 01:51:14.079112 containerd[1587]: time="2026-03-07T01:51:13.965767964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 7 01:51:14.079112 containerd[1587]: time="2026-03-07T01:51:13.965788232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Mar 7 01:51:14.079112 containerd[1587]: time="2026-03-07T01:51:13.965876978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 7 01:51:14.079112 containerd[1587]: time="2026-03-07T01:51:13.965897396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 7 01:51:14.079112 containerd[1587]: time="2026-03-07T01:51:13.966064247Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 7 01:51:14.079112 containerd[1587]: time="2026-03-07T01:51:13.966093101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 7 01:51:14.079112 containerd[1587]: time="2026-03-07T01:51:13.966118890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 7 01:51:14.079112 containerd[1587]: time="2026-03-07T01:51:13.966137013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 7 01:51:14.079112 containerd[1587]: time="2026-03-07T01:51:13.966157211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 7 01:51:14.079112 containerd[1587]: time="2026-03-07T01:51:13.966364719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 7 01:51:14.079112 containerd[1587]: time="2026-03-07T01:51:13.966457933Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 7 01:51:14.079112 containerd[1587]: time="2026-03-07T01:51:13.966550635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 7 01:51:14.079112 containerd[1587]: time="2026-03-07T01:51:13.966604706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 7 01:51:14.079112 containerd[1587]: time="2026-03-07T01:51:13.966654069Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 7 01:51:14.079681 containerd[1587]: time="2026-03-07T01:51:13.966972343Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 7 01:51:14.079681 containerd[1587]: time="2026-03-07T01:51:13.967042133Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 7 01:51:14.079681 containerd[1587]: time="2026-03-07T01:51:13.967094672Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 7 01:51:14.079681 containerd[1587]: time="2026-03-07T01:51:13.967120049Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 7 01:51:14.079681 containerd[1587]: time="2026-03-07T01:51:13.967133504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 7 01:51:14.079681 containerd[1587]: time="2026-03-07T01:51:13.967149353Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 7 01:51:14.079681 containerd[1587]: time="2026-03-07T01:51:13.967214284Z" level=info msg="NRI interface is disabled by configuration." 
Mar 7 01:51:14.079681 containerd[1587]: time="2026-03-07T01:51:13.967230575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 7 01:51:14.080065 containerd[1587]: time="2026-03-07T01:51:13.969010929Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 7 01:51:14.080065 containerd[1587]: time="2026-03-07T01:51:13.969177570Z" level=info msg="Connect containerd service" Mar 7 01:51:14.080065 containerd[1587]: time="2026-03-07T01:51:13.969375059Z" level=info msg="using legacy CRI server" Mar 7 01:51:14.080065 containerd[1587]: time="2026-03-07T01:51:13.969392431Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 7 01:51:14.080065 containerd[1587]: time="2026-03-07T01:51:13.970244852Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 7 01:51:14.080065 containerd[1587]: time="2026-03-07T01:51:13.978248505Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni 
config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 7 01:51:14.080065 containerd[1587]: time="2026-03-07T01:51:13.979393684Z" level=info msg="Start subscribing containerd event" Mar 7 01:51:14.080065 containerd[1587]: time="2026-03-07T01:51:13.979778814Z" level=info msg="Start recovering state" Mar 7 01:51:14.080065 containerd[1587]: time="2026-03-07T01:51:13.980252167Z" level=info msg="Start event monitor" Mar 7 01:51:14.080065 containerd[1587]: time="2026-03-07T01:51:13.980326597Z" level=info msg="Start snapshots syncer" Mar 7 01:51:14.080065 containerd[1587]: time="2026-03-07T01:51:13.980348718Z" level=info msg="Start cni network conf syncer for default" Mar 7 01:51:14.080065 containerd[1587]: time="2026-03-07T01:51:13.980395014Z" level=info msg="Start streaming server" Mar 7 01:51:14.080065 containerd[1587]: time="2026-03-07T01:51:13.981153168Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 7 01:51:14.080065 containerd[1587]: time="2026-03-07T01:51:13.982134470Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 7 01:51:14.080065 containerd[1587]: time="2026-03-07T01:51:13.982320196Z" level=info msg="containerd successfully booted in 1.058922s" Mar 7 01:51:14.089311 systemd[1664]: Created slice app.slice - User Application Slice. Mar 7 01:51:14.089378 systemd[1664]: Reached target paths.target - Paths. Mar 7 01:51:14.089401 systemd[1664]: Reached target timers.target - Timers. Mar 7 01:51:14.091662 systemd[1]: Started containerd.service - containerd container runtime. Mar 7 01:51:14.292777 systemd[1664]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 7 01:51:14.584221 systemd[1664]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 7 01:51:14.584367 systemd[1664]: Reached target sockets.target - Sockets. Mar 7 01:51:14.584391 systemd[1664]: Reached target basic.target - Basic System. Mar 7 01:51:14.584476 systemd[1664]: Reached target default.target - Main User Target. Mar 7 01:51:14.584538 systemd[1664]: Startup finished in 2.149s. Mar 7 01:51:14.590024 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 7 01:51:14.656108 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 7 01:51:15.563046 systemd[1]: Started sshd@1-10.0.0.118:22-10.0.0.1:33136.service - OpenSSH per-connection server daemon (10.0.0.1:33136). Mar 7 01:51:16.480743 sshd[1685]: Accepted publickey for core from 10.0.0.1 port 33136 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:51:16.496605 sshd[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:51:16.922060 systemd-logind[1563]: New session 2 of user core. Mar 7 01:51:17.059711 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 7 01:51:17.511138 sshd[1685]: pam_unix(sshd:session): session closed for user core Mar 7 01:51:17.890200 systemd[1]: Started sshd@2-10.0.0.118:22-10.0.0.1:33142.service - OpenSSH per-connection server daemon (10.0.0.1:33142). Mar 7 01:51:17.925360 systemd[1]: sshd@1-10.0.0.118:22-10.0.0.1:33136.service: Deactivated successfully. Mar 7 01:51:17.939689 systemd[1]: session-2.scope: Deactivated successfully. Mar 7 01:51:17.967130 systemd-logind[1563]: Session 2 logged out. Waiting for processes to exit. Mar 7 01:51:17.989782 systemd-logind[1563]: Removed session 2. 
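The containerd error above ("no network config found in /etc/cni/net.d") is expected on a freshly provisioned node: the CRI config dump shows NetworkPluginConfDir=/etc/cni/net.d and NetworkPluginBinDir=/opt/cni/bin, and nothing has installed a network config yet; a CNI provider normally does that later. For illustration only, a minimal hand-written config using the reference bridge and host-local plugins (the file name and subnet are made up for this sketch, and the reference plugin binaries are assumed to exist under /opt/cni/bin):

    cat <<'EOF' | sudo tee /etc/cni/net.d/10-example.conflist
    {
      "cniVersion": "0.4.0",
      "name": "example-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
        }
      ]
    }
    EOF
    # The "cni network conf syncer" started above watches this directory,
    # so the file is picked up without restarting containerd.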
Mar 7 01:51:18.614675 tar[1582]: linux-amd64/README.md Mar 7 01:51:18.925153 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 33142 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:51:18.978692 sshd[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:51:19.004740 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 7 01:51:19.295250 systemd-logind[1563]: New session 3 of user core. Mar 7 01:51:19.507718 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 7 01:51:19.858575 sshd[1690]: pam_unix(sshd:session): session closed for user core Mar 7 01:51:20.178284 systemd[1]: sshd@2-10.0.0.118:22-10.0.0.1:33142.service: Deactivated successfully. Mar 7 01:51:20.218436 systemd[1]: session-3.scope: Deactivated successfully. Mar 7 01:51:20.219002 systemd-logind[1563]: Session 3 logged out. Waiting for processes to exit. Mar 7 01:51:20.232235 systemd-logind[1563]: Removed session 3. Mar 7 01:51:25.665153 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:51:25.714378 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 7 01:51:25.716283 systemd[1]: Startup finished in 38.205s (kernel) + 56.688s (userspace) = 1min 34.893s. Mar 7 01:51:25.808369 (kubelet)[1713]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:51:29.859320 systemd[1]: Started sshd@3-10.0.0.118:22-10.0.0.1:35086.service - OpenSSH per-connection server daemon (10.0.0.1:35086). Mar 7 01:51:30.217010 sshd[1720]: Accepted publickey for core from 10.0.0.1 port 35086 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:51:30.233770 sshd[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:51:30.287733 systemd-logind[1563]: New session 4 of user core. Mar 7 01:51:30.415715 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 7 01:51:30.823702 sshd[1720]: pam_unix(sshd:session): session closed for user core Mar 7 01:51:30.875597 systemd[1]: Started sshd@4-10.0.0.118:22-10.0.0.1:60818.service - OpenSSH per-connection server daemon (10.0.0.1:60818). Mar 7 01:51:30.882533 systemd[1]: sshd@3-10.0.0.118:22-10.0.0.1:35086.service: Deactivated successfully. Mar 7 01:51:30.930675 systemd[1]: session-4.scope: Deactivated successfully. Mar 7 01:51:30.935091 systemd-logind[1563]: Session 4 logged out. Waiting for processes to exit. Mar 7 01:51:31.023748 systemd-logind[1563]: Removed session 4. Mar 7 01:51:31.241030 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 60818 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:51:31.250769 sshd[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:51:31.303960 systemd-logind[1563]: New session 5 of user core. Mar 7 01:51:31.318079 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 7 01:51:31.505655 sshd[1727]: pam_unix(sshd:session): session closed for user core Mar 7 01:51:31.594629 systemd[1]: Started sshd@5-10.0.0.118:22-10.0.0.1:60822.service - OpenSSH per-connection server daemon (10.0.0.1:60822). Mar 7 01:51:31.599715 systemd[1]: sshd@4-10.0.0.118:22-10.0.0.1:60818.service: Deactivated successfully. Mar 7 01:51:31.615229 systemd[1]: session-5.scope: Deactivated successfully. Mar 7 01:51:31.619588 systemd-logind[1563]: Session 5 logged out. Waiting for processes to exit. 
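The boot-time summary above (38.205s kernel + 56.688s userspace = 1min 34.893s) can be broken down further with systemd's own tooling, for example:

    systemd-analyze                      # reprints the kernel/userspace split logged above
    systemd-analyze blame | head -n 10   # the ten slowest units of this boot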
Mar 7 01:51:31.632690 systemd-logind[1563]: Removed session 5. Mar 7 01:51:31.965648 sshd[1735]: Accepted publickey for core from 10.0.0.1 port 60822 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:51:31.980309 sshd[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:51:32.366410 systemd-logind[1563]: New session 6 of user core. Mar 7 01:51:32.407372 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 7 01:51:32.657192 sshd[1735]: pam_unix(sshd:session): session closed for user core Mar 7 01:51:32.672900 systemd[1]: sshd@5-10.0.0.118:22-10.0.0.1:60822.service: Deactivated successfully. Mar 7 01:51:32.721386 systemd[1]: session-6.scope: Deactivated successfully. Mar 7 01:51:32.723140 systemd-logind[1563]: Session 6 logged out. Waiting for processes to exit. Mar 7 01:51:32.853218 systemd[1]: Started sshd@6-10.0.0.118:22-10.0.0.1:60834.service - OpenSSH per-connection server daemon (10.0.0.1:60834). Mar 7 01:51:32.875755 systemd-logind[1563]: Removed session 6. Mar 7 01:51:33.166433 sshd[1746]: Accepted publickey for core from 10.0.0.1 port 60834 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:51:33.170357 sshd[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:51:33.221018 systemd-logind[1563]: New session 7 of user core. Mar 7 01:51:33.243299 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 7 01:51:33.290198 kubelet[1713]: E0307 01:51:33.286647 1713 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:51:33.311567 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:51:33.320231 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:51:33.410506 sudo[1752]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 7 01:51:33.412091 sudo[1752]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:51:34.424625 sudo[1752]: pam_unix(sudo:session): session closed for user root Mar 7 01:51:34.543246 sshd[1746]: pam_unix(sshd:session): session closed for user core Mar 7 01:51:34.681264 systemd[1]: Started sshd@7-10.0.0.118:22-10.0.0.1:60844.service - OpenSSH per-connection server daemon (10.0.0.1:60844). Mar 7 01:51:34.988962 systemd[1]: sshd@6-10.0.0.118:22-10.0.0.1:60834.service: Deactivated successfully. Mar 7 01:51:35.176824 systemd[1]: session-7.scope: Deactivated successfully. Mar 7 01:51:35.189287 systemd-logind[1563]: Session 7 logged out. Waiting for processes to exit. Mar 7 01:51:35.231224 systemd-logind[1563]: Removed session 7. Mar 7 01:51:35.260279 sshd[1754]: Accepted publickey for core from 10.0.0.1 port 60844 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:51:35.268198 sshd[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:51:35.329590 systemd-logind[1563]: New session 8 of user core. Mar 7 01:51:35.344393 systemd[1]: Started session-8.scope - Session 8 of User core. 
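The kubelet failure above, repeated on every restart that follows, has a single cause: /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-managed node that file is generated by kubeadm init or kubeadm join, so the restart loop clears once the node is bootstrapped; nothing in the loop itself is broken. For reference only, a hand-written minimal sketch of such a file (illustrative values, not what kubeadm would generate, and on its own not enough to bootstrap a node, which also needs a kubeconfig and certificates):

    cat <<'EOF' | sudo tee /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # Values mirror the settings this node eventually runs with (see the
    # config dump later in the log); they are illustrative here.
    cgroupDriver: cgroupfs
    staticPodPath: /etc/kubernetes/manifests
    EOF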
Mar 7 01:51:35.527545 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 7 01:51:35.531884 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:51:35.553030 sudo[1762]: pam_unix(sudo:session): session closed for user root Mar 7 01:51:35.576539 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 7 01:51:35.577180 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:51:35.680328 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 7 01:51:35.733581 auditctl[1765]: No rules Mar 7 01:51:35.742369 systemd[1]: audit-rules.service: Deactivated successfully. Mar 7 01:51:35.748397 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 7 01:51:35.822198 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 7 01:51:36.057906 augenrules[1784]: No rules Mar 7 01:51:36.061760 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 7 01:51:36.073423 sudo[1761]: pam_unix(sudo:session): session closed for user root Mar 7 01:51:36.101734 sshd[1754]: pam_unix(sshd:session): session closed for user core Mar 7 01:51:36.138893 systemd[1]: Started sshd@8-10.0.0.118:22-10.0.0.1:60854.service - OpenSSH per-connection server daemon (10.0.0.1:60854). Mar 7 01:51:36.152410 systemd[1]: sshd@7-10.0.0.118:22-10.0.0.1:60844.service: Deactivated successfully. Mar 7 01:51:36.182415 systemd[1]: session-8.scope: Deactivated successfully. Mar 7 01:51:36.201572 systemd-logind[1563]: Session 8 logged out. Waiting for processes to exit. Mar 7 01:51:36.229520 systemd-logind[1563]: Removed session 8. Mar 7 01:51:36.288194 sshd[1790]: Accepted publickey for core from 10.0.0.1 port 60854 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:51:36.309066 sshd[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:51:36.366442 systemd-logind[1563]: New session 9 of user core. Mar 7 01:51:36.385578 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 7 01:51:36.504484 sudo[1797]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 7 01:51:36.507583 sudo[1797]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:51:43.026260 (dockerd)[1815]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 7 01:51:43.032255 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 7 01:51:43.625058 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 7 01:51:43.686026 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:51:47.594155 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:51:47.633745 (kubelet)[1833]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:51:48.573124 dockerd[1815]: time="2026-03-07T01:51:48.572573723Z" level=info msg="Starting up" Mar 7 01:51:49.165308 update_engine[1574]: I20260307 01:51:49.035276 1574 update_attempter.cc:509] Updating boot flags... 
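The audit sequence above removes the default rule files and restarts audit-rules, which is why both auditctl and augenrules report "No rules". augenrules assembles /etc/audit/rules.d/*.rules into the kernel ruleset; a sketch of adding a rule back (the watch path and key are illustrative):

    echo '-w /etc/kubernetes/ -p wa -k kube-config' |
      sudo tee /etc/audit/rules.d/50-kube.rules
    sudo augenrules --load   # regenerate and load the combined ruleset
    sudo auditctl -l         # now lists the watch instead of "No rules"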
Mar 7 01:51:49.432753 kubelet[1833]: E0307 01:51:49.431128 1833 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:51:49.449133 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:51:49.449535 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:51:49.525863 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1855) Mar 7 01:51:49.936027 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1854) Mar 7 01:51:50.765398 dockerd[1815]: time="2026-03-07T01:51:50.763563315Z" level=info msg="Loading containers: start." Mar 7 01:51:52.080920 kernel: Initializing XFRM netlink socket Mar 7 01:51:52.811412 systemd-networkd[1251]: docker0: Link UP Mar 7 01:51:52.944006 dockerd[1815]: time="2026-03-07T01:51:52.941992194Z" level=info msg="Loading containers: done." Mar 7 01:51:53.209532 dockerd[1815]: time="2026-03-07T01:51:53.207036867Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 7 01:51:53.209532 dockerd[1815]: time="2026-03-07T01:51:53.207747465Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 7 01:51:53.209532 dockerd[1815]: time="2026-03-07T01:51:53.208658169Z" level=info msg="Daemon has completed initialization" Mar 7 01:51:53.442000 dockerd[1815]: time="2026-03-07T01:51:53.436774491Z" level=info msg="API listen on /run/docker.sock" Mar 7 01:51:53.449730 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 7 01:51:57.703103 containerd[1587]: time="2026-03-07T01:51:57.702723112Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\"" Mar 7 01:51:59.454247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2361322520.mount: Deactivated successfully. Mar 7 01:51:59.458093 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 7 01:51:59.480357 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:52:00.895910 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:52:00.909339 (kubelet)[2026]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:52:01.629733 kubelet[2026]: E0307 01:52:01.626102 2026 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:52:01.645620 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:52:01.646329 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:52:11.824965 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 7 01:52:11.941176 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
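The overlay2 warning above is informational: with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled, the daemon falls back to the slower naive diff when building images but runs normally otherwise. Once "API listen on /run/docker.sock" appears, the driver and version can be confirmed, for example:

    docker info --format '{{.Driver}}'              # expect: overlay2
    docker version --format '{{.Server.Version}}'   # expect: 26.1.0, per the log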
Mar 7 01:52:13.666098 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:52:13.779767 (kubelet)[2092]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:52:14.231297 containerd[1587]: time="2026-03-07T01:52:14.230917578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:52:14.248096 containerd[1587]: time="2026-03-07T01:52:14.247025928Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116186" Mar 7 01:52:14.256858 containerd[1587]: time="2026-03-07T01:52:14.256699091Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:52:14.307505 containerd[1587]: time="2026-03-07T01:52:14.306445043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:52:14.315524 containerd[1587]: time="2026-03-07T01:52:14.314582338Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 16.611616889s" Mar 7 01:52:14.315524 containerd[1587]: time="2026-03-07T01:52:14.314701342Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\"" Mar 7 01:52:14.343570 containerd[1587]: time="2026-03-07T01:52:14.340222095Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\"" Mar 7 01:52:14.591862 kubelet[2092]: E0307 01:52:14.591645 2092 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:52:14.613088 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:52:14.615710 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:52:24.821654 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 7 01:52:24.859979 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
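The PullImage/ImageCreate pairs above are CRI calls into containerd; the reported 16.6s matches the gap between the PullImage request at 01:51:57 and its completion here. If crictl is installed, the same pull can be driven by hand against the socket shown in the earlier config dump (ContainerdEndpoint=/run/containerd/containerd.sock):

    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
      pull registry.k8s.io/kube-apiserver:v1.33.9
    sudo crictl images | grep kube-apiserver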
Mar 7 01:52:26.046123 containerd[1587]: time="2026-03-07T01:52:26.045455076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:52:26.051014 containerd[1587]: time="2026-03-07T01:52:26.050967393Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021810" Mar 7 01:52:26.060135 containerd[1587]: time="2026-03-07T01:52:26.059725703Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:52:26.073983 containerd[1587]: time="2026-03-07T01:52:26.072493862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:52:26.078007 containerd[1587]: time="2026-03-07T01:52:26.077053607Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 11.736687592s" Mar 7 01:52:26.078007 containerd[1587]: time="2026-03-07T01:52:26.077131490Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\"" Mar 7 01:52:26.102033 containerd[1587]: time="2026-03-07T01:52:26.101057294Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\"" Mar 7 01:52:26.138027 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:52:26.141864 (kubelet)[2119]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:52:27.856459 kubelet[2119]: E0307 01:52:27.856181 2119 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:52:27.878442 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:52:27.882505 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 7 01:52:35.860428 containerd[1587]: time="2026-03-07T01:52:35.858776065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:52:35.864637 containerd[1587]: time="2026-03-07T01:52:35.861981455Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162746" Mar 7 01:52:35.868442 containerd[1587]: time="2026-03-07T01:52:35.868283575Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:52:35.885315 containerd[1587]: time="2026-03-07T01:52:35.885208819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:52:35.909298 containerd[1587]: time="2026-03-07T01:52:35.908988890Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 9.807833886s" Mar 7 01:52:35.909298 containerd[1587]: time="2026-03-07T01:52:35.909049101Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\"" Mar 7 01:52:35.923461 containerd[1587]: time="2026-03-07T01:52:35.923300680Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\"" Mar 7 01:52:38.049264 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Mar 7 01:52:38.081469 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:52:38.872978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:52:38.882507 (kubelet)[2149]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:52:39.133086 kubelet[2149]: E0307 01:52:39.128309 2149 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:52:39.138924 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:52:39.139271 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:52:45.969893 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount928139994.mount: Deactivated successfully. Mar 7 01:52:49.319902 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Mar 7 01:52:49.392447 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:52:53.034291 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 7 01:52:53.062273 (kubelet)[2175]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:52:54.710161 kubelet[2175]: E0307 01:52:54.705863 2175 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:52:54.752214 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:52:54.752893 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:52:55.880491 containerd[1587]: time="2026-03-07T01:52:55.877680210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:52:55.880491 containerd[1587]: time="2026-03-07T01:52:55.879426250Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828647" Mar 7 01:52:55.883323 containerd[1587]: time="2026-03-07T01:52:55.882107783Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:52:55.897030 containerd[1587]: time="2026-03-07T01:52:55.892504635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:52:55.897205 containerd[1587]: time="2026-03-07T01:52:55.896921759Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 19.97353712s" Mar 7 01:52:55.897205 containerd[1587]: time="2026-03-07T01:52:55.897110519Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\"" Mar 7 01:52:55.921560 containerd[1587]: time="2026-03-07T01:52:55.921474360Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Mar 7 01:52:57.429565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3199569998.mount: Deactivated successfully. Mar 7 01:53:04.830535 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Mar 7 01:53:04.880023 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:53:05.661729 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 7 01:53:05.670691 (kubelet)[2249]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:53:05.914144 kubelet[2249]: E0307 01:53:05.913988 2249 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:53:05.934445 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:53:05.934916 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:53:06.959304 containerd[1587]: time="2026-03-07T01:53:06.956624509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:53:06.959304 containerd[1587]: time="2026-03-07T01:53:06.958865431Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Mar 7 01:53:06.968701 containerd[1587]: time="2026-03-07T01:53:06.968553330Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:53:06.980598 containerd[1587]: time="2026-03-07T01:53:06.980451662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:53:06.983520 containerd[1587]: time="2026-03-07T01:53:06.982119228Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 11.060583037s" Mar 7 01:53:06.983520 containerd[1587]: time="2026-03-07T01:53:06.982216999Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Mar 7 01:53:06.992270 containerd[1587]: time="2026-03-07T01:53:06.987696802Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 7 01:53:08.046019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2006034167.mount: Deactivated successfully. 
Mar 7 01:53:08.094411 containerd[1587]: time="2026-03-07T01:53:08.093005977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:53:08.099136 containerd[1587]: time="2026-03-07T01:53:08.098849428Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 7 01:53:08.102406 containerd[1587]: time="2026-03-07T01:53:08.101881830Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:53:08.136769 containerd[1587]: time="2026-03-07T01:53:08.136080956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:53:08.141870 containerd[1587]: time="2026-03-07T01:53:08.140682125Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.152931993s" Mar 7 01:53:08.141870 containerd[1587]: time="2026-03-07T01:53:08.140748361Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 7 01:53:08.146496 containerd[1587]: time="2026-03-07T01:53:08.145970554Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Mar 7 01:53:12.567085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount253682231.mount: Deactivated successfully. Mar 7 01:53:16.107124 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Mar 7 01:53:16.252334 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:53:17.635096 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:53:17.668479 (kubelet)[2309]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:53:17.985161 kubelet[2309]: E0307 01:53:17.983337 2309 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:53:17.989745 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:53:17.997626 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 7 01:53:25.821538 containerd[1587]: time="2026-03-07T01:53:25.820344781Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:53:25.821538 containerd[1587]: time="2026-03-07T01:53:25.820992255Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718840" Mar 7 01:53:25.824176 containerd[1587]: time="2026-03-07T01:53:25.824091573Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:53:25.835851 containerd[1587]: time="2026-03-07T01:53:25.834587560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:53:25.842613 containerd[1587]: time="2026-03-07T01:53:25.841210676Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 17.695195391s" Mar 7 01:53:25.842613 containerd[1587]: time="2026-03-07T01:53:25.841422758Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Mar 7 01:53:28.163140 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Mar 7 01:53:28.230394 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:53:29.571606 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:53:29.753572 (kubelet)[2386]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:53:30.089622 kubelet[2386]: E0307 01:53:30.088653 2386 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:53:30.133476 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:53:30.134400 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:53:35.926014 update_engine[1574]: I20260307 01:53:35.924705 1574 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Mar 7 01:53:35.926014 update_engine[1574]: I20260307 01:53:35.924963 1574 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Mar 7 01:53:35.926014 update_engine[1574]: I20260307 01:53:35.925563 1574 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Mar 7 01:53:35.934397 update_engine[1574]: I20260307 01:53:35.930393 1574 omaha_request_params.cc:62] Current group set to lts Mar 7 01:53:35.938860 update_engine[1574]: I20260307 01:53:35.937276 1574 update_attempter.cc:499] Already updated boot flags. Skipping. Mar 7 01:53:35.938860 update_engine[1574]: I20260307 01:53:35.937327 1574 update_attempter.cc:643] Scheduling an action processor start. 
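By this point kubelet.service has been through nine scheduled restarts, each attempt failing on the same missing config file. The restart accounting can be read straight from the unit, for example:

    systemctl show kubelet -p NRestarts -p Result -p ExecMainStatus
    journalctl -u kubelet -n 20 --no-pager   # the most recent failure output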
Mar 7 01:53:35.938860 update_engine[1574]: I20260307 01:53:35.937359 1574 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 7 01:53:35.938860 update_engine[1574]: I20260307 01:53:35.937526 1574 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Mar 7 01:53:35.938860 update_engine[1574]: I20260307 01:53:35.937701 1574 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 7 01:53:35.938860 update_engine[1574]: I20260307 01:53:35.937721 1574 omaha_request_action.cc:272] Request: Mar 7 01:53:35.938860 update_engine[1574]: [Omaha request XML not preserved in this capture] Mar 7 01:53:35.938860 update_engine[1574]: I20260307 01:53:35.937733 1574 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 01:53:35.948993 update_engine[1574]: I20260307 01:53:35.948930 1574 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 01:53:35.950000 update_engine[1574]: I20260307 01:53:35.949954 1574 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 7 01:53:35.957182 locksmithd[1639]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Mar 7 01:53:35.969036 update_engine[1574]: E20260307 01:53:35.968890 1574 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 01:53:35.969189 update_engine[1574]: I20260307 01:53:35.969075 1574 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Mar 7 01:53:36.327559 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:53:36.345312 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:53:36.455367 systemd[1]: Reloading requested from client PID 2405 ('systemctl') (unit session-9.scope)... Mar 7 01:53:36.455592 systemd[1]: Reloading... Mar 7 01:53:36.637968 zram_generator::config[2447]: No configuration found. Mar 7 01:53:36.958387 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:53:37.091363 systemd[1]: Reloading finished in 633 ms. Mar 7 01:53:37.253170 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 7 01:53:37.253426 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 7 01:53:37.255700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:53:37.270455 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:53:37.701388 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:53:37.729020 (kubelet)[2505]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 01:53:37.957388 kubelet[2505]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 01:53:37.957388 kubelet[2505]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
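The update_engine errors above are benign by design: the Omaha request is posted to the literal host "disabled" (see "Posting an Omaha request to disabled"), so name resolution fails on purpose and locksmithd never progresses past checking. A quick way to inspect this, assuming the usual Flatcar locations for the update configuration:

    update_engine_client -status    # CURRENT_OP, NEW_VERSION, etc.
    grep -h SERVER /etc/flatcar/update.conf /usr/share/flatcar/update.conf 2>/dev/null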
Image garbage collector will get sandbox image information from CRI. Mar 7 01:53:37.957388 kubelet[2505]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 01:53:37.971055 kubelet[2505]: I0307 01:53:37.965369 2505 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 7 01:53:40.167505 kubelet[2505]: I0307 01:53:40.163215 2505 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 7 01:53:40.167505 kubelet[2505]: I0307 01:53:40.164167 2505 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 01:53:40.174332 kubelet[2505]: I0307 01:53:40.171378 2505 server.go:956] "Client rotation is on, will bootstrap in background" Mar 7 01:53:40.572117 kubelet[2505]: I0307 01:53:40.566332 2505 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 01:53:40.591313 kubelet[2505]: E0307 01:53:40.591222 2505 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.118:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 01:53:40.861900 kubelet[2505]: E0307 01:53:40.858735 2505 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 01:53:40.861900 kubelet[2505]: I0307 01:53:40.858887 2505 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 7 01:53:41.090725 kubelet[2505]: I0307 01:53:41.086143 2505 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 7 01:53:41.090725 kubelet[2505]: I0307 01:53:41.087785 2505 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 01:53:41.156961 kubelet[2505]: I0307 01:53:41.087994 2505 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Mar 7 01:53:41.156961 kubelet[2505]: I0307 01:53:41.151779 2505 topology_manager.go:138] "Creating topology manager with none policy" Mar 7 01:53:41.156961 kubelet[2505]: I0307 01:53:41.153497 2505 container_manager_linux.go:303] "Creating device plugin manager" Mar 7 01:53:41.164245 kubelet[2505]: I0307 01:53:41.162555 2505 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:53:41.287074 kubelet[2505]: I0307 01:53:41.263418 2505 kubelet.go:480] "Attempting to sync node with API server" Mar 7 01:53:41.287074 kubelet[2505]: I0307 01:53:41.265344 2505 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 01:53:41.287074 kubelet[2505]: I0307 01:53:41.272889 2505 kubelet.go:386] "Adding apiserver pod source" Mar 7 01:53:41.287074 kubelet[2505]: I0307 01:53:41.273088 2505 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 01:53:41.319842 kubelet[2505]: E0307 01:53:41.292960 2505 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:53:41.319842 kubelet[2505]: E0307 01:53:41.298339 2505 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.118:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:53:41.361781 
kubelet[2505]: I0307 01:53:41.359017 2505 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 01:53:41.361781 kubelet[2505]: I0307 01:53:41.360772 2505 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 01:53:41.468345 kubelet[2505]: W0307 01:53:41.466556 2505 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 7 01:53:41.549152 kubelet[2505]: I0307 01:53:41.543242 2505 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 7 01:53:41.549152 kubelet[2505]: I0307 01:53:41.543434 2505 server.go:1289] "Started kubelet" Mar 7 01:53:41.549152 kubelet[2505]: I0307 01:53:41.545514 2505 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 01:53:41.553254 kubelet[2505]: I0307 01:53:41.553025 2505 server.go:317] "Adding debug handlers to kubelet server" Mar 7 01:53:41.567433 kubelet[2505]: I0307 01:53:41.566934 2505 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 7 01:53:41.757399 kubelet[2505]: I0307 01:53:41.751467 2505 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 01:53:41.757399 kubelet[2505]: I0307 01:53:41.753724 2505 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 01:53:41.757399 kubelet[2505]: I0307 01:53:41.756398 2505 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 01:53:41.765252 kubelet[2505]: I0307 01:53:41.763725 2505 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 7 01:53:41.765704 kubelet[2505]: I0307 01:53:41.765599 2505 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 7 01:53:41.765958 kubelet[2505]: I0307 01:53:41.765923 2505 reconciler.go:26] "Reconciler: start to sync state" Mar 7 01:53:41.792178 kubelet[2505]: E0307 01:53:41.760473 2505 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.118:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.118:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189a6c3f16b3c34f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-07 01:53:41.543330639 +0000 UTC m=+3.797010140,LastTimestamp:2026-03-07 01:53:41.543330639 +0000 UTC m=+3.797010140,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 7 01:53:41.805959 kubelet[2505]: I0307 01:53:41.803297 2505 factory.go:223] Registration of the systemd container factory successfully Mar 7 01:53:41.805959 kubelet[2505]: I0307 01:53:41.803604 2505 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 01:53:41.805959 kubelet[2505]: E0307 01:53:41.804756 2505 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.0.0.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 01:53:41.828611 kubelet[2505]: E0307 01:53:41.827741 2505 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:53:41.833641 kubelet[2505]: E0307 01:53:41.833358 2505 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="200ms" Mar 7 01:53:41.843142 kubelet[2505]: I0307 01:53:41.843063 2505 factory.go:223] Registration of the containerd container factory successfully Mar 7 01:53:41.851989 kubelet[2505]: E0307 01:53:41.851781 2505 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 01:53:41.935919 kubelet[2505]: E0307 01:53:41.930437 2505 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:53:42.032605 kubelet[2505]: I0307 01:53:42.029692 2505 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 7 01:53:42.032605 kubelet[2505]: E0307 01:53:42.031488 2505 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:53:42.037682 kubelet[2505]: E0307 01:53:42.037074 2505 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="400ms" Mar 7 01:53:42.048004 kubelet[2505]: I0307 01:53:42.044378 2505 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 7 01:53:42.048004 kubelet[2505]: I0307 01:53:42.044655 2505 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 7 01:53:42.048004 kubelet[2505]: I0307 01:53:42.044891 2505 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 7 01:53:42.048004 kubelet[2505]: I0307 01:53:42.044972 2505 kubelet.go:2436] "Starting kubelet main sync loop" Mar 7 01:53:42.048004 kubelet[2505]: E0307 01:53:42.045180 2505 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 01:53:42.273727 kubelet[2505]: E0307 01:53:42.273207 2505 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:53:42.285108 kubelet[2505]: E0307 01:53:42.283286 2505 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 01:53:42.293703 kubelet[2505]: E0307 01:53:42.293573 2505 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 01:53:42.305306 kubelet[2505]: I0307 01:53:42.304039 2505 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 7 01:53:42.305306 kubelet[2505]: I0307 01:53:42.304060 2505 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 7 01:53:42.305306 kubelet[2505]: I0307 01:53:42.304082 2505 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:53:42.315609 kubelet[2505]: I0307 01:53:42.314578 2505 policy_none.go:49] "None policy: Start" Mar 7 01:53:42.315609 kubelet[2505]: I0307 01:53:42.314872 2505 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 7 01:53:42.315609 kubelet[2505]: I0307 01:53:42.314972 2505 state_mem.go:35] "Initializing new in-memory state store" Mar 7 01:53:42.357288 kubelet[2505]: E0307 01:53:42.357113 2505 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 01:53:42.360233 kubelet[2505]: I0307 01:53:42.358068 2505 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 7 01:53:42.360233 kubelet[2505]: I0307 01:53:42.358156 2505 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 01:53:42.360233 kubelet[2505]: I0307 01:53:42.359871 2505 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 7 01:53:42.362316 kubelet[2505]: E0307 01:53:42.362285 2505 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 7 01:53:42.362450 kubelet[2505]: E0307 01:53:42.362431 2505 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 7 01:53:42.418635 kubelet[2505]: E0307 01:53:42.416578 2505 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.118:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:53:42.438749 kubelet[2505]: E0307 01:53:42.438603 2505 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="800ms" Mar 7 01:53:42.461276 kubelet[2505]: I0307 01:53:42.461031 2505 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:53:42.461789 kubelet[2505]: E0307 01:53:42.461574 2505 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: connect: connection refused" node="localhost" Mar 7 01:53:42.532980 kubelet[2505]: E0307 01:53:42.532886 2505 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:53:42.650110 kubelet[2505]: E0307 01:53:42.647579 2505 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.118:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 01:53:42.674896 kubelet[2505]: E0307 01:53:42.668775 2505 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:53:42.684013 kubelet[2505]: I0307 01:53:42.677957 2505 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:53:42.736679 kubelet[2505]: I0307 01:53:42.736454 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 7 01:53:42.736679 kubelet[2505]: I0307 01:53:42.736625 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cabc3565aac99f2f84bae765e01330a3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cabc3565aac99f2f84bae765e01330a3\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:53:42.736679 kubelet[2505]: I0307 01:53:42.736705 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cabc3565aac99f2f84bae765e01330a3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cabc3565aac99f2f84bae765e01330a3\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:53:42.745758 
kubelet[2505]: I0307 01:53:42.736766 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:53:42.745758 kubelet[2505]: E0307 01:53:42.741892 2505 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: connect: connection refused" node="localhost" Mar 7 01:53:42.745758 kubelet[2505]: E0307 01:53:42.743998 2505 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 01:53:42.747143 kubelet[2505]: I0307 01:53:42.746991 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:53:42.747143 kubelet[2505]: E0307 01:53:42.747024 2505 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:53:42.747143 kubelet[2505]: I0307 01:53:42.747061 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:53:42.747143 kubelet[2505]: I0307 01:53:42.747096 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cabc3565aac99f2f84bae765e01330a3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cabc3565aac99f2f84bae765e01330a3\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:53:42.747510 kubelet[2505]: I0307 01:53:42.747218 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:53:42.747510 kubelet[2505]: I0307 01:53:42.747353 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:53:42.754655 kubelet[2505]: E0307 01:53:42.753681 2505 kubelet.go:3305] "No need to create a mirror pod, since failed to 
get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:53:43.065925 kubelet[2505]: E0307 01:53:43.063747 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:53:43.065925 kubelet[2505]: E0307 01:53:43.064172 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:53:43.083913 containerd[1587]: time="2026-03-07T01:53:43.081699963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,}" Mar 7 01:53:43.087420 containerd[1587]: time="2026-03-07T01:53:43.082123512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,}" Mar 7 01:53:43.138741 kubelet[2505]: E0307 01:53:43.136445 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:53:43.143541 containerd[1587]: time="2026-03-07T01:53:43.142595465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cabc3565aac99f2f84bae765e01330a3,Namespace:kube-system,Attempt:0,}" Mar 7 01:53:43.159731 kubelet[2505]: I0307 01:53:43.152769 2505 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:53:43.160321 kubelet[2505]: E0307 01:53:43.160181 2505 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: connect: connection refused" node="localhost" Mar 7 01:53:43.242234 kubelet[2505]: E0307 01:53:43.241443 2505 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="1.6s" Mar 7 01:53:43.286530 kubelet[2505]: E0307 01:53:43.285176 2505 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 01:53:43.987105 kubelet[2505]: I0307 01:53:43.986789 2505 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:53:43.987105 kubelet[2505]: E0307 01:53:43.987916 2505 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: connect: connection refused" node="localhost" Mar 7 01:53:44.875222 kubelet[2505]: E0307 01:53:44.869454 2505 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="3.2s" Mar 7 01:53:45.264744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4092356933.mount: Deactivated successfully. 
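The "Failed to ensure lease exists, will retry" interval doubles on each failure: 200ms, 400ms, 800ms, 1.6s, and now 3.2s, with 6.4s coming a few entries later. A small sketch of that exponential backoff; the doubling and the starting value are read off the intervals in the log, not quoted from kubelet source:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // ensureLease stands in for the kubelet's attempt to create or renew its
    // Lease in the kube-node-lease namespace; here it always fails, as in the
    // log, because the API server is not reachable yet.
    func ensureLease() error {
        return errors.New("dial tcp 10.0.0.118:6443: connect: connection refused")
    }

    func main() {
        interval := 200 * time.Millisecond // first retry interval seen in the log
        for i := 0; i < 6; i++ {
            if err := ensureLease(); err != nil {
                fmt.Printf("Failed to ensure lease exists, will retry in %v: %v\n", interval, err)
                time.Sleep(interval)
                interval *= 2 // 200ms, 400ms, 800ms, 1.6s, 3.2s, 6.4s
            }
        }
    }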
Mar 7 01:53:45.317078 kubelet[2505]: E0307 01:53:45.316437 2505 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:53:45.335704 containerd[1587]: time="2026-03-07T01:53:45.334132033Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:53:45.381055 containerd[1587]: time="2026-03-07T01:53:45.379879260Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 7 01:53:45.390748 containerd[1587]: time="2026-03-07T01:53:45.388032383Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:53:45.492257 kubelet[2505]: E0307 01:53:45.491688 2505 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 01:53:45.492257 kubelet[2505]: E0307 01:53:45.495625 2505 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.118:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:53:45.510148 containerd[1587]: time="2026-03-07T01:53:45.505358031Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 01:53:45.510148 containerd[1587]: time="2026-03-07T01:53:45.507494733Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 01:53:45.510148 containerd[1587]: time="2026-03-07T01:53:45.507658624Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:53:45.522530 containerd[1587]: time="2026-03-07T01:53:45.521006142Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:53:45.535084 containerd[1587]: time="2026-03-07T01:53:45.533952347Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:53:45.542238 containerd[1587]: time="2026-03-07T01:53:45.541006574Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 
2.45627087s" Mar 7 01:53:45.552555 containerd[1587]: time="2026-03-07T01:53:45.551002456Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.408302426s" Mar 7 01:53:45.562358 containerd[1587]: time="2026-03-07T01:53:45.561902899Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.475967061s" Mar 7 01:53:45.612644 kubelet[2505]: I0307 01:53:45.612018 2505 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:53:45.612644 kubelet[2505]: E0307 01:53:45.612597 2505 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: connect: connection refused" node="localhost" Mar 7 01:53:45.664543 kubelet[2505]: E0307 01:53:45.663748 2505 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 01:53:45.840407 update_engine[1574]: I20260307 01:53:45.838038 1574 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 01:53:45.840407 update_engine[1574]: I20260307 01:53:45.839375 1574 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 01:53:45.840407 update_engine[1574]: I20260307 01:53:45.840287 1574 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
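"Posting an Omaha request to disabled" means the update server URL is literally the string "disabled", the usual Flatcar convention for switching update checks off (SERVER=disabled in /etc/flatcar/update.conf, assuming this host follows it); libcurl then tries to resolve a host named "disabled" and fails, producing the "Could not resolve host" errors. update_engine itself is C++, so this Go sketch only illustrates the one-second-timeout fetch-and-retry pattern visible in the log, with a stand-in endpoint path and request body:

    package main

    import (
        "fmt"
        "net/http"
        "strings"
        "time"
    )

    func main() {
        const server = "disabled" // the update "URL", verbatim from the log
        client := &http.Client{Timeout: time.Second} // "Setting up timeout source: 1 seconds."

        for retry := 1; retry <= 3; retry++ {
            // Omaha checks are HTTP POSTs carrying an XML body; the path and
            // the body here are placeholders, since neither was captured.
            resp, err := client.Post("http://"+server+"/v1/update/", "text/xml",
                strings.NewReader("<request/>"))
            if err != nil {
                // Matches "Unable to get http response code: Could not resolve host: disabled".
                fmt.Printf("No HTTP response, retry %d: %v\n", retry, err)
                continue
            }
            resp.Body.Close()
            fmt.Println("status:", resp.Status)
            return
        }
        fmt.Println("Omaha request network transfer failed")
    }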
Mar 7 01:53:45.902759 update_engine[1574]: E20260307 01:53:45.891597 1574 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 01:53:45.910540 update_engine[1574]: I20260307 01:53:45.910094 1574 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Mar 7 01:53:46.219263 kubelet[2505]: E0307 01:53:46.207566 2505 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.118:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.118:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189a6c3f16b3c34f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-07 01:53:41.543330639 +0000 UTC m=+3.797010140,LastTimestamp:2026-03-07 01:53:41.543330639 +0000 UTC m=+3.797010140,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 7 01:53:46.672391 kubelet[2505]: E0307 01:53:46.671472 2505 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.118:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 01:53:48.254307 kubelet[2505]: E0307 01:53:48.234627 2505 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="6.4s" Mar 7 01:53:48.288938 containerd[1587]: time="2026-03-07T01:53:48.288429585Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:53:48.312751 containerd[1587]: time="2026-03-07T01:53:48.299925121Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:53:48.312751 containerd[1587]: time="2026-03-07T01:53:48.299955330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:53:48.316316 containerd[1587]: time="2026-03-07T01:53:48.316201216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:53:48.371878 containerd[1587]: time="2026-03-07T01:53:48.368028335Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:53:48.371878 containerd[1587]: time="2026-03-07T01:53:48.368169560Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:53:48.371878 containerd[1587]: time="2026-03-07T01:53:48.368193407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:53:48.371878 containerd[1587]: time="2026-03-07T01:53:48.368312909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:53:48.390218 containerd[1587]: time="2026-03-07T01:53:48.385425979Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:53:48.390218 containerd[1587]: time="2026-03-07T01:53:48.386321965Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:53:48.390218 containerd[1587]: time="2026-03-07T01:53:48.386358016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:53:48.390218 containerd[1587]: time="2026-03-07T01:53:48.388194816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:53:48.881310 kubelet[2505]: E0307 01:53:48.880537 2505 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 01:53:48.957368 kubelet[2505]: I0307 01:53:48.955179 2505 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:53:48.961673 kubelet[2505]: E0307 01:53:48.961528 2505 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: connect: connection refused" node="localhost" Mar 7 01:53:49.310069 containerd[1587]: time="2026-03-07T01:53:49.309433966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cabc3565aac99f2f84bae765e01330a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"8648506f9fb7aec740cae14eb74de00c7e76c0cc62ff6e4e3fd7a0dbd2c727ba\"" Mar 7 01:53:49.320447 kubelet[2505]: E0307 01:53:49.317962 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:53:49.544283 kubelet[2505]: E0307 01:53:49.539267 2505 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:53:49.544283 kubelet[2505]: E0307 01:53:49.540319 2505 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.118:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:53:49.552858 containerd[1587]: time="2026-03-07T01:53:49.552169893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4ce1142970fabcf71a9aac5e8509a6b73ce933be60da115362e8dd32e1015ac\"" Mar 7 01:53:49.555922 kubelet[2505]: E0307 01:53:49.553662 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:53:49.606625 containerd[1587]: time="2026-03-07T01:53:49.605189933Z" level=info msg="CreateContainer within sandbox \"8648506f9fb7aec740cae14eb74de00c7e76c0cc62ff6e4e3fd7a0dbd2c727ba\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 7 01:53:49.618747 containerd[1587]: time="2026-03-07T01:53:49.614420575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb3c7191ad2b83794f73779e2ed8d0d9bbb39205483c880450101f187aa4b6ae\"" Mar 7 01:53:49.618927 kubelet[2505]: E0307 01:53:49.615601 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:53:49.626584 containerd[1587]: time="2026-03-07T01:53:49.625246001Z" level=info msg="CreateContainer within sandbox \"e4ce1142970fabcf71a9aac5e8509a6b73ce933be60da115362e8dd32e1015ac\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 7 01:53:49.637109 containerd[1587]: time="2026-03-07T01:53:49.637066828Z" level=info msg="CreateContainer within sandbox \"cb3c7191ad2b83794f73779e2ed8d0d9bbb39205483c880450101f187aa4b6ae\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 7 01:53:49.679120 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1167677083.mount: Deactivated successfully. Mar 7 01:53:49.694251 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount798551643.mount: Deactivated successfully. Mar 7 01:53:49.731689 containerd[1587]: time="2026-03-07T01:53:49.731292324Z" level=info msg="CreateContainer within sandbox \"8648506f9fb7aec740cae14eb74de00c7e76c0cc62ff6e4e3fd7a0dbd2c727ba\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7b7436b09e72fbf4dd16f569961365ad80739d8633d80f0ff4d596501ec182a2\"" Mar 7 01:53:49.736053 containerd[1587]: time="2026-03-07T01:53:49.736015955Z" level=info msg="StartContainer for \"7b7436b09e72fbf4dd16f569961365ad80739d8633d80f0ff4d596501ec182a2\"" Mar 7 01:53:49.748861 containerd[1587]: time="2026-03-07T01:53:49.748730218Z" level=info msg="CreateContainer within sandbox \"cb3c7191ad2b83794f73779e2ed8d0d9bbb39205483c880450101f187aa4b6ae\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"341073464790a8d31692d61dd46ac90963ec8993cb7a1f19cc2b765c92059f6a\"" Mar 7 01:53:49.752184 containerd[1587]: time="2026-03-07T01:53:49.752149954Z" level=info msg="StartContainer for \"341073464790a8d31692d61dd46ac90963ec8993cb7a1f19cc2b765c92059f6a\"" Mar 7 01:53:49.790604 kubelet[2505]: E0307 01:53:49.790370 2505 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 01:53:49.806426 containerd[1587]: time="2026-03-07T01:53:49.806319353Z" level=info msg="CreateContainer within sandbox \"e4ce1142970fabcf71a9aac5e8509a6b73ce933be60da115362e8dd32e1015ac\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cd12bfb9149f3438212664d1ee6e5d7c04cc46422909d23e5f9f42490cfc705e\"" Mar 7 01:53:49.809769 containerd[1587]: time="2026-03-07T01:53:49.809743629Z" level=info msg="StartContainer for 
\"cd12bfb9149f3438212664d1ee6e5d7c04cc46422909d23e5f9f42490cfc705e\"" Mar 7 01:53:50.265126 containerd[1587]: time="2026-03-07T01:53:50.264947956Z" level=info msg="StartContainer for \"341073464790a8d31692d61dd46ac90963ec8993cb7a1f19cc2b765c92059f6a\" returns successfully" Mar 7 01:53:50.472956 containerd[1587]: time="2026-03-07T01:53:50.471071447Z" level=info msg="StartContainer for \"cd12bfb9149f3438212664d1ee6e5d7c04cc46422909d23e5f9f42490cfc705e\" returns successfully" Mar 7 01:53:50.508100 containerd[1587]: time="2026-03-07T01:53:50.503413199Z" level=info msg="StartContainer for \"7b7436b09e72fbf4dd16f569961365ad80739d8633d80f0ff4d596501ec182a2\" returns successfully" Mar 7 01:53:51.410176 kubelet[2505]: E0307 01:53:51.409445 2505 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:53:51.410176 kubelet[2505]: E0307 01:53:51.409700 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:53:51.426040 kubelet[2505]: E0307 01:53:51.424102 2505 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:53:51.426040 kubelet[2505]: E0307 01:53:51.424300 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:53:51.436584 kubelet[2505]: E0307 01:53:51.436539 2505 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:53:51.445592 kubelet[2505]: E0307 01:53:51.443603 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:53:52.366092 kubelet[2505]: E0307 01:53:52.363374 2505 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 7 01:53:52.481965 kubelet[2505]: E0307 01:53:52.479574 2505 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:53:52.481965 kubelet[2505]: E0307 01:53:52.479969 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:53:52.530036 kubelet[2505]: E0307 01:53:52.529950 2505 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:53:52.530722 kubelet[2505]: E0307 01:53:52.530695 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:53:53.508990 kubelet[2505]: E0307 01:53:53.505741 2505 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:53:53.508990 kubelet[2505]: E0307 01:53:53.507076 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:53:53.531577 kubelet[2505]: E0307 01:53:53.523962 2505 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:53:53.531577 kubelet[2505]: E0307 01:53:53.524499 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:53:55.430205 kubelet[2505]: E0307 01:53:55.426951 2505 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:53:55.430205 kubelet[2505]: E0307 01:53:55.429372 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:53:55.438449 kubelet[2505]: I0307 01:53:55.437552 2505 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:53:55.838193 update_engine[1574]: I20260307 01:53:55.836246 1574 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 01:53:55.838193 update_engine[1574]: I20260307 01:53:55.837218 1574 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 01:53:55.841750 update_engine[1574]: I20260307 01:53:55.841691 1574 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 7 01:53:55.860024 update_engine[1574]: E20260307 01:53:55.859946 1574 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 01:53:55.860371 update_engine[1574]: I20260307 01:53:55.860258 1574 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Mar 7 01:53:58.355691 kubelet[2505]: E0307 01:53:58.345383 2505 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:53:58.355691 kubelet[2505]: E0307 01:53:58.345706 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:53:58.620637 kubelet[2505]: E0307 01:53:58.620426 2505 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 7 01:53:58.670875 kubelet[2505]: E0307 01:53:58.668698 2505 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.189a6c3f16b3c34f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-07 01:53:41.543330639 +0000 UTC m=+3.797010140,LastTimestamp:2026-03-07 01:53:41.543330639 +0000 UTC m=+3.797010140,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 7 01:53:58.757691 kubelet[2505]: E0307 01:53:58.756022 2505 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.189a6c3f181cc4fd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:cgroup v1 support is in maintenance mode, please migrate to cgroup v2,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-07 01:53:41.566989565 +0000 UTC m=+3.820669066,LastTimestamp:2026-03-07 01:53:41.566989565 +0000 UTC m=+3.820669066,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 7 01:53:58.779151 kubelet[2505]: I0307 01:53:58.779051 2505 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 7 01:53:58.779151 kubelet[2505]: E0307 01:53:58.779149 2505 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 7 01:53:58.903588 kubelet[2505]: E0307 01:53:58.901980 2505 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.189a6c3f29135f18 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-07 01:53:41.851586328 +0000 UTC m=+4.105265869,LastTimestamp:2026-03-07 01:53:41.851586328 +0000 UTC m=+4.105265869,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 7 01:53:58.928372 kubelet[2505]: E0307 01:53:58.928234 2505 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:53:59.030096 kubelet[2505]: E0307 01:53:59.030005 2505 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:53:59.137463 kubelet[2505]: E0307 01:53:59.134647 2505 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:53:59.259303 kubelet[2505]: E0307 01:53:59.243664 2505 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:53:59.359911 kubelet[2505]: E0307 01:53:59.345012 2505 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:53:59.456416 kubelet[2505]: E0307 01:53:59.451898 2505 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:53:59.553396 kubelet[2505]: E0307 01:53:59.553213 2505 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:53:59.656539 kubelet[2505]: E0307 01:53:59.656437 2505 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:53:59.773236 kubelet[2505]: E0307 01:53:59.761985 2505 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:53:59.881350 kubelet[2505]: E0307 01:53:59.863463 2505 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:53:59.965209 kubelet[2505]: E0307 01:53:59.964771 2505 kubelet_node_status.go:466] "Error getting the 
current node from lister" err="node \"localhost\" not found" Mar 7 01:54:00.068637 kubelet[2505]: E0307 01:54:00.066282 2505 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:54:00.203381 kubelet[2505]: E0307 01:54:00.178954 2505 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:54:00.284942 kubelet[2505]: E0307 01:54:00.283564 2505 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:54:00.395763 kubelet[2505]: E0307 01:54:00.394887 2505 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:54:00.430486 kubelet[2505]: I0307 01:54:00.421233 2505 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 7 01:54:00.430486 kubelet[2505]: I0307 01:54:00.429412 2505 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 7 01:54:00.547709 kubelet[2505]: E0307 01:54:00.546307 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:00.553745 kubelet[2505]: E0307 01:54:00.553708 2505 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 7 01:54:00.554316 kubelet[2505]: I0307 01:54:00.553911 2505 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 7 01:54:00.591332 kubelet[2505]: I0307 01:54:00.590541 2505 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 7 01:54:01.241120 kubelet[2505]: I0307 01:54:01.239067 2505 apiserver.go:52] "Watching apiserver" Mar 7 01:54:01.261390 kubelet[2505]: E0307 01:54:01.258748 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:01.261633 kubelet[2505]: E0307 01:54:01.260900 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:01.269056 kubelet[2505]: I0307 01:54:01.268458 2505 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 7 01:54:01.420525 kubelet[2505]: I0307 01:54:01.417195 2505 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 7 01:54:01.456782 kubelet[2505]: E0307 01:54:01.453288 2505 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 7 01:54:01.456782 kubelet[2505]: E0307 01:54:01.453595 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:01.882412 kubelet[2505]: I0307 01:54:01.881755 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.881679578 podStartE2EDuration="1.881679578s" podCreationTimestamp="2026-03-07 01:54:00 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:54:01.877562658 +0000 UTC m=+24.131242159" watchObservedRunningTime="2026-03-07 01:54:01.881679578 +0000 UTC m=+24.135359079" Mar 7 01:54:01.993996 kubelet[2505]: I0307 01:54:01.991250 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.991223414 podStartE2EDuration="1.991223414s" podCreationTimestamp="2026-03-07 01:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:54:01.934086169 +0000 UTC m=+24.187765671" watchObservedRunningTime="2026-03-07 01:54:01.991223414 +0000 UTC m=+24.244902955" Mar 7 01:54:02.002763 kubelet[2505]: I0307 01:54:01.997582 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.997560351 podStartE2EDuration="1.997560351s" podCreationTimestamp="2026-03-07 01:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:54:01.997194414 +0000 UTC m=+24.250873914" watchObservedRunningTime="2026-03-07 01:54:01.997560351 +0000 UTC m=+24.251239862" Mar 7 01:54:02.616971 kubelet[2505]: E0307 01:54:02.616714 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:05.884298 update_engine[1574]: I20260307 01:54:05.840747 1574 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 01:54:06.015990 update_engine[1574]: I20260307 01:54:05.943461 1574 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 01:54:06.015990 update_engine[1574]: I20260307 01:54:05.954205 1574 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 7 01:54:06.035412 update_engine[1574]: E20260307 01:54:06.026127 1574 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 01:54:06.035412 update_engine[1574]: I20260307 01:54:06.026336 1574 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 7 01:54:06.035412 update_engine[1574]: I20260307 01:54:06.026353 1574 omaha_request_action.cc:617] Omaha request response: Mar 7 01:54:06.035412 update_engine[1574]: E20260307 01:54:06.026911 1574 omaha_request_action.cc:636] Omaha request network transfer failed. Mar 7 01:54:06.035412 update_engine[1574]: I20260307 01:54:06.031093 1574 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Mar 7 01:54:06.035412 update_engine[1574]: I20260307 01:54:06.031126 1574 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 7 01:54:06.035412 update_engine[1574]: I20260307 01:54:06.031137 1574 update_attempter.cc:306] Processing Done. Mar 7 01:54:06.035412 update_engine[1574]: E20260307 01:54:06.031194 1574 update_attempter.cc:619] Update failed. 
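After the failed check, update_engine converts the transfer failure to kActionCodeOmahaErrorInHTTPResponse, posts an error event (again to "disabled", which also fails), and schedules another attempt; the "Next update check in 44m12s" below is consistent with a roughly 45-minute period minus random fuzz, though the exact base and fuzz are assumptions here, not stated in the log. A sketch of that kind of fuzzed scheduling:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func main() {
        // Assumed constants: a 45-minute base period with +/-5 minutes of
        // random fuzz, which would make 44m12s a plausible draw. The real
        // values live in update_engine's C++ sources, not in this log.
        const base = 45 * time.Minute
        const fuzz = 10 * time.Minute

        // A uniform offset in [-fuzz/2, +fuzz/2) spreads checks out so a
        // fleet of machines does not hit the update server simultaneously.
        offset := time.Duration(rand.Int63n(int64(fuzz))) - fuzz/2
        fmt.Printf("Next update check in %v\n", (base + offset).Round(time.Second))
    }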
Mar 7 01:54:06.035412 update_engine[1574]: I20260307 01:54:06.031213 1574 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Mar 7 01:54:06.035412 update_engine[1574]: I20260307 01:54:06.031225 1574 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Mar 7 01:54:06.035412 update_engine[1574]: I20260307 01:54:06.031236 1574 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Mar 7 01:54:06.035412 update_engine[1574]: I20260307 01:54:06.031345 1574 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 7 01:54:06.035412 update_engine[1574]: I20260307 01:54:06.031394 1574 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 7 01:54:06.035412 update_engine[1574]: I20260307 01:54:06.031411 1574 omaha_request_action.cc:272] Request: Mar 7 01:54:06.035412 update_engine[1574]: [Omaha request XML body not captured in this log] Mar 7 01:54:06.040224 update_engine[1574]: I20260307 01:54:06.031423 1574 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 01:54:06.040224 update_engine[1574]: I20260307 01:54:06.034896 1574 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 01:54:06.040224 update_engine[1574]: I20260307 01:54:06.035346 1574 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 7 01:54:06.044510 locksmithd[1639]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Mar 7 01:54:06.090279 update_engine[1574]: E20260307 01:54:06.081907 1574 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 01:54:06.090279 update_engine[1574]: I20260307 01:54:06.082086 1574 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 7 01:54:06.090279 update_engine[1574]: I20260307 01:54:06.082105 1574 omaha_request_action.cc:617] Omaha request response: Mar 7 01:54:06.090279 update_engine[1574]: I20260307 01:54:06.082118 1574 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 7 01:54:06.090279 update_engine[1574]: I20260307 01:54:06.082127 1574 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 7 01:54:06.090279 update_engine[1574]: I20260307 01:54:06.082139 1574 update_attempter.cc:306] Processing Done. Mar 7 01:54:06.090279 update_engine[1574]: I20260307 01:54:06.082151 1574 update_attempter.cc:310] Error event sent. Mar 7 01:54:06.090279 update_engine[1574]: I20260307 01:54:06.082168 1574 update_check_scheduler.cc:74] Next update check in 44m12s Mar 7 01:54:06.093521 locksmithd[1639]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Mar 7 01:54:07.293440 systemd[1]: Reloading requested from client PID 2801 ('systemctl') (unit session-9.scope)... Mar 7 01:54:07.350579 systemd[1]: Reloading... Mar 7 01:54:08.025963 kubelet[2505]: E0307 01:54:08.018625 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:09.281925 zram_generator::config[2841]: No configuration found.
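The recurring dns.go "Nameserver limits exceeded" warning exists because the glibc resolver only honors the first three nameserver entries in /etc/resolv.conf (MAXNS=3); with 1.1.1.1, 1.0.0.1, and 8.8.8.8 plus at least one more configured, the extras are silently dropped. A sketch of the same check, assuming a conventional resolv.conf layout:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const maxNS = 3 // glibc's MAXNS: only the first three nameservers are used

        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNS {
            // The condition behind the kubelet's warning: everything past
            // the first three entries is ignored by the resolver.
            fmt.Printf("Nameserver limits exceeded, applied nameserver line: %s\n",
                strings.Join(servers[:maxNS], " "))
        }
    }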
Mar 7 01:54:09.389576 kubelet[2505]: E0307 01:54:09.389523 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:10.845200 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:54:11.928707 systemd[1]: Reloading finished in 4571 ms. Mar 7 01:54:12.559306 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:54:12.789581 systemd[1]: kubelet.service: Deactivated successfully. Mar 7 01:54:12.793594 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:54:12.833634 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:54:14.139069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:54:14.179295 (kubelet)[2895]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 01:54:14.566532 kubelet[2895]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 01:54:14.566532 kubelet[2895]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 7 01:54:14.566532 kubelet[2895]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 01:54:14.566532 kubelet[2895]: I0307 01:54:14.557468 2895 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 7 01:54:14.686745 kubelet[2895]: I0307 01:54:14.678295 2895 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 7 01:54:14.686745 kubelet[2895]: I0307 01:54:14.678348 2895 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 01:54:14.686745 kubelet[2895]: I0307 01:54:14.679072 2895 server.go:956] "Client rotation is on, will bootstrap in background" Mar 7 01:54:14.726779 kubelet[2895]: I0307 01:54:14.725120 2895 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 7 01:54:14.914618 kubelet[2895]: I0307 01:54:14.910744 2895 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 01:54:14.973788 kubelet[2895]: E0307 01:54:14.971493 2895 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 01:54:14.973788 kubelet[2895]: I0307 01:54:14.971671 2895 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 7 01:54:15.038957 kubelet[2895]: I0307 01:54:15.029522 2895 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 7 01:54:15.038957 kubelet[2895]: I0307 01:54:15.033666 2895 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 01:54:15.038957 kubelet[2895]: I0307 01:54:15.033709 2895 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Mar 7 01:54:15.038957 kubelet[2895]: I0307 01:54:15.033985 2895 topology_manager.go:138] "Creating topology manager with none policy" Mar 7 01:54:15.040247 kubelet[2895]: I0307 01:54:15.034002 2895 container_manager_linux.go:303] "Creating device plugin manager" Mar 7 01:54:15.040247 kubelet[2895]: I0307 01:54:15.034143 2895 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:54:15.047841 kubelet[2895]: I0307 01:54:15.046996 2895 kubelet.go:480] "Attempting to sync node with API server" Mar 7 01:54:15.047841 kubelet[2895]: I0307 01:54:15.047377 2895 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 01:54:15.047841 kubelet[2895]: I0307 01:54:15.047456 2895 kubelet.go:386] "Adding apiserver pod source" Mar 7 01:54:15.047841 kubelet[2895]: I0307 01:54:15.047475 2895 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 01:54:15.069217 kubelet[2895]: I0307 01:54:15.068543 2895 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 01:54:15.070139 kubelet[2895]: I0307 01:54:15.070011 2895 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 01:54:15.150864 kubelet[2895]: I0307 01:54:15.150480 2895 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 7 01:54:15.150864 kubelet[2895]: I0307 01:54:15.150554 2895 server.go:1289] "Started kubelet" Mar 7 01:54:15.157717 kubelet[2895]: I0307 01:54:15.157574 2895 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 01:54:15.158156 kubelet[2895]: I0307 01:54:15.157997 2895 
ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 01:54:15.178336 kubelet[2895]: I0307 01:54:15.176580 2895 server.go:317] "Adding debug handlers to kubelet server" Mar 7 01:54:15.185425 kubelet[2895]: I0307 01:54:15.181671 2895 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 7 01:54:15.188644 kubelet[2895]: I0307 01:54:15.187993 2895 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 01:54:15.190450 kubelet[2895]: I0307 01:54:15.190423 2895 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 01:54:15.196183 kubelet[2895]: I0307 01:54:15.196157 2895 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 7 01:54:15.232063 kubelet[2895]: E0307 01:54:15.216092 2895 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:54:15.232063 kubelet[2895]: I0307 01:54:15.216577 2895 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 7 01:54:15.232063 kubelet[2895]: I0307 01:54:15.216942 2895 reconciler.go:26] "Reconciler: start to sync state" Mar 7 01:54:15.589527 kubelet[2895]: I0307 01:54:15.588514 2895 factory.go:223] Registration of the systemd container factory successfully Mar 7 01:54:15.612889 kubelet[2895]: I0307 01:54:15.609079 2895 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 01:54:15.628991 kubelet[2895]: I0307 01:54:15.628896 2895 factory.go:223] Registration of the containerd container factory successfully Mar 7 01:54:15.635554 kubelet[2895]: E0307 01:54:15.631585 2895 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 01:54:15.958897 kubelet[2895]: I0307 01:54:15.956194 2895 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 7 01:54:15.965741 kubelet[2895]: I0307 01:54:15.965650 2895 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 7 01:54:15.965990 kubelet[2895]: I0307 01:54:15.965970 2895 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 7 01:54:15.966093 kubelet[2895]: I0307 01:54:15.966080 2895 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 7 01:54:15.966155 kubelet[2895]: I0307 01:54:15.966146 2895 kubelet.go:2436] "Starting kubelet main sync loop" Mar 7 01:54:15.966280 kubelet[2895]: E0307 01:54:15.966249 2895 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 01:54:16.061598 kubelet[2895]: I0307 01:54:16.061387 2895 apiserver.go:52] "Watching apiserver" Mar 7 01:54:16.067844 kubelet[2895]: E0307 01:54:16.066967 2895 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 7 01:54:16.291414 kubelet[2895]: E0307 01:54:16.274442 2895 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 7 01:54:16.468878 kubelet[2895]: I0307 01:54:16.468393 2895 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 7 01:54:16.481602 kubelet[2895]: I0307 01:54:16.474968 2895 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 7 01:54:16.481602 kubelet[2895]: I0307 01:54:16.476644 2895 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:54:16.481602 kubelet[2895]: I0307 01:54:16.478879 2895 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 7 01:54:16.481602 kubelet[2895]: I0307 01:54:16.478899 2895 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 7 01:54:16.481602 kubelet[2895]: I0307 01:54:16.478934 2895 policy_none.go:49] "None policy: Start" Mar 7 01:54:16.481602 kubelet[2895]: I0307 01:54:16.478954 2895 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 7 01:54:16.481602 kubelet[2895]: I0307 01:54:16.478979 2895 state_mem.go:35] "Initializing new in-memory state store" Mar 7 01:54:16.481602 kubelet[2895]: I0307 01:54:16.481440 2895 state_mem.go:75] "Updated machine memory state" Mar 7 01:54:16.491145 kubelet[2895]: E0307 01:54:16.491104 2895 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 01:54:16.509253 kubelet[2895]: I0307 01:54:16.509213 2895 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 7 01:54:16.509489 kubelet[2895]: I0307 01:54:16.509431 2895 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 01:54:16.510180 kubelet[2895]: I0307 01:54:16.510159 2895 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 7 01:54:16.518931 kubelet[2895]: I0307 01:54:16.517211 2895 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 7 01:54:16.520362 containerd[1587]: time="2026-03-07T01:54:16.520071961Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 7 01:54:16.527735 kubelet[2895]: E0307 01:54:16.526095 2895 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 7 01:54:16.528574 kubelet[2895]: I0307 01:54:16.528237 2895 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 7 01:54:16.951117 kubelet[2895]: I0307 01:54:16.950394 2895 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 7 01:54:17.039500 kubelet[2895]: I0307 01:54:16.964502 2895 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:54:17.125526 kubelet[2895]: I0307 01:54:17.042966 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cabc3565aac99f2f84bae765e01330a3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cabc3565aac99f2f84bae765e01330a3\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:54:17.125526 kubelet[2895]: I0307 01:54:17.043018 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cabc3565aac99f2f84bae765e01330a3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cabc3565aac99f2f84bae765e01330a3\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:54:17.125526 kubelet[2895]: I0307 01:54:17.043048 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cabc3565aac99f2f84bae765e01330a3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cabc3565aac99f2f84bae765e01330a3\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:54:17.125526 kubelet[2895]: I0307 01:54:17.052492 2895 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 7 01:54:17.168196 kubelet[2895]: E0307 01:54:17.168124 2895 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 7 01:54:17.177363 kubelet[2895]: I0307 01:54:17.175022 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d096d003-f8e9-4c28-a324-d9ca608b6945-lib-modules\") pod \"kube-proxy-b9pdb\" (UID: \"d096d003-f8e9-4c28-a324-d9ca608b6945\") " pod="kube-system/kube-proxy-b9pdb" Mar 7 01:54:17.177363 kubelet[2895]: I0307 01:54:17.177027 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:54:17.177363 kubelet[2895]: I0307 01:54:17.177074 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:54:17.177363 kubelet[2895]: I0307 01:54:17.177308 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: 
\"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 7 01:54:17.189176 kubelet[2895]: I0307 01:54:17.185488 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhrnf\" (UniqueName: \"kubernetes.io/projected/d096d003-f8e9-4c28-a324-d9ca608b6945-kube-api-access-vhrnf\") pod \"kube-proxy-b9pdb\" (UID: \"d096d003-f8e9-4c28-a324-d9ca608b6945\") " pod="kube-system/kube-proxy-b9pdb" Mar 7 01:54:17.189176 kubelet[2895]: I0307 01:54:17.185612 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:54:17.189176 kubelet[2895]: I0307 01:54:17.185702 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:54:17.223261 kubelet[2895]: I0307 01:54:17.217366 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:54:17.223261 kubelet[2895]: I0307 01:54:17.222033 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d096d003-f8e9-4c28-a324-d9ca608b6945-kube-proxy\") pod \"kube-proxy-b9pdb\" (UID: \"d096d003-f8e9-4c28-a324-d9ca608b6945\") " pod="kube-system/kube-proxy-b9pdb" Mar 7 01:54:17.223261 kubelet[2895]: I0307 01:54:17.222304 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d096d003-f8e9-4c28-a324-d9ca608b6945-xtables-lock\") pod \"kube-proxy-b9pdb\" (UID: \"d096d003-f8e9-4c28-a324-d9ca608b6945\") " pod="kube-system/kube-proxy-b9pdb" Mar 7 01:54:17.346308 kubelet[2895]: I0307 01:54:17.345535 2895 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 7 01:54:17.392141 kubelet[2895]: I0307 01:54:17.359979 2895 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 7 01:54:17.471088 kubelet[2895]: E0307 01:54:17.469159 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:17.663362 kubelet[2895]: E0307 01:54:17.648096 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:17.675582 kubelet[2895]: E0307 01:54:17.674733 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:17.868920 kubelet[2895]: E0307 01:54:17.866312 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:17.885239 containerd[1587]: time="2026-03-07T01:54:17.885175292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b9pdb,Uid:d096d003-f8e9-4c28-a324-d9ca608b6945,Namespace:kube-system,Attempt:0,}" Mar 7 01:54:18.558764 kubelet[2895]: E0307 01:54:18.557771 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:18.631561 kubelet[2895]: E0307 01:54:18.585463 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:18.631561 kubelet[2895]: E0307 01:54:18.588579 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:19.460985 containerd[1587]: time="2026-03-07T01:54:19.459263731Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:54:19.460985 containerd[1587]: time="2026-03-07T01:54:19.459672539Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:54:19.460985 containerd[1587]: time="2026-03-07T01:54:19.459697618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:54:19.460985 containerd[1587]: time="2026-03-07T01:54:19.460245071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:54:19.593650 kubelet[2895]: E0307 01:54:19.590055 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:19.593650 kubelet[2895]: E0307 01:54:19.590197 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:20.470579 containerd[1587]: time="2026-03-07T01:54:20.470445882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b9pdb,Uid:d096d003-f8e9-4c28-a324-d9ca608b6945,Namespace:kube-system,Attempt:0,} returns sandbox id \"81c72cf05cf243a7d2d299190562b60a9ac8440b696ac97cd5b4d7ad9bdaaeb9\"" Mar 7 01:54:20.549542 kubelet[2895]: E0307 01:54:20.549098 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:20.640706 kubelet[2895]: E0307 01:54:20.638352 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:20.640706 kubelet[2895]: E0307 01:54:20.643768 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:20.651229 containerd[1587]: time="2026-03-07T01:54:20.645669786Z" level=info msg="CreateContainer within sandbox \"81c72cf05cf243a7d2d299190562b60a9ac8440b696ac97cd5b4d7ad9bdaaeb9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 7 01:54:21.094213 containerd[1587]: time="2026-03-07T01:54:21.090654990Z" level=info msg="CreateContainer within sandbox \"81c72cf05cf243a7d2d299190562b60a9ac8440b696ac97cd5b4d7ad9bdaaeb9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"97c3415493ad434b18592a0c682b8df210a10bf799eb12fd933c4bd11c850afa\"" Mar 7 01:54:21.778785 containerd[1587]: time="2026-03-07T01:54:21.778121771Z" level=info msg="StartContainer for \"97c3415493ad434b18592a0c682b8df210a10bf799eb12fd933c4bd11c850afa\"" Mar 7 01:54:21.786645 kubelet[2895]: E0307 01:54:21.786427 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:21.902557 kubelet[2895]: E0307 01:54:21.899496 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:23.492153 kubelet[2895]: E0307 01:54:22.916068 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:24.461231 containerd[1587]: time="2026-03-07T01:54:24.460118423Z" level=info msg="StartContainer for \"97c3415493ad434b18592a0c682b8df210a10bf799eb12fd933c4bd11c850afa\" returns successfully" Mar 7 01:54:25.409954 kubelet[2895]: E0307 01:54:25.394118 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:25.526564 kubelet[2895]: I0307 01:54:25.523995 2895 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-b9pdb" podStartSLOduration=10.523970754 podStartE2EDuration="10.523970754s" podCreationTimestamp="2026-03-07 01:54:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:54:25.489493258 +0000 UTC m=+11.271793268" watchObservedRunningTime="2026-03-07 01:54:25.523970754 +0000 UTC m=+11.306270765" Mar 7 01:54:26.280633 kubelet[2895]: I0307 01:54:26.279979 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k9kk\" (UniqueName: \"kubernetes.io/projected/96dfb523-962a-45c2-97a1-004c9716ea65-kube-api-access-4k9kk\") pod \"tigera-operator-6bf85f8dd-qhjqw\" (UID: \"96dfb523-962a-45c2-97a1-004c9716ea65\") " pod="tigera-operator/tigera-operator-6bf85f8dd-qhjqw" Mar 7 01:54:26.280633 kubelet[2895]: I0307 01:54:26.280055 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/96dfb523-962a-45c2-97a1-004c9716ea65-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-qhjqw\" (UID: \"96dfb523-962a-45c2-97a1-004c9716ea65\") " pod="tigera-operator/tigera-operator-6bf85f8dd-qhjqw" Mar 7 01:54:26.408379 kubelet[2895]: E0307 01:54:26.394250 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:26.710470 containerd[1587]: time="2026-03-07T01:54:26.710251280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-qhjqw,Uid:96dfb523-962a-45c2-97a1-004c9716ea65,Namespace:tigera-operator,Attempt:0,}" Mar 7 01:54:26.799189 containerd[1587]: time="2026-03-07T01:54:26.798993314Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:54:26.799486 containerd[1587]: time="2026-03-07T01:54:26.799173951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:54:26.799486 containerd[1587]: time="2026-03-07T01:54:26.799195653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:54:26.799486 containerd[1587]: time="2026-03-07T01:54:26.799369927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:54:26.963471 containerd[1587]: time="2026-03-07T01:54:26.962142655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-qhjqw,Uid:96dfb523-962a-45c2-97a1-004c9716ea65,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d10366faa2ceef5b609545115507f170ae5d303322c998e79d1f5cf18e8dedc8\"" Mar 7 01:54:26.978906 containerd[1587]: time="2026-03-07T01:54:26.978463332Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 7 01:54:28.562662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3297754775.mount: Deactivated successfully. 
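Note: the recurring dns.go:153 errors come from kubelet capping pod DNS at three nameserver entries (the historical glibc limit); when the node's /etc/resolv.conf lists more, kubelet drops the extras and applies the line shown in the error. A hypothetical resolv.conf that would trigger exactly this message:

    # /etc/resolv.conf on the node (illustrative; the fourth entry is what trips the warning)
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    nameserver 9.9.9.9   # omitted by kubelet; only the first three are applied
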
Mar 7 01:54:35.708904 containerd[1587]: time="2026-03-07T01:54:35.708728586Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:54:35.718213 containerd[1587]: time="2026-03-07T01:54:35.718020839Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 7 01:54:35.719704 containerd[1587]: time="2026-03-07T01:54:35.719666154Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:54:35.727594 containerd[1587]: time="2026-03-07T01:54:35.727506319Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:54:35.731105 containerd[1587]: time="2026-03-07T01:54:35.730928129Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 8.752405864s" Mar 7 01:54:35.731105 containerd[1587]: time="2026-03-07T01:54:35.731015085Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 7 01:54:35.750423 containerd[1587]: time="2026-03-07T01:54:35.750212073Z" level=info msg="CreateContainer within sandbox \"d10366faa2ceef5b609545115507f170ae5d303322c998e79d1f5cf18e8dedc8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 7 01:54:35.800066 containerd[1587]: time="2026-03-07T01:54:35.799973541Z" level=info msg="CreateContainer within sandbox \"d10366faa2ceef5b609545115507f170ae5d303322c998e79d1f5cf18e8dedc8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"551afb8399b80a421c44e137c327b8a0a3afee6f60d53587fa9c06f1f25146d9\"" Mar 7 01:54:35.821732 containerd[1587]: time="2026-03-07T01:54:35.815587556Z" level=info msg="StartContainer for \"551afb8399b80a421c44e137c327b8a0a3afee6f60d53587fa9c06f1f25146d9\"" Mar 7 01:54:36.075142 containerd[1587]: time="2026-03-07T01:54:36.074537277Z" level=info msg="StartContainer for \"551afb8399b80a421c44e137c327b8a0a3afee6f60d53587fa9c06f1f25146d9\" returns successfully" Mar 7 01:54:36.559747 kubelet[2895]: I0307 01:54:36.559649 2895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-qhjqw" podStartSLOduration=1.8005586139999998 podStartE2EDuration="10.559580154s" podCreationTimestamp="2026-03-07 01:54:26 +0000 UTC" firstStartedPulling="2026-03-07 01:54:26.977286681 +0000 UTC m=+12.759586690" lastFinishedPulling="2026-03-07 01:54:35.736308221 +0000 UTC m=+21.518608230" observedRunningTime="2026-03-07 01:54:36.556852119 +0000 UTC m=+22.339152140" watchObservedRunningTime="2026-03-07 01:54:36.559580154 +0000 UTC m=+22.341880165" Mar 7 01:54:44.195103 sudo[1797]: pam_unix(sudo:session): session closed for user root Mar 7 01:54:44.227601 sshd[1790]: pam_unix(sshd:session): session closed for user core Mar 7 01:54:44.233954 systemd[1]: sshd@8-10.0.0.118:22-10.0.0.1:60854.service: Deactivated successfully. 
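Note: the tigera-operator startup figures above are internally consistent; podStartSLOduration is the end-to-end duration minus image-pull time, and the pull window matches the ~8.75s that containerd reported for the operator image:

    podStartE2EDuration = observedRunningTime - podCreationTimestamp
                        = 01:54:36.559580 - 01:54:26.000000 = 10.559580s
    image pull time     = lastFinishedPulling - firstStartedPulling
                        = 01:54:35.736308 - 01:54:26.977287 = 8.759022s
    podStartSLOduration = 10.559580s - 8.759022s = 1.800559s   (log: 1.8005586s)
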
Mar 7 01:54:44.239199 systemd-logind[1563]: Session 9 logged out. Waiting for processes to exit. Mar 7 01:54:44.244203 systemd[1]: session-9.scope: Deactivated successfully. Mar 7 01:54:44.246967 systemd-logind[1563]: Removed session 9. Mar 7 01:54:52.379870 kubelet[2895]: I0307 01:54:52.378626 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff1e4ba8-3087-492d-b570-eb9b7b424c69-tigera-ca-bundle\") pod \"calico-typha-55d65f4688-878j5\" (UID: \"ff1e4ba8-3087-492d-b570-eb9b7b424c69\") " pod="calico-system/calico-typha-55d65f4688-878j5" Mar 7 01:54:52.391869 kubelet[2895]: I0307 01:54:52.383277 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ff1e4ba8-3087-492d-b570-eb9b7b424c69-typha-certs\") pod \"calico-typha-55d65f4688-878j5\" (UID: \"ff1e4ba8-3087-492d-b570-eb9b7b424c69\") " pod="calico-system/calico-typha-55d65f4688-878j5" Mar 7 01:54:52.391869 kubelet[2895]: I0307 01:54:52.383337 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l9g9\" (UniqueName: \"kubernetes.io/projected/ff1e4ba8-3087-492d-b570-eb9b7b424c69-kube-api-access-5l9g9\") pod \"calico-typha-55d65f4688-878j5\" (UID: \"ff1e4ba8-3087-492d-b570-eb9b7b424c69\") " pod="calico-system/calico-typha-55d65f4688-878j5" Mar 7 01:54:52.720030 kubelet[2895]: E0307 01:54:52.718909 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:52.720230 containerd[1587]: time="2026-03-07T01:54:52.719651543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55d65f4688-878j5,Uid:ff1e4ba8-3087-492d-b570-eb9b7b424c69,Namespace:calico-system,Attempt:0,}" Mar 7 01:54:52.828899 containerd[1587]: time="2026-03-07T01:54:52.826285741Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:54:52.828899 containerd[1587]: time="2026-03-07T01:54:52.826398985Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:54:52.828899 containerd[1587]: time="2026-03-07T01:54:52.826425543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:54:52.828899 containerd[1587]: time="2026-03-07T01:54:52.826599266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:54:52.913942 kubelet[2895]: I0307 01:54:52.913891 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a51ca567-53c4-4454-b716-2f1ebac14559-var-run-calico\") pod \"calico-node-jgmjr\" (UID: \"a51ca567-53c4-4454-b716-2f1ebac14559\") " pod="calico-system/calico-node-jgmjr" Mar 7 01:54:52.917773 kubelet[2895]: I0307 01:54:52.915997 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a51ca567-53c4-4454-b716-2f1ebac14559-node-certs\") pod \"calico-node-jgmjr\" (UID: \"a51ca567-53c4-4454-b716-2f1ebac14559\") " pod="calico-system/calico-node-jgmjr" Mar 7 01:54:52.917773 kubelet[2895]: I0307 01:54:52.916075 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/a51ca567-53c4-4454-b716-2f1ebac14559-sys-fs\") pod \"calico-node-jgmjr\" (UID: \"a51ca567-53c4-4454-b716-2f1ebac14559\") " pod="calico-system/calico-node-jgmjr" Mar 7 01:54:52.917773 kubelet[2895]: I0307 01:54:52.916117 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a51ca567-53c4-4454-b716-2f1ebac14559-cni-bin-dir\") pod \"calico-node-jgmjr\" (UID: \"a51ca567-53c4-4454-b716-2f1ebac14559\") " pod="calico-system/calico-node-jgmjr" Mar 7 01:54:52.917773 kubelet[2895]: I0307 01:54:52.916144 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a51ca567-53c4-4454-b716-2f1ebac14559-flexvol-driver-host\") pod \"calico-node-jgmjr\" (UID: \"a51ca567-53c4-4454-b716-2f1ebac14559\") " pod="calico-system/calico-node-jgmjr" Mar 7 01:54:52.917773 kubelet[2895]: I0307 01:54:52.916173 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a51ca567-53c4-4454-b716-2f1ebac14559-var-lib-calico\") pod \"calico-node-jgmjr\" (UID: \"a51ca567-53c4-4454-b716-2f1ebac14559\") " pod="calico-system/calico-node-jgmjr" Mar 7 01:54:52.918087 kubelet[2895]: I0307 01:54:52.916201 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/a51ca567-53c4-4454-b716-2f1ebac14559-nodeproc\") pod \"calico-node-jgmjr\" (UID: \"a51ca567-53c4-4454-b716-2f1ebac14559\") " pod="calico-system/calico-node-jgmjr" Mar 7 01:54:52.918087 kubelet[2895]: E0307 01:54:52.916200 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8z84" podUID="96475dac-1179-4e59-8100-da5ef27d719a" Mar 7 01:54:52.918087 kubelet[2895]: I0307 01:54:52.916759 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/a51ca567-53c4-4454-b716-2f1ebac14559-bpffs\") pod \"calico-node-jgmjr\" (UID: \"a51ca567-53c4-4454-b716-2f1ebac14559\") " pod="calico-system/calico-node-jgmjr" Mar 7 01:54:52.918087 kubelet[2895]: I0307 01:54:52.916865 2895 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a51ca567-53c4-4454-b716-2f1ebac14559-cni-log-dir\") pod \"calico-node-jgmjr\" (UID: \"a51ca567-53c4-4454-b716-2f1ebac14559\") " pod="calico-system/calico-node-jgmjr" Mar 7 01:54:52.918087 kubelet[2895]: I0307 01:54:52.916904 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mczpm\" (UniqueName: \"kubernetes.io/projected/a51ca567-53c4-4454-b716-2f1ebac14559-kube-api-access-mczpm\") pod \"calico-node-jgmjr\" (UID: \"a51ca567-53c4-4454-b716-2f1ebac14559\") " pod="calico-system/calico-node-jgmjr" Mar 7 01:54:52.918290 kubelet[2895]: I0307 01:54:52.916941 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a51ca567-53c4-4454-b716-2f1ebac14559-tigera-ca-bundle\") pod \"calico-node-jgmjr\" (UID: \"a51ca567-53c4-4454-b716-2f1ebac14559\") " pod="calico-system/calico-node-jgmjr" Mar 7 01:54:52.918290 kubelet[2895]: I0307 01:54:52.916972 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a51ca567-53c4-4454-b716-2f1ebac14559-policysync\") pod \"calico-node-jgmjr\" (UID: \"a51ca567-53c4-4454-b716-2f1ebac14559\") " pod="calico-system/calico-node-jgmjr" Mar 7 01:54:52.918290 kubelet[2895]: I0307 01:54:52.917002 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a51ca567-53c4-4454-b716-2f1ebac14559-lib-modules\") pod \"calico-node-jgmjr\" (UID: \"a51ca567-53c4-4454-b716-2f1ebac14559\") " pod="calico-system/calico-node-jgmjr" Mar 7 01:54:52.918290 kubelet[2895]: I0307 01:54:52.917077 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a51ca567-53c4-4454-b716-2f1ebac14559-cni-net-dir\") pod \"calico-node-jgmjr\" (UID: \"a51ca567-53c4-4454-b716-2f1ebac14559\") " pod="calico-system/calico-node-jgmjr" Mar 7 01:54:52.918290 kubelet[2895]: I0307 01:54:52.917114 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a51ca567-53c4-4454-b716-2f1ebac14559-xtables-lock\") pod \"calico-node-jgmjr\" (UID: \"a51ca567-53c4-4454-b716-2f1ebac14559\") " pod="calico-system/calico-node-jgmjr" Mar 7 01:54:53.019493 kubelet[2895]: I0307 01:54:53.019299 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/96475dac-1179-4e59-8100-da5ef27d719a-socket-dir\") pod \"csi-node-driver-c8z84\" (UID: \"96475dac-1179-4e59-8100-da5ef27d719a\") " pod="calico-system/csi-node-driver-c8z84" Mar 7 01:54:53.019493 kubelet[2895]: I0307 01:54:53.019423 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql7w2\" (UniqueName: \"kubernetes.io/projected/96475dac-1179-4e59-8100-da5ef27d719a-kube-api-access-ql7w2\") pod \"csi-node-driver-c8z84\" (UID: \"96475dac-1179-4e59-8100-da5ef27d719a\") " pod="calico-system/csi-node-driver-c8z84" Mar 7 01:54:53.020002 kubelet[2895]: I0307 01:54:53.019556 2895 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/96475dac-1179-4e59-8100-da5ef27d719a-kubelet-dir\") pod \"csi-node-driver-c8z84\" (UID: \"96475dac-1179-4e59-8100-da5ef27d719a\") " pod="calico-system/csi-node-driver-c8z84" Mar 7 01:54:53.020002 kubelet[2895]: I0307 01:54:53.019581 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/96475dac-1179-4e59-8100-da5ef27d719a-varrun\") pod \"csi-node-driver-c8z84\" (UID: \"96475dac-1179-4e59-8100-da5ef27d719a\") " pod="calico-system/csi-node-driver-c8z84" Mar 7 01:54:53.020002 kubelet[2895]: I0307 01:54:53.019729 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/96475dac-1179-4e59-8100-da5ef27d719a-registration-dir\") pod \"csi-node-driver-c8z84\" (UID: \"96475dac-1179-4e59-8100-da5ef27d719a\") " pod="calico-system/csi-node-driver-c8z84" Mar 7 01:54:53.030877 kubelet[2895]: E0307 01:54:53.030780 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:54:53.031225 kubelet[2895]: W0307 01:54:53.031077 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:54:53.031225 kubelet[2895]: E0307 01:54:53.031173 2895 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:54:53.047339 kubelet[2895]: E0307 01:54:53.047302 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:54:53.047636 kubelet[2895]: W0307 01:54:53.047513 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:54:53.047636 kubelet[2895]: E0307 01:54:53.047547 2895 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:54:53.085953 kubelet[2895]: E0307 01:54:53.085912 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:54:53.086289 kubelet[2895]: W0307 01:54:53.086195 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:54:53.086289 kubelet[2895]: E0307 01:54:53.086236 2895 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:54:53.105439 containerd[1587]: time="2026-03-07T01:54:53.105272872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55d65f4688-878j5,Uid:ff1e4ba8-3087-492d-b570-eb9b7b424c69,Namespace:calico-system,Attempt:0,} returns sandbox id \"d5d836bf1651e62d7f5f6f80966ceb3c09c0681d5e36137d8a930f9e9183413a\"" Mar 7 01:54:53.107646 kubelet[2895]: E0307 01:54:53.107572 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:53.110771 containerd[1587]: time="2026-03-07T01:54:53.110382221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 7 01:54:53.125627 kubelet[2895]: E0307 01:54:53.124263 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:54:53.125627 kubelet[2895]: W0307 01:54:53.124296 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:54:53.125627 kubelet[2895]: E0307 01:54:53.124329 2895 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:54:53.132782 kubelet[2895]: E0307 01:54:53.132432 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:54:53.132782 kubelet[2895]: W0307 01:54:53.132483 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:54:53.132782 kubelet[2895]: E0307 01:54:53.132513 2895 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:54:53.137873 kubelet[2895]: E0307 01:54:53.134903 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:54:53.137873 kubelet[2895]: W0307 01:54:53.136635 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:54:53.152491 kubelet[2895]: E0307 01:54:53.151190 2895 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:54:53.154265 kubelet[2895]: E0307 01:54:53.154217 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:54:53.154265 kubelet[2895]: W0307 01:54:53.154243 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:54:53.156234 kubelet[2895]: E0307 01:54:53.154268 2895 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:54:53.160753 kubelet[2895]: E0307 01:54:53.159909 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:54:53.160753 kubelet[2895]: W0307 01:54:53.159931 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:54:53.160753 kubelet[2895]: E0307 01:54:53.159957 2895 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:54:53.160753 kubelet[2895]: E0307 01:54:53.160700 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:54:53.160753 kubelet[2895]: W0307 01:54:53.160714 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:54:53.160753 kubelet[2895]: E0307 01:54:53.160733 2895 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:54:53.161933 kubelet[2895]: E0307 01:54:53.161784 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:54:53.161933 kubelet[2895]: W0307 01:54:53.161885 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:54:53.161933 kubelet[2895]: E0307 01:54:53.161910 2895 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:54:53.164097 kubelet[2895]: E0307 01:54:53.163642 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:54:53.164097 kubelet[2895]: W0307 01:54:53.163658 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:54:53.164097 kubelet[2895]: E0307 01:54:53.163675 2895 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:54:53.165617 kubelet[2895]: E0307 01:54:53.165257 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:54:53.165617 kubelet[2895]: W0307 01:54:53.165292 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:54:53.165617 kubelet[2895]: E0307 01:54:53.165309 2895 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:54:53.166429 kubelet[2895]: E0307 01:54:53.166292 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:54:53.166429 kubelet[2895]: W0307 01:54:53.166329 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:54:53.166429 kubelet[2895]: E0307 01:54:53.166346 2895 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[… the same three-message FlexVolume probe failure repeats, only the timestamps changing, at 01:54:53.169, .170, .172, .174, .176, .178, .180 (twice), .182, .185, .186, .187, .188, .190, .191 and .223 …]
Mar 7 01:54:53.383911 containerd[1587]: time="2026-03-07T01:54:53.383666638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jgmjr,Uid:a51ca567-53c4-4454-b716-2f1ebac14559,Namespace:calico-system,Attempt:0,}" Mar 7 01:54:53.518551 containerd[1587]: time="2026-03-07T01:54:53.515908364Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:54:53.518551 containerd[1587]: time="2026-03-07T01:54:53.516681709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:54:53.518551 containerd[1587]: time="2026-03-07T01:54:53.516709628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:54:53.518551 containerd[1587]: time="2026-03-07T01:54:53.516957496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:54:53.590401 systemd[1]: run-containerd-runc-k8s.io-d5d836bf1651e62d7f5f6f80966ceb3c09c0681d5e36137d8a930f9e9183413a-runc.p4ItLa.mount: Deactivated successfully. Mar 7 01:54:53.657052 containerd[1587]: time="2026-03-07T01:54:53.656525467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jgmjr,Uid:a51ca567-53c4-4454-b716-2f1ebac14559,Namespace:calico-system,Attempt:0,} returns sandbox id \"a33786778d79dea8c0bfe55b0a90634e63ae5bdcd59895bc34596d589be60e08\"" Mar 7 01:54:54.337393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2911415758.mount: Deactivated successfully.
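The repeated "unexpected end of JSON input" triplet above is kubelet's FlexVolume prober at work: it executes the driver binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init and tries to decode its stdout as a driver-status JSON object. Because the binary is missing, stdout is empty and the decode fails, so every probe pass logs the same three messages. A minimal sketch of the driver-side contract in Go (the binary path comes from the log; the JSON shape follows the standard FlexVolume convention, and the implementation is illustrative, not the driver this node expected):

    // Hypothetical minimal FlexVolume driver. Only the "init" verb matters
    // for kubelet's plugin probe; an empty stdout is exactly what produces
    // "unexpected end of JSON input" in driver-call.go above.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // driverStatus mirrors the JSON object FlexVolume drivers print.
    type driverStatus struct {
        Status       string          `json:"status"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        if len(os.Args) < 2 {
            os.Exit(1)
        }
        var st driverStatus
        switch os.Args[1] {
        case "init":
            // The probe succeeds once this JSON reaches stdout.
            st = driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}}
        default:
            st = driverStatus{Status: "Not supported"}
        }
        out, _ := json.Marshal(st)
        fmt.Println(string(out))
    }

The nodeagent~uds directory name is the convention used by Istio's node-agent SDS FlexVolume driver, though the log itself does not say what created the directory without installing the binary.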
Mar 7 01:54:54.972960 kubelet[2895]: E0307 01:54:54.970892 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8z84" podUID="96475dac-1179-4e59-8100-da5ef27d719a" Mar 7 01:54:56.966916 kubelet[2895]: E0307 01:54:56.966670 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8z84" podUID="96475dac-1179-4e59-8100-da5ef27d719a" Mar 7 01:54:57.849542 containerd[1587]: time="2026-03-07T01:54:57.849032208Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:54:57.852660 containerd[1587]: time="2026-03-07T01:54:57.852487632Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Mar 7 01:54:57.857369 containerd[1587]: time="2026-03-07T01:54:57.857276512Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:54:57.863340 containerd[1587]: time="2026-03-07T01:54:57.863231475Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:54:57.865043 containerd[1587]: time="2026-03-07T01:54:57.864896526Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 4.754453936s" Mar 7 01:54:57.865043 containerd[1587]: time="2026-03-07T01:54:57.864971040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 7 01:54:57.868516 containerd[1587]: time="2026-03-07T01:54:57.868462616Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 7 01:54:57.901383 containerd[1587]: time="2026-03-07T01:54:57.901259116Z" level=info msg="CreateContainer within sandbox \"d5d836bf1651e62d7f5f6f80966ceb3c09c0681d5e36137d8a930f9e9183413a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 7 01:54:57.947285 containerd[1587]: time="2026-03-07T01:54:57.947163239Z" level=info msg="CreateContainer within sandbox \"d5d836bf1651e62d7f5f6f80966ceb3c09c0681d5e36137d8a930f9e9183413a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"432d717585208565bb1f33941aea4dea38a92c6a6ebba44647d2095987f65812\"" Mar 7 01:54:57.950339 containerd[1587]: time="2026-03-07T01:54:57.950152787Z" level=info msg="StartContainer for \"432d717585208565bb1f33941aea4dea38a92c6a6ebba44647d2095987f65812\"" Mar 7 01:54:58.094694 containerd[1587]: time="2026-03-07T01:54:58.094421117Z" level=info msg="StartContainer for \"432d717585208565bb1f33941aea4dea38a92c6a6ebba44647d2095987f65812\" returns successfully" Mar 7 
01:54:58.715878 kubelet[2895]: E0307 01:54:58.713896 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:58.762890 kubelet[2895]: E0307 01:54:58.762618 2895 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:54:58.762890 kubelet[2895]: W0307 01:54:58.762663 2895 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:54:58.762890 kubelet[2895]: E0307 01:54:58.762736 2895 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[… the same FlexVolume probe failure repeats, only the timestamps changing, from 01:54:58.767 through 01:54:58.793 …]
Mar 7 01:54:58.836784 kubelet[2895]: I0307 01:54:58.833540 2895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-55d65f4688-878j5" podStartSLOduration=2.076103852 podStartE2EDuration="6.833428196s" podCreationTimestamp="2026-03-07 01:54:52 +0000 UTC" firstStartedPulling="2026-03-07 01:54:53.109972724 +0000 UTC m=+38.892272734" lastFinishedPulling="2026-03-07 01:54:57.867297058 +0000 UTC m=+43.649597078" observedRunningTime="2026-03-07 01:54:58.772556553 +0000 UTC m=+44.554856564" watchObservedRunningTime="2026-03-07 01:54:58.833428196 +0000 UTC m=+44.615728216"
[… a second run of the same FlexVolume probe failure follows, from 01:54:58.867 through 01:54:58.908 …]
Mar 7 01:54:58.968453 kubelet[2895]: E0307 01:54:58.967648 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8z84" podUID="96475dac-1179-4e59-8100-da5ef27d719a" Mar 7 01:54:59.062713 containerd[1587]: time="2026-03-07T01:54:59.061995159Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:54:59.066118 containerd[1587]: time="2026-03-07T01:54:59.065759261Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Mar 7 01:54:59.068666 containerd[1587]: time="2026-03-07T01:54:59.068442782Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:54:59.074042 containerd[1587]: time="2026-03-07T01:54:59.073933558Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:54:59.076355 containerd[1587]: time="2026-03-07T01:54:59.075337013Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.206804339s" Mar 7 01:54:59.076355 containerd[1587]: time="2026-03-07T01:54:59.075384128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 7 01:54:59.099893 containerd[1587]: time="2026-03-07T01:54:59.097449312Z" level=info msg="CreateContainer within sandbox \"a33786778d79dea8c0bfe55b0a90634e63ae5bdcd59895bc34596d589be60e08\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 7 01:54:59.164913 containerd[1587]: time="2026-03-07T01:54:59.164751886Z" level=info msg="CreateContainer within sandbox \"a33786778d79dea8c0bfe55b0a90634e63ae5bdcd59895bc34596d589be60e08\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2b9bc90dbf09f4d2a5f8ab5284900214b0170d05a94a2b2834380020a02169d6\"" Mar 7 01:54:59.166636 containerd[1587]: time="2026-03-07T01:54:59.166170417Z" level=info msg="StartContainer for \"2b9bc90dbf09f4d2a5f8ab5284900214b0170d05a94a2b2834380020a02169d6\"" Mar 7 01:54:59.367486 containerd[1587]: time="2026-03-07T01:54:59.367411831Z" level=info msg="StartContainer for \"2b9bc90dbf09f4d2a5f8ab5284900214b0170d05a94a2b2834380020a02169d6\" returns successfully" Mar 7 01:54:59.491209 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b9bc90dbf09f4d2a5f8ab5284900214b0170d05a94a2b2834380020a02169d6-rootfs.mount: Deactivated successfully.
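The recurring NetworkReady=false / "cni plugin not initialized" messages in this stretch persist until calico-node installs a network configuration on the host; at this point in the log only the pod sandbox and the flexvol-driver init container exist. The runtime's readiness test amounts to "is there a parseable network config in the CNI configuration directory". A minimal sketch of that check, assuming the conventional /etc/cni/net.d location (the path is never named in this log):

    // Sketch of the readiness test behind "cni plugin not initialized":
    // the container runtime looks for a *.conf or *.conflist file in the
    // CNI config directory; until calico-node writes one, NetworkReady
    // stays false and pod syncs for networked pods are skipped.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        matches, err := filepath.Glob("/etc/cni/net.d/*.conf*")
        if err != nil || len(matches) == 0 {
            fmt.Println("NetworkReady=false: cni plugin not initialized")
            os.Exit(1)
        }
        fmt.Println("CNI config present:", matches)
    }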
Mar 7 01:54:59.526486 containerd[1587]: time="2026-03-07T01:54:59.526241212Z" level=info msg="shim disconnected" id=2b9bc90dbf09f4d2a5f8ab5284900214b0170d05a94a2b2834380020a02169d6 namespace=k8s.io Mar 7 01:54:59.526486 containerd[1587]: time="2026-03-07T01:54:59.526426608Z" level=warning msg="cleaning up after shim disconnected" id=2b9bc90dbf09f4d2a5f8ab5284900214b0170d05a94a2b2834380020a02169d6 namespace=k8s.io Mar 7 01:54:59.526486 containerd[1587]: time="2026-03-07T01:54:59.526449149Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:54:59.768065 kubelet[2895]: E0307 01:54:59.763980 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:59.788939 containerd[1587]: time="2026-03-07T01:54:59.788763710Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 7 01:55:00.770789 kubelet[2895]: E0307 01:55:00.768172 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:55:00.969233 kubelet[2895]: E0307 01:55:00.968717 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8z84" podUID="96475dac-1179-4e59-8100-da5ef27d719a" Mar 7 01:55:02.989058 kubelet[2895]: E0307 01:55:02.986451 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8z84" podUID="96475dac-1179-4e59-8100-da5ef27d719a" Mar 7 01:55:04.975693 kubelet[2895]: E0307 01:55:04.967661 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8z84" podUID="96475dac-1179-4e59-8100-da5ef27d719a" Mar 7 01:55:06.969966 kubelet[2895]: E0307 01:55:06.969260 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8z84" podUID="96475dac-1179-4e59-8100-da5ef27d719a" Mar 7 01:55:08.969781 kubelet[2895]: E0307 01:55:08.969659 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8z84" podUID="96475dac-1179-4e59-8100-da5ef27d719a" Mar 7 01:55:10.970577 kubelet[2895]: E0307 01:55:10.967913 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8z84" podUID="96475dac-1179-4e59-8100-da5ef27d719a" Mar 7 01:55:19.608727 kubelet[2895]: E0307 01:55:19.608536 2895 kubelet.go:2627] "Housekeeping took 
longer than expected" err="housekeeping took too long" expected="1s" actual="7.541s" Mar 7 01:55:19.617738 kubelet[2895]: E0307 01:55:19.613552 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8z84" podUID="96475dac-1179-4e59-8100-da5ef27d719a" Mar 7 01:55:20.974018 kubelet[2895]: E0307 01:55:20.973362 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8z84" podUID="96475dac-1179-4e59-8100-da5ef27d719a" Mar 7 01:55:22.969140 kubelet[2895]: E0307 01:55:22.967332 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8z84" podUID="96475dac-1179-4e59-8100-da5ef27d719a" Mar 7 01:55:24.972131 kubelet[2895]: E0307 01:55:24.968505 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8z84" podUID="96475dac-1179-4e59-8100-da5ef27d719a" Mar 7 01:55:25.968422 kubelet[2895]: E0307 01:55:25.967519 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:55:26.970020 kubelet[2895]: E0307 01:55:26.969070 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:55:26.970020 kubelet[2895]: E0307 01:55:26.969940 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8z84" podUID="96475dac-1179-4e59-8100-da5ef27d719a" Mar 7 01:55:28.977623 kubelet[2895]: E0307 01:55:28.973228 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8z84" podUID="96475dac-1179-4e59-8100-da5ef27d719a" Mar 7 01:55:30.968978 kubelet[2895]: E0307 01:55:30.968333 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8z84" podUID="96475dac-1179-4e59-8100-da5ef27d719a" Mar 7 01:55:52.621075 systemd-journald[1139]: Under memory pressure, flushing caches. Mar 7 01:55:52.577663 systemd-resolved[1474]: Under memory pressure, flushing caches. Mar 7 01:55:52.655895 systemd-resolved[1474]: Flushed all caches. 
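The periodic dns.go:153 warnings are kubelet clamping the resolver list: glibc-style resolvers honor at most three nameserver entries, so when the node's resolv.conf carries more, kubelet applies only the first three (here 1.1.1.1, 1.0.0.1 and 8.8.8.8) and warns that the rest were omitted. A sketch of that clamp, with the file path and the limit of three taken from upstream convention rather than from this log:

    // Sketch of the check behind "Nameserver limits exceeded": collect the
    // nameserver lines from resolv.conf and warn when more than three are
    // present, since only the first three will take effect.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    const maxNameservers = 3 // glibc MAXNS

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        var servers []string
        for sc := bufio.NewScanner(f); sc.Scan(); {
            if fields := strings.Fields(sc.Text()); len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Printf("Nameserver limits exceeded; applied: %v omitted: %v\n",
                servers[:maxNameservers], servers[maxNameservers:])
        }
    }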
Mar 7 01:55:54.230901 kubelet[2895]: E0307 01:55:54.222629 2895 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="21.93s" Mar 7 01:55:54.568913 kubelet[2895]: E0307 01:55:54.567566 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8z84" podUID="96475dac-1179-4e59-8100-da5ef27d719a" Mar 7 01:55:54.568913 kubelet[2895]: E0307 01:55:54.566345 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:55:54.568913 kubelet[2895]: E0307 01:55:54.568677 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:55:56.003760 kubelet[2895]: E0307 01:55:55.995641 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8z84" podUID="96475dac-1179-4e59-8100-da5ef27d719a" Mar 7 01:55:56.026022 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd12bfb9149f3438212664d1ee6e5d7c04cc46422909d23e5f9f42490cfc705e-rootfs.mount: Deactivated successfully. Mar 7 01:55:56.290453 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-341073464790a8d31692d61dd46ac90963ec8993cb7a1f19cc2b765c92059f6a-rootfs.mount: Deactivated successfully. 
Mar 7 01:55:56.382264 containerd[1587]: time="2026-03-07T01:55:56.375916205Z" level=info msg="shim disconnected" id=cd12bfb9149f3438212664d1ee6e5d7c04cc46422909d23e5f9f42490cfc705e namespace=k8s.io Mar 7 01:55:56.394260 containerd[1587]: time="2026-03-07T01:55:56.388904648Z" level=warning msg="cleaning up after shim disconnected" id=cd12bfb9149f3438212664d1ee6e5d7c04cc46422909d23e5f9f42490cfc705e namespace=k8s.io Mar 7 01:55:56.394260 containerd[1587]: time="2026-03-07T01:55:56.388987391Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:55:56.394260 containerd[1587]: time="2026-03-07T01:55:56.387352284Z" level=info msg="shim disconnected" id=341073464790a8d31692d61dd46ac90963ec8993cb7a1f19cc2b765c92059f6a namespace=k8s.io Mar 7 01:55:56.394260 containerd[1587]: time="2026-03-07T01:55:56.389773774Z" level=warning msg="cleaning up after shim disconnected" id=341073464790a8d31692d61dd46ac90963ec8993cb7a1f19cc2b765c92059f6a namespace=k8s.io Mar 7 01:55:56.394260 containerd[1587]: time="2026-03-07T01:55:56.389788781Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:55:56.686611 containerd[1587]: time="2026-03-07T01:55:56.686533785Z" level=warning msg="cleanup warnings time=\"2026-03-07T01:55:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 7 01:55:57.393713 kubelet[2895]: I0307 01:55:57.392496 2895 scope.go:117] "RemoveContainer" containerID="cd12bfb9149f3438212664d1ee6e5d7c04cc46422909d23e5f9f42490cfc705e" Mar 7 01:55:57.393713 kubelet[2895]: E0307 01:55:57.392715 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:55:57.486376 kubelet[2895]: I0307 01:55:57.485383 2895 scope.go:117] "RemoveContainer" containerID="341073464790a8d31692d61dd46ac90963ec8993cb7a1f19cc2b765c92059f6a" Mar 7 01:55:57.486376 kubelet[2895]: E0307 01:55:57.485528 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:55:57.539105 containerd[1587]: time="2026-03-07T01:55:57.527632515Z" level=info msg="CreateContainer within sandbox \"e4ce1142970fabcf71a9aac5e8509a6b73ce933be60da115362e8dd32e1015ac\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Mar 7 01:55:57.539105 containerd[1587]: time="2026-03-07T01:55:57.536589332Z" level=info msg="CreateContainer within sandbox \"cb3c7191ad2b83794f73779e2ed8d0d9bbb39205483c880450101f187aa4b6ae\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Mar 7 01:55:57.857759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2115138200.mount: Deactivated successfully. Mar 7 01:55:57.916643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1769601112.mount: Deactivated successfully. 
Mar 7 01:55:57.930440 containerd[1587]: time="2026-03-07T01:55:57.930259440Z" level=info msg="CreateContainer within sandbox \"cb3c7191ad2b83794f73779e2ed8d0d9bbb39205483c880450101f187aa4b6ae\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"e23bdf2fb8f10ebd3a0f452050aeef11258a02d26c65d9d1b5e005ee31270bc3\"" Mar 7 01:55:57.947880 containerd[1587]: time="2026-03-07T01:55:57.947761391Z" level=info msg="StartContainer for \"e23bdf2fb8f10ebd3a0f452050aeef11258a02d26c65d9d1b5e005ee31270bc3\"" Mar 7 01:55:57.979880 kubelet[2895]: E0307 01:55:57.978721 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8z84" podUID="96475dac-1179-4e59-8100-da5ef27d719a" Mar 7 01:55:58.021314 containerd[1587]: time="2026-03-07T01:55:58.018117983Z" level=info msg="CreateContainer within sandbox \"e4ce1142970fabcf71a9aac5e8509a6b73ce933be60da115362e8dd32e1015ac\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"955d00ca35a8d296c4b6012db574b724c1f99d8e93892e41280dee4208f93653\"" Mar 7 01:55:58.021314 containerd[1587]: time="2026-03-07T01:55:58.019043232Z" level=info msg="StartContainer for \"955d00ca35a8d296c4b6012db574b724c1f99d8e93892e41280dee4208f93653\"" Mar 7 01:55:58.781553 containerd[1587]: time="2026-03-07T01:55:58.781474597Z" level=info msg="StartContainer for \"955d00ca35a8d296c4b6012db574b724c1f99d8e93892e41280dee4208f93653\" returns successfully" Mar 7 01:55:58.789984 containerd[1587]: time="2026-03-07T01:55:58.785334791Z" level=info msg="StartContainer for \"e23bdf2fb8f10ebd3a0f452050aeef11258a02d26c65d9d1b5e005ee31270bc3\" returns successfully" Mar 7 01:55:59.709916 kubelet[2895]: E0307 01:55:59.676566 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:55:59.763315 kubelet[2895]: E0307 01:55:59.741541 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:55:59.976022 kubelet[2895]: E0307 01:55:59.969028 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8z84" podUID="96475dac-1179-4e59-8100-da5ef27d719a" Mar 7 01:56:00.825871 kubelet[2895]: E0307 01:56:00.822123 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:56:01.819451 kubelet[2895]: E0307 01:56:01.819402 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:56:01.981355 kubelet[2895]: E0307 01:56:01.980573 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8z84" podUID="96475dac-1179-4e59-8100-da5ef27d719a" Mar 
7 01:56:03.980746 kubelet[2895]: E0307 01:56:03.978449 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8z84" podUID="96475dac-1179-4e59-8100-da5ef27d719a" Mar 7 01:56:04.974618 kubelet[2895]: E0307 01:56:04.973017 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:56:06.009617 kubelet[2895]: E0307 01:56:05.992694 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8z84" podUID="96475dac-1179-4e59-8100-da5ef27d719a" Mar 7 01:56:08.007950 kubelet[2895]: E0307 01:56:08.003751 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8z84" podUID="96475dac-1179-4e59-8100-da5ef27d719a" Mar 7 01:56:08.892416 kubelet[2895]: E0307 01:56:08.889427 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:56:09.091601 kubelet[2895]: E0307 01:56:09.084959 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:56:09.973670 kubelet[2895]: E0307 01:56:09.970282 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8z84" podUID="96475dac-1179-4e59-8100-da5ef27d719a" Mar 7 01:56:10.082076 kubelet[2895]: E0307 01:56:10.082037 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:56:11.815080 kubelet[2895]: E0307 01:56:11.814607 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:56:11.988052 kubelet[2895]: E0307 01:56:11.985018 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8z84" podUID="96475dac-1179-4e59-8100-da5ef27d719a" Mar 7 01:56:13.358874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount546879664.mount: Deactivated successfully. 
Mar 7 01:56:13.706988 containerd[1587]: time="2026-03-07T01:56:13.699988263Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:56:13.730980 containerd[1587]: time="2026-03-07T01:56:13.722650958Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 7 01:56:13.730980 containerd[1587]: time="2026-03-07T01:56:13.730021076Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:56:13.768843 containerd[1587]: time="2026-03-07T01:56:13.763615650Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:56:13.806596 containerd[1587]: time="2026-03-07T01:56:13.796759943Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 1m14.00778194s" Mar 7 01:56:13.806596 containerd[1587]: time="2026-03-07T01:56:13.796938036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 7 01:56:13.873998 containerd[1587]: time="2026-03-07T01:56:13.872303616Z" level=info msg="CreateContainer within sandbox \"a33786778d79dea8c0bfe55b0a90634e63ae5bdcd59895bc34596d589be60e08\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 7 01:56:13.975657 kubelet[2895]: E0307 01:56:13.967720 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8z84" podUID="96475dac-1179-4e59-8100-da5ef27d719a" Mar 7 01:56:14.194525 containerd[1587]: time="2026-03-07T01:56:14.191353229Z" level=info msg="CreateContainer within sandbox \"a33786778d79dea8c0bfe55b0a90634e63ae5bdcd59895bc34596d589be60e08\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"ea92b092843d709d20e717009831b5e5d4d9c86bd9965bea9f2d5acb9d198c60\"" Mar 7 01:56:14.196155 containerd[1587]: time="2026-03-07T01:56:14.195604777Z" level=info msg="StartContainer for \"ea92b092843d709d20e717009831b5e5d4d9c86bd9965bea9f2d5acb9d198c60\"" Mar 7 01:56:14.852141 containerd[1587]: time="2026-03-07T01:56:14.851986986Z" level=info msg="StartContainer for \"ea92b092843d709d20e717009831b5e5d4d9c86bd9965bea9f2d5acb9d198c60\" returns successfully" Mar 7 01:56:15.250348 kubelet[2895]: E0307 01:56:15.249952 2895 kubelet_node_status.go:460] "Node not becoming ready in time after startup" Mar 7 01:56:15.628733 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea92b092843d709d20e717009831b5e5d4d9c86bd9965bea9f2d5acb9d198c60-rootfs.mount: Deactivated successfully. 
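For scale: the calico/node pull recorded above moved 159,838,564 bytes in 1m14.008s, while the earlier calico/typha pull moved 36,107,596 bytes in 4.754s, so effective throughput fell from roughly 7.2 MiB/s to about 2.1 MiB/s over the intervening minutes, consistent with the memory-pressure cache flushes logged in between. A quick computation with the figures from the log:

    // Effective pull throughput for the two image pulls in this log
    // (byte counts and durations copied from the containerd messages).
    package main

    import "fmt"

    func main() {
        pulls := []struct {
            name        string
            bytes, secs float64
        }{
            {"calico/typha:v3.31.4", 36107596, 4.754},
            {"calico/node:v3.31.4", 159838564, 74.008},
        }
        for _, p := range pulls {
            fmt.Printf("%-22s %5.2f MiB/s\n", p.name, p.bytes/p.secs/(1<<20))
        }
    }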
Mar 7 01:56:16.003748 kubelet[2895]: E0307 01:56:16.001618 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8z84" podUID="96475dac-1179-4e59-8100-da5ef27d719a"
Mar 7 01:56:16.171640 containerd[1587]: time="2026-03-07T01:56:16.167733137Z" level=info msg="shim disconnected" id=ea92b092843d709d20e717009831b5e5d4d9c86bd9965bea9f2d5acb9d198c60 namespace=k8s.io
Mar 7 01:56:16.171640 containerd[1587]: time="2026-03-07T01:56:16.167928372Z" level=warning msg="cleaning up after shim disconnected" id=ea92b092843d709d20e717009831b5e5d4d9c86bd9965bea9f2d5acb9d198c60 namespace=k8s.io
Mar 7 01:56:16.171640 containerd[1587]: time="2026-03-07T01:56:16.167953709Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:56:17.297090 containerd[1587]: time="2026-03-07T01:56:17.292889417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\""
Mar 7 01:56:17.974891 kubelet[2895]: E0307 01:56:17.967368 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8z84" podUID="96475dac-1179-4e59-8100-da5ef27d719a"
Mar 7 01:56:19.055327 kubelet[2895]: E0307 01:56:19.055212 2895 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 7 01:56:19.984887 kubelet[2895]: E0307 01:56:19.980400 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8z84" podUID="96475dac-1179-4e59-8100-da5ef27d719a"
Mar 7 01:56:21.967539 kubelet[2895]: E0307 01:56:21.967223 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8z84" podUID="96475dac-1179-4e59-8100-da5ef27d719a"
Mar 7 01:56:36.272475 systemd-journald[1139]: Under memory pressure, flushing caches.
Mar 7 01:56:29.096175 systemd-resolved[1474]: Under memory pressure, flushing caches.
Mar 7 01:56:36.356077 systemd-resolved[1474]: Flushed all caches.
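The "shim disconnected" / "cleaning up after shim disconnected" / "cleaning up dead shim" triplet above is containerd's normal teardown after a task exits (here the short-lived ebpf-bootstrap init container), followed by systemd deactivating the task's rootfs mount unit. A sketch of the client-side half of that lifecycle, under the same containerd Go client assumptions as the previous block; this is illustrative, not the CRI plugin's actual code:

```go
package shimreap

import (
	"context"

	"github.com/containerd/containerd"
)

// waitAndDelete reaps an exited containerd task: wait for the exit
// status delivered when the shim reports the exit, then delete the
// task so its rootfs mount can be released (the systemd
// "...-rootfs.mount: Deactivated successfully" line in the log).
func waitAndDelete(ctx context.Context, task containerd.Task) (uint32, error) {
	exitCh, err := task.Wait(ctx)
	if err != nil {
		return 0, err
	}
	status := <-exitCh
	if _, err := task.Delete(ctx); err != nil {
		return 0, err
	}
	return status.ExitCode(), nil
}
```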
Mar 7 01:56:47.626913 kubelet[2895]: E0307 01:56:47.625273 2895 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 7 01:56:49.614027 kubelet[2895]: E0307 01:56:49.595158 2895 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.258s"
Mar 7 01:56:49.730213 kubelet[2895]: E0307 01:56:49.688006 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:56:49.730213 kubelet[2895]: E0307 01:56:49.716224 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8z84" podUID="96475dac-1179-4e59-8100-da5ef27d719a"
Mar 7 01:56:49.730213 kubelet[2895]: E0307 01:56:49.719645 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:56:50.121632 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-955d00ca35a8d296c4b6012db574b724c1f99d8e93892e41280dee4208f93653-rootfs.mount: Deactivated successfully.
Mar 7 01:56:50.174383 containerd[1587]: time="2026-03-07T01:56:50.174013682Z" level=info msg="shim disconnected" id=955d00ca35a8d296c4b6012db574b724c1f99d8e93892e41280dee4208f93653 namespace=k8s.io
Mar 7 01:56:50.174383 containerd[1587]: time="2026-03-07T01:56:50.174278090Z" level=warning msg="cleaning up after shim disconnected" id=955d00ca35a8d296c4b6012db574b724c1f99d8e93892e41280dee4208f93653 namespace=k8s.io
Mar 7 01:56:50.174383 containerd[1587]: time="2026-03-07T01:56:50.174316973Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:56:50.504665 kubelet[2895]: I0307 01:56:50.499289 2895 scope.go:117] "RemoveContainer" containerID="cd12bfb9149f3438212664d1ee6e5d7c04cc46422909d23e5f9f42490cfc705e"
Mar 7 01:56:50.505492 kubelet[2895]: I0307 01:56:50.505461 2895 scope.go:117] "RemoveContainer" containerID="955d00ca35a8d296c4b6012db574b724c1f99d8e93892e41280dee4208f93653"
Mar 7 01:56:50.514061 kubelet[2895]: E0307 01:56:50.509341 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:56:50.514061 kubelet[2895]: E0307 01:56:50.509650 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(e944e4cb17af904786c3a2e01e298498)\"" pod="kube-system/kube-scheduler-localhost" podUID="e944e4cb17af904786c3a2e01e298498"
Mar 7 01:56:50.897913 containerd[1587]: time="2026-03-07T01:56:50.895158010Z" level=info msg="RemoveContainer for \"cd12bfb9149f3438212664d1ee6e5d7c04cc46422909d23e5f9f42490cfc705e\""
Mar 7 01:56:51.065049 containerd[1587]: time="2026-03-07T01:56:51.064142623Z" level=info msg="RemoveContainer for \"cd12bfb9149f3438212664d1ee6e5d7c04cc46422909d23e5f9f42490cfc705e\" returns successfully"
Mar 7 01:56:51.971752 kubelet[2895]: E0307 01:56:51.967259 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8z84" podUID="96475dac-1179-4e59-8100-da5ef27d719a"
Mar 7 01:56:52.639067 kubelet[2895]: E0307 01:56:52.631589 2895 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 7 01:56:53.976599 kubelet[2895]: E0307 01:56:53.976231 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8z84" podUID="96475dac-1179-4e59-8100-da5ef27d719a"
Mar 7 01:56:55.628003 kubelet[2895]: I0307 01:56:55.626957 2895 scope.go:117] "RemoveContainer" containerID="955d00ca35a8d296c4b6012db574b724c1f99d8e93892e41280dee4208f93653"
Mar 7 01:56:55.678771 kubelet[2895]: E0307 01:56:55.676324 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:56:55.706952 kubelet[2895]: E0307 01:56:55.684157 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(e944e4cb17af904786c3a2e01e298498)\"" pod="kube-system/kube-scheduler-localhost" podUID="e944e4cb17af904786c3a2e01e298498"
Mar 7 01:56:55.706952 kubelet[2895]: E0307 01:56:55.657119 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8z84" podUID="96475dac-1179-4e59-8100-da5ef27d719a"
Mar 7 01:56:55.916111 systemd[1]: Started sshd@9-10.0.0.118:22-10.0.0.1:46944.service - OpenSSH per-connection server daemon (10.0.0.1:46944).
Mar 7 01:56:56.360743 sshd[3850]: Accepted publickey for core from 10.0.0.1 port 46944 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:56:56.370373 sshd[3850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:56:56.444172 systemd-logind[1563]: New session 10 of user core.
Mar 7 01:56:56.486213 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 7 01:56:57.768865 kubelet[2895]: E0307 01:56:57.756109 2895 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 7 01:56:58.769006 kubelet[2895]: E0307 01:56:58.753251 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8z84" podUID="96475dac-1179-4e59-8100-da5ef27d719a"
Mar 7 01:56:59.332317 sshd[3850]: pam_unix(sshd:session): session closed for user core
Mar 7 01:56:59.343235 systemd[1]: sshd@9-10.0.0.118:22-10.0.0.1:46944.service: Deactivated successfully.
Mar 7 01:56:59.369377 systemd-logind[1563]: Session 10 logged out. Waiting for processes to exit.
Mar 7 01:56:59.370495 systemd[1]: session-10.scope: Deactivated successfully.
Mar 7 01:56:59.383175 systemd-logind[1563]: Removed session 10.
Mar 7 01:57:00.970694 kubelet[2895]: E0307 01:57:00.970121 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8z84" podUID="96475dac-1179-4e59-8100-da5ef27d719a"
Mar 7 01:57:02.761473 kubelet[2895]: E0307 01:57:02.761407 2895 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 7 01:57:02.969994 kubelet[2895]: E0307 01:57:02.969632 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8z84" podUID="96475dac-1179-4e59-8100-da5ef27d719a"
Mar 7 01:57:04.352986 systemd[1]: Started sshd@10-10.0.0.118:22-10.0.0.1:52434.service - OpenSSH per-connection server daemon (10.0.0.1:52434).
Mar 7 01:57:04.577147 sshd[3879]: Accepted publickey for core from 10.0.0.1 port 52434 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:57:04.590120 sshd[3879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:57:04.634329 systemd-logind[1563]: New session 11 of user core.
Mar 7 01:57:04.661102 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 7 01:57:04.968324 kubelet[2895]: E0307 01:57:04.968009 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8z84" podUID="96475dac-1179-4e59-8100-da5ef27d719a"
Mar 7 01:57:05.358247 sshd[3879]: pam_unix(sshd:session): session closed for user core
Mar 7 01:57:05.390583 systemd[1]: sshd@10-10.0.0.118:22-10.0.0.1:52434.service: Deactivated successfully.
Mar 7 01:57:05.402057 containerd[1587]: time="2026-03-07T01:57:05.394256251Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671"
Mar 7 01:57:05.402057 containerd[1587]: time="2026-03-07T01:57:05.381741003Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:57:05.410973 systemd[1]: session-11.scope: Deactivated successfully.
Mar 7 01:57:05.415762 containerd[1587]: time="2026-03-07T01:57:05.415317325Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:57:05.418455 systemd-logind[1563]: Session 11 logged out. Waiting for processes to exit.
Mar 7 01:57:05.422507 systemd-logind[1563]: Removed session 11.
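The "back-off 10s restarting failed container" message for kube-scheduler above is the kubelet's crash-loop backoff: the delay starts at 10s and doubles on each consecutive restart failure up to a 5-minute cap, resetting once the container runs cleanly for long enough. A minimal sketch of that doubling schedule (the reset logic is omitted):

```go
package main

import (
	"fmt"
	"time"
)

// crashLoopDelay returns the kubelet-style backoff for the n-th
// consecutive restart failure: 10s, 20s, 40s, ... capped at 5m.
func crashLoopDelay(failures int) time.Duration {
	const (
		base     = 10 * time.Second
		maxDelay = 5 * time.Minute
	)
	d := base
	for i := 1; i < failures; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for n := 1; n <= 6; n++ {
		fmt.Printf("failure %d -> back-off %s\n", n, crashLoopDelay(n))
	}
}
```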
Mar 7 01:57:05.435276 containerd[1587]: time="2026-03-07T01:57:05.435162165Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:57:05.442669 containerd[1587]: time="2026-03-07T01:57:05.442611193Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 48.146827899s"
Mar 7 01:57:05.443052 containerd[1587]: time="2026-03-07T01:57:05.443019393Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\""
Mar 7 01:57:05.496369 containerd[1587]: time="2026-03-07T01:57:05.496141762Z" level=info msg="CreateContainer within sandbox \"a33786778d79dea8c0bfe55b0a90634e63ae5bdcd59895bc34596d589be60e08\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Mar 7 01:57:05.583597 containerd[1587]: time="2026-03-07T01:57:05.583183599Z" level=info msg="CreateContainer within sandbox \"a33786778d79dea8c0bfe55b0a90634e63ae5bdcd59895bc34596d589be60e08\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1d4e14712559466431d98c8e4c4559d43f71590ca0f23f587cff4cada6866b5c\""
Mar 7 01:57:05.591965 containerd[1587]: time="2026-03-07T01:57:05.586181190Z" level=info msg="StartContainer for \"1d4e14712559466431d98c8e4c4559d43f71590ca0f23f587cff4cada6866b5c\""
Mar 7 01:57:05.928017 containerd[1587]: time="2026-03-07T01:57:05.927566601Z" level=info msg="StartContainer for \"1d4e14712559466431d98c8e4c4559d43f71590ca0f23f587cff4cada6866b5c\" returns successfully"
Mar 7 01:57:06.978905 kubelet[2895]: E0307 01:57:06.976278 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8z84" podUID="96475dac-1179-4e59-8100-da5ef27d719a"
Mar 7 01:57:07.773037 kubelet[2895]: E0307 01:57:07.770530 2895 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 7 01:57:08.972327 kubelet[2895]: E0307 01:57:08.967609 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8z84" podUID="96475dac-1179-4e59-8100-da5ef27d719a"
Mar 7 01:57:08.972327 kubelet[2895]: I0307 01:57:08.970979 2895 scope.go:117] "RemoveContainer" containerID="955d00ca35a8d296c4b6012db574b724c1f99d8e93892e41280dee4208f93653"
Mar 7 01:57:08.972327 kubelet[2895]: E0307 01:57:08.971154 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:57:08.999719 containerd[1587]: time="2026-03-07T01:57:08.985618058Z" level=info msg="CreateContainer within sandbox \"e4ce1142970fabcf71a9aac5e8509a6b73ce933be60da115362e8dd32e1015ac\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:2,}"
Mar 7 01:57:09.068296 containerd[1587]: time="2026-03-07T01:57:09.067751407Z" level=info msg="CreateContainer within sandbox \"e4ce1142970fabcf71a9aac5e8509a6b73ce933be60da115362e8dd32e1015ac\" for &ContainerMetadata{Name:kube-scheduler,Attempt:2,} returns container id \"6d76e12cf9cdd59eefb340125da6be9eb258ef88563d59bec5f1ea2577ed4f8d\""
Mar 7 01:57:09.077105 containerd[1587]: time="2026-03-07T01:57:09.071427742Z" level=info msg="StartContainer for \"6d76e12cf9cdd59eefb340125da6be9eb258ef88563d59bec5f1ea2577ed4f8d\""
Mar 7 01:57:09.189713 systemd[1]: run-containerd-runc-k8s.io-6d76e12cf9cdd59eefb340125da6be9eb258ef88563d59bec5f1ea2577ed4f8d-runc.2kxGlv.mount: Deactivated successfully.
Mar 7 01:57:09.413754 containerd[1587]: time="2026-03-07T01:57:09.412745711Z" level=info msg="StartContainer for \"6d76e12cf9cdd59eefb340125da6be9eb258ef88563d59bec5f1ea2577ed4f8d\" returns successfully"
Mar 7 01:57:09.822521 containerd[1587]: time="2026-03-07T01:57:09.822418813Z" level=info msg="shim disconnected" id=1d4e14712559466431d98c8e4c4559d43f71590ca0f23f587cff4cada6866b5c namespace=k8s.io
Mar 7 01:57:09.826875 containerd[1587]: time="2026-03-07T01:57:09.823012343Z" level=warning msg="cleaning up after shim disconnected" id=1d4e14712559466431d98c8e4c4559d43f71590ca0f23f587cff4cada6866b5c namespace=k8s.io
Mar 7 01:57:09.826875 containerd[1587]: time="2026-03-07T01:57:09.823039915Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:57:10.032368 kubelet[2895]: E0307 01:57:10.031764 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:57:10.068542 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d4e14712559466431d98c8e4c4559d43f71590ca0f23f587cff4cada6866b5c-rootfs.mount: Deactivated successfully.
Mar 7 01:57:10.202760 containerd[1587]: time="2026-03-07T01:57:10.202278189Z" level=info msg="CreateContainer within sandbox \"a33786778d79dea8c0bfe55b0a90634e63ae5bdcd59895bc34596d589be60e08\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Mar 7 01:57:10.343498 containerd[1587]: time="2026-03-07T01:57:10.343408705Z" level=info msg="CreateContainer within sandbox \"a33786778d79dea8c0bfe55b0a90634e63ae5bdcd59895bc34596d589be60e08\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5e697c9d21930e80cf17498387aea4ee233ddadea76be4c4f7d0740b44985ab5\""
Mar 7 01:57:10.358670 containerd[1587]: time="2026-03-07T01:57:10.352058644Z" level=info msg="StartContainer for \"5e697c9d21930e80cf17498387aea4ee233ddadea76be4c4f7d0740b44985ab5\""
Mar 7 01:57:10.384408 systemd[1]: Started sshd@11-10.0.0.118:22-10.0.0.1:57514.service - OpenSSH per-connection server daemon (10.0.0.1:57514).
Mar 7 01:57:10.608982 sshd[3999]: Accepted publickey for core from 10.0.0.1 port 57514 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:57:10.619426 sshd[3999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:57:10.645123 systemd-logind[1563]: New session 12 of user core.
Mar 7 01:57:10.664232 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 7 01:57:10.681721 containerd[1587]: time="2026-03-07T01:57:10.681473968Z" level=info msg="StartContainer for \"5e697c9d21930e80cf17498387aea4ee233ddadea76be4c4f7d0740b44985ab5\" returns successfully"
Mar 7 01:57:10.967755 kubelet[2895]: E0307 01:57:10.966908 2895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8z84" podUID="96475dac-1179-4e59-8100-da5ef27d719a"
Mar 7 01:57:11.104199 kubelet[2895]: E0307 01:57:11.095621 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:57:11.281403 kubelet[2895]: I0307 01:57:11.278170 2895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jgmjr" podStartSLOduration=7.483920959 podStartE2EDuration="2m19.278104958s" podCreationTimestamp="2026-03-07 01:54:52 +0000 UTC" firstStartedPulling="2026-03-07 01:54:53.660254644 +0000 UTC m=+39.442554654" lastFinishedPulling="2026-03-07 01:57:05.454438643 +0000 UTC m=+171.236738653" observedRunningTime="2026-03-07 01:57:11.184539627 +0000 UTC m=+176.966839648" watchObservedRunningTime="2026-03-07 01:57:11.278104958 +0000 UTC m=+177.060404978"
Mar 7 01:57:11.534042 systemd[1]: run-containerd-runc-k8s.io-5e697c9d21930e80cf17498387aea4ee233ddadea76be4c4f7d0740b44985ab5-runc.C0K3qg.mount: Deactivated successfully.
Mar 7 01:57:11.556638 sshd[3999]: pam_unix(sshd:session): session closed for user core
Mar 7 01:57:11.586414 systemd[1]: sshd@11-10.0.0.118:22-10.0.0.1:57514.service: Deactivated successfully.
Mar 7 01:57:11.614460 systemd[1]: session-12.scope: Deactivated successfully.
Mar 7 01:57:11.621702 systemd-logind[1563]: Session 12 logged out. Waiting for processes to exit.
Mar 7 01:57:11.633126 systemd-logind[1563]: Removed session 12.
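The pod_startup_latency_tracker line for calico-node-jgmjr is worth decoding: podStartE2EDuration="2m19.278104958s" is observedRunningTime (01:57:11) minus podCreationTimestamp (01:54:52), and almost all of it is the image-pull window from firstStartedPulling (01:54:53) to lastFinishedPulling (01:57:05). A quick whole-second check of that arithmetic:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse(time.RFC3339, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	// Timestamps taken from the log, truncated to whole seconds.
	created := parse("2026-03-07T01:54:52Z")
	running := parse("2026-03-07T01:57:11Z")
	pullStart := parse("2026-03-07T01:54:53Z")
	pullEnd := parse("2026-03-07T01:57:05Z")

	fmt.Println("e2e startup:", running.Sub(created))   // 2m19s
	fmt.Println("image pulls:", pullEnd.Sub(pullStart)) // 2m12s
}
```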
Mar 7 01:57:13.019029 containerd[1587]: time="2026-03-07T01:57:13.018657543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c8z84,Uid:96475dac-1179-4e59-8100-da5ef27d719a,Namespace:calico-system,Attempt:0,}"
Mar 7 01:57:14.884700 systemd-networkd[1251]: cali3117bbf4864: Link UP
Mar 7 01:57:14.885335 systemd-networkd[1251]: cali3117bbf4864: Gained carrier
Mar 7 01:57:14.984777 kubelet[2895]: E0307 01:57:14.967382 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:57:14.994172 containerd[1587]: 2026-03-07 01:57:13.869 [ERROR][4105] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Mar 7 01:57:14.994172 containerd[1587]: 2026-03-07 01:57:13.989 [INFO][4105] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--c8z84-eth0 csi-node-driver- calico-system 96475dac-1179-4e59-8100-da5ef27d719a 844 0 2026-03-07 01:54:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-c8z84 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3117bbf4864 [] [] }} ContainerID="1dd88f410ae779b46da777182ec12207392bb104c82883b9b7b3897ce8262cb5" Namespace="calico-system" Pod="csi-node-driver-c8z84" WorkloadEndpoint="localhost-k8s-csi--node--driver--c8z84-"
Mar 7 01:57:14.994172 containerd[1587]: 2026-03-07 01:57:13.989 [INFO][4105] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1dd88f410ae779b46da777182ec12207392bb104c82883b9b7b3897ce8262cb5" Namespace="calico-system" Pod="csi-node-driver-c8z84" WorkloadEndpoint="localhost-k8s-csi--node--driver--c8z84-eth0"
Mar 7 01:57:14.994172 containerd[1587]: 2026-03-07 01:57:14.258 [INFO][4129] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1dd88f410ae779b46da777182ec12207392bb104c82883b9b7b3897ce8262cb5" HandleID="k8s-pod-network.1dd88f410ae779b46da777182ec12207392bb104c82883b9b7b3897ce8262cb5" Workload="localhost-k8s-csi--node--driver--c8z84-eth0"
Mar 7 01:57:14.994172 containerd[1587]: 2026-03-07 01:57:14.306 [INFO][4129] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1dd88f410ae779b46da777182ec12207392bb104c82883b9b7b3897ce8262cb5" HandleID="k8s-pod-network.1dd88f410ae779b46da777182ec12207392bb104c82883b9b7b3897ce8262cb5" Workload="localhost-k8s-csi--node--driver--c8z84-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002da020), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-c8z84", "timestamp":"2026-03-07 01:57:14.258129138 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00057d600)}
Mar 7 01:57:14.994172 containerd[1587]: 2026-03-07 01:57:14.306 [INFO][4129] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:57:14.994172 containerd[1587]: 2026-03-07 01:57:14.306 [INFO][4129] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:57:14.994172 containerd[1587]: 2026-03-07 01:57:14.306 [INFO][4129] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Mar 7 01:57:14.994172 containerd[1587]: 2026-03-07 01:57:14.327 [INFO][4129] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1dd88f410ae779b46da777182ec12207392bb104c82883b9b7b3897ce8262cb5" host="localhost"
Mar 7 01:57:14.994172 containerd[1587]: 2026-03-07 01:57:14.371 [INFO][4129] ipam/ipam.go 409: Looking up existing affinities for host host="localhost"
Mar 7 01:57:14.994172 containerd[1587]: 2026-03-07 01:57:14.410 [INFO][4129] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost"
Mar 7 01:57:14.994172 containerd[1587]: 2026-03-07 01:57:14.429 [INFO][4129] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Mar 7 01:57:14.994172 containerd[1587]: 2026-03-07 01:57:14.440 [INFO][4129] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Mar 7 01:57:14.994172 containerd[1587]: 2026-03-07 01:57:14.440 [INFO][4129] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1dd88f410ae779b46da777182ec12207392bb104c82883b9b7b3897ce8262cb5" host="localhost"
Mar 7 01:57:14.994172 containerd[1587]: 2026-03-07 01:57:14.458 [INFO][4129] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1dd88f410ae779b46da777182ec12207392bb104c82883b9b7b3897ce8262cb5
Mar 7 01:57:14.994172 containerd[1587]: 2026-03-07 01:57:14.484 [INFO][4129] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1dd88f410ae779b46da777182ec12207392bb104c82883b9b7b3897ce8262cb5" host="localhost"
Mar 7 01:57:14.994172 containerd[1587]: 2026-03-07 01:57:14.530 [INFO][4129] ipam/ipam.go 1276: Failed to update block block=192.168.88.128/26 error=update conflict: IPAMBlock(192-168-88-128-26) handle="k8s-pod-network.1dd88f410ae779b46da777182ec12207392bb104c82883b9b7b3897ce8262cb5" host="localhost"
Mar 7 01:57:14.994172 containerd[1587]: 2026-03-07 01:57:14.632 [INFO][4129] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1dd88f410ae779b46da777182ec12207392bb104c82883b9b7b3897ce8262cb5" host="localhost"
Mar 7 01:57:14.994172 containerd[1587]: 2026-03-07 01:57:14.656 [INFO][4129] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1dd88f410ae779b46da777182ec12207392bb104c82883b9b7b3897ce8262cb5
Mar 7 01:57:14.994172 containerd[1587]: 2026-03-07 01:57:14.693 [INFO][4129] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1dd88f410ae779b46da777182ec12207392bb104c82883b9b7b3897ce8262cb5" host="localhost"
Mar 7 01:57:14.994172 containerd[1587]: 2026-03-07 01:57:14.745 [INFO][4129] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.1dd88f410ae779b46da777182ec12207392bb104c82883b9b7b3897ce8262cb5" host="localhost"
Mar 7 01:57:14.994172 containerd[1587]: 2026-03-07 01:57:14.745 [INFO][4129] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.1dd88f410ae779b46da777182ec12207392bb104c82883b9b7b3897ce8262cb5" host="localhost"
Mar 7 01:57:14.994172 containerd[1587]: 2026-03-07 01:57:14.745 [INFO][4129] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:57:14.994172 containerd[1587]: 2026-03-07 01:57:14.745 [INFO][4129] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="1dd88f410ae779b46da777182ec12207392bb104c82883b9b7b3897ce8262cb5" HandleID="k8s-pod-network.1dd88f410ae779b46da777182ec12207392bb104c82883b9b7b3897ce8262cb5" Workload="localhost-k8s-csi--node--driver--c8z84-eth0"
Mar 7 01:57:15.029881 containerd[1587]: 2026-03-07 01:57:14.773 [INFO][4105] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1dd88f410ae779b46da777182ec12207392bb104c82883b9b7b3897ce8262cb5" Namespace="calico-system" Pod="csi-node-driver-c8z84" WorkloadEndpoint="localhost-k8s-csi--node--driver--c8z84-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--c8z84-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"96475dac-1179-4e59-8100-da5ef27d719a", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 54, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-c8z84", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3117bbf4864", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 01:57:15.029881 containerd[1587]: 2026-03-07 01:57:14.773 [INFO][4105] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="1dd88f410ae779b46da777182ec12207392bb104c82883b9b7b3897ce8262cb5" Namespace="calico-system" Pod="csi-node-driver-c8z84" WorkloadEndpoint="localhost-k8s-csi--node--driver--c8z84-eth0"
Mar 7 01:57:15.029881 containerd[1587]: 2026-03-07 01:57:14.773 [INFO][4105] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3117bbf4864 ContainerID="1dd88f410ae779b46da777182ec12207392bb104c82883b9b7b3897ce8262cb5" Namespace="calico-system" Pod="csi-node-driver-c8z84" WorkloadEndpoint="localhost-k8s-csi--node--driver--c8z84-eth0"
Mar 7 01:57:15.029881 containerd[1587]: 2026-03-07 01:57:14.890 [INFO][4105] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1dd88f410ae779b46da777182ec12207392bb104c82883b9b7b3897ce8262cb5" Namespace="calico-system" Pod="csi-node-driver-c8z84" WorkloadEndpoint="localhost-k8s-csi--node--driver--c8z84-eth0"
Mar 7 01:57:15.029881 containerd[1587]: 2026-03-07 01:57:14.891 [INFO][4105] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1dd88f410ae779b46da777182ec12207392bb104c82883b9b7b3897ce8262cb5" Namespace="calico-system" Pod="csi-node-driver-c8z84" WorkloadEndpoint="localhost-k8s-csi--node--driver--c8z84-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--c8z84-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"96475dac-1179-4e59-8100-da5ef27d719a", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 54, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1dd88f410ae779b46da777182ec12207392bb104c82883b9b7b3897ce8262cb5", Pod:"csi-node-driver-c8z84", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3117bbf4864", MAC:"fe:56:b5:40:df:ca", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 01:57:15.029881 containerd[1587]: 2026-03-07 01:57:14.981 [INFO][4105] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1dd88f410ae779b46da777182ec12207392bb104c82883b9b7b3897ce8262cb5" Namespace="calico-system" Pod="csi-node-driver-c8z84" WorkloadEndpoint="localhost-k8s-csi--node--driver--c8z84-eth0"
Mar 7 01:57:15.276156 containerd[1587]: time="2026-03-07T01:57:15.274234470Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:57:15.276156 containerd[1587]: time="2026-03-07T01:57:15.274469504Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:57:15.276156 containerd[1587]: time="2026-03-07T01:57:15.274514930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:57:15.276156 containerd[1587]: time="2026-03-07T01:57:15.274776483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:57:15.504636 systemd-resolved[1474]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 7 01:57:15.692460 containerd[1587]: time="2026-03-07T01:57:15.692406536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c8z84,Uid:96475dac-1179-4e59-8100-da5ef27d719a,Namespace:calico-system,Attempt:0,} returns sandbox id \"1dd88f410ae779b46da777182ec12207392bb104c82883b9b7b3897ce8262cb5\""
Mar 7 01:57:15.714341 containerd[1587]: time="2026-03-07T01:57:15.712679248Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\""
Mar 7 01:57:16.592602 systemd[1]: Started sshd@12-10.0.0.118:22-10.0.0.1:57548.service - OpenSSH per-connection server daemon (10.0.0.1:57548).
Mar 7 01:57:16.612691 systemd-networkd[1251]: cali3117bbf4864: Gained IPv6LL
Mar 7 01:57:16.913571 sshd[4296]: Accepted publickey for core from 10.0.0.1 port 57548 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:57:16.913362 sshd[4296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:57:16.972719 systemd-logind[1563]: New session 13 of user core.
Mar 7 01:57:17.075574 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 7 01:57:17.186145 systemd-journald[1139]: Under memory pressure, flushing caches.
Mar 7 01:57:17.120885 systemd-resolved[1474]: Under memory pressure, flushing caches.
Mar 7 01:57:17.120929 systemd-resolved[1474]: Flushed all caches.
Mar 7 01:57:17.848636 sshd[4296]: pam_unix(sshd:session): session closed for user core
Mar 7 01:57:17.865023 systemd-logind[1563]: Session 13 logged out. Waiting for processes to exit.
Mar 7 01:57:17.868595 systemd[1]: sshd@12-10.0.0.118:22-10.0.0.1:57548.service: Deactivated successfully.
Mar 7 01:57:17.898991 systemd[1]: session-13.scope: Deactivated successfully.
Mar 7 01:57:17.907925 systemd-logind[1563]: Removed session 13.
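The IPAM trace above shows the interesting part of Calico's address assignment: the plugin claims 192.168.88.129/26 from the node's affine block 192.168.88.128/26, the first write fails with "update conflict: IPAMBlock(192-168-88-128-26)", and the allocator simply re-reads the block and retries. That is optimistic concurrency on the datastore revision rather than a long-held lock. A generic sketch of the pattern with toy types (not Calico's actual code):

```go
package main

import (
	"errors"
	"fmt"
)

// ipamBlock is a toy stand-in for Calico's IPAMBlock resource: an
// allocation set plus the datastore revision used for compare-and-swap.
type ipamBlock struct {
	revision  int
	allocated map[string]bool
}

var errConflict = errors.New("update conflict: IPAMBlock")

// writeBlock models the datastore update: it rejects the write if the
// block was modified since it was read (revision mismatch).
func writeBlock(store *ipamBlock, readRev int, ip string) error {
	if store.revision != readRev {
		return errConflict
	}
	store.allocated[ip] = true
	store.revision++
	return nil
}

// claimIP mirrors the log's "Writing block in order to claim IPs" /
// "Failed to update block" / retry sequence: re-read and try again on
// conflict instead of serializing all writers behind a lock.
func claimIP(store *ipamBlock, ip string, retries int) error {
	for i := 0; i < retries; i++ {
		readRev := store.revision // snapshot of the block we read
		if store.allocated[ip] {
			return fmt.Errorf("%s already assigned", ip)
		}
		if err := writeBlock(store, readRev, ip); err == nil {
			return nil
		}
	}
	return fmt.Errorf("%s: too many conflicts", ip)
}

func main() {
	b := &ipamBlock{allocated: map[string]bool{}}
	fmt.Println(claimIP(b, "192.168.88.129", 3)) // <nil>
}
```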
Mar 7 01:57:18.106943 kernel: calico-node[4303]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Mar 7 01:57:18.517323 containerd[1587]: time="2026-03-07T01:57:18.517027547Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:57:18.520483 containerd[1587]: time="2026-03-07T01:57:18.520326701Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502"
Mar 7 01:57:18.530296 containerd[1587]: time="2026-03-07T01:57:18.527718759Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:57:18.541228 containerd[1587]: time="2026-03-07T01:57:18.539274345Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:57:18.541228 containerd[1587]: time="2026-03-07T01:57:18.540513605Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 2.82777753s"
Mar 7 01:57:18.541228 containerd[1587]: time="2026-03-07T01:57:18.540559151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\""
Mar 7 01:57:18.560877 containerd[1587]: time="2026-03-07T01:57:18.560510335Z" level=info msg="CreateContainer within sandbox \"1dd88f410ae779b46da777182ec12207392bb104c82883b9b7b3897ce8262cb5\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Mar 7 01:57:18.676744 containerd[1587]: time="2026-03-07T01:57:18.676192233Z" level=info msg="CreateContainer within sandbox \"1dd88f410ae779b46da777182ec12207392bb104c82883b9b7b3897ce8262cb5\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ef2a65b501f872befcdccd16b715169a391091cca041baf04b5ac956a1eaa3e3\""
Mar 7 01:57:18.683467 containerd[1587]: time="2026-03-07T01:57:18.678529600Z" level=info msg="StartContainer for \"ef2a65b501f872befcdccd16b715169a391091cca041baf04b5ac956a1eaa3e3\""
Mar 7 01:57:18.865173 kubelet[2895]: E0307 01:57:18.861724 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:57:18.927669 systemd[1]: run-containerd-runc-k8s.io-ef2a65b501f872befcdccd16b715169a391091cca041baf04b5ac956a1eaa3e3-runc.4yWt7X.mount: Deactivated successfully.
Mar 7 01:57:19.157612 containerd[1587]: time="2026-03-07T01:57:19.155017378Z" level=info msg="StartContainer for \"ef2a65b501f872befcdccd16b715169a391091cca041baf04b5ac956a1eaa3e3\" returns successfully"
Mar 7 01:57:19.164756 containerd[1587]: time="2026-03-07T01:57:19.164452902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\""
Mar 7 01:57:19.193514 systemd-journald[1139]: Under memory pressure, flushing caches.
Mar 7 01:57:19.172174 systemd-resolved[1474]: Under memory pressure, flushing caches.
Mar 7 01:57:19.172185 systemd-resolved[1474]: Flushed all caches.
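The kernel line about memfd_create() at the top of this stretch is a warning added around Linux 6.3: callers are expected to pass either MFD_EXEC or MFD_NOEXEC_SEAL so the executability of the memfd is an explicit choice, and calico-node passed neither. A hedged sketch of the explicit call via golang.org/x/sys/unix, assuming an x/sys version recent enough to define MFD_NOEXEC_SEAL:

```go
package main

import (
	"fmt"
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	// Request a non-executable, sealed-as-non-exec memfd explicitly,
	// which avoids the kernel's "called without MFD_EXEC or
	// MFD_NOEXEC_SEAL set" warning seen in the log above.
	fd, err := unix.MemfdCreate("scratch", unix.MFD_CLOEXEC|unix.MFD_NOEXEC_SEAL)
	if err != nil {
		log.Fatal(err)
	}
	defer unix.Close(fd)
	fmt.Println("memfd:", fd)
}
```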
Mar 7 01:57:19.978042 kubelet[2895]: E0307 01:57:19.977993 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:57:21.616854 systemd-networkd[1251]: vxlan.calico: Link UP
Mar 7 01:57:21.616867 systemd-networkd[1251]: vxlan.calico: Gained carrier
Mar 7 01:57:22.863468 containerd[1587]: time="2026-03-07T01:57:22.863400330Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:57:22.879495 containerd[1587]: time="2026-03-07T01:57:22.868405992Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317"
Mar 7 01:57:22.905100 containerd[1587]: time="2026-03-07T01:57:22.904609689Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:57:22.911993 systemd[1]: Started sshd@13-10.0.0.118:22-10.0.0.1:57678.service - OpenSSH per-connection server daemon (10.0.0.1:57678).
Mar 7 01:57:22.919072 containerd[1587]: time="2026-03-07T01:57:22.917104634Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:57:22.934535 containerd[1587]: time="2026-03-07T01:57:22.932077829Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 3.76757322s"
Mar 7 01:57:22.934535 containerd[1587]: time="2026-03-07T01:57:22.932136439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\""
Mar 7 01:57:23.051929 containerd[1587]: time="2026-03-07T01:57:23.050553953Z" level=info msg="CreateContainer within sandbox \"1dd88f410ae779b46da777182ec12207392bb104c82883b9b7b3897ce8262cb5\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Mar 7 01:57:23.202212 containerd[1587]: time="2026-03-07T01:57:23.188729114Z" level=info msg="CreateContainer within sandbox \"1dd88f410ae779b46da777182ec12207392bb104c82883b9b7b3897ce8262cb5\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"84d42d438f6089e6de1854ab3913039208d9dc16d5c7067309a65f398694fc22\""
Mar 7 01:57:23.221588 containerd[1587]: time="2026-03-07T01:57:23.218485412Z" level=info msg="StartContainer for \"84d42d438f6089e6de1854ab3913039208d9dc16d5c7067309a65f398694fc22\""
Mar 7 01:57:23.539093 sshd[4445]: Accepted publickey for core from 10.0.0.1 port 57678 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:57:23.559930 sshd[4445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:57:23.662973 systemd-networkd[1251]: vxlan.calico: Gained IPv6LL
Mar 7 01:57:23.669926 systemd-logind[1563]: New session 14 of user core.
Mar 7 01:57:23.693063 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 7 01:57:24.118345 containerd[1587]: time="2026-03-07T01:57:24.115107602Z" level=info msg="StartContainer for \"84d42d438f6089e6de1854ab3913039208d9dc16d5c7067309a65f398694fc22\" returns successfully"
Mar 7 01:57:24.293331 sshd[4445]: pam_unix(sshd:session): session closed for user core
Mar 7 01:57:24.306453 systemd[1]: sshd@13-10.0.0.118:22-10.0.0.1:57678.service: Deactivated successfully.
Mar 7 01:57:24.320728 systemd[1]: session-14.scope: Deactivated successfully.
Mar 7 01:57:24.346154 systemd-logind[1563]: Session 14 logged out. Waiting for processes to exit.
Mar 7 01:57:24.351261 systemd-logind[1563]: Removed session 14.
Mar 7 01:57:24.672956 kubelet[2895]: I0307 01:57:24.671942 2895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-c8z84" podStartSLOduration=145.394455139 podStartE2EDuration="2m32.671922475s" podCreationTimestamp="2026-03-07 01:54:52 +0000 UTC" firstStartedPulling="2026-03-07 01:57:15.710115669 +0000 UTC m=+181.492415679" lastFinishedPulling="2026-03-07 01:57:22.987582995 +0000 UTC m=+188.769883015" observedRunningTime="2026-03-07 01:57:24.667681815 +0000 UTC m=+190.449981975" watchObservedRunningTime="2026-03-07 01:57:24.671922475 +0000 UTC m=+190.454222485"
Mar 7 01:57:24.766927 kubelet[2895]: I0307 01:57:24.764138 2895 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Mar 7 01:57:24.770152 kubelet[2895]: I0307 01:57:24.769236 2895 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Mar 7 01:57:28.903901 kubelet[2895]: E0307 01:57:28.897008 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:57:29.332407 systemd[1]: Started sshd@14-10.0.0.118:22-10.0.0.1:57702.service - OpenSSH per-connection server daemon (10.0.0.1:57702).
Mar 7 01:57:29.458611 sshd[4560]: Accepted publickey for core from 10.0.0.1 port 57702 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:57:29.465002 sshd[4560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:57:29.500712 systemd-logind[1563]: New session 15 of user core.
Mar 7 01:57:29.509949 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 7 01:57:29.630406 kubelet[2895]: E0307 01:57:29.629102 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:57:29.990563 sshd[4560]: pam_unix(sshd:session): session closed for user core
Mar 7 01:57:30.026563 systemd[1]: sshd@14-10.0.0.118:22-10.0.0.1:57702.service: Deactivated successfully.
Mar 7 01:57:30.047903 systemd[1]: session-15.scope: Deactivated successfully.
Mar 7 01:57:30.065234 systemd-logind[1563]: Session 15 logged out. Waiting for processes to exit.
Mar 7 01:57:30.086522 systemd-logind[1563]: Removed session 15.
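The csi_plugin.go lines above record kubelet's plugin-registration handshake: it watches the registration directory, dials the plugin's socket, and calls GetInfo on the pluginregistration gRPC service; here csi.tigera.io advertises version 1.0.0 and is accepted. A minimal sketch of the server side of that handshake, assuming the k8s.io/kubelet pluginregistration/v1 API and a node-driver-registrar-style layout; the registration socket name is illustrative, while the endpoint and version strings come from the log:

```go
package main

import (
	"context"
	"log"
	"net"

	"google.golang.org/grpc"
	registerapi "k8s.io/kubelet/pkg/apis/pluginregistration/v1"
)

// server answers the kubelet's registration probe for csi.tigera.io.
type server struct{}

func (server) GetInfo(ctx context.Context, req *registerapi.InfoRequest) (*registerapi.PluginInfo, error) {
	return &registerapi.PluginInfo{
		Type:              registerapi.CSIPlugin,
		Name:              "csi.tigera.io",
		Endpoint:          "/var/lib/kubelet/plugins/csi.tigera.io/csi.sock",
		SupportedVersions: []string{"1.0.0"},
	}, nil
}

func (server) NotifyRegistrationStatus(ctx context.Context, status *registerapi.RegistrationStatus) (*registerapi.RegistrationStatusResponse, error) {
	if !status.PluginRegistered {
		log.Printf("kubelet rejected registration: %s", status.Error)
	}
	return &registerapi.RegistrationStatusResponse{}, nil
}

func main() {
	// kubelet watches this directory for new registration sockets.
	l, err := net.Listen("unix", "/var/lib/kubelet/plugins_registry/csi.tigera.io-reg.sock")
	if err != nil {
		log.Fatal(err)
	}
	g := grpc.NewServer()
	registerapi.RegisterRegistrationServer(g, server{})
	log.Fatal(g.Serve(l))
}
```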
Mar 7 01:57:30.237199 kubelet[2895]: I0307 01:57:30.230306 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf75eca3-7bc4-4592-b317-3859d6cdc619-tigera-ca-bundle\") pod \"calico-kube-controllers-64b76d49bd-sx6p4\" (UID: \"bf75eca3-7bc4-4592-b317-3859d6cdc619\") " pod="calico-system/calico-kube-controllers-64b76d49bd-sx6p4"
Mar 7 01:57:30.237199 kubelet[2895]: I0307 01:57:30.230448 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnn69\" (UniqueName: \"kubernetes.io/projected/bf75eca3-7bc4-4592-b317-3859d6cdc619-kube-api-access-tnn69\") pod \"calico-kube-controllers-64b76d49bd-sx6p4\" (UID: \"bf75eca3-7bc4-4592-b317-3859d6cdc619\") " pod="calico-system/calico-kube-controllers-64b76d49bd-sx6p4"
Mar 7 01:57:30.334262 kubelet[2895]: I0307 01:57:30.331527 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lk6g\" (UniqueName: \"kubernetes.io/projected/62f6ce31-11a9-4768-b2ff-502e5e5b400a-kube-api-access-7lk6g\") pod \"calico-apiserver-647c78759d-8p5hf\" (UID: \"62f6ce31-11a9-4768-b2ff-502e5e5b400a\") " pod="calico-system/calico-apiserver-647c78759d-8p5hf"
Mar 7 01:57:30.334262 kubelet[2895]: I0307 01:57:30.331608 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a27a6082-6714-4326-a18a-a16d7c80709b-config-volume\") pod \"coredns-674b8bbfcf-qw6r2\" (UID: \"a27a6082-6714-4326-a18a-a16d7c80709b\") " pod="kube-system/coredns-674b8bbfcf-qw6r2"
Mar 7 01:57:30.334262 kubelet[2895]: I0307 01:57:30.332520 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4b4abf53-5db7-456d-acdd-25832f296097-calico-apiserver-certs\") pod \"calico-apiserver-647c78759d-qnq6w\" (UID: \"4b4abf53-5db7-456d-acdd-25832f296097\") " pod="calico-system/calico-apiserver-647c78759d-qnq6w"
Mar 7 01:57:30.334262 kubelet[2895]: I0307 01:57:30.332606 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f26mv\" (UniqueName: \"kubernetes.io/projected/a27a6082-6714-4326-a18a-a16d7c80709b-kube-api-access-f26mv\") pod \"coredns-674b8bbfcf-qw6r2\" (UID: \"a27a6082-6714-4326-a18a-a16d7c80709b\") " pod="kube-system/coredns-674b8bbfcf-qw6r2"
Mar 7 01:57:30.334262 kubelet[2895]: I0307 01:57:30.332637 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm9b8\" (UniqueName: \"kubernetes.io/projected/4b4abf53-5db7-456d-acdd-25832f296097-kube-api-access-lm9b8\") pod \"calico-apiserver-647c78759d-qnq6w\" (UID: \"4b4abf53-5db7-456d-acdd-25832f296097\") " pod="calico-system/calico-apiserver-647c78759d-qnq6w"
Mar 7 01:57:30.334626 kubelet[2895]: I0307 01:57:30.332713 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/62f6ce31-11a9-4768-b2ff-502e5e5b400a-calico-apiserver-certs\") pod \"calico-apiserver-647c78759d-8p5hf\" (UID: \"62f6ce31-11a9-4768-b2ff-502e5e5b400a\") " pod="calico-system/calico-apiserver-647c78759d-8p5hf"
Mar 7 01:57:30.455874 kubelet[2895]: I0307 01:57:30.453126 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/071407a1-af3e-42ed-8ee8-ae1d7a7c3681-config-volume\") pod \"coredns-674b8bbfcf-cgnsk\" (UID: \"071407a1-af3e-42ed-8ee8-ae1d7a7c3681\") " pod="kube-system/coredns-674b8bbfcf-cgnsk"
Mar 7 01:57:30.455874 kubelet[2895]: I0307 01:57:30.453253 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slhxl\" (UniqueName: \"kubernetes.io/projected/f5c47317-c9cf-4ee3-8c60-5a9bc1b9cc0e-kube-api-access-slhxl\") pod \"goldmane-5b85766d88-w5svg\" (UID: \"f5c47317-c9cf-4ee3-8c60-5a9bc1b9cc0e\") " pod="calico-system/goldmane-5b85766d88-w5svg"
Mar 7 01:57:30.455874 kubelet[2895]: I0307 01:57:30.453299 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f5c47317-c9cf-4ee3-8c60-5a9bc1b9cc0e-goldmane-key-pair\") pod \"goldmane-5b85766d88-w5svg\" (UID: \"f5c47317-c9cf-4ee3-8c60-5a9bc1b9cc0e\") " pod="calico-system/goldmane-5b85766d88-w5svg"
Mar 7 01:57:30.455874 kubelet[2895]: I0307 01:57:30.453413 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/f595f30d-a60b-4bb7-86e8-dc0eba5c18fa-nginx-config\") pod \"whisker-564b587fc5-hz8rt\" (UID: \"f595f30d-a60b-4bb7-86e8-dc0eba5c18fa\") " pod="calico-system/whisker-564b587fc5-hz8rt"
Mar 7 01:57:30.455874 kubelet[2895]: I0307 01:57:30.453567 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f595f30d-a60b-4bb7-86e8-dc0eba5c18fa-whisker-backend-key-pair\") pod \"whisker-564b587fc5-hz8rt\" (UID: \"f595f30d-a60b-4bb7-86e8-dc0eba5c18fa\") " pod="calico-system/whisker-564b587fc5-hz8rt"
Mar 7 01:57:30.456178 kubelet[2895]: I0307 01:57:30.453599 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f595f30d-a60b-4bb7-86e8-dc0eba5c18fa-whisker-ca-bundle\") pod \"whisker-564b587fc5-hz8rt\" (UID: \"f595f30d-a60b-4bb7-86e8-dc0eba5c18fa\") " pod="calico-system/whisker-564b587fc5-hz8rt"
Mar 7 01:57:30.456178 kubelet[2895]: I0307 01:57:30.453625 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlfnr\" (UniqueName: \"kubernetes.io/projected/f595f30d-a60b-4bb7-86e8-dc0eba5c18fa-kube-api-access-vlfnr\") pod \"whisker-564b587fc5-hz8rt\" (UID: \"f595f30d-a60b-4bb7-86e8-dc0eba5c18fa\") " pod="calico-system/whisker-564b587fc5-hz8rt"
Mar 7 01:57:30.456178 kubelet[2895]: I0307 01:57:30.453732 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5c47317-c9cf-4ee3-8c60-5a9bc1b9cc0e-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-w5svg\" (UID: \"f5c47317-c9cf-4ee3-8c60-5a9bc1b9cc0e\") " pod="calico-system/goldmane-5b85766d88-w5svg"
Mar 7 01:57:30.456178 kubelet[2895]: I0307 01:57:30.453916 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5c47317-c9cf-4ee3-8c60-5a9bc1b9cc0e-config\") pod \"goldmane-5b85766d88-w5svg\" (UID: \"f5c47317-c9cf-4ee3-8c60-5a9bc1b9cc0e\") " pod="calico-system/goldmane-5b85766d88-w5svg"
Mar 7 01:57:30.456178 kubelet[2895]: I0307 01:57:30.454164 2895 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf2sd\" (UniqueName: \"kubernetes.io/projected/071407a1-af3e-42ed-8ee8-ae1d7a7c3681-kube-api-access-jf2sd\") pod \"coredns-674b8bbfcf-cgnsk\" (UID: \"071407a1-af3e-42ed-8ee8-ae1d7a7c3681\") " pod="kube-system/coredns-674b8bbfcf-cgnsk"
Mar 7 01:57:30.552211 containerd[1587]: time="2026-03-07T01:57:30.552001762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64b76d49bd-sx6p4,Uid:bf75eca3-7bc4-4592-b317-3859d6cdc619,Namespace:calico-system,Attempt:0,}"
Mar 7 01:57:30.622425 kubelet[2895]: E0307 01:57:30.619903 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:57:30.622561 containerd[1587]: time="2026-03-07T01:57:30.621003165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qw6r2,Uid:a27a6082-6714-4326-a18a-a16d7c80709b,Namespace:kube-system,Attempt:0,}"
Mar 7 01:57:30.622561 containerd[1587]: time="2026-03-07T01:57:30.622327047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-647c78759d-qnq6w,Uid:4b4abf53-5db7-456d-acdd-25832f296097,Namespace:calico-system,Attempt:0,}"
Mar 7 01:57:30.686275 containerd[1587]: time="2026-03-07T01:57:30.686173109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-647c78759d-8p5hf,Uid:62f6ce31-11a9-4768-b2ff-502e5e5b400a,Namespace:calico-system,Attempt:0,}"
Mar 7 01:57:30.750531 containerd[1587]: time="2026-03-07T01:57:30.749520902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-564b587fc5-hz8rt,Uid:f595f30d-a60b-4bb7-86e8-dc0eba5c18fa,Namespace:calico-system,Attempt:0,}"
Mar 7 01:57:30.795669 containerd[1587]: time="2026-03-07T01:57:30.795155952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-w5svg,Uid:f5c47317-c9cf-4ee3-8c60-5a9bc1b9cc0e,Namespace:calico-system,Attempt:0,}"
Mar 7 01:57:30.971166 kubelet[2895]: E0307 01:57:30.971025 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:57:30.989129 kubelet[2895]: E0307 01:57:30.982936 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:57:30.992461 containerd[1587]: time="2026-03-07T01:57:30.986622090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cgnsk,Uid:071407a1-af3e-42ed-8ee8-ae1d7a7c3681,Namespace:kube-system,Attempt:0,}"
Mar 7 01:57:32.009489 systemd-networkd[1251]: calid2ccb7bd6b3: Link UP
Mar 7 01:57:32.039461 systemd-networkd[1251]: calid2ccb7bd6b3: Gained carrier
Mar 7 01:57:32.230092 containerd[1587]: 2026-03-07 01:57:31.195 [INFO][4591] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--64b76d49bd--sx6p4-eth0 calico-kube-controllers-64b76d49bd- calico-system bf75eca3-7bc4-4592-b317-3859d6cdc619 1279 0 2026-03-07 01:54:52 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:64b76d49bd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-64b76d49bd-sx6p4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid2ccb7bd6b3 [] [] }} ContainerID="b5aa343b9e3d15bc599a0e79a28553e30ac5b4b63ac2d5e3305ca975e9cbf1a6" Namespace="calico-system" Pod="calico-kube-controllers-64b76d49bd-sx6p4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64b76d49bd--sx6p4-"
Mar 7 01:57:32.230092 containerd[1587]: 2026-03-07 01:57:31.195 [INFO][4591] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b5aa343b9e3d15bc599a0e79a28553e30ac5b4b63ac2d5e3305ca975e9cbf1a6" Namespace="calico-system" Pod="calico-kube-controllers-64b76d49bd-sx6p4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64b76d49bd--sx6p4-eth0"
Mar 7 01:57:32.230092 containerd[1587]: 2026-03-07 01:57:31.510 [INFO][4628] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b5aa343b9e3d15bc599a0e79a28553e30ac5b4b63ac2d5e3305ca975e9cbf1a6" HandleID="k8s-pod-network.b5aa343b9e3d15bc599a0e79a28553e30ac5b4b63ac2d5e3305ca975e9cbf1a6" Workload="localhost-k8s-calico--kube--controllers--64b76d49bd--sx6p4-eth0"
Mar 7 01:57:32.230092 containerd[1587]: 2026-03-07 01:57:31.546 [INFO][4628] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b5aa343b9e3d15bc599a0e79a28553e30ac5b4b63ac2d5e3305ca975e9cbf1a6" HandleID="k8s-pod-network.b5aa343b9e3d15bc599a0e79a28553e30ac5b4b63ac2d5e3305ca975e9cbf1a6" Workload="localhost-k8s-calico--kube--controllers--64b76d49bd--sx6p4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000385b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-64b76d49bd-sx6p4", "timestamp":"2026-03-07 01:57:31.510966641 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000702420)}
Mar 7 01:57:32.230092 containerd[1587]: 2026-03-07 01:57:31.546 [INFO][4628] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:57:32.230092 containerd[1587]: 2026-03-07 01:57:31.546 [INFO][4628] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:57:32.230092 containerd[1587]: 2026-03-07 01:57:31.546 [INFO][4628] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 01:57:32.230092 containerd[1587]: 2026-03-07 01:57:31.579 [INFO][4628] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b5aa343b9e3d15bc599a0e79a28553e30ac5b4b63ac2d5e3305ca975e9cbf1a6" host="localhost" Mar 7 01:57:32.230092 containerd[1587]: 2026-03-07 01:57:31.640 [INFO][4628] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 01:57:32.230092 containerd[1587]: 2026-03-07 01:57:31.718 [INFO][4628] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 01:57:32.230092 containerd[1587]: 2026-03-07 01:57:31.748 [INFO][4628] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 01:57:32.230092 containerd[1587]: 2026-03-07 01:57:31.758 [INFO][4628] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 01:57:32.230092 containerd[1587]: 2026-03-07 01:57:31.759 [INFO][4628] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b5aa343b9e3d15bc599a0e79a28553e30ac5b4b63ac2d5e3305ca975e9cbf1a6" host="localhost" Mar 7 01:57:32.230092 containerd[1587]: 2026-03-07 01:57:31.787 [INFO][4628] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b5aa343b9e3d15bc599a0e79a28553e30ac5b4b63ac2d5e3305ca975e9cbf1a6 Mar 7 01:57:32.230092 containerd[1587]: 2026-03-07 01:57:31.850 [INFO][4628] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b5aa343b9e3d15bc599a0e79a28553e30ac5b4b63ac2d5e3305ca975e9cbf1a6" host="localhost" Mar 7 01:57:32.230092 containerd[1587]: 2026-03-07 01:57:31.890 [INFO][4628] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.b5aa343b9e3d15bc599a0e79a28553e30ac5b4b63ac2d5e3305ca975e9cbf1a6" host="localhost" Mar 7 01:57:32.230092 containerd[1587]: 2026-03-07 01:57:31.890 [INFO][4628] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.b5aa343b9e3d15bc599a0e79a28553e30ac5b4b63ac2d5e3305ca975e9cbf1a6" host="localhost" Mar 7 01:57:32.230092 containerd[1587]: 2026-03-07 01:57:31.890 [INFO][4628] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
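
The entries above trace one complete Calico IPAM transaction for calico-kube-controllers-64b76d49bd-sx6p4: the plugin takes the host-wide IPAM lock, looks up the node's block affinities, confirms the affinity for 192.168.88.128/26, loads that block, claims one address from it, creates a handle named after the container ID, writes the block back, and releases the lock; the "assigned addresses" summary entry that follows reports the claimed address (192.168.88.130/26). When auditing a capture like this, the per-workload results can be pulled out of those summary entries. A minimal sketch, assuming each journal entry sits on one physical line (the helper name is illustrative, not part of Calico):

import re

# Illustrative helper (not a Calico tool): extract workload -> assigned IPv4
# from the "Calico CNI IPAM assigned addresses" summary entries in a capture.
ASSIGN_RE = re.compile(
    r'Calico CNI IPAM assigned addresses IPv4=\[(?P<ip>[^\]]+)\]'
    r'.*?Workload="(?P<workload>[^"]+)"'
)

def assigned_ips(journal_text: str) -> dict[str, str]:
    """Map workload endpoint name -> assigned IPv4 CIDR, e.g.
    'localhost-k8s-calico--kube--controllers--64b76d49bd--sx6p4-eth0'
    -> '192.168.88.130/26'."""
    return {m["workload"]: m["ip"] for m in ASSIGN_RE.finditer(journal_text)}
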
Mar 7 01:57:32.230092 containerd[1587]: 2026-03-07 01:57:31.890 [INFO][4628] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="b5aa343b9e3d15bc599a0e79a28553e30ac5b4b63ac2d5e3305ca975e9cbf1a6" HandleID="k8s-pod-network.b5aa343b9e3d15bc599a0e79a28553e30ac5b4b63ac2d5e3305ca975e9cbf1a6" Workload="localhost-k8s-calico--kube--controllers--64b76d49bd--sx6p4-eth0" Mar 7 01:57:32.240110 containerd[1587]: 2026-03-07 01:57:31.959 [INFO][4591] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b5aa343b9e3d15bc599a0e79a28553e30ac5b4b63ac2d5e3305ca975e9cbf1a6" Namespace="calico-system" Pod="calico-kube-controllers-64b76d49bd-sx6p4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64b76d49bd--sx6p4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--64b76d49bd--sx6p4-eth0", GenerateName:"calico-kube-controllers-64b76d49bd-", Namespace:"calico-system", SelfLink:"", UID:"bf75eca3-7bc4-4592-b317-3859d6cdc619", ResourceVersion:"1279", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 54, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64b76d49bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-64b76d49bd-sx6p4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid2ccb7bd6b3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:57:32.240110 containerd[1587]: 2026-03-07 01:57:31.959 [INFO][4591] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="b5aa343b9e3d15bc599a0e79a28553e30ac5b4b63ac2d5e3305ca975e9cbf1a6" Namespace="calico-system" Pod="calico-kube-controllers-64b76d49bd-sx6p4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64b76d49bd--sx6p4-eth0" Mar 7 01:57:32.240110 containerd[1587]: 2026-03-07 01:57:31.959 [INFO][4591] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid2ccb7bd6b3 ContainerID="b5aa343b9e3d15bc599a0e79a28553e30ac5b4b63ac2d5e3305ca975e9cbf1a6" Namespace="calico-system" Pod="calico-kube-controllers-64b76d49bd-sx6p4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64b76d49bd--sx6p4-eth0" Mar 7 01:57:32.240110 containerd[1587]: 2026-03-07 01:57:32.034 [INFO][4591] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b5aa343b9e3d15bc599a0e79a28553e30ac5b4b63ac2d5e3305ca975e9cbf1a6" Namespace="calico-system" Pod="calico-kube-controllers-64b76d49bd-sx6p4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64b76d49bd--sx6p4-eth0" Mar 7 01:57:32.240110 containerd[1587]: 2026-03-07 01:57:32.042 [INFO][4591] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="b5aa343b9e3d15bc599a0e79a28553e30ac5b4b63ac2d5e3305ca975e9cbf1a6" Namespace="calico-system" Pod="calico-kube-controllers-64b76d49bd-sx6p4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64b76d49bd--sx6p4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--64b76d49bd--sx6p4-eth0", GenerateName:"calico-kube-controllers-64b76d49bd-", Namespace:"calico-system", SelfLink:"", UID:"bf75eca3-7bc4-4592-b317-3859d6cdc619", ResourceVersion:"1279", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 54, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64b76d49bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b5aa343b9e3d15bc599a0e79a28553e30ac5b4b63ac2d5e3305ca975e9cbf1a6", Pod:"calico-kube-controllers-64b76d49bd-sx6p4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid2ccb7bd6b3", MAC:"d6:9f:45:56:41:db", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:57:32.240110 containerd[1587]: 2026-03-07 01:57:32.143 [INFO][4591] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b5aa343b9e3d15bc599a0e79a28553e30ac5b4b63ac2d5e3305ca975e9cbf1a6" Namespace="calico-system" Pod="calico-kube-controllers-64b76d49bd-sx6p4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64b76d49bd--sx6p4-eth0" Mar 7 01:57:32.736599 containerd[1587]: time="2026-03-07T01:57:32.733782730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:57:32.740533 containerd[1587]: time="2026-03-07T01:57:32.738480338Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:57:32.745993 containerd[1587]: time="2026-03-07T01:57:32.741573914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:57:32.745993 containerd[1587]: time="2026-03-07T01:57:32.745480937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:57:32.985313 systemd-networkd[1251]: calid51222bf70c: Link UP Mar 7 01:57:33.016046 systemd-networkd[1251]: calid51222bf70c: Gained carrier Mar 7 01:57:33.260587 containerd[1587]: 2026-03-07 01:57:31.685 [INFO][4609] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--647c78759d--qnq6w-eth0 calico-apiserver-647c78759d- calico-system 4b4abf53-5db7-456d-acdd-25832f296097 1281 0 2026-03-07 01:54:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:647c78759d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-647c78759d-qnq6w eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calid51222bf70c [] [] }} ContainerID="2eca5351ddf51a03079b83951726d11a4f35ce17ffa910b10559dd3765798a41" Namespace="calico-system" Pod="calico-apiserver-647c78759d-qnq6w" WorkloadEndpoint="localhost-k8s-calico--apiserver--647c78759d--qnq6w-" Mar 7 01:57:33.260587 containerd[1587]: 2026-03-07 01:57:31.693 [INFO][4609] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2eca5351ddf51a03079b83951726d11a4f35ce17ffa910b10559dd3765798a41" Namespace="calico-system" Pod="calico-apiserver-647c78759d-qnq6w" WorkloadEndpoint="localhost-k8s-calico--apiserver--647c78759d--qnq6w-eth0" Mar 7 01:57:33.260587 containerd[1587]: 2026-03-07 01:57:32.387 [INFO][4694] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2eca5351ddf51a03079b83951726d11a4f35ce17ffa910b10559dd3765798a41" HandleID="k8s-pod-network.2eca5351ddf51a03079b83951726d11a4f35ce17ffa910b10559dd3765798a41" Workload="localhost-k8s-calico--apiserver--647c78759d--qnq6w-eth0" Mar 7 01:57:33.260587 containerd[1587]: 2026-03-07 01:57:32.473 [INFO][4694] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="2eca5351ddf51a03079b83951726d11a4f35ce17ffa910b10559dd3765798a41" HandleID="k8s-pod-network.2eca5351ddf51a03079b83951726d11a4f35ce17ffa910b10559dd3765798a41" Workload="localhost-k8s-calico--apiserver--647c78759d--qnq6w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037b8a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-647c78759d-qnq6w", "timestamp":"2026-03-07 01:57:32.387109322 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002a3600)} Mar 7 01:57:33.260587 containerd[1587]: 2026-03-07 01:57:32.474 [INFO][4694] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:57:33.260587 containerd[1587]: 2026-03-07 01:57:32.474 [INFO][4694] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:57:33.260587 containerd[1587]: 2026-03-07 01:57:32.474 [INFO][4694] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 01:57:33.260587 containerd[1587]: 2026-03-07 01:57:32.537 [INFO][4694] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.2eca5351ddf51a03079b83951726d11a4f35ce17ffa910b10559dd3765798a41" host="localhost" Mar 7 01:57:33.260587 containerd[1587]: 2026-03-07 01:57:32.587 [INFO][4694] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 01:57:33.260587 containerd[1587]: 2026-03-07 01:57:32.695 [INFO][4694] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 01:57:33.260587 containerd[1587]: 2026-03-07 01:57:32.717 [INFO][4694] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 01:57:33.260587 containerd[1587]: 2026-03-07 01:57:32.725 [INFO][4694] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 01:57:33.260587 containerd[1587]: 2026-03-07 01:57:32.725 [INFO][4694] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2eca5351ddf51a03079b83951726d11a4f35ce17ffa910b10559dd3765798a41" host="localhost" Mar 7 01:57:33.260587 containerd[1587]: 2026-03-07 01:57:32.748 [INFO][4694] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.2eca5351ddf51a03079b83951726d11a4f35ce17ffa910b10559dd3765798a41 Mar 7 01:57:33.260587 containerd[1587]: 2026-03-07 01:57:32.766 [INFO][4694] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2eca5351ddf51a03079b83951726d11a4f35ce17ffa910b10559dd3765798a41" host="localhost" Mar 7 01:57:33.260587 containerd[1587]: 2026-03-07 01:57:32.840 [INFO][4694] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.2eca5351ddf51a03079b83951726d11a4f35ce17ffa910b10559dd3765798a41" host="localhost" Mar 7 01:57:33.260587 containerd[1587]: 2026-03-07 01:57:32.840 [INFO][4694] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.2eca5351ddf51a03079b83951726d11a4f35ce17ffa910b10559dd3765798a41" host="localhost" Mar 7 01:57:33.260587 containerd[1587]: 2026-03-07 01:57:32.840 [INFO][4694] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
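
The same lock/affinity/claim sequence repeats here for calico-apiserver-647c78759d-qnq6w, which receives 192.168.88.131/26. Because the IPAM lock is host-wide, concurrent sandbox creations serialize on it: worker [4694] acquired it immediately, while [4697], [4730], and [4720] (below) each log "About to acquire" noticeably before their "Acquired" entry. The wait can be measured directly from the timestamps. A sketch under the same one-entry-per-line assumption (lock_waits is a hypothetical name, not a Calico or kubelet API):

import re
from datetime import datetime

# Illustrative analysis helper: pair each CNI worker's "About to acquire
# host-wide IPAM lock." entry with its "Acquired" entry and report the wait,
# keyed by the [worker-id] field visible in the entries above.
LOCK_RE = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \[INFO\]\[(?P<id>\d+)\] "
    r"ipam/ipam_plugin\.go \d+: (?P<event>About to acquire|Acquired) "
    r"host-wide IPAM lock\."
)

def lock_waits(journal_text: str) -> dict[str, float]:
    """Map CNI worker id -> seconds spent waiting for the host-wide IPAM lock."""
    pending: dict[str, datetime] = {}
    waits: dict[str, float] = {}
    for m in LOCK_RE.finditer(journal_text):
        ts = datetime.strptime(m["ts"], "%Y-%m-%d %H:%M:%S.%f")
        if m["event"] == "About to acquire":
            pending[m["id"]] = ts
        elif m["id"] in pending:
            waits[m["id"]] = (ts - pending.pop(m["id"])).total_seconds()
    return waits

Applied to this capture, worker [4720] (the goldmane sandbox, below) waits roughly a second for the lock (01:57:32.852 to 01:57:33.936), which is why the five sandboxes come up one after another rather than in parallel.
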
Mar 7 01:57:33.260587 containerd[1587]: 2026-03-07 01:57:32.840 [INFO][4694] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="2eca5351ddf51a03079b83951726d11a4f35ce17ffa910b10559dd3765798a41" HandleID="k8s-pod-network.2eca5351ddf51a03079b83951726d11a4f35ce17ffa910b10559dd3765798a41" Workload="localhost-k8s-calico--apiserver--647c78759d--qnq6w-eth0" Mar 7 01:57:33.266205 containerd[1587]: 2026-03-07 01:57:32.883 [INFO][4609] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2eca5351ddf51a03079b83951726d11a4f35ce17ffa910b10559dd3765798a41" Namespace="calico-system" Pod="calico-apiserver-647c78759d-qnq6w" WorkloadEndpoint="localhost-k8s-calico--apiserver--647c78759d--qnq6w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--647c78759d--qnq6w-eth0", GenerateName:"calico-apiserver-647c78759d-", Namespace:"calico-system", SelfLink:"", UID:"4b4abf53-5db7-456d-acdd-25832f296097", ResourceVersion:"1281", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 54, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"647c78759d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-647c78759d-qnq6w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calid51222bf70c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:57:33.266205 containerd[1587]: 2026-03-07 01:57:32.883 [INFO][4609] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="2eca5351ddf51a03079b83951726d11a4f35ce17ffa910b10559dd3765798a41" Namespace="calico-system" Pod="calico-apiserver-647c78759d-qnq6w" WorkloadEndpoint="localhost-k8s-calico--apiserver--647c78759d--qnq6w-eth0" Mar 7 01:57:33.266205 containerd[1587]: 2026-03-07 01:57:32.883 [INFO][4609] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid51222bf70c ContainerID="2eca5351ddf51a03079b83951726d11a4f35ce17ffa910b10559dd3765798a41" Namespace="calico-system" Pod="calico-apiserver-647c78759d-qnq6w" WorkloadEndpoint="localhost-k8s-calico--apiserver--647c78759d--qnq6w-eth0" Mar 7 01:57:33.266205 containerd[1587]: 2026-03-07 01:57:33.040 [INFO][4609] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2eca5351ddf51a03079b83951726d11a4f35ce17ffa910b10559dd3765798a41" Namespace="calico-system" Pod="calico-apiserver-647c78759d-qnq6w" WorkloadEndpoint="localhost-k8s-calico--apiserver--647c78759d--qnq6w-eth0" Mar 7 01:57:33.266205 containerd[1587]: 2026-03-07 01:57:33.072 [INFO][4609] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="2eca5351ddf51a03079b83951726d11a4f35ce17ffa910b10559dd3765798a41" Namespace="calico-system" Pod="calico-apiserver-647c78759d-qnq6w" WorkloadEndpoint="localhost-k8s-calico--apiserver--647c78759d--qnq6w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--647c78759d--qnq6w-eth0", GenerateName:"calico-apiserver-647c78759d-", Namespace:"calico-system", SelfLink:"", UID:"4b4abf53-5db7-456d-acdd-25832f296097", ResourceVersion:"1281", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 54, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"647c78759d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2eca5351ddf51a03079b83951726d11a4f35ce17ffa910b10559dd3765798a41", Pod:"calico-apiserver-647c78759d-qnq6w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calid51222bf70c", MAC:"4a:de:ab:7f:ab:26", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:57:33.266205 containerd[1587]: 2026-03-07 01:57:33.181 [INFO][4609] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2eca5351ddf51a03079b83951726d11a4f35ce17ffa910b10559dd3765798a41" Namespace="calico-system" Pod="calico-apiserver-647c78759d-qnq6w" WorkloadEndpoint="localhost-k8s-calico--apiserver--647c78759d--qnq6w-eth0" Mar 7 01:57:33.290115 systemd-networkd[1251]: calid6c2f881253: Link UP Mar 7 01:57:33.291066 systemd-networkd[1251]: calid6c2f881253: Gained carrier Mar 7 01:57:33.352730 systemd-resolved[1474]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 01:57:33.462452 containerd[1587]: 2026-03-07 01:57:31.726 [INFO][4615] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--qw6r2-eth0 coredns-674b8bbfcf- kube-system a27a6082-6714-4326-a18a-a16d7c80709b 1280 0 2026-03-07 01:54:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-qw6r2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid6c2f881253 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="55b5278ddf09ba1ba00ad44e409fe4ae52b3a317bb32e31b01584829c68ed489" Namespace="kube-system" Pod="coredns-674b8bbfcf-qw6r2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qw6r2-" Mar 7 01:57:33.462452 containerd[1587]: 2026-03-07 01:57:31.729 [INFO][4615] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="55b5278ddf09ba1ba00ad44e409fe4ae52b3a317bb32e31b01584829c68ed489" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-qw6r2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qw6r2-eth0" Mar 7 01:57:33.462452 containerd[1587]: 2026-03-07 01:57:32.524 [INFO][4697] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="55b5278ddf09ba1ba00ad44e409fe4ae52b3a317bb32e31b01584829c68ed489" HandleID="k8s-pod-network.55b5278ddf09ba1ba00ad44e409fe4ae52b3a317bb32e31b01584829c68ed489" Workload="localhost-k8s-coredns--674b8bbfcf--qw6r2-eth0" Mar 7 01:57:33.462452 containerd[1587]: 2026-03-07 01:57:32.669 [INFO][4697] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="55b5278ddf09ba1ba00ad44e409fe4ae52b3a317bb32e31b01584829c68ed489" HandleID="k8s-pod-network.55b5278ddf09ba1ba00ad44e409fe4ae52b3a317bb32e31b01584829c68ed489" Workload="localhost-k8s-coredns--674b8bbfcf--qw6r2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c40d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-qw6r2", "timestamp":"2026-03-07 01:57:32.524433734 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000646580)} Mar 7 01:57:33.462452 containerd[1587]: 2026-03-07 01:57:32.669 [INFO][4697] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:57:33.462452 containerd[1587]: 2026-03-07 01:57:32.841 [INFO][4697] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:57:33.462452 containerd[1587]: 2026-03-07 01:57:32.841 [INFO][4697] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 01:57:33.462452 containerd[1587]: 2026-03-07 01:57:32.871 [INFO][4697] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.55b5278ddf09ba1ba00ad44e409fe4ae52b3a317bb32e31b01584829c68ed489" host="localhost" Mar 7 01:57:33.462452 containerd[1587]: 2026-03-07 01:57:32.932 [INFO][4697] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 01:57:33.462452 containerd[1587]: 2026-03-07 01:57:32.966 [INFO][4697] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 01:57:33.462452 containerd[1587]: 2026-03-07 01:57:32.977 [INFO][4697] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 01:57:33.462452 containerd[1587]: 2026-03-07 01:57:32.990 [INFO][4697] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 01:57:33.462452 containerd[1587]: 2026-03-07 01:57:32.990 [INFO][4697] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.55b5278ddf09ba1ba00ad44e409fe4ae52b3a317bb32e31b01584829c68ed489" host="localhost" Mar 7 01:57:33.462452 containerd[1587]: 2026-03-07 01:57:33.025 [INFO][4697] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.55b5278ddf09ba1ba00ad44e409fe4ae52b3a317bb32e31b01584829c68ed489 Mar 7 01:57:33.462452 containerd[1587]: 2026-03-07 01:57:33.052 [INFO][4697] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.55b5278ddf09ba1ba00ad44e409fe4ae52b3a317bb32e31b01584829c68ed489" host="localhost" Mar 7 01:57:33.462452 containerd[1587]: 2026-03-07 01:57:33.195 [INFO][4697] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.55b5278ddf09ba1ba00ad44e409fe4ae52b3a317bb32e31b01584829c68ed489" host="localhost" Mar 7 01:57:33.462452 containerd[1587]: 2026-03-07 01:57:33.195 [INFO][4697] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.55b5278ddf09ba1ba00ad44e409fe4ae52b3a317bb32e31b01584829c68ed489" host="localhost" Mar 7 01:57:33.462452 containerd[1587]: 2026-03-07 01:57:33.195 [INFO][4697] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:57:33.462452 containerd[1587]: 2026-03-07 01:57:33.196 [INFO][4697] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="55b5278ddf09ba1ba00ad44e409fe4ae52b3a317bb32e31b01584829c68ed489" HandleID="k8s-pod-network.55b5278ddf09ba1ba00ad44e409fe4ae52b3a317bb32e31b01584829c68ed489" Workload="localhost-k8s-coredns--674b8bbfcf--qw6r2-eth0" Mar 7 01:57:33.464635 containerd[1587]: 2026-03-07 01:57:33.231 [INFO][4615] cni-plugin/k8s.go 418: Populated endpoint ContainerID="55b5278ddf09ba1ba00ad44e409fe4ae52b3a317bb32e31b01584829c68ed489" Namespace="kube-system" Pod="coredns-674b8bbfcf-qw6r2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qw6r2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--qw6r2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a27a6082-6714-4326-a18a-a16d7c80709b", ResourceVersion:"1280", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 54, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-qw6r2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid6c2f881253", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:57:33.464635 containerd[1587]: 2026-03-07 01:57:33.248 [INFO][4615] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="55b5278ddf09ba1ba00ad44e409fe4ae52b3a317bb32e31b01584829c68ed489" Namespace="kube-system" Pod="coredns-674b8bbfcf-qw6r2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qw6r2-eth0" Mar 7 01:57:33.464635 containerd[1587]: 2026-03-07 01:57:33.248 [INFO][4615] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid6c2f881253 ContainerID="55b5278ddf09ba1ba00ad44e409fe4ae52b3a317bb32e31b01584829c68ed489" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-qw6r2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qw6r2-eth0" Mar 7 01:57:33.464635 containerd[1587]: 2026-03-07 01:57:33.301 [INFO][4615] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="55b5278ddf09ba1ba00ad44e409fe4ae52b3a317bb32e31b01584829c68ed489" Namespace="kube-system" Pod="coredns-674b8bbfcf-qw6r2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qw6r2-eth0" Mar 7 01:57:33.464635 containerd[1587]: 2026-03-07 01:57:33.319 [INFO][4615] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="55b5278ddf09ba1ba00ad44e409fe4ae52b3a317bb32e31b01584829c68ed489" Namespace="kube-system" Pod="coredns-674b8bbfcf-qw6r2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qw6r2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--qw6r2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a27a6082-6714-4326-a18a-a16d7c80709b", ResourceVersion:"1280", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 54, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"55b5278ddf09ba1ba00ad44e409fe4ae52b3a317bb32e31b01584829c68ed489", Pod:"coredns-674b8bbfcf-qw6r2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid6c2f881253", MAC:"92:35:e2:99:34:de", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:57:33.464635 containerd[1587]: 2026-03-07 01:57:33.377 [INFO][4615] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="55b5278ddf09ba1ba00ad44e409fe4ae52b3a317bb32e31b01584829c68ed489" Namespace="kube-system" Pod="coredns-674b8bbfcf-qw6r2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qw6r2-eth0" Mar 7 01:57:33.575146 systemd-networkd[1251]: calid2ccb7bd6b3: Gained IPv6LL Mar 7 01:57:33.772298 containerd[1587]: time="2026-03-07T01:57:33.769605114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:57:33.772298 containerd[1587]: time="2026-03-07T01:57:33.769680477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:57:33.772298 containerd[1587]: time="2026-03-07T01:57:33.769695835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:57:33.772298 containerd[1587]: time="2026-03-07T01:57:33.769926931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:57:34.051389 systemd-networkd[1251]: caliba296445ca7: Link UP Mar 7 01:57:34.056599 systemd-networkd[1251]: caliba296445ca7: Gained carrier Mar 7 01:57:34.117997 containerd[1587]: time="2026-03-07T01:57:34.115088028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:57:34.124636 containerd[1587]: time="2026-03-07T01:57:34.120761872Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:57:34.124636 containerd[1587]: time="2026-03-07T01:57:34.120867563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:57:34.124636 containerd[1587]: time="2026-03-07T01:57:34.121014059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:57:34.127075 containerd[1587]: time="2026-03-07T01:57:34.127023346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64b76d49bd-sx6p4,Uid:bf75eca3-7bc4-4592-b317-3859d6cdc619,Namespace:calico-system,Attempt:0,} returns sandbox id \"b5aa343b9e3d15bc599a0e79a28553e30ac5b4b63ac2d5e3305ca975e9cbf1a6\"" Mar 7 01:57:34.133639 containerd[1587]: time="2026-03-07T01:57:34.133551253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 7 01:57:34.179055 containerd[1587]: 2026-03-07 01:57:31.929 [INFO][4671] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--cgnsk-eth0 coredns-674b8bbfcf- kube-system 071407a1-af3e-42ed-8ee8-ae1d7a7c3681 1283 0 2026-03-07 01:54:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-cgnsk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliba296445ca7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="34bebaaee5c1e4d52babdf0a7aa5f7e84221c554eac0fc33ff0de22653fb58e5" Namespace="kube-system" Pod="coredns-674b8bbfcf-cgnsk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cgnsk-" Mar 7 01:57:34.179055 containerd[1587]: 2026-03-07 01:57:31.935 [INFO][4671] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="34bebaaee5c1e4d52babdf0a7aa5f7e84221c554eac0fc33ff0de22653fb58e5" Namespace="kube-system" Pod="coredns-674b8bbfcf-cgnsk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cgnsk-eth0" Mar 7 01:57:34.179055 containerd[1587]: 2026-03-07 01:57:32.628 [INFO][4730] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="34bebaaee5c1e4d52babdf0a7aa5f7e84221c554eac0fc33ff0de22653fb58e5" HandleID="k8s-pod-network.34bebaaee5c1e4d52babdf0a7aa5f7e84221c554eac0fc33ff0de22653fb58e5" Workload="localhost-k8s-coredns--674b8bbfcf--cgnsk-eth0" Mar 7 
01:57:34.179055 containerd[1587]: 2026-03-07 01:57:32.680 [INFO][4730] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="34bebaaee5c1e4d52babdf0a7aa5f7e84221c554eac0fc33ff0de22653fb58e5" HandleID="k8s-pod-network.34bebaaee5c1e4d52babdf0a7aa5f7e84221c554eac0fc33ff0de22653fb58e5" Workload="localhost-k8s-coredns--674b8bbfcf--cgnsk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f840), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-cgnsk", "timestamp":"2026-03-07 01:57:32.628890765 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00022c000)} Mar 7 01:57:34.179055 containerd[1587]: 2026-03-07 01:57:32.680 [INFO][4730] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:57:34.179055 containerd[1587]: 2026-03-07 01:57:33.196 [INFO][4730] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:57:34.179055 containerd[1587]: 2026-03-07 01:57:33.197 [INFO][4730] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 01:57:34.179055 containerd[1587]: 2026-03-07 01:57:33.289 [INFO][4730] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.34bebaaee5c1e4d52babdf0a7aa5f7e84221c554eac0fc33ff0de22653fb58e5" host="localhost" Mar 7 01:57:34.179055 containerd[1587]: 2026-03-07 01:57:33.434 [INFO][4730] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 01:57:34.179055 containerd[1587]: 2026-03-07 01:57:33.518 [INFO][4730] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 01:57:34.179055 containerd[1587]: 2026-03-07 01:57:33.554 [INFO][4730] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 01:57:34.179055 containerd[1587]: 2026-03-07 01:57:33.579 [INFO][4730] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 01:57:34.179055 containerd[1587]: 2026-03-07 01:57:33.604 [INFO][4730] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.34bebaaee5c1e4d52babdf0a7aa5f7e84221c554eac0fc33ff0de22653fb58e5" host="localhost" Mar 7 01:57:34.179055 containerd[1587]: 2026-03-07 01:57:33.634 [INFO][4730] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.34bebaaee5c1e4d52babdf0a7aa5f7e84221c554eac0fc33ff0de22653fb58e5 Mar 7 01:57:34.179055 containerd[1587]: 2026-03-07 01:57:33.756 [INFO][4730] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.34bebaaee5c1e4d52babdf0a7aa5f7e84221c554eac0fc33ff0de22653fb58e5" host="localhost" Mar 7 01:57:34.179055 containerd[1587]: 2026-03-07 01:57:33.895 [INFO][4730] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.34bebaaee5c1e4d52babdf0a7aa5f7e84221c554eac0fc33ff0de22653fb58e5" host="localhost" Mar 7 01:57:34.179055 containerd[1587]: 2026-03-07 01:57:33.919 [INFO][4730] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.34bebaaee5c1e4d52babdf0a7aa5f7e84221c554eac0fc33ff0de22653fb58e5" host="localhost" Mar 7 01:57:34.179055 containerd[1587]: 2026-03-07 01:57:33.919 [INFO][4730] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
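
A formatting quirk worth noting in the WorkloadEndpoint dumps above: the port numbers in the WorkloadEndpointPort values are printed in hexadecimal (Port:0x35, Port:0x23c1). Decoded, they are exactly the standard coredns ports, matching the declared dns, dns-tcp, and metrics container ports:

# Decode the hex port numbers from the coredns WorkloadEndpoint dumps above.
for name, port in [("dns/UDP", 0x35), ("dns-tcp/TCP", 0x35), ("metrics/TCP", 0x23C1)]:
    print(f"{name}: {port}")
# dns/UDP: 53, dns-tcp/TCP: 53, metrics/TCP: 9153
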
Mar 7 01:57:34.179055 containerd[1587]: 2026-03-07 01:57:33.919 [INFO][4730] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="34bebaaee5c1e4d52babdf0a7aa5f7e84221c554eac0fc33ff0de22653fb58e5" HandleID="k8s-pod-network.34bebaaee5c1e4d52babdf0a7aa5f7e84221c554eac0fc33ff0de22653fb58e5" Workload="localhost-k8s-coredns--674b8bbfcf--cgnsk-eth0" Mar 7 01:57:34.180138 containerd[1587]: 2026-03-07 01:57:33.960 [INFO][4671] cni-plugin/k8s.go 418: Populated endpoint ContainerID="34bebaaee5c1e4d52babdf0a7aa5f7e84221c554eac0fc33ff0de22653fb58e5" Namespace="kube-system" Pod="coredns-674b8bbfcf-cgnsk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cgnsk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--cgnsk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"071407a1-af3e-42ed-8ee8-ae1d7a7c3681", ResourceVersion:"1283", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 54, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-cgnsk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliba296445ca7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:57:34.180138 containerd[1587]: 2026-03-07 01:57:33.960 [INFO][4671] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="34bebaaee5c1e4d52babdf0a7aa5f7e84221c554eac0fc33ff0de22653fb58e5" Namespace="kube-system" Pod="coredns-674b8bbfcf-cgnsk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cgnsk-eth0" Mar 7 01:57:34.180138 containerd[1587]: 2026-03-07 01:57:33.960 [INFO][4671] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliba296445ca7 ContainerID="34bebaaee5c1e4d52babdf0a7aa5f7e84221c554eac0fc33ff0de22653fb58e5" Namespace="kube-system" Pod="coredns-674b8bbfcf-cgnsk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cgnsk-eth0" Mar 7 01:57:34.180138 containerd[1587]: 2026-03-07 01:57:34.082 [INFO][4671] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="34bebaaee5c1e4d52babdf0a7aa5f7e84221c554eac0fc33ff0de22653fb58e5" Namespace="kube-system" Pod="coredns-674b8bbfcf-cgnsk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cgnsk-eth0" Mar 7 01:57:34.180138 
containerd[1587]: 2026-03-07 01:57:34.088 [INFO][4671] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="34bebaaee5c1e4d52babdf0a7aa5f7e84221c554eac0fc33ff0de22653fb58e5" Namespace="kube-system" Pod="coredns-674b8bbfcf-cgnsk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cgnsk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--cgnsk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"071407a1-af3e-42ed-8ee8-ae1d7a7c3681", ResourceVersion:"1283", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 54, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"34bebaaee5c1e4d52babdf0a7aa5f7e84221c554eac0fc33ff0de22653fb58e5", Pod:"coredns-674b8bbfcf-cgnsk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliba296445ca7", MAC:"ce:09:d3:8e:0c:c3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:57:34.180138 containerd[1587]: 2026-03-07 01:57:34.158 [INFO][4671] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="34bebaaee5c1e4d52babdf0a7aa5f7e84221c554eac0fc33ff0de22653fb58e5" Namespace="kube-system" Pod="coredns-674b8bbfcf-cgnsk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cgnsk-eth0" Mar 7 01:57:34.335592 systemd-resolved[1474]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 01:57:34.345066 systemd-networkd[1251]: calid51222bf70c: Gained IPv6LL Mar 7 01:57:34.417923 containerd[1587]: time="2026-03-07T01:57:34.411030275Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:57:34.479651 containerd[1587]: time="2026-03-07T01:57:34.464775185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:57:34.479651 containerd[1587]: time="2026-03-07T01:57:34.464977117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:57:34.479651 containerd[1587]: time="2026-03-07T01:57:34.465159952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:57:34.497664 systemd-networkd[1251]: calie02cac2ea6b: Link UP Mar 7 01:57:34.517128 systemd-networkd[1251]: calie02cac2ea6b: Gained carrier Mar 7 01:57:34.675511 systemd-resolved[1474]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 01:57:34.764879 containerd[1587]: 2026-03-07 01:57:31.956 [INFO][4659] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--5b85766d88--w5svg-eth0 goldmane-5b85766d88- calico-system f5c47317-c9cf-4ee3-8c60-5a9bc1b9cc0e 1285 0 2026-03-07 01:54:50 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-5b85766d88-w5svg eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie02cac2ea6b [] [] }} ContainerID="3808eb8e32ccdb51bc83cad45f95f70614838041c57cb138d807c7854f874b86" Namespace="calico-system" Pod="goldmane-5b85766d88-w5svg" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--w5svg-" Mar 7 01:57:34.764879 containerd[1587]: 2026-03-07 01:57:31.956 [INFO][4659] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3808eb8e32ccdb51bc83cad45f95f70614838041c57cb138d807c7854f874b86" Namespace="calico-system" Pod="goldmane-5b85766d88-w5svg" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--w5svg-eth0" Mar 7 01:57:34.764879 containerd[1587]: 2026-03-07 01:57:32.744 [INFO][4720] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3808eb8e32ccdb51bc83cad45f95f70614838041c57cb138d807c7854f874b86" HandleID="k8s-pod-network.3808eb8e32ccdb51bc83cad45f95f70614838041c57cb138d807c7854f874b86" Workload="localhost-k8s-goldmane--5b85766d88--w5svg-eth0" Mar 7 01:57:34.764879 containerd[1587]: 2026-03-07 01:57:32.851 [INFO][4720] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3808eb8e32ccdb51bc83cad45f95f70614838041c57cb138d807c7854f874b86" HandleID="k8s-pod-network.3808eb8e32ccdb51bc83cad45f95f70614838041c57cb138d807c7854f874b86" Workload="localhost-k8s-goldmane--5b85766d88--w5svg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fcb0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-5b85766d88-w5svg", "timestamp":"2026-03-07 01:57:32.744655316 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004d4420)} Mar 7 01:57:34.764879 containerd[1587]: 2026-03-07 01:57:32.852 [INFO][4720] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:57:34.764879 containerd[1587]: 2026-03-07 01:57:33.936 [INFO][4720] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:57:34.764879 containerd[1587]: 2026-03-07 01:57:33.936 [INFO][4720] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 01:57:34.764879 containerd[1587]: 2026-03-07 01:57:33.971 [INFO][4720] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3808eb8e32ccdb51bc83cad45f95f70614838041c57cb138d807c7854f874b86" host="localhost" Mar 7 01:57:34.764879 containerd[1587]: 2026-03-07 01:57:34.097 [INFO][4720] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 01:57:34.764879 containerd[1587]: 2026-03-07 01:57:34.146 [INFO][4720] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 01:57:34.764879 containerd[1587]: 2026-03-07 01:57:34.194 [INFO][4720] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 01:57:34.764879 containerd[1587]: 2026-03-07 01:57:34.207 [INFO][4720] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 01:57:34.764879 containerd[1587]: 2026-03-07 01:57:34.207 [INFO][4720] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3808eb8e32ccdb51bc83cad45f95f70614838041c57cb138d807c7854f874b86" host="localhost" Mar 7 01:57:34.764879 containerd[1587]: 2026-03-07 01:57:34.223 [INFO][4720] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3808eb8e32ccdb51bc83cad45f95f70614838041c57cb138d807c7854f874b86 Mar 7 01:57:34.764879 containerd[1587]: 2026-03-07 01:57:34.270 [INFO][4720] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3808eb8e32ccdb51bc83cad45f95f70614838041c57cb138d807c7854f874b86" host="localhost" Mar 7 01:57:34.764879 containerd[1587]: 2026-03-07 01:57:34.376 [INFO][4720] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.3808eb8e32ccdb51bc83cad45f95f70614838041c57cb138d807c7854f874b86" host="localhost" Mar 7 01:57:34.764879 containerd[1587]: 2026-03-07 01:57:34.386 [INFO][4720] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.3808eb8e32ccdb51bc83cad45f95f70614838041c57cb138d807c7854f874b86" host="localhost" Mar 7 01:57:34.764879 containerd[1587]: 2026-03-07 01:57:34.391 [INFO][4720] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
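
With goldmane-5b85766d88-w5svg assigned 192.168.88.134/26 here, this capture shows five sandboxes drawing consecutive addresses (.130 through .134) from the single /26 block affine to node "localhost"; .129 was evidently claimed before this excerpt begins. A /26 leaves ample headroom for one node's pods, which is quick to confirm:

import ipaddress

# The node's affine IPAM block, as logged throughout the entries above.
block = ipaddress.ip_network("192.168.88.128/26")
print(block.num_addresses)       # 64 addresses in the block
print(list(block.hosts())[:5])   # 192.168.88.129 .. 192.168.88.133
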
Mar 7 01:57:34.764879 containerd[1587]: 2026-03-07 01:57:34.392 [INFO][4720] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="3808eb8e32ccdb51bc83cad45f95f70614838041c57cb138d807c7854f874b86" HandleID="k8s-pod-network.3808eb8e32ccdb51bc83cad45f95f70614838041c57cb138d807c7854f874b86" Workload="localhost-k8s-goldmane--5b85766d88--w5svg-eth0" Mar 7 01:57:34.767350 containerd[1587]: 2026-03-07 01:57:34.470 [INFO][4659] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3808eb8e32ccdb51bc83cad45f95f70614838041c57cb138d807c7854f874b86" Namespace="calico-system" Pod="goldmane-5b85766d88-w5svg" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--w5svg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--w5svg-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"f5c47317-c9cf-4ee3-8c60-5a9bc1b9cc0e", ResourceVersion:"1285", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 54, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-5b85766d88-w5svg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie02cac2ea6b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:57:34.767350 containerd[1587]: 2026-03-07 01:57:34.470 [INFO][4659] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="3808eb8e32ccdb51bc83cad45f95f70614838041c57cb138d807c7854f874b86" Namespace="calico-system" Pod="goldmane-5b85766d88-w5svg" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--w5svg-eth0" Mar 7 01:57:34.767350 containerd[1587]: 2026-03-07 01:57:34.470 [INFO][4659] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie02cac2ea6b ContainerID="3808eb8e32ccdb51bc83cad45f95f70614838041c57cb138d807c7854f874b86" Namespace="calico-system" Pod="goldmane-5b85766d88-w5svg" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--w5svg-eth0" Mar 7 01:57:34.767350 containerd[1587]: 2026-03-07 01:57:34.498 [INFO][4659] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3808eb8e32ccdb51bc83cad45f95f70614838041c57cb138d807c7854f874b86" Namespace="calico-system" Pod="goldmane-5b85766d88-w5svg" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--w5svg-eth0"
Mar 7 01:57:34.767350 containerd[1587]: 2026-03-07 01:57:34.602 [INFO][4659] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3808eb8e32ccdb51bc83cad45f95f70614838041c57cb138d807c7854f874b86" Namespace="calico-system" Pod="goldmane-5b85766d88-w5svg" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--w5svg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--w5svg-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"f5c47317-c9cf-4ee3-8c60-5a9bc1b9cc0e", ResourceVersion:"1285", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 54, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3808eb8e32ccdb51bc83cad45f95f70614838041c57cb138d807c7854f874b86", Pod:"goldmane-5b85766d88-w5svg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie02cac2ea6b", MAC:"ee:dc:7a:ec:96:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:57:34.767350 containerd[1587]: 2026-03-07 01:57:34.715 [INFO][4659] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3808eb8e32ccdb51bc83cad45f95f70614838041c57cb138d807c7854f874b86" Namespace="calico-system" Pod="goldmane-5b85766d88-w5svg" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--w5svg-eth0" Mar 7 01:57:35.297976 systemd[1]: Started sshd@15-10.0.0.118:22-10.0.0.1:54546.service - OpenSSH per-connection server daemon (10.0.0.1:54546). Mar 7 01:57:35.372007 systemd-networkd[1251]: calid6c2f881253: Gained IPv6LL Mar 7 01:57:35.403602 containerd[1587]: time="2026-03-07T01:57:35.401619507Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:57:35.403602 containerd[1587]: time="2026-03-07T01:57:35.401690972Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:57:35.403602 containerd[1587]: time="2026-03-07T01:57:35.401709427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:57:35.436917 containerd[1587]: time="2026-03-07T01:57:35.408159066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:57:35.527924 systemd-networkd[1251]: cali0238dde9c07: Link UP Mar 7 01:57:35.620594 containerd[1587]: time="2026-03-07T01:57:35.417400453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-647c78759d-qnq6w,Uid:4b4abf53-5db7-456d-acdd-25832f296097,Namespace:calico-system,Attempt:0,} returns sandbox id \"2eca5351ddf51a03079b83951726d11a4f35ce17ffa910b10559dd3765798a41\"" Mar 7 01:57:35.685018 systemd-networkd[1251]: cali0238dde9c07: Gained carrier Mar 7 01:57:35.744893 containerd[1587]: time="2026-03-07T01:57:35.744686676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qw6r2,Uid:a27a6082-6714-4326-a18a-a16d7c80709b,Namespace:kube-system,Attempt:0,} returns sandbox id \"55b5278ddf09ba1ba00ad44e409fe4ae52b3a317bb32e31b01584829c68ed489\"" Mar 7 01:57:35.760615 kubelet[2895]: E0307 01:57:35.752915 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:57:35.795514 containerd[1587]: time="2026-03-07T01:57:35.794910656Z" level=info msg="CreateContainer within sandbox \"55b5278ddf09ba1ba00ad44e409fe4ae52b3a317bb32e31b01584829c68ed489\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 01:57:35.802935 sshd[4997]: Accepted publickey for core from 10.0.0.1 port 54546 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:57:35.821095 systemd-resolved[1474]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 01:57:35.830438 sshd[4997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:57:35.936087 systemd-logind[1563]: New session 16 of user core. Mar 7 01:57:35.938130 systemd-networkd[1251]: calie02cac2ea6b: Gained IPv6LL Mar 7 01:57:35.947552 systemd[1]: Started session-16.scope - Session 16 of User core.
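
The interleaved containerd messages in this stretch are the CRI lifecycle that kubelet drives for each pod: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox, and StartContainer runs it. Roughly, against the CRI gRPC API (socket path, image reference and the thin configs here are illustrative; error handling elided):

package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx := context.Background()
	conn, _ := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "coredns-674b8bbfcf-qw6r2", // names and UID taken from the log above
			Namespace: "kube-system",
			Uid:       "a27a6082-6714-4326-a18a-a16d7c80709b",
		},
	}
	// "RunPodSandbox for &PodSandboxMetadata{...} returns sandbox id ..."
	sb, _ := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})

	// "CreateContainer within sandbox ... for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	ctr, _ := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "coredns"},
			Image:    &runtimeapi.ImageSpec{Image: "coredns/coredns:latest"}, // illustrative ref; real configs carry mounts, env, etc.
		},
		SandboxConfig: sandboxCfg,
	})

	// "StartContainer for ..." / "... returns successfully"
	rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId})
}

The sandbox creation is also what triggers the CNI ADD calls producing the Calico traces in this log; the container steps come only after networking is in place.
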
Mar 7 01:57:36.003591 containerd[1587]: 2026-03-07 01:57:31.906 [INFO][4641] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--564b587fc5--hz8rt-eth0 whisker-564b587fc5- calico-system f595f30d-a60b-4bb7-86e8-dc0eba5c18fa 1287 0 2026-03-07 01:57:13 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:564b587fc5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-564b587fc5-hz8rt eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali0238dde9c07 [] [] }} ContainerID="9d1309318e68cdea2cc4111d5b8b78378adedd3b86d01f8688a68e6e67d54f38" Namespace="calico-system" Pod="whisker-564b587fc5-hz8rt" WorkloadEndpoint="localhost-k8s-whisker--564b587fc5--hz8rt-" Mar 7 01:57:36.003591 containerd[1587]: 2026-03-07 01:57:31.906 [INFO][4641] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9d1309318e68cdea2cc4111d5b8b78378adedd3b86d01f8688a68e6e67d54f38" Namespace="calico-system" Pod="whisker-564b587fc5-hz8rt" WorkloadEndpoint="localhost-k8s-whisker--564b587fc5--hz8rt-eth0" Mar 7 01:57:36.003591 containerd[1587]: 2026-03-07 01:57:32.838 [INFO][4712] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9d1309318e68cdea2cc4111d5b8b78378adedd3b86d01f8688a68e6e67d54f38" HandleID="k8s-pod-network.9d1309318e68cdea2cc4111d5b8b78378adedd3b86d01f8688a68e6e67d54f38" Workload="localhost-k8s-whisker--564b587fc5--hz8rt-eth0" Mar 7 01:57:36.003591 containerd[1587]: 2026-03-07 01:57:32.905 [INFO][4712] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9d1309318e68cdea2cc4111d5b8b78378adedd3b86d01f8688a68e6e67d54f38" HandleID="k8s-pod-network.9d1309318e68cdea2cc4111d5b8b78378adedd3b86d01f8688a68e6e67d54f38" Workload="localhost-k8s-whisker--564b587fc5--hz8rt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002811a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-564b587fc5-hz8rt", "timestamp":"2026-03-07 01:57:32.838133251 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000fedc0)} Mar 7 01:57:36.003591 containerd[1587]: 2026-03-07 01:57:32.905 [INFO][4712] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:57:36.003591 containerd[1587]: 2026-03-07 01:57:34.389 [INFO][4712] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:57:36.003591 containerd[1587]: 2026-03-07 01:57:34.389 [INFO][4712] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 01:57:36.003591 containerd[1587]: 2026-03-07 01:57:34.480 [INFO][4712] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9d1309318e68cdea2cc4111d5b8b78378adedd3b86d01f8688a68e6e67d54f38" host="localhost" Mar 7 01:57:36.003591 containerd[1587]: 2026-03-07 01:57:34.552 [INFO][4712] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 01:57:36.003591 containerd[1587]: 2026-03-07 01:57:34.711 [INFO][4712] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 01:57:36.003591 containerd[1587]: 2026-03-07 01:57:34.738 [INFO][4712] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 01:57:36.003591 containerd[1587]: 2026-03-07 01:57:34.780 [INFO][4712] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 01:57:36.003591 containerd[1587]: 2026-03-07 01:57:34.780 [INFO][4712] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9d1309318e68cdea2cc4111d5b8b78378adedd3b86d01f8688a68e6e67d54f38" host="localhost" Mar 7 01:57:36.003591 containerd[1587]: 2026-03-07 01:57:34.803 [INFO][4712] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9d1309318e68cdea2cc4111d5b8b78378adedd3b86d01f8688a68e6e67d54f38 Mar 7 01:57:36.003591 containerd[1587]: 2026-03-07 01:57:34.839 [INFO][4712] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9d1309318e68cdea2cc4111d5b8b78378adedd3b86d01f8688a68e6e67d54f38" host="localhost" Mar 7 01:57:36.003591 containerd[1587]: 2026-03-07 01:57:35.077 [INFO][4712] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.9d1309318e68cdea2cc4111d5b8b78378adedd3b86d01f8688a68e6e67d54f38" host="localhost" Mar 7 01:57:36.003591 containerd[1587]: 2026-03-07 01:57:35.100 [INFO][4712] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.9d1309318e68cdea2cc4111d5b8b78378adedd3b86d01f8688a68e6e67d54f38" host="localhost" Mar 7 01:57:36.003591 containerd[1587]: 2026-03-07 01:57:35.101 [INFO][4712] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
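
Worth noticing across the IPAM traces is the serialization: [4712] announced "About to acquire host-wide IPAM lock" at 01:57:32.905 but only acquired it at 01:57:34.389, after [4720]'s assignment finished, and [4726] below in turn acquires it at 01:57:35.101, the instant [4712] releases. One conventional way to get that per-node mutual exclusion is an exclusive flock(2) on a well-known file; a sketch (the lock path is made up, and Calico's actual mechanism may differ):

package main

import (
	"log"
	"os"

	"golang.org/x/sys/unix"
)

// withHostLock serializes a critical section across all processes on the node
// by taking an exclusive advisory lock on a shared file.
func withHostLock(fn func()) error {
	f, err := os.OpenFile("/var/run/example-ipam.lock", os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return err
	}
	defer f.Close()
	log.Println("About to acquire host-wide IPAM lock.")
	if err := unix.Flock(int(f.Fd()), unix.LOCK_EX); err != nil { // blocks until the holder releases
		return err
	}
	log.Println("Acquired host-wide IPAM lock.")
	defer func() {
		unix.Flock(int(f.Fd()), unix.LOCK_UN)
		log.Println("Released host-wide IPAM lock.")
	}()
	fn()
	return nil
}

func main() { withHostLock(func() { /* assign addresses here */ }) }

With three pods being networked concurrently, each CNI invocation queues on this lock, which is exactly the multi-second gap between "About to acquire" and "Acquired" in the traces.
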
Mar 7 01:57:36.003591 containerd[1587]: 2026-03-07 01:57:35.101 [INFO][4712] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="9d1309318e68cdea2cc4111d5b8b78378adedd3b86d01f8688a68e6e67d54f38" HandleID="k8s-pod-network.9d1309318e68cdea2cc4111d5b8b78378adedd3b86d01f8688a68e6e67d54f38" Workload="localhost-k8s-whisker--564b587fc5--hz8rt-eth0" Mar 7 01:57:36.002003 systemd-networkd[1251]: caliba296445ca7: Gained IPv6LL Mar 7 01:57:36.017766 containerd[1587]: 2026-03-07 01:57:35.284 [INFO][4641] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9d1309318e68cdea2cc4111d5b8b78378adedd3b86d01f8688a68e6e67d54f38" Namespace="calico-system" Pod="whisker-564b587fc5-hz8rt" WorkloadEndpoint="localhost-k8s-whisker--564b587fc5--hz8rt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--564b587fc5--hz8rt-eth0", GenerateName:"whisker-564b587fc5-", Namespace:"calico-system", SelfLink:"", UID:"f595f30d-a60b-4bb7-86e8-dc0eba5c18fa", ResourceVersion:"1287", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 57, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"564b587fc5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-564b587fc5-hz8rt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0238dde9c07", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:57:36.017766 containerd[1587]: 2026-03-07 01:57:35.285 [INFO][4641] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="9d1309318e68cdea2cc4111d5b8b78378adedd3b86d01f8688a68e6e67d54f38" Namespace="calico-system" Pod="whisker-564b587fc5-hz8rt" WorkloadEndpoint="localhost-k8s-whisker--564b587fc5--hz8rt-eth0" Mar 7 01:57:36.017766 containerd[1587]: 2026-03-07 01:57:35.285 [INFO][4641] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0238dde9c07 ContainerID="9d1309318e68cdea2cc4111d5b8b78378adedd3b86d01f8688a68e6e67d54f38" Namespace="calico-system" Pod="whisker-564b587fc5-hz8rt" WorkloadEndpoint="localhost-k8s-whisker--564b587fc5--hz8rt-eth0" Mar 7 01:57:36.017766 containerd[1587]: 2026-03-07 01:57:35.753 [INFO][4641] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9d1309318e68cdea2cc4111d5b8b78378adedd3b86d01f8688a68e6e67d54f38" Namespace="calico-system" Pod="whisker-564b587fc5-hz8rt" WorkloadEndpoint="localhost-k8s-whisker--564b587fc5--hz8rt-eth0"
Mar 7 01:57:36.017766 containerd[1587]: 2026-03-07 01:57:35.773 [INFO][4641] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9d1309318e68cdea2cc4111d5b8b78378adedd3b86d01f8688a68e6e67d54f38" Namespace="calico-system" Pod="whisker-564b587fc5-hz8rt" WorkloadEndpoint="localhost-k8s-whisker--564b587fc5--hz8rt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--564b587fc5--hz8rt-eth0", GenerateName:"whisker-564b587fc5-", Namespace:"calico-system", SelfLink:"", UID:"f595f30d-a60b-4bb7-86e8-dc0eba5c18fa", ResourceVersion:"1287", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 57, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"564b587fc5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9d1309318e68cdea2cc4111d5b8b78378adedd3b86d01f8688a68e6e67d54f38", Pod:"whisker-564b587fc5-hz8rt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0238dde9c07", MAC:"1a:c4:73:34:1b:48", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:57:36.017766 containerd[1587]: 2026-03-07 01:57:35.858 [INFO][4641] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9d1309318e68cdea2cc4111d5b8b78378adedd3b86d01f8688a68e6e67d54f38" Namespace="calico-system" Pod="whisker-564b587fc5-hz8rt" WorkloadEndpoint="localhost-k8s-whisker--564b587fc5--hz8rt-eth0" Mar 7 01:57:36.055927 systemd-networkd[1251]: cali59d3d77afa3: Link UP Mar 7 01:57:36.063750 systemd-networkd[1251]: cali59d3d77afa3: Gained carrier Mar 7 01:57:36.195127 systemd-resolved[1474]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 01:57:36.199981 containerd[1587]: 2026-03-07 01:57:31.923 [INFO][4635] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--647c78759d--8p5hf-eth0 calico-apiserver-647c78759d- calico-system 62f6ce31-11a9-4768-b2ff-502e5e5b400a 1286 0 2026-03-07 01:54:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:647c78759d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-647c78759d-8p5hf eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali59d3d77afa3 [] [] }} ContainerID="83e3ac5dc2575dc5b140baf524d9f29b21bfd293c9a9c176ca60056dce45eb49" Namespace="calico-system" Pod="calico-apiserver-647c78759d-8p5hf" WorkloadEndpoint="localhost-k8s-calico--apiserver--647c78759d--8p5hf-" Mar 7 01:57:36.199981 containerd[1587]: 2026-03-07 01:57:31.924 [INFO][4635] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="83e3ac5dc2575dc5b140baf524d9f29b21bfd293c9a9c176ca60056dce45eb49" Namespace="calico-system" Pod="calico-apiserver-647c78759d-8p5hf" WorkloadEndpoint="localhost-k8s-calico--apiserver--647c78759d--8p5hf-eth0"
Mar 7 01:57:36.199981 containerd[1587]: 2026-03-07 01:57:32.891 [INFO][4726] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="83e3ac5dc2575dc5b140baf524d9f29b21bfd293c9a9c176ca60056dce45eb49" HandleID="k8s-pod-network.83e3ac5dc2575dc5b140baf524d9f29b21bfd293c9a9c176ca60056dce45eb49" Workload="localhost-k8s-calico--apiserver--647c78759d--8p5hf-eth0" Mar 7 01:57:36.199981 containerd[1587]: 2026-03-07 01:57:32.953 [INFO][4726] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="83e3ac5dc2575dc5b140baf524d9f29b21bfd293c9a9c176ca60056dce45eb49" HandleID="k8s-pod-network.83e3ac5dc2575dc5b140baf524d9f29b21bfd293c9a9c176ca60056dce45eb49" Workload="localhost-k8s-calico--apiserver--647c78759d--8p5hf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00051aba0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-647c78759d-8p5hf", "timestamp":"2026-03-07 01:57:32.891977761 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000412c60)} Mar 7 01:57:36.199981 containerd[1587]: 2026-03-07 01:57:32.953 [INFO][4726] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:57:36.199981 containerd[1587]: 2026-03-07 01:57:35.101 [INFO][4726] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:57:36.199981 containerd[1587]: 2026-03-07 01:57:35.101 [INFO][4726] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 01:57:36.199981 containerd[1587]: 2026-03-07 01:57:35.187 [INFO][4726] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.83e3ac5dc2575dc5b140baf524d9f29b21bfd293c9a9c176ca60056dce45eb49" host="localhost" Mar 7 01:57:36.199981 containerd[1587]: 2026-03-07 01:57:35.287 [INFO][4726] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 01:57:36.199981 containerd[1587]: 2026-03-07 01:57:35.532 [INFO][4726] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 01:57:36.199981 containerd[1587]: 2026-03-07 01:57:35.694 [INFO][4726] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 01:57:36.199981 containerd[1587]: 2026-03-07 01:57:35.748 [INFO][4726] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 01:57:36.199981 containerd[1587]: 2026-03-07 01:57:35.748 [INFO][4726] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.83e3ac5dc2575dc5b140baf524d9f29b21bfd293c9a9c176ca60056dce45eb49" host="localhost" Mar 7 01:57:36.199981 containerd[1587]: 2026-03-07 01:57:35.762 [INFO][4726] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.83e3ac5dc2575dc5b140baf524d9f29b21bfd293c9a9c176ca60056dce45eb49 Mar 7 01:57:36.199981 containerd[1587]: 2026-03-07 01:57:35.839 [INFO][4726] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.83e3ac5dc2575dc5b140baf524d9f29b21bfd293c9a9c176ca60056dce45eb49" host="localhost" Mar 7 01:57:36.199981 containerd[1587]: 2026-03-07 01:57:35.988 [INFO][4726] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.83e3ac5dc2575dc5b140baf524d9f29b21bfd293c9a9c176ca60056dce45eb49" host="localhost"
Mar 7 01:57:36.199981 containerd[1587]: 2026-03-07 01:57:35.988 [INFO][4726] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.83e3ac5dc2575dc5b140baf524d9f29b21bfd293c9a9c176ca60056dce45eb49" host="localhost" Mar 7 01:57:36.199981 containerd[1587]: 2026-03-07 01:57:35.988 [INFO][4726] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:57:36.199981 containerd[1587]: 2026-03-07 01:57:35.988 [INFO][4726] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="83e3ac5dc2575dc5b140baf524d9f29b21bfd293c9a9c176ca60056dce45eb49" HandleID="k8s-pod-network.83e3ac5dc2575dc5b140baf524d9f29b21bfd293c9a9c176ca60056dce45eb49" Workload="localhost-k8s-calico--apiserver--647c78759d--8p5hf-eth0" Mar 7 01:57:36.205745 containerd[1587]: 2026-03-07 01:57:36.046 [INFO][4635] cni-plugin/k8s.go 418: Populated endpoint ContainerID="83e3ac5dc2575dc5b140baf524d9f29b21bfd293c9a9c176ca60056dce45eb49" Namespace="calico-system" Pod="calico-apiserver-647c78759d-8p5hf" WorkloadEndpoint="localhost-k8s-calico--apiserver--647c78759d--8p5hf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--647c78759d--8p5hf-eth0", GenerateName:"calico-apiserver-647c78759d-", Namespace:"calico-system", SelfLink:"", UID:"62f6ce31-11a9-4768-b2ff-502e5e5b400a", ResourceVersion:"1286", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 54, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"647c78759d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-647c78759d-8p5hf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali59d3d77afa3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:57:36.205745 containerd[1587]: 2026-03-07 01:57:36.046 [INFO][4635] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="83e3ac5dc2575dc5b140baf524d9f29b21bfd293c9a9c176ca60056dce45eb49" Namespace="calico-system" Pod="calico-apiserver-647c78759d-8p5hf" WorkloadEndpoint="localhost-k8s-calico--apiserver--647c78759d--8p5hf-eth0" Mar 7 01:57:36.205745 containerd[1587]: 2026-03-07 01:57:36.046 [INFO][4635] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali59d3d77afa3 ContainerID="83e3ac5dc2575dc5b140baf524d9f29b21bfd293c9a9c176ca60056dce45eb49" Namespace="calico-system" Pod="calico-apiserver-647c78759d-8p5hf" WorkloadEndpoint="localhost-k8s-calico--apiserver--647c78759d--8p5hf-eth0" Mar 7 01:57:36.205745 containerd[1587]: 2026-03-07 01:57:36.062 [INFO][4635] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="83e3ac5dc2575dc5b140baf524d9f29b21bfd293c9a9c176ca60056dce45eb49" Namespace="calico-system" Pod="calico-apiserver-647c78759d-8p5hf" WorkloadEndpoint="localhost-k8s-calico--apiserver--647c78759d--8p5hf-eth0"
Mar 7 01:57:36.205745 containerd[1587]: 2026-03-07 01:57:36.066 [INFO][4635] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="83e3ac5dc2575dc5b140baf524d9f29b21bfd293c9a9c176ca60056dce45eb49" Namespace="calico-system" Pod="calico-apiserver-647c78759d-8p5hf" WorkloadEndpoint="localhost-k8s-calico--apiserver--647c78759d--8p5hf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--647c78759d--8p5hf-eth0", GenerateName:"calico-apiserver-647c78759d-", Namespace:"calico-system", SelfLink:"", UID:"62f6ce31-11a9-4768-b2ff-502e5e5b400a", ResourceVersion:"1286", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 54, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"647c78759d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"83e3ac5dc2575dc5b140baf524d9f29b21bfd293c9a9c176ca60056dce45eb49", Pod:"calico-apiserver-647c78759d-8p5hf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali59d3d77afa3", MAC:"8a:c2:e7:0a:bf:87", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:57:36.205745 containerd[1587]: 2026-03-07 01:57:36.126 [INFO][4635] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="83e3ac5dc2575dc5b140baf524d9f29b21bfd293c9a9c176ca60056dce45eb49" Namespace="calico-system" Pod="calico-apiserver-647c78759d-8p5hf" WorkloadEndpoint="localhost-k8s-calico--apiserver--647c78759d--8p5hf-eth0" Mar 7 01:57:36.314228 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3143787698.mount: Deactivated successfully.
Mar 7 01:57:36.417224 containerd[1587]: time="2026-03-07T01:57:36.414235402Z" level=info msg="CreateContainer within sandbox \"55b5278ddf09ba1ba00ad44e409fe4ae52b3a317bb32e31b01584829c68ed489\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4aeb7a0d70f8b2094be66af1524ea17d72aa1ff1a197b54957683918d3afea9e\"" Mar 7 01:57:36.423517 containerd[1587]: time="2026-03-07T01:57:36.422721669Z" level=info msg="StartContainer for \"4aeb7a0d70f8b2094be66af1524ea17d72aa1ff1a197b54957683918d3afea9e\"" Mar 7 01:57:36.509988 containerd[1587]: time="2026-03-07T01:57:36.508056451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-w5svg,Uid:f5c47317-c9cf-4ee3-8c60-5a9bc1b9cc0e,Namespace:calico-system,Attempt:0,} returns sandbox id \"3808eb8e32ccdb51bc83cad45f95f70614838041c57cb138d807c7854f874b86\"" Mar 7 01:57:36.540596 containerd[1587]: time="2026-03-07T01:57:36.537624604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cgnsk,Uid:071407a1-af3e-42ed-8ee8-ae1d7a7c3681,Namespace:kube-system,Attempt:0,} returns sandbox id \"34bebaaee5c1e4d52babdf0a7aa5f7e84221c554eac0fc33ff0de22653fb58e5\"" Mar 7 01:57:36.545244 kubelet[2895]: E0307 01:57:36.541170 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:57:36.605196 containerd[1587]: time="2026-03-07T01:57:36.605050415Z" level=info msg="CreateContainer within sandbox \"34bebaaee5c1e4d52babdf0a7aa5f7e84221c554eac0fc33ff0de22653fb58e5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 01:57:36.627655 containerd[1587]: time="2026-03-07T01:57:36.624106589Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:57:36.627655 containerd[1587]: time="2026-03-07T01:57:36.624454316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:57:36.627655 containerd[1587]: time="2026-03-07T01:57:36.624525071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:57:36.627655 containerd[1587]: time="2026-03-07T01:57:36.624719628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:57:36.724519 containerd[1587]: time="2026-03-07T01:57:36.724264880Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:57:36.724965 containerd[1587]: time="2026-03-07T01:57:36.724863350Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:57:36.724965 containerd[1587]: time="2026-03-07T01:57:36.724936048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:57:36.732248 containerd[1587]: time="2026-03-07T01:57:36.725385317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:57:36.891244 containerd[1587]: time="2026-03-07T01:57:36.869879110Z" level=info msg="CreateContainer within sandbox \"34bebaaee5c1e4d52babdf0a7aa5f7e84221c554eac0fc33ff0de22653fb58e5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"df28cd3edaf0c2956c825de7654934636391d6dde5ad1d506be0fd00db42b2e1\"" Mar 7 01:57:36.891244 containerd[1587]: time="2026-03-07T01:57:36.876159140Z" level=info msg="StartContainer for \"df28cd3edaf0c2956c825de7654934636391d6dde5ad1d506be0fd00db42b2e1\"" Mar 7 01:57:36.906015 sshd[4997]: pam_unix(sshd:session): session closed for user core Mar 7 01:57:36.963386 systemd[1]: sshd@15-10.0.0.118:22-10.0.0.1:54546.service: Deactivated successfully. Mar 7 01:57:37.043574 systemd-networkd[1251]: cali0238dde9c07: Gained IPv6LL Mar 7 01:57:37.070999 systemd[1]: session-16.scope: Deactivated successfully. Mar 7 01:57:37.079340 systemd-logind[1563]: Session 16 logged out. Waiting for processes to exit. Mar 7 01:57:37.084198 systemd-logind[1563]: Removed session 16. Mar 7 01:57:37.322783 systemd-resolved[1474]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 01:57:37.410217 systemd-resolved[1474]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 01:57:37.646632 containerd[1587]: time="2026-03-07T01:57:37.634357975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-564b587fc5-hz8rt,Uid:f595f30d-a60b-4bb7-86e8-dc0eba5c18fa,Namespace:calico-system,Attempt:0,} returns sandbox id \"9d1309318e68cdea2cc4111d5b8b78378adedd3b86d01f8688a68e6e67d54f38\"" Mar 7 01:57:37.799049 containerd[1587]: time="2026-03-07T01:57:37.798342452Z" level=info msg="StartContainer for \"4aeb7a0d70f8b2094be66af1524ea17d72aa1ff1a197b54957683918d3afea9e\" returns successfully" Mar 7 01:57:37.843219 containerd[1587]: time="2026-03-07T01:57:37.843161802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-647c78759d-8p5hf,Uid:62f6ce31-11a9-4768-b2ff-502e5e5b400a,Namespace:calico-system,Attempt:0,} returns sandbox id \"83e3ac5dc2575dc5b140baf524d9f29b21bfd293c9a9c176ca60056dce45eb49\"" Mar 7 01:57:37.864938 systemd-networkd[1251]: cali59d3d77afa3: Gained IPv6LL Mar 7 01:57:37.946076 containerd[1587]: time="2026-03-07T01:57:37.945655815Z" level=info msg="StartContainer for \"df28cd3edaf0c2956c825de7654934636391d6dde5ad1d506be0fd00db42b2e1\" returns successfully" Mar 7 01:57:38.467434 kubelet[2895]: E0307 01:57:38.448268 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:57:38.489684 kubelet[2895]: E0307 01:57:38.487291 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:57:38.881232 kubelet[2895]: I0307 01:57:38.881140 2895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-cgnsk" podStartSLOduration=203.88111407 podStartE2EDuration="3m23.88111407s" podCreationTimestamp="2026-03-07 01:54:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:57:38.633016949 +0000 UTC m=+204.415316958" watchObservedRunningTime="2026-03-07 01:57:38.88111407 +0000 UTC m=+204.663414080" Mar 7 01:57:39.642912 kubelet[2895]: E0307 01:57:39.639154 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:57:39.642912 kubelet[2895]: E0307 01:57:39.640279 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:57:40.663152 kubelet[2895]: E0307 01:57:40.659705 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:57:40.672155 kubelet[2895]: E0307 01:57:40.667300 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:57:40.841567 kubelet[2895]: I0307 01:57:40.839627 2895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-qw6r2" podStartSLOduration=205.83955391 podStartE2EDuration="3m25.83955391s" podCreationTimestamp="2026-03-07 01:54:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:57:38.941243127 +0000 UTC m=+204.723543147" watchObservedRunningTime="2026-03-07 01:57:40.83955391 +0000 UTC m=+206.621853921" Mar 7 01:57:41.698717 kubelet[2895]: E0307 01:57:41.690263 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:57:41.706082 kubelet[2895]: E0307 01:57:41.700268 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:57:42.061576 systemd[1]: Started sshd@16-10.0.0.118:22-10.0.0.1:50410.service - OpenSSH per-connection server daemon (10.0.0.1:50410). Mar 7 01:57:42.712036 sshd[5278]: Accepted publickey for core from 10.0.0.1 port 50410 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:57:42.722400 sshd[5278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:57:42.766973 systemd-logind[1563]: New session 17 of user core. Mar 7 01:57:42.792452 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 7 01:57:43.189431 systemd-journald[1139]: Under memory pressure, flushing caches. Mar 7 01:57:43.168017 systemd-resolved[1474]: Under memory pressure, flushing caches. Mar 7 01:57:43.168059 systemd-resolved[1474]: Flushed all caches. Mar 7 01:57:44.339146 sshd[5278]: pam_unix(sshd:session): session closed for user core Mar 7 01:57:44.393038 systemd[1]: sshd@16-10.0.0.118:22-10.0.0.1:50410.service: Deactivated successfully. Mar 7 01:57:44.461104 systemd[1]: session-17.scope: Deactivated successfully. Mar 7 01:57:44.510914 systemd-logind[1563]: Session 17 logged out. Waiting for processes to exit. Mar 7 01:57:44.556366 systemd-logind[1563]: Removed session 17. Mar 7 01:57:45.224641 systemd-journald[1139]: Under memory pressure, flushing caches. Mar 7 01:57:45.223954 systemd-resolved[1474]: Under memory pressure, flushing caches. Mar 7 01:57:45.223966 systemd-resolved[1474]: Flushed all caches.
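
The recurring kubelet dns.go error above is benign but cryptic: the node's resolv.conf lists more nameservers than the libc resolver will use (glibc reads at most 3, and kubelet applies the same cap when building a pod's resolv.conf), so kubelet drops the extras and logs the three it kept: 1.1.1.1, 1.0.0.1 and 8.8.8.8. The trimming amounts to the following (illustrative; the fourth server is hypothetical since the log does not show what was dropped):

package main

import "fmt"

const maxNameservers = 3 // glibc MAXNS; kubelet enforces the same per-pod cap

// trimNameservers keeps the first three servers and reports what was omitted.
func trimNameservers(ns []string) (applied, omitted []string) {
	if len(ns) <= maxNameservers {
		return ns, nil
	}
	return ns[:maxNameservers], ns[maxNameservers:]
}

func main() {
	// The fourth entry is a made-up example of an omitted server.
	applied, omitted := trimNameservers([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "192.168.1.1"})
	fmt.Println(applied, omitted) // [1.1.1.1 1.0.0.1 8.8.8.8] [192.168.1.1]
}
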
Mar 7 01:57:49.369376 systemd[1]: Started sshd@17-10.0.0.118:22-10.0.0.1:50450.service - OpenSSH per-connection server daemon (10.0.0.1:50450). Mar 7 01:57:49.816038 sshd[5343]: Accepted publickey for core from 10.0.0.1 port 50450 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:57:49.835540 sshd[5343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:57:49.880547 systemd-logind[1563]: New session 18 of user core. Mar 7 01:57:49.896677 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 7 01:57:51.111715 containerd[1587]: time="2026-03-07T01:57:51.104426678Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:57:51.115998 containerd[1587]: time="2026-03-07T01:57:51.114633095Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 7 01:57:51.131112 containerd[1587]: time="2026-03-07T01:57:51.129476804Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:57:51.217196 systemd-journald[1139]: Under memory pressure, flushing caches. Mar 7 01:57:51.172328 systemd-resolved[1474]: Under memory pressure, flushing caches. Mar 7 01:57:51.172339 systemd-resolved[1474]: Flushed all caches. Mar 7 01:57:51.244143 containerd[1587]: time="2026-03-07T01:57:51.240698590Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:57:51.325947 containerd[1587]: time="2026-03-07T01:57:51.325195581Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 17.191537546s" Mar 7 01:57:51.325947 containerd[1587]: time="2026-03-07T01:57:51.325266265Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 7 01:57:51.372354 containerd[1587]: time="2026-03-07T01:57:51.371132720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 7 01:57:51.433754 sshd[5343]: pam_unix(sshd:session): session closed for user core Mar 7 01:57:51.487384 systemd[1]: sshd@17-10.0.0.118:22-10.0.0.1:50450.service: Deactivated successfully. Mar 7 01:57:51.501543 systemd[1]: session-18.scope: Deactivated successfully. Mar 7 01:57:51.537486 systemd-logind[1563]: Session 18 logged out. Waiting for processes to exit. Mar 7 01:57:51.551280 containerd[1587]: time="2026-03-07T01:57:51.551133293Z" level=info msg="CreateContainer within sandbox \"b5aa343b9e3d15bc599a0e79a28553e30ac5b4b63ac2d5e3305ca975e9cbf1a6\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 7 01:57:51.553512 systemd-logind[1563]: Removed session 18. 
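
The kube-controllers pull above resolved the tag to a content digest and fetched roughly 52 MB in 17.19 s. Driven by hand against containerd's Go client, the same pull looks approximately like this (namespace and socket path as on this host; error handling abbreviated):

package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, _ := containerd.New("/run/containerd/containerd.sock")
	defer client.Close()
	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/kube-controllers:v3.31.4",
		containerd.WithPullUnpack)
	if err != nil {
		panic(err)
	}
	// Corresponds to the "repo digest" recorded in the Pulled image message.
	fmt.Println(img.Name(), img.Target().Digest)
}

The ImageCreate events in the log (one for the tag, one for the image id, one for the digest) are containerd registering the same content under its three names once the pull completes.
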
Mar 7 01:57:51.790197 containerd[1587]: time="2026-03-07T01:57:51.789979075Z" level=info msg="CreateContainer within sandbox \"b5aa343b9e3d15bc599a0e79a28553e30ac5b4b63ac2d5e3305ca975e9cbf1a6\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"5d2ea5cc9f63a5aeecedd9ad2c678d62b18326afeccac731e868780ba027e7b1\"" Mar 7 01:57:51.797417 containerd[1587]: time="2026-03-07T01:57:51.796064948Z" level=info msg="StartContainer for \"5d2ea5cc9f63a5aeecedd9ad2c678d62b18326afeccac731e868780ba027e7b1\"" Mar 7 01:57:53.319259 systemd-journald[1139]: Under memory pressure, flushing caches. Mar 7 01:57:53.216269 systemd-resolved[1474]: Under memory pressure, flushing caches. Mar 7 01:57:53.216312 systemd-resolved[1474]: Flushed all caches. Mar 7 01:57:53.919196 containerd[1587]: time="2026-03-07T01:57:53.917511023Z" level=info msg="StartContainer for \"5d2ea5cc9f63a5aeecedd9ad2c678d62b18326afeccac731e868780ba027e7b1\" returns successfully" Mar 7 01:57:54.991736 kubelet[2895]: I0307 01:57:54.963269 2895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-64b76d49bd-sx6p4" podStartSLOduration=165.755144045 podStartE2EDuration="3m2.963242988s" podCreationTimestamp="2026-03-07 01:54:52 +0000 UTC" firstStartedPulling="2026-03-07 01:57:34.132383226 +0000 UTC m=+199.914683246" lastFinishedPulling="2026-03-07 01:57:51.340482178 +0000 UTC m=+217.122782189" observedRunningTime="2026-03-07 01:57:54.32870117 +0000 UTC m=+220.111001219" watchObservedRunningTime="2026-03-07 01:57:54.963242988 +0000 UTC m=+220.745543029" Mar 7 01:57:55.319427 systemd-journald[1139]: Under memory pressure, flushing caches. Mar 7 01:57:55.283562 systemd-resolved[1474]: Under memory pressure, flushing caches. Mar 7 01:57:55.283573 systemd-resolved[1474]: Flushed all caches. Mar 7 01:57:56.430410 systemd[1]: Started sshd@18-10.0.0.118:22-10.0.0.1:50834.service - OpenSSH per-connection server daemon (10.0.0.1:50834). Mar 7 01:57:56.760099 sshd[5434]: Accepted publickey for core from 10.0.0.1 port 50834 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:57:56.780250 sshd[5434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:57:56.837243 systemd-logind[1563]: New session 19 of user core. Mar 7 01:57:56.875097 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 7 01:57:58.423492 sshd[5434]: pam_unix(sshd:session): session closed for user core Mar 7 01:57:58.439469 systemd[1]: sshd@18-10.0.0.118:22-10.0.0.1:50834.service: Deactivated successfully. Mar 7 01:57:58.470987 systemd[1]: session-19.scope: Deactivated successfully. Mar 7 01:57:58.477515 systemd-logind[1563]: Session 19 logged out. Waiting for processes to exit. Mar 7 01:57:58.488249 systemd-logind[1563]: Removed session 19. Mar 7 01:58:01.170169 systemd-journald[1139]: Under memory pressure, flushing caches. Mar 7 01:58:01.152708 systemd-resolved[1474]: Under memory pressure, flushing caches. Mar 7 01:58:01.152718 systemd-resolved[1474]: Flushed all caches. Mar 7 01:58:01.968506 kubelet[2895]: E0307 01:58:01.967112 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:58:03.444457 systemd[1]: Started sshd@19-10.0.0.118:22-10.0.0.1:52042.service - OpenSSH per-connection server daemon (10.0.0.1:52042). 
Mar 7 01:58:03.772133 sshd[5462]: Accepted publickey for core from 10.0.0.1 port 52042 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:58:03.779949 sshd[5462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:58:03.816706 systemd-logind[1563]: New session 20 of user core. Mar 7 01:58:03.840715 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 7 01:58:04.816462 sshd[5462]: pam_unix(sshd:session): session closed for user core Mar 7 01:58:04.827086 systemd[1]: sshd@19-10.0.0.118:22-10.0.0.1:52042.service: Deactivated successfully. Mar 7 01:58:04.835884 systemd[1]: session-20.scope: Deactivated successfully. Mar 7 01:58:04.851735 systemd-logind[1563]: Session 20 logged out. Waiting for processes to exit. Mar 7 01:58:04.872012 systemd-logind[1563]: Removed session 20. Mar 7 01:58:05.273508 containerd[1587]: time="2026-03-07T01:58:05.271677057Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 7 01:58:05.273508 containerd[1587]: time="2026-03-07T01:58:05.272931106Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:58:05.278259 containerd[1587]: time="2026-03-07T01:58:05.277422217Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:58:05.317862 containerd[1587]: time="2026-03-07T01:58:05.317417086Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:58:05.319634 containerd[1587]: time="2026-03-07T01:58:05.318463898Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 13.947227893s" Mar 7 01:58:05.319634 containerd[1587]: time="2026-03-07T01:58:05.318515807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 7 01:58:05.326622 containerd[1587]: time="2026-03-07T01:58:05.325365451Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 7 01:58:05.347455 containerd[1587]: time="2026-03-07T01:58:05.347092720Z" level=info msg="CreateContainer within sandbox \"2eca5351ddf51a03079b83951726d11a4f35ce17ffa910b10559dd3765798a41\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 7 01:58:05.420458 containerd[1587]: time="2026-03-07T01:58:05.420105319Z" level=info msg="CreateContainer within sandbox \"2eca5351ddf51a03079b83951726d11a4f35ce17ffa910b10559dd3765798a41\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bf158452475a786e8c3fa116ccb074c9f223607d6671ca09c52e0d61abd7858e\"" Mar 7 01:58:05.427410 containerd[1587]: time="2026-03-07T01:58:05.423733306Z" level=info msg="StartContainer for \"bf158452475a786e8c3fa116ccb074c9f223607d6671ca09c52e0d61abd7858e\""
Mar 7 01:58:05.860304 containerd[1587]: time="2026-03-07T01:58:05.859422203Z" level=info msg="StartContainer for \"bf158452475a786e8c3fa116ccb074c9f223607d6671ca09c52e0d61abd7858e\" returns successfully" Mar 7 01:58:07.243227 systemd-journald[1139]: Under memory pressure, flushing caches. Mar 7 01:58:07.232760 systemd-resolved[1474]: Under memory pressure, flushing caches. Mar 7 01:58:07.233372 systemd-resolved[1474]: Flushed all caches. Mar 7 01:58:09.855866 systemd[1]: Started sshd@20-10.0.0.118:22-10.0.0.1:52062.service - OpenSSH per-connection server daemon (10.0.0.1:52062). Mar 7 01:58:10.337984 sshd[5537]: Accepted publickey for core from 10.0.0.1 port 52062 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:58:10.346389 sshd[5537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:58:10.393102 systemd-logind[1563]: New session 21 of user core. Mar 7 01:58:10.427464 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 7 01:58:11.906614 sshd[5537]: pam_unix(sshd:session): session closed for user core Mar 7 01:58:11.931261 systemd-logind[1563]: Session 21 logged out. Waiting for processes to exit. Mar 7 01:58:11.935545 systemd[1]: sshd@20-10.0.0.118:22-10.0.0.1:52062.service: Deactivated successfully. Mar 7 01:58:11.958228 systemd[1]: session-21.scope: Deactivated successfully. Mar 7 01:58:11.965038 systemd-logind[1563]: Removed session 21. Mar 7 01:58:12.562233 kubelet[2895]: I0307 01:58:12.559204 2895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-647c78759d-qnq6w" podStartSLOduration=173.929319697 podStartE2EDuration="3m23.55917549s" podCreationTimestamp="2026-03-07 01:54:49 +0000 UTC" firstStartedPulling="2026-03-07 01:57:35.693136266 +0000 UTC m=+201.475436276" lastFinishedPulling="2026-03-07 01:58:05.322992059 +0000 UTC m=+231.105292069" observedRunningTime="2026-03-07 01:58:06.669668552 +0000 UTC m=+232.451968572" watchObservedRunningTime="2026-03-07 01:58:12.55917549 +0000 UTC m=+238.341475520" Mar 7 01:58:13.151751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2123770713.mount: Deactivated successfully. Mar 7 01:58:13.186325 systemd-resolved[1474]: Under memory pressure, flushing caches. Mar 7 01:58:13.187343 systemd-journald[1139]: Under memory pressure, flushing caches. Mar 7 01:58:13.186338 systemd-resolved[1474]: Flushed all caches. Mar 7 01:58:16.968168 systemd[1]: Started sshd@21-10.0.0.118:22-10.0.0.1:59008.service - OpenSSH per-connection server daemon (10.0.0.1:59008). Mar 7 01:58:17.802171 sshd[5587]: Accepted publickey for core from 10.0.0.1 port 59008 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:58:17.821879 sshd[5587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:58:18.074455 systemd-logind[1563]: New session 22 of user core. Mar 7 01:58:18.114969 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 7 01:58:19.228347 systemd-journald[1139]: Under memory pressure, flushing caches. Mar 7 01:58:19.229508 systemd-resolved[1474]: Under memory pressure, flushing caches. Mar 7 01:58:19.229563 systemd-resolved[1474]: Flushed all caches. Mar 7 01:58:21.318884 systemd-journald[1139]: Under memory pressure, flushing caches. Mar 7 01:58:21.253907 systemd-resolved[1474]: Under memory pressure, flushing caches. Mar 7 01:58:21.253921 systemd-resolved[1474]: Flushed all caches.
Mar 7 01:58:21.655261 sshd[5587]: pam_unix(sshd:session): session closed for user core Mar 7 01:58:21.676883 containerd[1587]: time="2026-03-07T01:58:21.676118357Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:58:21.689869 containerd[1587]: time="2026-03-07T01:58:21.679152727Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 7 01:58:21.700305 systemd[1]: sshd@21-10.0.0.118:22-10.0.0.1:59008.service: Deactivated successfully. Mar 7 01:58:21.744574 systemd[1]: session-22.scope: Deactivated successfully. Mar 7 01:58:21.745329 systemd-logind[1563]: Session 22 logged out. Waiting for processes to exit. Mar 7 01:58:21.791768 systemd-logind[1563]: Removed session 22. Mar 7 01:58:21.820093 containerd[1587]: time="2026-03-07T01:58:21.819133484Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:58:21.860974 containerd[1587]: time="2026-03-07T01:58:21.859157623Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 16.533740184s" Mar 7 01:58:21.860974 containerd[1587]: time="2026-03-07T01:58:21.859227786Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 7 01:58:21.865274 containerd[1587]: time="2026-03-07T01:58:21.863288466Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:58:21.940997 containerd[1587]: time="2026-03-07T01:58:21.938544619Z" level=info msg="CreateContainer within sandbox \"3808eb8e32ccdb51bc83cad45f95f70614838041c57cb138d807c7854f874b86\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 7 01:58:21.974303 containerd[1587]: time="2026-03-07T01:58:21.969778971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 7 01:58:22.176577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3626163902.mount: Deactivated successfully. Mar 7 01:58:22.196945 containerd[1587]: time="2026-03-07T01:58:22.192213664Z" level=info msg="CreateContainer within sandbox \"3808eb8e32ccdb51bc83cad45f95f70614838041c57cb138d807c7854f874b86\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"da7eafe706baa521b15ca74c47dbf70d51058d81210aa6d983f4756685d4442c\"" Mar 7 01:58:22.219949 containerd[1587]: time="2026-03-07T01:58:22.219886433Z" level=info msg="StartContainer for \"da7eafe706baa521b15ca74c47dbf70d51058d81210aa6d983f4756685d4442c\"" Mar 7 01:58:23.140152 containerd[1587]: time="2026-03-07T01:58:23.136498624Z" level=info msg="StartContainer for \"da7eafe706baa521b15ca74c47dbf70d51058d81210aa6d983f4756685d4442c\" returns successfully" Mar 7 01:58:24.832660 systemd[1]: run-containerd-runc-k8s.io-da7eafe706baa521b15ca74c47dbf70d51058d81210aa6d983f4756685d4442c-runc.Fn3wuz.mount: Deactivated successfully. 
Mar 7 01:58:26.093590 systemd[1]: run-containerd-runc-k8s.io-da7eafe706baa521b15ca74c47dbf70d51058d81210aa6d983f4756685d4442c-runc.7ikZpH.mount: Deactivated successfully. Mar 7 01:58:26.769106 systemd[1]: Started sshd@22-10.0.0.118:22-10.0.0.1:60198.service - OpenSSH per-connection server daemon (10.0.0.1:60198). Mar 7 01:58:27.024481 containerd[1587]: time="2026-03-07T01:58:27.024312946Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:58:27.034101 containerd[1587]: time="2026-03-07T01:58:27.033428373Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 7 01:58:27.056236 containerd[1587]: time="2026-03-07T01:58:27.054657584Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:58:27.081082 containerd[1587]: time="2026-03-07T01:58:27.078526676Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:58:27.090979 containerd[1587]: time="2026-03-07T01:58:27.086785725Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 5.116847373s" Mar 7 01:58:27.090979 containerd[1587]: time="2026-03-07T01:58:27.088192148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 7 01:58:27.131025 containerd[1587]: time="2026-03-07T01:58:27.130538535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 7 01:58:27.249072 systemd-journald[1139]: Under memory pressure, flushing caches. Mar 7 01:58:27.249237 containerd[1587]: time="2026-03-07T01:58:27.215614541Z" level=info msg="CreateContainer within sandbox \"9d1309318e68cdea2cc4111d5b8b78378adedd3b86d01f8688a68e6e67d54f38\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 7 01:58:27.231601 systemd-resolved[1474]: Under memory pressure, flushing caches. Mar 7 01:58:27.231657 systemd-resolved[1474]: Flushed all caches. Mar 7 01:58:27.342213 sshd[5717]: Accepted publickey for core from 10.0.0.1 port 60198 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:58:27.361541 sshd[5717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:58:27.378457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2814789574.mount: Deactivated successfully. Mar 7 01:58:27.426181 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 7 01:58:27.437116 systemd-logind[1563]: New session 23 of user core. 
Mar 7 01:58:27.466190 containerd[1587]: time="2026-03-07T01:58:27.442513496Z" level=info msg="CreateContainer within sandbox \"9d1309318e68cdea2cc4111d5b8b78378adedd3b86d01f8688a68e6e67d54f38\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"949065465f25c5bd4783f910946e300b41bb8f35cbead2ccee0116c46c09810b\""
Mar 7 01:58:27.466190 containerd[1587]: time="2026-03-07T01:58:27.451615296Z" level=info msg="StartContainer for \"949065465f25c5bd4783f910946e300b41bb8f35cbead2ccee0116c46c09810b\""
Mar 7 01:58:27.636349 containerd[1587]: time="2026-03-07T01:58:27.636153427Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:58:27.640396 containerd[1587]: time="2026-03-07T01:58:27.639724313Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77"
Mar 7 01:58:27.690254 containerd[1587]: time="2026-03-07T01:58:27.685527287Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 554.890998ms"
Mar 7 01:58:27.690254 containerd[1587]: time="2026-03-07T01:58:27.685593823Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\""
Mar 7 01:58:27.741604 containerd[1587]: time="2026-03-07T01:58:27.741549688Z" level=info msg="CreateContainer within sandbox \"83e3ac5dc2575dc5b140baf524d9f29b21bfd293c9a9c176ca60056dce45eb49\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Mar 7 01:58:28.129410 containerd[1587]: time="2026-03-07T01:58:28.128176663Z" level=info msg="CreateContainer within sandbox \"83e3ac5dc2575dc5b140baf524d9f29b21bfd293c9a9c176ca60056dce45eb49\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f2c98154c66609ea66432e8c575f3d6a9f04ed1230b83b90a97ba6370201e813\""
Mar 7 01:58:28.167158 containerd[1587]: time="2026-03-07T01:58:28.167100564Z" level=info msg="StartContainer for \"f2c98154c66609ea66432e8c575f3d6a9f04ed1230b83b90a97ba6370201e813\""
Mar 7 01:58:29.174010 containerd[1587]: time="2026-03-07T01:58:29.173954360Z" level=info msg="StartContainer for \"949065465f25c5bd4783f910946e300b41bb8f35cbead2ccee0116c46c09810b\" returns successfully"
Mar 7 01:58:29.273070 systemd-journald[1139]: Under memory pressure, flushing caches.
Mar 7 01:58:29.249694 systemd-resolved[1474]: Under memory pressure, flushing caches.
Mar 7 01:58:29.249722 systemd-resolved[1474]: Flushed all caches.
Mar 7 01:58:29.284939 containerd[1587]: time="2026-03-07T01:58:29.284679813Z" level=info msg="StartContainer for \"f2c98154c66609ea66432e8c575f3d6a9f04ed1230b83b90a97ba6370201e813\" returns successfully"
Mar 7 01:58:29.379207 containerd[1587]: time="2026-03-07T01:58:29.373422503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\""
Mar 7 01:58:29.875200 sshd[5717]: pam_unix(sshd:session): session closed for user core
Mar 7 01:58:29.898083 systemd-logind[1563]: Session 23 logged out. Waiting for processes to exit.
Mar 7 01:58:29.901206 systemd[1]: sshd@22-10.0.0.118:22-10.0.0.1:60198.service: Deactivated successfully.
Mar 7 01:58:29.936527 systemd[1]: session-23.scope: Deactivated successfully.
Mar 7 01:58:29.939379 systemd-logind[1563]: Removed session 23.
Mar 7 01:58:29.956882 kubelet[2895]: I0307 01:58:29.955320 2895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-647c78759d-8p5hf" podStartSLOduration=171.113149533 podStartE2EDuration="3m40.955294652s" podCreationTimestamp="2026-03-07 01:54:49 +0000 UTC" firstStartedPulling="2026-03-07 01:57:37.845920515 +0000 UTC m=+203.628220535" lastFinishedPulling="2026-03-07 01:58:27.688065645 +0000 UTC m=+253.470365654" observedRunningTime="2026-03-07 01:58:29.952575281 +0000 UTC m=+255.734875342" watchObservedRunningTime="2026-03-07 01:58:29.955294652 +0000 UTC m=+255.737594672"
Mar 7 01:58:29.956882 kubelet[2895]: I0307 01:58:29.955511 2895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-w5svg" podStartSLOduration=174.595146599 podStartE2EDuration="3m39.955498788s" podCreationTimestamp="2026-03-07 01:54:50 +0000 UTC" firstStartedPulling="2026-03-07 01:57:36.518154568 +0000 UTC m=+202.300454578" lastFinishedPulling="2026-03-07 01:58:21.878506757 +0000 UTC m=+247.660806767" observedRunningTime="2026-03-07 01:58:24.576551042 +0000 UTC m=+250.358851072" watchObservedRunningTime="2026-03-07 01:58:29.955498788 +0000 UTC m=+255.737798798"
Mar 7 01:58:31.117789 kubelet[2895]: E0307 01:58:31.117643 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:58:34.936889 systemd[1]: Started sshd@23-10.0.0.118:22-10.0.0.1:53452.service - OpenSSH per-connection server daemon (10.0.0.1:53452).
Mar 7 01:58:35.407531 sshd[5879]: Accepted publickey for core from 10.0.0.1 port 53452 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:58:35.398617 sshd[5879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:58:35.433634 systemd-logind[1563]: New session 24 of user core.
Mar 7 01:58:35.466924 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 7 01:58:35.885632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2838044560.mount: Deactivated successfully.
Mar 7 01:58:36.325978 containerd[1587]: time="2026-03-07T01:58:36.317588511Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:58:36.362452 containerd[1587]: time="2026-03-07T01:58:36.357579998Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475"
Mar 7 01:58:36.409402 containerd[1587]: time="2026-03-07T01:58:36.409285200Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:58:36.471334 containerd[1587]: time="2026-03-07T01:58:36.469493544Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:58:36.495605 containerd[1587]: time="2026-03-07T01:58:36.493637718Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 7.115529177s"
Mar 7 01:58:36.495605 containerd[1587]: time="2026-03-07T01:58:36.493768045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\""
Mar 7 01:58:36.538024 containerd[1587]: time="2026-03-07T01:58:36.537581323Z" level=info msg="CreateContainer within sandbox \"9d1309318e68cdea2cc4111d5b8b78378adedd3b86d01f8688a68e6e67d54f38\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Mar 7 01:58:36.767711 containerd[1587]: time="2026-03-07T01:58:36.766872075Z" level=info msg="CreateContainer within sandbox \"9d1309318e68cdea2cc4111d5b8b78378adedd3b86d01f8688a68e6e67d54f38\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"16d97fa1907b6391585b57fb24fbf3d86c39f027ad285187cb473c27ce7340ff\""
Mar 7 01:58:36.779256 containerd[1587]: time="2026-03-07T01:58:36.778346453Z" level=info msg="StartContainer for \"16d97fa1907b6391585b57fb24fbf3d86c39f027ad285187cb473c27ce7340ff\""
Mar 7 01:58:37.196529 systemd-journald[1139]: Under memory pressure, flushing caches.
Mar 7 01:58:37.185182 systemd-resolved[1474]: Under memory pressure, flushing caches.
Mar 7 01:58:37.185218 systemd-resolved[1474]: Flushed all caches.
Mar 7 01:58:37.348176 systemd[1]: run-containerd-runc-k8s.io-16d97fa1907b6391585b57fb24fbf3d86c39f027ad285187cb473c27ce7340ff-runc.tRFqB4.mount: Deactivated successfully.
Mar 7 01:58:37.890618 sshd[5879]: pam_unix(sshd:session): session closed for user core
Mar 7 01:58:37.943379 systemd[1]: sshd@23-10.0.0.118:22-10.0.0.1:53452.service: Deactivated successfully.
Mar 7 01:58:37.974536 systemd[1]: session-24.scope: Deactivated successfully.
Mar 7 01:58:37.979373 systemd-logind[1563]: Session 24 logged out. Waiting for processes to exit.
Mar 7 01:58:37.985112 systemd-logind[1563]: Removed session 24.
Mar 7 01:58:38.197530 containerd[1587]: time="2026-03-07T01:58:38.197002078Z" level=info msg="StartContainer for \"16d97fa1907b6391585b57fb24fbf3d86c39f027ad285187cb473c27ce7340ff\" returns successfully"
Mar 7 01:58:39.248863 systemd-journald[1139]: Under memory pressure, flushing caches.
Mar 7 01:58:39.244562 systemd-resolved[1474]: Under memory pressure, flushing caches.
Mar 7 01:58:39.244573 systemd-resolved[1474]: Flushed all caches.
Mar 7 01:58:39.461383 kubelet[2895]: I0307 01:58:39.459164 2895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-564b587fc5-hz8rt" podStartSLOduration=27.637277384 podStartE2EDuration="1m26.459135453s" podCreationTimestamp="2026-03-07 01:57:13 +0000 UTC" firstStartedPulling="2026-03-07 01:57:37.672766448 +0000 UTC m=+203.455066458" lastFinishedPulling="2026-03-07 01:58:36.494624517 +0000 UTC m=+262.276924527" observedRunningTime="2026-03-07 01:58:39.435928398 +0000 UTC m=+265.218228418" watchObservedRunningTime="2026-03-07 01:58:39.459135453 +0000 UTC m=+265.241435464"
Mar 7 01:58:40.971375 kubelet[2895]: E0307 01:58:40.968623 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:58:42.899245 systemd[1]: Started sshd@24-10.0.0.118:22-10.0.0.1:55832.service - OpenSSH per-connection server daemon (10.0.0.1:55832).
Mar 7 01:58:42.971353 kubelet[2895]: E0307 01:58:42.970595 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:58:43.226149 sshd[5966]: Accepted publickey for core from 10.0.0.1 port 55832 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:58:43.248709 sshd[5966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:58:43.295954 systemd-logind[1563]: New session 25 of user core.
Mar 7 01:58:43.376283 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 7 01:58:44.463879 sshd[5966]: pam_unix(sshd:session): session closed for user core
Mar 7 01:58:44.498662 systemd[1]: sshd@24-10.0.0.118:22-10.0.0.1:55832.service: Deactivated successfully.
Mar 7 01:58:44.545602 systemd[1]: session-25.scope: Deactivated successfully.
Mar 7 01:58:44.560703 systemd-logind[1563]: Session 25 logged out. Waiting for processes to exit.
Mar 7 01:58:44.585721 systemd-logind[1563]: Removed session 25.
Mar 7 01:58:49.089082 kubelet[2895]: E0307 01:58:49.087190 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:58:49.497641 systemd[1]: Started sshd@25-10.0.0.118:22-10.0.0.1:55908.service - OpenSSH per-connection server daemon (10.0.0.1:55908).
Mar 7 01:58:49.723646 sshd[5993]: Accepted publickey for core from 10.0.0.1 port 55908 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:58:49.730658 sshd[5993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:58:49.768895 systemd-logind[1563]: New session 26 of user core.
Mar 7 01:58:49.794881 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 7 01:58:50.770478 sshd[5993]: pam_unix(sshd:session): session closed for user core
Mar 7 01:58:50.796038 systemd[1]: sshd@25-10.0.0.118:22-10.0.0.1:55908.service: Deactivated successfully.
Mar 7 01:58:50.814082 systemd-logind[1563]: Session 26 logged out. Waiting for processes to exit.
Mar 7 01:58:50.816308 systemd[1]: session-26.scope: Deactivated successfully.
Mar 7 01:58:50.824254 systemd-logind[1563]: Removed session 26.
Mar 7 01:58:55.834156 systemd[1]: Started sshd@26-10.0.0.118:22-10.0.0.1:57096.service - OpenSSH per-connection server daemon (10.0.0.1:57096).
Mar 7 01:58:56.161408 sshd[6062]: Accepted publickey for core from 10.0.0.1 port 57096 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:58:56.164626 sshd[6062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:58:56.190028 systemd-logind[1563]: New session 27 of user core.
Mar 7 01:58:56.194648 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 7 01:58:57.142553 sshd[6062]: pam_unix(sshd:session): session closed for user core
Mar 7 01:58:57.164685 systemd[1]: sshd@26-10.0.0.118:22-10.0.0.1:57096.service: Deactivated successfully.
Mar 7 01:58:57.181154 systemd[1]: session-27.scope: Deactivated successfully.
Mar 7 01:58:57.185190 systemd-logind[1563]: Session 27 logged out. Waiting for processes to exit.
Mar 7 01:58:57.193337 systemd-logind[1563]: Removed session 27.
Mar 7 01:59:02.184435 systemd[1]: Started sshd@27-10.0.0.118:22-10.0.0.1:46618.service - OpenSSH per-connection server daemon (10.0.0.1:46618).
Mar 7 01:59:02.345377 sshd[6110]: Accepted publickey for core from 10.0.0.1 port 46618 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:59:02.361212 sshd[6110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:59:02.388145 systemd-logind[1563]: New session 28 of user core.
Mar 7 01:59:02.411549 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 7 01:59:03.570630 sshd[6110]: pam_unix(sshd:session): session closed for user core
Mar 7 01:59:03.593701 systemd[1]: sshd@27-10.0.0.118:22-10.0.0.1:46618.service: Deactivated successfully.
Mar 7 01:59:03.600585 systemd-logind[1563]: Session 28 logged out. Waiting for processes to exit.
Mar 7 01:59:03.601755 systemd[1]: session-28.scope: Deactivated successfully.
Mar 7 01:59:03.617272 systemd-logind[1563]: Removed session 28.
Mar 7 01:59:08.604578 systemd[1]: Started sshd@28-10.0.0.118:22-10.0.0.1:46666.service - OpenSSH per-connection server daemon (10.0.0.1:46666).
Mar 7 01:59:08.697498 sshd[6127]: Accepted publickey for core from 10.0.0.1 port 46666 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:59:08.703568 sshd[6127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:59:08.727444 systemd-logind[1563]: New session 29 of user core.
Mar 7 01:59:08.735613 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 7 01:59:08.969011 kubelet[2895]: E0307 01:59:08.968030 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:59:08.969730 kubelet[2895]: E0307 01:59:08.969702 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:59:09.172986 sshd[6127]: pam_unix(sshd:session): session closed for user core
Mar 7 01:59:09.182359 systemd[1]: sshd@28-10.0.0.118:22-10.0.0.1:46666.service: Deactivated successfully.
Mar 7 01:59:09.187701 systemd-logind[1563]: Session 29 logged out. Waiting for processes to exit.
Mar 7 01:59:09.189970 systemd[1]: session-29.scope: Deactivated successfully.
Mar 7 01:59:09.193005 systemd-logind[1563]: Removed session 29.
Mar 7 01:59:10.976987 kubelet[2895]: E0307 01:59:10.973120 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:59:14.275463 systemd[1]: Started sshd@29-10.0.0.118:22-10.0.0.1:49388.service - OpenSSH per-connection server daemon (10.0.0.1:49388).
Mar 7 01:59:14.622936 sshd[6165]: Accepted publickey for core from 10.0.0.1 port 49388 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:59:14.633753 sshd[6165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:59:14.697424 systemd-logind[1563]: New session 30 of user core.
Mar 7 01:59:14.743512 systemd[1]: Started session-30.scope - Session 30 of User core.
Mar 7 01:59:16.039492 sshd[6165]: pam_unix(sshd:session): session closed for user core
Mar 7 01:59:16.072360 systemd[1]: sshd@29-10.0.0.118:22-10.0.0.1:49388.service: Deactivated successfully.
Mar 7 01:59:16.128341 systemd[1]: session-30.scope: Deactivated successfully.
Mar 7 01:59:16.129973 systemd-logind[1563]: Session 30 logged out. Waiting for processes to exit.
Mar 7 01:59:16.142638 systemd-logind[1563]: Removed session 30.
Mar 7 01:59:17.208623 systemd-journald[1139]: Under memory pressure, flushing caches.
Mar 7 01:59:17.183675 systemd-resolved[1474]: Under memory pressure, flushing caches.
Mar 7 01:59:17.183688 systemd-resolved[1474]: Flushed all caches.
Mar 7 01:59:21.079896 systemd[1]: Started sshd@30-10.0.0.118:22-10.0.0.1:45942.service - OpenSSH per-connection server daemon (10.0.0.1:45942).
Mar 7 01:59:21.362302 sshd[6184]: Accepted publickey for core from 10.0.0.1 port 45942 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:59:21.367763 sshd[6184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:59:21.427516 systemd-logind[1563]: New session 31 of user core.
Mar 7 01:59:21.453473 systemd[1]: Started session-31.scope - Session 31 of User core.
Mar 7 01:59:22.183191 sshd[6184]: pam_unix(sshd:session): session closed for user core
Mar 7 01:59:22.215678 systemd[1]: sshd@30-10.0.0.118:22-10.0.0.1:45942.service: Deactivated successfully.
Mar 7 01:59:22.232535 systemd[1]: session-31.scope: Deactivated successfully.
Mar 7 01:59:22.235309 systemd-logind[1563]: Session 31 logged out. Waiting for processes to exit.
Mar 7 01:59:22.238332 systemd-logind[1563]: Removed session 31.
Mar 7 01:59:27.226303 systemd[1]: Started sshd@31-10.0.0.118:22-10.0.0.1:45950.service - OpenSSH per-connection server daemon (10.0.0.1:45950).
Mar 7 01:59:27.603661 sshd[6237]: Accepted publickey for core from 10.0.0.1 port 45950 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:59:27.607599 sshd[6237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:59:27.640038 systemd-logind[1563]: New session 32 of user core.
Mar 7 01:59:27.659930 systemd[1]: Started session-32.scope - Session 32 of User core.
Mar 7 01:59:28.285015 sshd[6237]: pam_unix(sshd:session): session closed for user core
Mar 7 01:59:28.294453 systemd[1]: sshd@31-10.0.0.118:22-10.0.0.1:45950.service: Deactivated successfully.
Mar 7 01:59:28.304715 systemd[1]: session-32.scope: Deactivated successfully.
Mar 7 01:59:28.308044 systemd-logind[1563]: Session 32 logged out. Waiting for processes to exit.
Mar 7 01:59:28.311270 systemd-logind[1563]: Removed session 32.
Mar 7 01:59:33.346061 systemd[1]: Started sshd@32-10.0.0.118:22-10.0.0.1:44980.service - OpenSSH per-connection server daemon (10.0.0.1:44980).
Mar 7 01:59:33.703026 sshd[6307]: Accepted publickey for core from 10.0.0.1 port 44980 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:59:33.739075 sshd[6307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:59:33.885311 systemd-logind[1563]: New session 33 of user core.
Mar 7 01:59:33.922579 systemd[1]: Started session-33.scope - Session 33 of User core.
Mar 7 01:59:35.258055 systemd-journald[1139]: Under memory pressure, flushing caches.
Mar 7 01:59:35.234158 sshd[6307]: pam_unix(sshd:session): session closed for user core
Mar 7 01:59:35.245157 systemd-resolved[1474]: Under memory pressure, flushing caches.
Mar 7 01:59:35.245168 systemd-resolved[1474]: Flushed all caches.
Mar 7 01:59:35.259760 systemd[1]: sshd@32-10.0.0.118:22-10.0.0.1:44980.service: Deactivated successfully.
Mar 7 01:59:35.261102 systemd-logind[1563]: Session 33 logged out. Waiting for processes to exit.
Mar 7 01:59:35.273101 systemd[1]: session-33.scope: Deactivated successfully.
Mar 7 01:59:35.287308 systemd-logind[1563]: Removed session 33.
Mar 7 01:59:40.313575 systemd[1]: Started sshd@33-10.0.0.118:22-10.0.0.1:58154.service - OpenSSH per-connection server daemon (10.0.0.1:58154).
Mar 7 01:59:40.556503 sshd[6332]: Accepted publickey for core from 10.0.0.1 port 58154 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:59:40.560100 sshd[6332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:59:40.642241 systemd-logind[1563]: New session 34 of user core.
Mar 7 01:59:40.659394 systemd[1]: Started session-34.scope - Session 34 of User core.
Mar 7 01:59:42.097917 sshd[6332]: pam_unix(sshd:session): session closed for user core
Mar 7 01:59:42.143607 systemd[1]: Started sshd@34-10.0.0.118:22-10.0.0.1:58184.service - OpenSSH per-connection server daemon (10.0.0.1:58184).
Mar 7 01:59:42.147165 systemd[1]: sshd@33-10.0.0.118:22-10.0.0.1:58154.service: Deactivated successfully.
Mar 7 01:59:42.186661 systemd-logind[1563]: Session 34 logged out. Waiting for processes to exit.
Mar 7 01:59:42.204054 systemd[1]: session-34.scope: Deactivated successfully.
Mar 7 01:59:42.210691 systemd-logind[1563]: Removed session 34.
Mar 7 01:59:42.531031 sshd[6345]: Accepted publickey for core from 10.0.0.1 port 58184 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:59:42.550917 sshd[6345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:59:42.604010 systemd-logind[1563]: New session 35 of user core.
Mar 7 01:59:42.628393 systemd[1]: Started session-35.scope - Session 35 of User core.
Mar 7 01:59:43.243471 systemd-journald[1139]: Under memory pressure, flushing caches.
Mar 7 01:59:43.242461 systemd-resolved[1474]: Under memory pressure, flushing caches.
Mar 7 01:59:43.242472 systemd-resolved[1474]: Flushed all caches.
Mar 7 01:59:45.057976 sshd[6345]: pam_unix(sshd:session): session closed for user core
Mar 7 01:59:45.112258 systemd[1]: Started sshd@35-10.0.0.118:22-10.0.0.1:58200.service - OpenSSH per-connection server daemon (10.0.0.1:58200).
Mar 7 01:59:45.114234 systemd[1]: sshd@34-10.0.0.118:22-10.0.0.1:58184.service: Deactivated successfully.
Mar 7 01:59:45.133392 systemd[1]: session-35.scope: Deactivated successfully.
Mar 7 01:59:45.137788 systemd-logind[1563]: Session 35 logged out. Waiting for processes to exit.
Mar 7 01:59:45.141902 systemd-logind[1563]: Removed session 35.
Mar 7 01:59:45.292411 systemd-journald[1139]: Under memory pressure, flushing caches.
Mar 7 01:59:45.287158 systemd-resolved[1474]: Under memory pressure, flushing caches.
Mar 7 01:59:45.287168 systemd-resolved[1474]: Flushed all caches.
Mar 7 01:59:45.306898 sshd[6382]: Accepted publickey for core from 10.0.0.1 port 58200 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:59:45.306642 sshd[6382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:59:45.409404 systemd-logind[1563]: New session 36 of user core.
Mar 7 01:59:45.453664 systemd[1]: Started session-36.scope - Session 36 of User core.
Mar 7 01:59:46.200736 sshd[6382]: pam_unix(sshd:session): session closed for user core
Mar 7 01:59:46.223492 systemd-logind[1563]: Session 36 logged out. Waiting for processes to exit.
Mar 7 01:59:46.225198 systemd[1]: sshd@35-10.0.0.118:22-10.0.0.1:58200.service: Deactivated successfully.
Mar 7 01:59:46.260944 systemd[1]: session-36.scope: Deactivated successfully.
Mar 7 01:59:46.280490 systemd-logind[1563]: Removed session 36.
Mar 7 01:59:51.247333 systemd[1]: Started sshd@36-10.0.0.118:22-10.0.0.1:51042.service - OpenSSH per-connection server daemon (10.0.0.1:51042).
Mar 7 01:59:51.371168 sshd[6402]: Accepted publickey for core from 10.0.0.1 port 51042 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:59:51.380968 sshd[6402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:59:51.442226 systemd-logind[1563]: New session 37 of user core.
Mar 7 01:59:51.458730 systemd[1]: Started session-37.scope - Session 37 of User core.
Mar 7 01:59:52.372753 sshd[6402]: pam_unix(sshd:session): session closed for user core
Mar 7 01:59:52.414118 systemd[1]: sshd@36-10.0.0.118:22-10.0.0.1:51042.service: Deactivated successfully.
Mar 7 01:59:52.490599 systemd[1]: session-37.scope: Deactivated successfully.
Mar 7 01:59:52.521308 systemd-logind[1563]: Session 37 logged out. Waiting for processes to exit.
Mar 7 01:59:52.538240 systemd-logind[1563]: Removed session 37.
Mar 7 01:59:52.984117 kubelet[2895]: E0307 01:59:52.968045 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:59:54.973413 kubelet[2895]: E0307 01:59:54.971516 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:59:57.185445 systemd[1]: run-containerd-runc-k8s.io-da7eafe706baa521b15ca74c47dbf70d51058d81210aa6d983f4756685d4442c-runc.TeTbgX.mount: Deactivated successfully.
Mar 7 01:59:57.431549 systemd[1]: Started sshd@37-10.0.0.118:22-10.0.0.1:51062.service - OpenSSH per-connection server daemon (10.0.0.1:51062).
Mar 7 01:59:57.564903 sshd[6462]: Accepted publickey for core from 10.0.0.1 port 51062 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:59:57.566069 sshd[6462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:59:57.609564 systemd-logind[1563]: New session 38 of user core.
Mar 7 01:59:57.642778 systemd[1]: Started session-38.scope - Session 38 of User core.
Mar 7 01:59:59.042903 sshd[6462]: pam_unix(sshd:session): session closed for user core
Mar 7 01:59:59.061035 systemd[1]: sshd@37-10.0.0.118:22-10.0.0.1:51062.service: Deactivated successfully.
Mar 7 01:59:59.094611 systemd[1]: session-38.scope: Deactivated successfully.
Mar 7 01:59:59.096975 systemd-logind[1563]: Session 38 logged out. Waiting for processes to exit.
Mar 7 01:59:59.125319 systemd-logind[1563]: Removed session 38.
Mar 7 02:00:01.994977 kubelet[2895]: E0307 02:00:01.994923 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:00:04.144389 systemd[1]: Started sshd@38-10.0.0.118:22-10.0.0.1:40478.service - OpenSSH per-connection server daemon (10.0.0.1:40478).
Mar 7 02:00:04.661522 sshd[6480]: Accepted publickey for core from 10.0.0.1 port 40478 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:00:04.676948 sshd[6480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:00:04.799984 systemd-logind[1563]: New session 39 of user core.
Mar 7 02:00:04.823147 systemd[1]: Started session-39.scope - Session 39 of User core.
Mar 7 02:00:05.770529 sshd[6480]: pam_unix(sshd:session): session closed for user core
Mar 7 02:00:05.796111 systemd[1]: sshd@38-10.0.0.118:22-10.0.0.1:40478.service: Deactivated successfully.
Mar 7 02:00:05.832464 systemd-logind[1563]: Session 39 logged out. Waiting for processes to exit.
Mar 7 02:00:05.836397 systemd[1]: session-39.scope: Deactivated successfully.
Mar 7 02:00:05.847396 systemd-logind[1563]: Removed session 39.
Mar 7 02:00:09.985352 kubelet[2895]: E0307 02:00:09.976983 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:00:10.818068 systemd[1]: Started sshd@39-10.0.0.118:22-10.0.0.1:53152.service - OpenSSH per-connection server daemon (10.0.0.1:53152).
Mar 7 02:00:10.982960 sshd[6510]: Accepted publickey for core from 10.0.0.1 port 53152 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:00:11.011134 sshd[6510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:00:11.047768 systemd-logind[1563]: New session 40 of user core.
Mar 7 02:00:11.072671 systemd[1]: Started session-40.scope - Session 40 of User core.
Mar 7 02:00:11.981293 kubelet[2895]: E0307 02:00:11.980074 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:00:12.655157 sshd[6510]: pam_unix(sshd:session): session closed for user core
Mar 7 02:00:12.754991 systemd[1]: Started sshd@40-10.0.0.118:22-10.0.0.1:53160.service - OpenSSH per-connection server daemon (10.0.0.1:53160).
Mar 7 02:00:12.756001 systemd[1]: sshd@39-10.0.0.118:22-10.0.0.1:53152.service: Deactivated successfully.
Mar 7 02:00:12.850727 systemd[1]: session-40.scope: Deactivated successfully.
Mar 7 02:00:12.855867 systemd-logind[1563]: Session 40 logged out. Waiting for processes to exit.
Mar 7 02:00:12.885789 systemd-logind[1563]: Removed session 40.
Mar 7 02:00:13.090419 sshd[6533]: Accepted publickey for core from 10.0.0.1 port 53160 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:00:13.112584 sshd[6533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:00:13.168392 systemd-logind[1563]: New session 41 of user core.
Mar 7 02:00:13.190092 systemd[1]: Started session-41.scope - Session 41 of User core.
Mar 7 02:00:13.270288 systemd-journald[1139]: Under memory pressure, flushing caches.
Mar 7 02:00:13.262761 systemd-resolved[1474]: Under memory pressure, flushing caches.
Mar 7 02:00:13.262774 systemd-resolved[1474]: Flushed all caches.
Mar 7 02:00:14.831994 sshd[6533]: pam_unix(sshd:session): session closed for user core
Mar 7 02:00:14.871730 systemd[1]: Started sshd@41-10.0.0.118:22-10.0.0.1:53164.service - OpenSSH per-connection server daemon (10.0.0.1:53164).
Mar 7 02:00:14.876397 systemd[1]: sshd@40-10.0.0.118:22-10.0.0.1:53160.service: Deactivated successfully.
Mar 7 02:00:14.901512 systemd[1]: session-41.scope: Deactivated successfully.
Mar 7 02:00:14.946989 systemd-logind[1563]: Session 41 logged out. Waiting for processes to exit.
Mar 7 02:00:14.950466 systemd-logind[1563]: Removed session 41.
Mar 7 02:00:14.967981 kubelet[2895]: E0307 02:00:14.967284 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:00:15.095241 sshd[6561]: Accepted publickey for core from 10.0.0.1 port 53164 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:00:15.112953 sshd[6561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:00:15.136763 systemd-logind[1563]: New session 42 of user core.
Mar 7 02:00:15.146734 systemd[1]: Started session-42.scope - Session 42 of User core.
Mar 7 02:00:17.574930 sshd[6561]: pam_unix(sshd:session): session closed for user core
Mar 7 02:00:17.610410 systemd[1]: Started sshd@42-10.0.0.118:22-10.0.0.1:53208.service - OpenSSH per-connection server daemon (10.0.0.1:53208).
Mar 7 02:00:17.639406 systemd[1]: sshd@41-10.0.0.118:22-10.0.0.1:53164.service: Deactivated successfully.
Mar 7 02:00:17.652896 systemd-logind[1563]: Session 42 logged out. Waiting for processes to exit.
Mar 7 02:00:17.666119 systemd[1]: session-42.scope: Deactivated successfully.
Mar 7 02:00:17.711139 systemd-logind[1563]: Removed session 42.
Mar 7 02:00:18.017629 sshd[6602]: Accepted publickey for core from 10.0.0.1 port 53208 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:00:18.027072 sshd[6602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:00:18.101310 systemd-logind[1563]: New session 43 of user core.
Mar 7 02:00:18.116600 systemd[1]: Started session-43.scope - Session 43 of User core.
Mar 7 02:00:19.201450 systemd-resolved[1474]: Under memory pressure, flushing caches.
Mar 7 02:00:19.223442 systemd-journald[1139]: Under memory pressure, flushing caches.
Mar 7 02:00:19.201462 systemd-resolved[1474]: Flushed all caches.
Mar 7 02:00:21.270056 systemd-journald[1139]: Under memory pressure, flushing caches.
Mar 7 02:00:21.246520 systemd-resolved[1474]: Under memory pressure, flushing caches.
Mar 7 02:00:21.246530 systemd-resolved[1474]: Flushed all caches.
Mar 7 02:00:22.307584 sshd[6602]: pam_unix(sshd:session): session closed for user core
Mar 7 02:00:22.347486 systemd[1]: Started sshd@43-10.0.0.118:22-10.0.0.1:44220.service - OpenSSH per-connection server daemon (10.0.0.1:44220).
Mar 7 02:00:22.381987 systemd[1]: sshd@42-10.0.0.118:22-10.0.0.1:53208.service: Deactivated successfully.
Mar 7 02:00:22.430631 systemd-logind[1563]: Session 43 logged out. Waiting for processes to exit.
Mar 7 02:00:22.447077 systemd[1]: session-43.scope: Deactivated successfully.
Mar 7 02:00:22.449725 systemd-logind[1563]: Removed session 43.
Mar 7 02:00:22.624588 sshd[6616]: Accepted publickey for core from 10.0.0.1 port 44220 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:00:22.653750 sshd[6616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:00:22.699071 systemd-logind[1563]: New session 44 of user core.
Mar 7 02:00:22.730537 systemd[1]: Started session-44.scope - Session 44 of User core.
Mar 7 02:00:23.696361 sshd[6616]: pam_unix(sshd:session): session closed for user core
Mar 7 02:00:23.732157 systemd[1]: sshd@43-10.0.0.118:22-10.0.0.1:44220.service: Deactivated successfully.
Mar 7 02:00:23.757542 systemd[1]: session-44.scope: Deactivated successfully.
Mar 7 02:00:23.772774 systemd-logind[1563]: Session 44 logged out. Waiting for processes to exit.
Mar 7 02:00:23.782356 systemd-logind[1563]: Removed session 44.
Mar 7 02:00:28.790142 systemd[1]: Started sshd@44-10.0.0.118:22-10.0.0.1:44246.service - OpenSSH per-connection server daemon (10.0.0.1:44246).
Mar 7 02:00:29.127393 sshd[6707]: Accepted publickey for core from 10.0.0.1 port 44246 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:00:29.143878 sshd[6707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:00:29.218725 systemd-logind[1563]: New session 45 of user core.
Mar 7 02:00:29.257967 systemd[1]: Started session-45.scope - Session 45 of User core.
Mar 7 02:00:30.456713 sshd[6707]: pam_unix(sshd:session): session closed for user core
Mar 7 02:00:30.481033 systemd[1]: sshd@44-10.0.0.118:22-10.0.0.1:44246.service: Deactivated successfully.
Mar 7 02:00:30.518009 systemd[1]: session-45.scope: Deactivated successfully.
Mar 7 02:00:30.529631 systemd-logind[1563]: Session 45 logged out. Waiting for processes to exit.
Mar 7 02:00:30.544066 systemd-logind[1563]: Removed session 45.
Mar 7 02:00:34.575305 kubelet[2895]: E0307 02:00:34.571518 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:00:35.586415 systemd[1]: Started sshd@45-10.0.0.118:22-10.0.0.1:50646.service - OpenSSH per-connection server daemon (10.0.0.1:50646).
Mar 7 02:00:35.875544 sshd[6763]: Accepted publickey for core from 10.0.0.1 port 50646 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:00:35.895180 sshd[6763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:00:35.928781 systemd-logind[1563]: New session 46 of user core.
Mar 7 02:00:35.947186 systemd[1]: Started session-46.scope - Session 46 of User core.
Mar 7 02:00:36.441651 sshd[6763]: pam_unix(sshd:session): session closed for user core
Mar 7 02:00:36.454066 systemd-logind[1563]: Session 46 logged out. Waiting for processes to exit.
Mar 7 02:00:36.465984 systemd[1]: sshd@45-10.0.0.118:22-10.0.0.1:50646.service: Deactivated successfully.
Mar 7 02:00:36.533576 systemd[1]: session-46.scope: Deactivated successfully.
Mar 7 02:00:36.553549 systemd-logind[1563]: Removed session 46.
Mar 7 02:00:41.501655 systemd[1]: Started sshd@46-10.0.0.118:22-10.0.0.1:47882.service - OpenSSH per-connection server daemon (10.0.0.1:47882).
Mar 7 02:00:41.725328 sshd[6779]: Accepted publickey for core from 10.0.0.1 port 47882 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:00:41.729414 sshd[6779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:00:41.788742 systemd-logind[1563]: New session 47 of user core.
Mar 7 02:00:41.797637 systemd[1]: Started session-47.scope - Session 47 of User core.
Mar 7 02:00:43.261495 systemd-resolved[1474]: Under memory pressure, flushing caches.
Mar 7 02:00:43.285168 systemd-journald[1139]: Under memory pressure, flushing caches.
Mar 7 02:00:43.261504 systemd-resolved[1474]: Flushed all caches.
Mar 7 02:00:43.620024 sshd[6779]: pam_unix(sshd:session): session closed for user core
Mar 7 02:00:43.658321 systemd[1]: sshd@46-10.0.0.118:22-10.0.0.1:47882.service: Deactivated successfully.
Mar 7 02:00:43.690140 systemd-logind[1563]: Session 47 logged out. Waiting for processes to exit.
Mar 7 02:00:43.699027 systemd[1]: session-47.scope: Deactivated successfully.
Mar 7 02:00:43.721452 systemd-logind[1563]: Removed session 47.
Mar 7 02:00:48.663858 systemd[1]: Started sshd@47-10.0.0.118:22-10.0.0.1:47898.service - OpenSSH per-connection server daemon (10.0.0.1:47898).
Mar 7 02:00:48.924617 sshd[6817]: Accepted publickey for core from 10.0.0.1 port 47898 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:00:48.952345 sshd[6817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:00:49.006524 systemd-logind[1563]: New session 48 of user core.
Mar 7 02:00:49.012506 systemd[1]: Started session-48.scope - Session 48 of User core.
Mar 7 02:00:50.095661 sshd[6817]: pam_unix(sshd:session): session closed for user core
Mar 7 02:00:50.114593 systemd[1]: sshd@47-10.0.0.118:22-10.0.0.1:47898.service: Deactivated successfully.
Mar 7 02:00:50.140469 systemd-logind[1563]: Session 48 logged out. Waiting for processes to exit.
Mar 7 02:00:50.141353 systemd[1]: session-48.scope: Deactivated successfully.
Mar 7 02:00:50.147119 systemd-logind[1563]: Removed session 48.
Mar 7 02:00:55.157592 systemd[1]: Started sshd@48-10.0.0.118:22-10.0.0.1:53418.service - OpenSSH per-connection server daemon (10.0.0.1:53418).
Mar 7 02:00:55.323586 sshd[6856]: Accepted publickey for core from 10.0.0.1 port 53418 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:00:55.336184 sshd[6856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:00:55.374131 systemd-logind[1563]: New session 49 of user core.
Mar 7 02:00:55.390427 systemd[1]: Started session-49.scope - Session 49 of User core.
Mar 7 02:00:55.970905 sshd[6856]: pam_unix(sshd:session): session closed for user core
Mar 7 02:00:55.977758 systemd[1]: sshd@48-10.0.0.118:22-10.0.0.1:53418.service: Deactivated successfully.
Mar 7 02:00:56.002022 systemd[1]: session-49.scope: Deactivated successfully.
Mar 7 02:00:56.016119 systemd-logind[1563]: Session 49 logged out. Waiting for processes to exit.
Mar 7 02:00:56.021871 systemd-logind[1563]: Removed session 49.
Mar 7 02:00:59.975918 kubelet[2895]: E0307 02:00:59.971159 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:01:01.022756 systemd[1]: Started sshd@49-10.0.0.118:22-10.0.0.1:46286.service - OpenSSH per-connection server daemon (10.0.0.1:46286).
Mar 7 02:01:01.194589 sshd[6893]: Accepted publickey for core from 10.0.0.1 port 46286 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:01:01.203692 sshd[6893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:01:01.245183 systemd-logind[1563]: New session 50 of user core.
Mar 7 02:01:01.260529 systemd[1]: Started session-50.scope - Session 50 of User core.
Mar 7 02:01:01.834043 sshd[6893]: pam_unix(sshd:session): session closed for user core
Mar 7 02:01:01.855999 systemd[1]: sshd@49-10.0.0.118:22-10.0.0.1:46286.service: Deactivated successfully.
Mar 7 02:01:01.863686 systemd-logind[1563]: Session 50 logged out. Waiting for processes to exit.
Mar 7 02:01:01.867101 systemd[1]: session-50.scope: Deactivated successfully.
Mar 7 02:01:01.884064 systemd-logind[1563]: Removed session 50.
Mar 7 02:01:02.970770 kubelet[2895]: E0307 02:01:02.970124 2895 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:01:06.891648 systemd[1]: Started sshd@50-10.0.0.118:22-10.0.0.1:46290.service - OpenSSH per-connection server daemon (10.0.0.1:46290).
Mar 7 02:01:07.180255 sshd[6908]: Accepted publickey for core from 10.0.0.1 port 46290 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:01:07.195979 sshd[6908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:01:07.283942 systemd-logind[1563]: New session 51 of user core.
Mar 7 02:01:07.301360 systemd[1]: Started session-51.scope - Session 51 of User core.
Mar 7 02:01:07.978320 sshd[6908]: pam_unix(sshd:session): session closed for user core
Mar 7 02:01:08.002189 systemd[1]: sshd@50-10.0.0.118:22-10.0.0.1:46290.service: Deactivated successfully.
Mar 7 02:01:08.041033 systemd-logind[1563]: Session 51 logged out. Waiting for processes to exit.
Mar 7 02:01:08.047098 systemd[1]: session-51.scope: Deactivated successfully.
Mar 7 02:01:08.052132 systemd-logind[1563]: Removed session 51.