Mar 7 01:00:49.833557 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 6 22:58:19 -00 2026
Mar 7 01:00:49.833598 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:00:49.833620 kernel: BIOS-provided physical RAM map:
Mar 7 01:00:49.833631 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 7 01:00:49.833641 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 7 01:00:49.833652 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 7 01:00:49.833665 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 7 01:00:49.833674 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 7 01:00:49.833685 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 7 01:00:49.833702 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 7 01:00:49.833745 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 7 01:00:49.833755 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 7 01:00:49.833789 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 7 01:00:49.833802 kernel: NX (Execute Disable) protection: active
Mar 7 01:00:49.833814 kernel: APIC: Static calls initialized
Mar 7 01:00:49.833854 kernel: SMBIOS 2.8 present.
Mar 7 01:00:49.833868 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 7 01:00:49.833879 kernel: Hypervisor detected: KVM
Mar 7 01:00:49.833889 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 7 01:00:49.833901 kernel: kvm-clock: using sched offset of 16649749340 cycles
Mar 7 01:00:49.833914 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 7 01:00:49.833926 kernel: tsc: Detected 2445.426 MHz processor
Mar 7 01:00:49.833936 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 7 01:00:49.833949 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 7 01:00:49.833969 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 7 01:00:49.833981 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 7 01:00:49.833991 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 7 01:00:49.834003 kernel: Using GB pages for direct mapping
Mar 7 01:00:49.834015 kernel: ACPI: Early table checksum verification disabled
Mar 7 01:00:49.834027 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 7 01:00:49.834038 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:00:49.834050 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:00:49.834062 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:00:49.834081 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 7 01:00:49.834092 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:00:49.834104 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:00:49.834162 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:00:49.834179 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001)
Mar 7 01:00:49.834192 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 7 01:00:49.834204 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 7 01:00:49.834225 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 7 01:00:49.834243 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 7 01:00:49.834256 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 7 01:00:49.834268 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 7 01:00:49.834280 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 7 01:00:49.834292 kernel: No NUMA configuration found
Mar 7 01:00:49.834305 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 7 01:00:49.834322 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Mar 7 01:00:49.834334 kernel: Zone ranges:
Mar 7 01:00:49.834347 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 7 01:00:49.834360 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 7 01:00:49.834371 kernel: Normal empty
Mar 7 01:00:49.834383 kernel: Movable zone start for each node
Mar 7 01:00:49.834395 kernel: Early memory node ranges
Mar 7 01:00:49.834409 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 7 01:00:49.834420 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 7 01:00:49.834431 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 7 01:00:49.834451 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 7 01:00:49.834484 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 7 01:00:49.834499 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 7 01:00:49.834512 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 7 01:00:49.834525 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 7 01:00:49.834536 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 7 01:00:49.834548 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 7 01:00:49.834561 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 7 01:00:49.834574 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 7 01:00:49.834592 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 7 01:00:49.834604 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 7 01:00:49.834617 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 7 01:00:49.834629 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 7 01:00:49.834640 kernel: TSC deadline timer available
Mar 7 01:00:49.834653 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 7 01:00:49.834666 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 7 01:00:49.834678 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 7 01:00:49.854862 kernel: kvm-guest: setup PV sched yield
Mar 7 01:00:49.854933 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 7 01:00:49.854948 kernel: Booting paravirtualized kernel on KVM
Mar 7 01:00:49.854961 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 7 01:00:49.854974 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 7 01:00:49.854987 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Mar 7 01:00:49.854999 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Mar 7 01:00:49.855012 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 7 01:00:49.855025 kernel: kvm-guest: PV spinlocks enabled
Mar 7 01:00:49.855036 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 7 01:00:49.855059 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:00:49.855071 kernel: random: crng init done
Mar 7 01:00:49.855084 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 7 01:00:49.855096 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 7 01:00:49.855110 kernel: Fallback order for Node 0: 0
Mar 7 01:00:49.855204 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Mar 7 01:00:49.855215 kernel: Policy zone: DMA32
Mar 7 01:00:49.855225 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 7 01:00:49.855241 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 136884K reserved, 0K cma-reserved)
Mar 7 01:00:49.855251 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 7 01:00:49.855261 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 7 01:00:49.855271 kernel: ftrace: allocated 149 pages with 4 groups
Mar 7 01:00:49.855284 kernel: Dynamic Preempt: voluntary
Mar 7 01:00:49.855295 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 7 01:00:49.855306 kernel: rcu: RCU event tracing is enabled.
Mar 7 01:00:49.855317 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 7 01:00:49.855327 kernel: Trampoline variant of Tasks RCU enabled.
Mar 7 01:00:49.855342 kernel: Rude variant of Tasks RCU enabled.
Mar 7 01:00:49.855352 kernel: Tracing variant of Tasks RCU enabled.
Mar 7 01:00:49.855362 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 7 01:00:49.855372 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 7 01:00:49.855405 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 7 01:00:49.855416 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 7 01:00:49.855426 kernel: Console: colour VGA+ 80x25
Mar 7 01:00:49.855436 kernel: printk: console [ttyS0] enabled
Mar 7 01:00:49.855446 kernel: ACPI: Core revision 20230628
Mar 7 01:00:49.855461 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 7 01:00:49.855471 kernel: APIC: Switch to symmetric I/O mode setup
Mar 7 01:00:49.855481 kernel: x2apic enabled
Mar 7 01:00:49.855492 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 7 01:00:49.855502 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 7 01:00:49.855513 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 7 01:00:49.855523 kernel: kvm-guest: setup PV IPIs
Mar 7 01:00:49.855533 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 7 01:00:49.855558 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 7 01:00:49.855569 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 7 01:00:49.855582 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 7 01:00:49.855593 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 7 01:00:49.855608 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 7 01:00:49.855619 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 7 01:00:49.855629 kernel: Spectre V2 : Mitigation: Retpolines
Mar 7 01:00:49.855640 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 7 01:00:49.855651 kernel: Speculative Store Bypass: Vulnerable
Mar 7 01:00:49.855665 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 7 01:00:49.855693 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 7 01:00:49.855704 kernel: active return thunk: srso_alias_return_thunk
Mar 7 01:00:49.855742 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 7 01:00:49.855753 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 7 01:00:49.855763 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 7 01:00:49.855774 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 7 01:00:49.855785 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 7 01:00:49.855800 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 7 01:00:49.855811 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 7 01:00:49.855822 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 7 01:00:49.855832 kernel: Freeing SMP alternatives memory: 32K
Mar 7 01:00:49.855843 kernel: pid_max: default: 32768 minimum: 301
Mar 7 01:00:49.855854 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 7 01:00:49.855864 kernel: landlock: Up and running.
Mar 7 01:00:49.855875 kernel: SELinux: Initializing.
Mar 7 01:00:49.855885 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 01:00:49.855900 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 01:00:49.855911 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 7 01:00:49.855921 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 7 01:00:49.855932 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 7 01:00:49.855943 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 7 01:00:49.855954 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 7 01:00:49.855964 kernel: signal: max sigframe size: 1776
Mar 7 01:00:49.855990 kernel: rcu: Hierarchical SRCU implementation.
Mar 7 01:00:49.856001 kernel: rcu: Max phase no-delay instances is 400.
Mar 7 01:00:49.856015 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 7 01:00:49.856026 kernel: smp: Bringing up secondary CPUs ...
Mar 7 01:00:49.856036 kernel: smpboot: x86: Booting SMP configuration:
Mar 7 01:00:49.856047 kernel: .... node #0, CPUs: #1 #2 #3
Mar 7 01:00:49.856058 kernel: smp: Brought up 1 node, 4 CPUs
Mar 7 01:00:49.856068 kernel: smpboot: Max logical packages: 1
Mar 7 01:00:49.856079 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 7 01:00:49.856089 kernel: devtmpfs: initialized
Mar 7 01:00:49.856100 kernel: x86/mm: Memory block size: 128MB
Mar 7 01:00:49.856158 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 7 01:00:49.856175 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 7 01:00:49.856190 kernel: pinctrl core: initialized pinctrl subsystem
Mar 7 01:00:49.856202 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 7 01:00:49.856215 kernel: audit: initializing netlink subsys (disabled)
Mar 7 01:00:49.856229 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 7 01:00:49.856241 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 7 01:00:49.856255 kernel: audit: type=2000 audit(1772845241.907:1): state=initialized audit_enabled=0 res=1
Mar 7 01:00:49.856268 kernel: cpuidle: using governor menu
Mar 7 01:00:49.856289 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 7 01:00:49.856303 kernel: dca service started, version 1.12.1
Mar 7 01:00:49.856315 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 7 01:00:49.856330 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 7 01:00:49.856342 kernel: PCI: Using configuration type 1 for base access
Mar 7 01:00:49.856356 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 7 01:00:49.856369 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 7 01:00:49.856382 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 7 01:00:49.856396 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 7 01:00:49.856415 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 7 01:00:49.856428 kernel: ACPI: Added _OSI(Module Device)
Mar 7 01:00:49.856441 kernel: ACPI: Added _OSI(Processor Device)
Mar 7 01:00:49.856454 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 7 01:00:49.856467 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 7 01:00:49.856480 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 7 01:00:49.856493 kernel: ACPI: Interpreter enabled
Mar 7 01:00:49.856506 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 7 01:00:49.856518 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 7 01:00:49.856538 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 7 01:00:49.856550 kernel: PCI: Using E820 reservations for host bridge windows
Mar 7 01:00:49.856564 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 7 01:00:49.856577 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 7 01:00:49.860244 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 7 01:00:49.860483 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 7 01:00:49.860671 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 7 01:00:49.860693 kernel: PCI host bridge to bus 0000:00
Mar 7 01:00:49.863075 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 7 01:00:49.863354 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 7 01:00:49.863583 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 7 01:00:49.863835 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 7 01:00:49.864056 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 7 01:00:49.864328 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 7 01:00:49.864564 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 7 01:00:49.867970 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 7 01:00:49.868263 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 7 01:00:49.868454 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 7 01:00:49.868643 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 7 01:00:49.870931 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 7 01:00:49.871224 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 7 01:00:49.871536 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 7 01:00:49.871800 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 7 01:00:49.871994 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 7 01:00:49.872256 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 7 01:00:49.872587 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 7 01:00:49.875906 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Mar 7 01:00:49.876172 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 7 01:00:49.876403 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 7 01:00:49.876685 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 7 01:00:49.879013 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Mar 7 01:00:49.879318 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Mar 7 01:00:49.879512 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 7 01:00:49.879747 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 7 01:00:49.880015 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 7 01:00:49.880289 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 7 01:00:49.880534 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 7 01:00:49.882828 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Mar 7 01:00:49.883082 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Mar 7 01:00:49.883465 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 7 01:00:49.884220 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 7 01:00:49.884258 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 7 01:00:49.884274 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 7 01:00:49.884286 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 7 01:00:49.884300 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 7 01:00:49.884313 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 7 01:00:49.884326 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 7 01:00:49.884339 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 7 01:00:49.884351 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 7 01:00:49.884371 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 7 01:00:49.884383 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 7 01:00:49.884395 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 7 01:00:49.884408 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 7 01:00:49.884422 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 7 01:00:49.884435 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 7 01:00:49.884449 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 7 01:00:49.884464 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 7 01:00:49.884478 kernel: iommu: Default domain type: Translated
Mar 7 01:00:49.884500 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 7 01:00:49.884514 kernel: PCI: Using ACPI for IRQ routing
Mar 7 01:00:49.884526 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 7 01:00:49.884537 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 7 01:00:49.884547 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 7 01:00:49.885888 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 7 01:00:49.886099 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 7 01:00:49.886349 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 7 01:00:49.886372 kernel: vgaarb: loaded
Mar 7 01:00:49.886397 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 7 01:00:49.886409 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 7 01:00:49.886420 kernel: clocksource: Switched to clocksource kvm-clock
Mar 7 01:00:49.886431 kernel: VFS: Disk quotas dquot_6.6.0
Mar 7 01:00:49.886442 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 7 01:00:49.886453 kernel: pnp: PnP ACPI init
Mar 7 01:00:49.886929 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 7 01:00:49.886949 kernel: pnp: PnP ACPI: found 6 devices
Mar 7 01:00:49.886967 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 7 01:00:49.886978 kernel: NET: Registered PF_INET protocol family
Mar 7 01:00:49.886988 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 7 01:00:49.886999 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 7 01:00:49.887010 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 7 01:00:49.887021 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 7 01:00:49.887031 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 7 01:00:49.887042 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 7 01:00:49.887053 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 01:00:49.887067 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 01:00:49.887078 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 7 01:00:49.887090 kernel: NET: Registered PF_XDP protocol family
Mar 7 01:00:49.887346 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 7 01:00:49.887575 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 7 01:00:49.887926 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 7 01:00:49.888179 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 7 01:00:49.888379 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 7 01:00:49.888576 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 7 01:00:49.888594 kernel: PCI: CLS 0 bytes, default 64
Mar 7 01:00:49.888606 kernel: Initialise system trusted keyrings
Mar 7 01:00:49.888619 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 7 01:00:49.888630 kernel: Key type asymmetric registered
Mar 7 01:00:49.888772 kernel: Asymmetric key parser 'x509' registered
Mar 7 01:00:49.888790 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 7 01:00:49.888803 kernel: io scheduler mq-deadline registered
Mar 7 01:00:49.888839 kernel: io scheduler kyber registered
Mar 7 01:00:49.888861 kernel: io scheduler bfq registered
Mar 7 01:00:49.888874 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 7 01:00:49.888887 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 7 01:00:49.888899 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 7 01:00:49.888910 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 7 01:00:49.888921 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 7 01:00:49.888933 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 7 01:00:49.888945 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 7 01:00:49.888957 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 7 01:00:49.888973 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 7 01:00:49.889463 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 7 01:00:49.889794 kernel: rtc_cmos 00:04: registered as rtc0
Mar 7 01:00:49.890034 kernel: rtc_cmos 00:04: setting system clock to 2026-03-07T01:00:47 UTC (1772845247)
Mar 7 01:00:49.890057 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 7 01:00:49.890344 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 7 01:00:49.890366 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 7 01:00:49.890380 kernel: NET: Registered PF_INET6 protocol family
Mar 7 01:00:49.890401 kernel: Segment Routing with IPv6
Mar 7 01:00:49.890415 kernel: In-situ OAM (IOAM) with IPv6
Mar 7 01:00:49.890428 kernel: NET: Registered PF_PACKET protocol family
Mar 7 01:00:49.890440 kernel: Key type dns_resolver registered
Mar 7 01:00:49.890454 kernel: IPI shorthand broadcast: enabled
Mar 7 01:00:49.890466 kernel: sched_clock: Marking stable (5101044387, 563788366)->(6772764404, -1107931651)
Mar 7 01:00:49.890480 kernel: registered taskstats version 1
Mar 7 01:00:49.890492 kernel: Loading compiled-in X.509 certificates
Mar 7 01:00:49.890505 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: da286e6f6c247ee6f65a875c513de7da57782e90'
Mar 7 01:00:49.890524 kernel: Key type .fscrypt registered
Mar 7 01:00:49.890537 kernel: Key type fscrypt-provisioning registered
Mar 7 01:00:49.890551 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 7 01:00:49.890563 kernel: ima: Allocated hash algorithm: sha1
Mar 7 01:00:49.890687 kernel: ima: No architecture policies found
Mar 7 01:00:49.890705 kernel: clk: Disabling unused clocks
Mar 7 01:00:49.890745 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 7 01:00:49.890838 kernel: Write protecting the kernel read-only data: 36864k
Mar 7 01:00:49.890852 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 7 01:00:49.890874 kernel: Run /init as init process
Mar 7 01:00:49.890887 kernel: with arguments:
Mar 7 01:00:49.890900 kernel: /init
Mar 7 01:00:49.890913 kernel: with environment:
Mar 7 01:00:49.890927 kernel: HOME=/
Mar 7 01:00:49.890939 kernel: TERM=linux
Mar 7 01:00:49.890955 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 01:00:49.890972 systemd[1]: Detected virtualization kvm.
Mar 7 01:00:49.890993 systemd[1]: Detected architecture x86-64.
Mar 7 01:00:49.891006 systemd[1]: Running in initrd.
Mar 7 01:00:49.891020 systemd[1]: No hostname configured, using default hostname.
Mar 7 01:00:49.891034 systemd[1]: Hostname set to .
Mar 7 01:00:49.891048 systemd[1]: Initializing machine ID from VM UUID.
Mar 7 01:00:49.891062 systemd[1]: Queued start job for default target initrd.target.
Mar 7 01:00:49.891076 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:00:49.891090 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:00:49.891111 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 7 01:00:49.891174 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:00:49.891190 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 7 01:00:49.891205 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 7 01:00:49.891221 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 7 01:00:49.891236 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 7 01:00:49.891257 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:00:49.891272 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:00:49.891285 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:00:49.891299 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:00:49.891313 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:00:49.891351 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:00:49.891372 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:00:49.891391 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:00:49.891406 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 7 01:00:49.891420 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 7 01:00:49.891435 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:00:49.891449 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:00:49.891464 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:00:49.891478 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:00:49.891493 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 7 01:00:49.891517 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:00:49.891533 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 7 01:00:49.891546 systemd[1]: Starting systemd-fsck-usr.service...
Mar 7 01:00:49.891567 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:00:49.891581 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:00:49.891596 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:00:49.891649 systemd-journald[195]: Collecting audit messages is disabled.
Mar 7 01:00:49.891691 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 7 01:00:49.891735 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:00:49.891755 systemd[1]: Finished systemd-fsck-usr.service.
Mar 7 01:00:49.891779 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 01:00:49.891893 systemd-journald[195]: Journal started
Mar 7 01:00:49.891919 systemd-journald[195]: Runtime Journal (/run/log/journal/19d35f7e1b834a2bae2b65386b52285a) is 6.0M, max 48.4M, 42.3M free.
Mar 7 01:00:49.913497 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:00:49.919970 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:00:49.986808 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:00:50.203641 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 7 01:00:50.203684 kernel: Bridge firewalling registered
Mar 7 01:00:49.990513 systemd-modules-load[196]: Inserted module 'overlay'
Mar 7 01:00:50.094860 systemd-modules-load[196]: Inserted module 'br_netfilter'
Mar 7 01:00:50.237982 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:00:50.252960 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:00:50.262092 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:00:50.284584 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:00:50.295701 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:00:50.317518 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:00:50.388835 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:00:50.498682 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 01:00:50.522944 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:00:50.568105 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:00:50.629365 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 7 01:00:50.899759 dracut-cmdline[233]: dracut-dracut-053
Mar 7 01:00:50.942924 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:00:50.942900 systemd-resolved[230]: Positive Trust Anchors:
Mar 7 01:00:50.942916 systemd-resolved[230]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 01:00:50.942963 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 01:00:50.982432 systemd-resolved[230]: Defaulting to hostname 'linux'.
Mar 7 01:00:50.985743 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 01:00:51.007457 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:00:51.612190 kernel: SCSI subsystem initialized
Mar 7 01:00:51.681558 kernel: Loading iSCSI transport class v2.0-870.
Mar 7 01:00:51.835195 kernel: hrtimer: interrupt took 33751299 ns
Mar 7 01:00:51.880518 kernel: iscsi: registered transport (tcp)
Mar 7 01:00:51.928935 kernel: iscsi: registered transport (qla4xxx)
Mar 7 01:00:51.929191 kernel: QLogic iSCSI HBA Driver
Mar 7 01:00:52.103365 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:00:52.133487 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 7 01:00:52.265882 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 7 01:00:52.273521 kernel: device-mapper: uevent: version 1.0.3
Mar 7 01:00:52.282488 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 7 01:00:52.448021 kernel: raid6: avx2x4 gen() 20345 MB/s
Mar 7 01:00:52.469928 kernel: raid6: avx2x2 gen() 14645 MB/s
Mar 7 01:00:52.488108 kernel: raid6: avx2x1 gen() 9039 MB/s
Mar 7 01:00:52.488336 kernel: raid6: using algorithm avx2x4 gen() 20345 MB/s
Mar 7 01:00:52.513276 kernel: raid6: .... xor() 3624 MB/s, rmw enabled
Mar 7 01:00:52.513390 kernel: raid6: using avx2x2 recovery algorithm
Mar 7 01:00:52.574689 kernel: xor: automatically using best checksumming function avx
Mar 7 01:00:53.106552 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 7 01:00:53.155011 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:00:53.180280 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:00:53.239871 systemd-udevd[416]: Using default interface naming scheme 'v255'.
Mar 7 01:00:53.261244 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:00:53.300447 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 7 01:00:53.358873 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation
Mar 7 01:00:53.472189 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 01:00:53.504651 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 01:00:53.767338 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:00:53.850681 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 7 01:00:53.955711 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 7 01:00:53.967871 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 01:00:53.974598 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:00:53.980184 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 01:00:54.008378 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 7 01:00:54.023190 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:00:54.023361 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:00:54.041049 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:00:54.060694 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:00:54.061056 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:00:54.067006 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:00:54.080963 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:00:54.089969 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 01:00:54.129173 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 7 01:00:54.129564 kernel: cryptd: max_cpu_qlen set to 1000
Mar 7 01:00:54.142716 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 7 01:00:54.165835 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 7 01:00:54.165956 kernel: GPT:9289727 != 19775487
Mar 7 01:00:54.165999 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 7 01:00:54.166020 kernel: GPT:9289727 != 19775487
Mar 7 01:00:54.166038 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 7 01:00:54.166056 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 7 01:00:54.227180 kernel: libata version 3.00 loaded.
Mar 7 01:00:54.282079 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 7 01:00:54.288202 kernel: AES CTR mode by8 optimization enabled
Mar 7 01:00:54.288259 kernel: ahci 0000:00:1f.2: version 3.0
Mar 7 01:00:54.288598 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 7 01:00:54.299198 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 7 01:00:54.299587 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 7 01:00:54.535281 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 7 01:00:54.717882 kernel: scsi host0: ahci
Mar 7 01:00:54.718707 kernel: scsi host1: ahci
Mar 7 01:00:54.719093 kernel: scsi host2: ahci
Mar 7 01:00:54.719485 kernel: scsi host3: ahci
Mar 7 01:00:54.719882 kernel: scsi host4: ahci
Mar 7 01:00:54.720238 kernel: scsi host5: ahci
Mar 7 01:00:54.720478 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Mar 7 01:00:54.720496 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Mar 7 01:00:54.720511 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Mar 7 01:00:54.720535 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Mar 7 01:00:54.720550 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Mar 7 01:00:54.720566 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Mar 7 01:00:54.720581 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (471)
Mar 7 01:00:54.720598 kernel: BTRFS: device fsid 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (474)
Mar 7 01:00:54.730257 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:00:54.775424 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 7 01:00:54.806408 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 7 01:00:54.829997 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 7 01:00:54.880374 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 7 01:00:54.880454 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 7 01:00:54.889171 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 7 01:00:54.889248 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 7 01:00:54.890039 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 7 01:00:54.931006 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 7 01:00:54.931045 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 7 01:00:54.931072 kernel: ata3.00: applying bridge limits
Mar 7 01:00:54.938864 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 7 01:00:54.944264 kernel: ata3.00: configured for UDMA/100
Mar 7 01:00:54.947634 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 7 01:00:54.961868 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 7 01:00:54.972222 disk-uuid[573]: Primary Header is updated.
Mar 7 01:00:54.972222 disk-uuid[573]: Secondary Entries is updated.
Mar 7 01:00:54.972222 disk-uuid[573]: Secondary Header is updated.
Mar 7 01:00:54.984415 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:00:54.997826 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 7 01:00:55.093629 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 7 01:00:55.198262 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:00:55.278653 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 7 01:00:55.279213 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 7 01:00:55.328891 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 7 01:00:56.140530 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 7 01:00:56.146097 disk-uuid[575]: The operation has completed successfully.
Mar 7 01:00:56.322761 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 7 01:00:56.324250 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 7 01:00:56.357688 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 7 01:00:56.406561 sh[601]: Success
Mar 7 01:00:56.464223 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 7 01:00:56.623387 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 7 01:00:56.640871 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 7 01:00:56.691198 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 7 01:00:56.857868 kernel: BTRFS info (device dm-0): first mount of filesystem 3bed8db9-42ad-4483-9cc8-1ad17a6cd948
Mar 7 01:00:56.857958 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:00:56.874835 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 7 01:00:56.874930 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 7 01:00:56.880854 kernel: BTRFS info (device dm-0): using free space tree
Mar 7 01:00:56.919964 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 7 01:00:56.940682 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 7 01:00:57.015109 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 7 01:00:57.066291 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 7 01:00:57.149188 kernel: BTRFS info (device vda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:00:57.149281 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:00:57.149304 kernel: BTRFS info (device vda6): using free space tree
Mar 7 01:00:57.186439 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 7 01:00:57.228868 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 7 01:00:57.251244 kernel: BTRFS info (device vda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:00:57.301101 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 7 01:00:57.334796 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 7 01:00:57.669082 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 01:00:57.703436 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 01:00:58.154402 systemd-networkd[782]: lo: Link UP
Mar 7 01:00:58.154583 systemd-networkd[782]: lo: Gained carrier
Mar 7 01:00:58.165357 systemd-networkd[782]: Enumeration completed
Mar 7 01:00:58.166502 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 01:00:58.175268 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:00:58.175275 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 01:00:58.201935 ignition[719]: Ignition 2.19.0
Mar 7 01:00:58.183624 systemd-networkd[782]: eth0: Link UP
Mar 7 01:00:58.201947 ignition[719]: Stage: fetch-offline
Mar 7 01:00:58.183632 systemd-networkd[782]: eth0: Gained carrier
Mar 7 01:00:58.202051 ignition[719]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:00:58.183648 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:00:58.202082 ignition[719]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 01:00:58.254859 systemd[1]: Reached target network.target - Network.
Mar 7 01:00:58.202551 ignition[719]: parsed url from cmdline: ""
Mar 7 01:00:58.271062 systemd-networkd[782]: eth0: DHCPv4 address 10.0.0.13/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 7 01:00:58.202557 ignition[719]: no config URL provided
Mar 7 01:00:58.202567 ignition[719]: reading system config file "/usr/lib/ignition/user.ign"
Mar 7 01:00:58.202582 ignition[719]: no config at "/usr/lib/ignition/user.ign"
Mar 7 01:00:58.202627 ignition[719]: op(1): [started] loading QEMU firmware config module
Mar 7 01:00:58.202635 ignition[719]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 7 01:00:58.278943 ignition[719]: op(1): [finished] loading QEMU firmware config module
Mar 7 01:00:58.729712 ignition[719]: parsing config with SHA512: 3ffb5ac9ff14d5c6bfd5792161c27d5598651470a1b2d8d818cf8091910d11faca74d0393d484da2157a9bf748acdb16930b474c0c96697a844962ec33c2d21e
Mar 7 01:00:58.786819 unknown[719]: fetched base config from "system"
Mar 7 01:00:58.787575 ignition[719]: fetch-offline: fetch-offline passed
Mar 7 01:00:58.786837 unknown[719]: fetched user config from "qemu"
Mar 7 01:00:58.788063 ignition[719]: Ignition finished successfully
Mar 7 01:00:58.813850 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 01:00:58.824228 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 7 01:00:58.854571 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 7 01:00:58.991996 ignition[793]: Ignition 2.19.0
Mar 7 01:00:58.992787 ignition[793]: Stage: kargs
Mar 7 01:00:59.011515 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:00:59.011769 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 01:00:59.066817 ignition[793]: kargs: kargs passed
Mar 7 01:00:59.067541 ignition[793]: Ignition finished successfully
Mar 7 01:00:59.089281 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 7 01:00:59.141733 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 7 01:00:59.258683 ignition[801]: Ignition 2.19.0
Mar 7 01:00:59.258700 ignition[801]: Stage: disks
Mar 7 01:00:59.262174 ignition[801]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:00:59.262198 ignition[801]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 01:00:59.270469 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 7 01:00:59.266048 ignition[801]: disks: disks passed
Mar 7 01:00:59.279015 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 7 01:00:59.266183 ignition[801]: Ignition finished successfully
Mar 7 01:00:59.293608 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 7 01:00:59.303029 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 01:00:59.338025 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 01:00:59.351720 systemd[1]: Reached target basic.target - Basic System.
Mar 7 01:00:59.533655 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 7 01:00:59.780967 systemd-fsck[811]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 7 01:00:59.800191 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 7 01:00:59.850062 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 7 01:01:00.041964 systemd-networkd[782]: eth0: Gained IPv6LL
Mar 7 01:01:00.647688 kernel: EXT4-fs (vda9): mounted filesystem aab0506b-de72-4dd2-9393-24d7958f49a5 r/w with ordered data mode. Quota mode: none.
Mar 7 01:01:00.673466 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 7 01:01:00.704278 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 7 01:01:00.745963 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 01:01:00.808414 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (819)
Mar 7 01:01:00.818098 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 7 01:01:00.835407 kernel: BTRFS info (device vda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:01:00.835446 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:01:00.835462 kernel: BTRFS info (device vda6): using free space tree
Mar 7 01:01:00.820471 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 7 01:01:00.820549 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 7 01:01:00.820590 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 01:01:00.880591 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 7 01:01:00.892853 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 7 01:01:00.905866 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 7 01:01:00.919423 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:01:01.107029 initrd-setup-root[843]: cut: /sysroot/etc/passwd: No such file or directory
Mar 7 01:01:01.139898 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory
Mar 7 01:01:01.174460 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory
Mar 7 01:01:01.201816 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 7 01:01:02.036528 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 7 01:01:02.107819 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 7 01:01:02.146015 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 7 01:01:02.190400 kernel: BTRFS info (device vda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:01:02.206367 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 7 01:01:02.362473 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 7 01:01:02.410818 ignition[932]: INFO : Ignition 2.19.0
Mar 7 01:01:02.410818 ignition[932]: INFO : Stage: mount
Mar 7 01:01:02.420632 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:01:02.420632 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 01:01:02.420632 ignition[932]: INFO : mount: mount passed
Mar 7 01:01:02.420632 ignition[932]: INFO : Ignition finished successfully
Mar 7 01:01:02.429793 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 7 01:01:02.476990 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 7 01:01:02.522537 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 01:01:02.577267 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (945)
Mar 7 01:01:02.590808 kernel: BTRFS info (device vda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:01:02.590907 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:01:02.596820 kernel: BTRFS info (device vda6): using free space tree
Mar 7 01:01:02.638666 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 7 01:01:02.646793 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:01:03.069693 ignition[962]: INFO : Ignition 2.19.0
Mar 7 01:01:03.069693 ignition[962]: INFO : Stage: files
Mar 7 01:01:03.069693 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:01:03.069693 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 01:01:03.069693 ignition[962]: DEBUG : files: compiled without relabeling support, skipping
Mar 7 01:01:03.118506 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 7 01:01:03.118506 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 7 01:01:03.118506 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 7 01:01:03.118506 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 7 01:01:03.118506 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 7 01:01:03.118506 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:01:03.118506 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 7 01:01:03.096367 unknown[962]: wrote ssh authorized keys file for user: core
Mar 7 01:01:03.308988 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 7 01:01:05.224509 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:01:05.224509 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 7 01:01:05.273221 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 7 01:01:05.273221 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:01:05.273221 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:01:05.273221 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:01:05.273221 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:01:05.273221 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:01:05.273221 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:01:05.273221 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:01:05.273221 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:01:05.273221 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 7 01:01:05.273221 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 7 01:01:05.273221 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 7 01:01:05.273221 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Mar 7 01:01:05.939939 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 7 01:01:11.011862 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 7 01:01:11.011862 ignition[962]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 7 01:01:11.029024 ignition[962]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:01:11.029024 ignition[962]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:01:11.029024 ignition[962]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 7 01:01:11.029024 ignition[962]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 7 01:01:11.029024 ignition[962]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 7 01:01:11.029024 ignition[962]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 7 01:01:11.029024 ignition[962]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 7 01:01:11.029024 ignition[962]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Mar 7 01:01:11.189735 ignition[962]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 7 01:01:11.206396 ignition[962]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 7 01:01:11.206396 ignition[962]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 7 01:01:11.206396 ignition[962]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Mar 7 01:01:11.206396 ignition[962]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Mar 7 01:01:11.206396 ignition[962]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:01:11.268223 ignition[962]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:01:11.268223 ignition[962]: INFO : files: files passed
Mar 7 01:01:11.268223 ignition[962]: INFO : Ignition finished successfully
Mar 7 01:01:11.215694 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 7 01:01:11.276671 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 7 01:01:11.297276 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 7 01:01:11.327741 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 7 01:01:11.328000 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 7 01:01:11.451963 initrd-setup-root-after-ignition[991]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 7 01:01:11.467984 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:01:11.467984 initrd-setup-root-after-ignition[993]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:01:11.490289 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:01:11.501975 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 01:01:11.516205 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 7 01:01:11.546661 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 7 01:01:11.657571 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 7 01:01:11.657884 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 7 01:01:11.675438 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 7 01:01:11.685947 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 7 01:01:11.690068 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 7 01:01:11.714676 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 7 01:01:11.775335 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 01:01:11.805855 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 7 01:01:11.850261 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:01:11.857708 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:01:11.862171 systemd[1]: Stopped target timers.target - Timer Units.
Mar 7 01:01:11.862355 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 7 01:01:11.862544 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 01:01:11.872065 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 7 01:01:12.136637 systemd[1]: Stopped target basic.target - Basic System.
Mar 7 01:01:12.161838 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 7 01:01:12.184602 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 01:01:12.214321 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 7 01:01:12.245218 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 7 01:01:12.281114 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 01:01:12.375312 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 7 01:01:12.415652 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 7 01:01:12.422274 systemd[1]: Stopped target swap.target - Swaps.
Mar 7 01:01:12.429416 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 7 01:01:12.429879 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 01:01:12.465002 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:01:12.478349 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:01:12.498070 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 7 01:01:12.500696 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:01:12.522248 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 7 01:01:12.522483 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 7 01:01:12.550350 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 7 01:01:12.555744 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 01:01:12.578324 systemd[1]: Stopped target paths.target - Path Units.
Mar 7 01:01:12.584106 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 7 01:01:12.590446 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:01:12.617764 systemd[1]: Stopped target slices.target - Slice Units.
Mar 7 01:01:12.623953 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 7 01:01:12.660622 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 7 01:01:12.660990 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:01:12.665427 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 7 01:01:12.665642 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:01:12.680510 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 7 01:01:12.680751 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 01:01:12.699613 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 7 01:01:12.699935 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 7 01:01:12.741093 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 7 01:01:12.749654 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 7 01:01:12.752944 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:01:12.779405 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 7 01:01:12.782687 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 7 01:01:12.783037 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:01:12.819058 ignition[1017]: INFO : Ignition 2.19.0
Mar 7 01:01:12.819058 ignition[1017]: INFO : Stage: umount
Mar 7 01:01:12.819058 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:01:12.819058 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 01:01:12.819058 ignition[1017]: INFO : umount: umount passed
Mar 7 01:01:12.819058 ignition[1017]: INFO : Ignition finished successfully
Mar 7 01:01:12.787888 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 7 01:01:12.788196 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 01:01:12.816306 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 7 01:01:12.816531 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 7 01:01:12.837050 systemd[1]: Stopped target network.target - Network.
Mar 7 01:01:12.862349 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 7 01:01:12.862749 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 7 01:01:12.863575 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 7 01:01:12.863828 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 7 01:01:12.869895 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 7 01:01:12.870061 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 7 01:01:12.930671 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 7 01:01:12.931212 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 7 01:01:12.951641 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 7 01:01:12.966566 systemd-networkd[782]: eth0: DHCPv6 lease lost
Mar 7 01:01:12.967435 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 7 01:01:12.988358 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 7 01:01:12.991458 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 7 01:01:12.991738 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 7 01:01:13.022854 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 7 01:01:13.023277 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 7 01:01:13.071705 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 7 01:01:13.085630 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 7 01:01:13.116975 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 7 01:01:13.117571 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 7 01:01:13.151916 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 7 01:01:13.152213 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:01:13.187764 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 7 01:01:13.189037 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 7 01:01:13.244537 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 7 01:01:13.255967 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 7 01:01:13.256107 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 01:01:13.269651 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 7 01:01:13.271880 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:01:13.280309 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 7 01:01:13.280421 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:01:13.291873 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 7 01:01:13.292009 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:01:13.306507 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:01:13.375495 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 7 01:01:13.379411 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 7 01:01:13.402080 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 7 01:01:13.402505 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:01:13.410752 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 7 01:01:13.410897 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:01:13.415826 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 7 01:01:13.415897 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:01:13.439203 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 7 01:01:13.439317 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:01:13.464408 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 7 01:01:13.464526 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:01:13.479679 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:01:13.479843 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:01:13.526843 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 7 01:01:13.564864 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 7 01:01:13.565031 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:01:13.580241 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:01:13.580392 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:01:13.599938 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 7 01:01:13.600286 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 7 01:01:13.611599 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 7 01:01:13.648641 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 7 01:01:13.719669 systemd[1]: Switching root.
Mar 7 01:01:13.788415 systemd-journald[195]: Journal stopped
Mar 7 01:01:20.769493 systemd-journald[195]: Received SIGTERM from PID 1 (systemd).
Mar 7 01:01:20.769592 kernel: SELinux: policy capability network_peer_controls=1
Mar 7 01:01:20.769622 kernel: SELinux: policy capability open_perms=1
Mar 7 01:01:20.769689 kernel: SELinux: policy capability extended_socket_class=1
Mar 7 01:01:20.769710 kernel: SELinux: policy capability always_check_network=0
Mar 7 01:01:20.769728 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 7 01:01:20.769746 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 7 01:01:20.769763 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 7 01:01:20.769788 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 7 01:01:20.769865 kernel: audit: type=1403 audit(1772845274.510:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 7 01:01:20.769891 systemd[1]: Successfully loaded SELinux policy in 146.211ms.
Mar 7 01:01:20.770037 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 217.102ms.
Mar 7 01:01:20.770060 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 01:01:20.770079 systemd[1]: Detected virtualization kvm.
Mar 7 01:01:20.770097 systemd[1]: Detected architecture x86-64.
Mar 7 01:01:20.770114 systemd[1]: Detected first boot.
Mar 7 01:01:20.770207 systemd[1]: Initializing machine ID from VM UUID.
Mar 7 01:01:20.770225 zram_generator::config[1061]: No configuration found.
Mar 7 01:01:20.770244 systemd[1]: Populated /etc with preset unit settings.
Mar 7 01:01:20.770261 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 7 01:01:20.770278 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 7 01:01:20.770307 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 7 01:01:20.770326 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 7 01:01:20.770346 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 7 01:01:20.770408 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 7 01:01:20.770462 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 7 01:01:20.770483 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 7 01:01:20.770502 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 7 01:01:20.770520 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 7 01:01:20.770538 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 7 01:01:20.770564 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:01:20.770583 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:01:20.770603 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 7 01:01:20.770628 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 7 01:01:20.770648 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 7 01:01:20.770669 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:01:20.770690 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 7 01:01:20.770708 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:01:20.770727 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 7 01:01:20.770750 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 7 01:01:20.770769 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 7 01:01:20.770834 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 7 01:01:20.770860 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:01:20.770880 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 01:01:20.770898 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:01:20.771020 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:01:20.771041 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 7 01:01:20.771060 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 7 01:01:20.771079 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:01:20.771098 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:01:20.771180 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:01:20.771205 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 7 01:01:20.771224 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 7 01:01:20.771243 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 7 01:01:20.771262 systemd[1]: Mounting media.mount - External Media Directory...
Mar 7 01:01:20.771284 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:01:20.771302 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 7 01:01:20.771324 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 7 01:01:20.771341 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 7 01:01:20.771371 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 7 01:01:20.771390 systemd[1]: Reached target machines.target - Containers.
Mar 7 01:01:20.771406 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 7 01:01:20.771424 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:01:20.771441 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:01:20.771458 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 7 01:01:20.771476 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:01:20.771537 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 01:01:20.771566 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:01:20.771585 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 7 01:01:20.771606 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:01:20.771626 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 7 01:01:20.771644 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 7 01:01:20.771663 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 7 01:01:20.771683 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 7 01:01:20.771703 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 7 01:01:20.771721 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:01:20.771744 kernel: fuse: init (API version 7.39)
Mar 7 01:01:20.771763 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:01:20.771782 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 7 01:01:20.771840 kernel: loop: module loaded
Mar 7 01:01:20.771863 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 7 01:01:20.771881 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 01:01:20.771901 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 7 01:01:20.772002 systemd[1]: Stopped verity-setup.service.
Mar 7 01:01:20.772030 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:01:20.772095 systemd-journald[1159]: Collecting audit messages is disabled.
Mar 7 01:01:20.783223 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 7 01:01:20.783421 systemd-journald[1159]: Journal started
Mar 7 01:01:20.789033 systemd-journald[1159]: Runtime Journal (/run/log/journal/19d35f7e1b834a2bae2b65386b52285a) is 6.0M, max 48.4M, 42.3M free.
Mar 7 01:01:17.609631 systemd[1]: Queued start job for default target multi-user.target.
Mar 7 01:01:17.719584 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 7 01:01:17.721085 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 7 01:01:17.746851 systemd[1]: systemd-journald.service: Consumed 1.882s CPU time.
Mar 7 01:01:20.804086 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:01:20.810445 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 7 01:01:20.815248 systemd[1]: Mounted media.mount - External Media Directory.
Mar 7 01:01:20.820342 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 7 01:01:20.835425 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 7 01:01:20.854861 kernel: ACPI: bus type drm_connector registered
Mar 7 01:01:20.893344 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 7 01:01:21.061556 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 7 01:01:21.079242 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:01:21.087673 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 7 01:01:21.088285 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 7 01:01:21.095049 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:01:21.095458 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:01:21.105854 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 01:01:21.106252 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 01:01:21.117284 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:01:21.117629 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:01:21.125736 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 7 01:01:21.128367 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 7 01:01:21.142481 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:01:21.142893 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:01:21.161332 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:01:21.168562 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 7 01:01:21.177765 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 7 01:01:21.303022 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 7 01:01:21.334011 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 7 01:01:21.381948 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 7 01:01:21.388477 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 7 01:01:21.388713 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 01:01:21.399083 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 7 01:01:21.423024 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 7 01:01:21.445330 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 7 01:01:21.458480 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:01:21.470773 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 7 01:01:21.703031 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 7 01:01:21.716382 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 01:01:21.729392 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 7 01:01:21.738551 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 01:01:21.749251 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:01:21.783520 systemd-journald[1159]: Time spent on flushing to /var/log/journal/19d35f7e1b834a2bae2b65386b52285a is 376.403ms for 940 entries.
Mar 7 01:01:21.783520 systemd-journald[1159]: System Journal (/var/log/journal/19d35f7e1b834a2bae2b65386b52285a) is 8.0M, max 195.6M, 187.6M free.
Mar 7 01:01:22.274006 systemd-journald[1159]: Received client request to flush runtime journal.
Mar 7 01:01:22.274183 kernel: loop0: detected capacity change from 0 to 142488
Mar 7 01:01:21.780892 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 7 01:01:21.805781 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 7 01:01:21.821953 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:01:21.835910 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 7 01:01:21.861435 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 7 01:01:21.887683 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 7 01:01:22.077051 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 7 01:01:22.201538 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 7 01:01:22.235264 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 7 01:01:22.272355 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 7 01:01:22.277368 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 7 01:01:22.373189 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 7 01:01:22.376611 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 7 01:01:22.387982 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:01:22.401307 udevadm[1201]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 7 01:01:22.422592 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 7 01:01:22.441637 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:01:22.477856 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 7 01:01:22.719858 kernel: loop1: detected capacity change from 0 to 140768
Mar 7 01:01:23.021887 systemd-tmpfiles[1209]: ACLs are not supported, ignoring.
Mar 7 01:01:23.021922 systemd-tmpfiles[1209]: ACLs are not supported, ignoring.
Mar 7 01:01:23.294733 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:01:23.351933 kernel: loop2: detected capacity change from 0 to 217752
Mar 7 01:01:23.647262 kernel: loop3: detected capacity change from 0 to 142488
Mar 7 01:01:23.724195 kernel: loop4: detected capacity change from 0 to 140768
Mar 7 01:01:23.854183 kernel: loop5: detected capacity change from 0 to 217752
Mar 7 01:01:24.200588 (sd-merge)[1214]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 7 01:01:24.204458 (sd-merge)[1214]: Merged extensions into '/usr'.
Mar 7 01:01:24.240067 systemd[1]: Reloading requested from client PID 1189 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 7 01:01:24.250407 systemd[1]: Reloading...
Mar 7 01:01:24.801113 zram_generator::config[1236]: No configuration found.
Mar 7 01:01:25.897297 ldconfig[1184]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 7 01:01:26.024032 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:01:26.117577 systemd[1]: Reloading finished in 1865 ms.
Mar 7 01:01:26.179427 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 7 01:01:26.188322 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 7 01:01:26.223928 systemd[1]: Starting ensure-sysext.service...
Mar 7 01:01:26.303772 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:01:26.627235 systemd[1]: Reloading requested from client PID 1277 ('systemctl') (unit ensure-sysext.service)...
Mar 7 01:01:26.627260 systemd[1]: Reloading...
Mar 7 01:01:27.060650 systemd-tmpfiles[1278]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 7 01:01:27.066593 systemd-tmpfiles[1278]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 7 01:01:27.071097 systemd-tmpfiles[1278]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 7 01:01:27.077870 systemd-tmpfiles[1278]: ACLs are not supported, ignoring.
Mar 7 01:01:27.079092 systemd-tmpfiles[1278]: ACLs are not supported, ignoring.
Mar 7 01:01:27.094265 systemd-tmpfiles[1278]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 01:01:27.094285 systemd-tmpfiles[1278]: Skipping /boot
Mar 7 01:01:27.128924 zram_generator::config[1305]: No configuration found.
Mar 7 01:01:27.142952 systemd-tmpfiles[1278]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 01:01:27.159163 systemd-tmpfiles[1278]: Skipping /boot
Mar 7 01:01:27.931521 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:01:28.060919 systemd[1]: Reloading finished in 1432 ms.
Mar 7 01:01:28.108002 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 7 01:01:28.126947 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:01:28.171668 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 7 01:01:28.190786 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 7 01:01:28.206090 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 7 01:01:28.266672 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 01:01:28.295180 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:01:28.316742 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 7 01:01:28.361948 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 7 01:01:28.368250 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 7 01:01:28.385959 systemd-udevd[1358]: Using default interface naming scheme 'v255'.
Mar 7 01:01:28.387571 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:01:28.390017 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:01:28.398244 augenrules[1367]: No rules
Mar 7 01:01:28.400545 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:01:28.414194 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:01:28.430373 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:01:28.443223 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:01:28.474445 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 7 01:01:28.482072 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:01:28.490629 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 7 01:01:28.508905 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:01:28.522557 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:01:28.522909 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:01:28.534296 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:01:28.534731 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:01:28.557234 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 7 01:01:28.574527 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:01:28.574818 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:01:28.590514 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 7 01:01:28.702762 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 7 01:01:28.885719 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 7 01:01:29.284893 systemd[1]: Finished ensure-sysext.service.
Mar 7 01:01:29.302767 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:01:29.303216 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:01:29.314538 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:01:29.318875 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 01:01:29.335408 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:01:29.345015 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:01:29.366723 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:01:29.380594 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 01:01:29.391472 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 7 01:01:29.401308 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 7 01:01:29.401380 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:01:29.403107 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:01:29.403404 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:01:29.411347 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 01:01:29.411731 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 01:01:29.419853 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:01:29.422988 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:01:30.394791 systemd-resolved[1354]: Positive Trust Anchors:
Mar 7 01:01:30.395256 systemd-resolved[1354]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 01:01:30.395305 systemd-resolved[1354]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 01:01:30.999037 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 7 01:01:31.005954 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 01:01:31.006928 systemd-resolved[1354]: Defaulting to hostname 'linux'.
Mar 7 01:01:31.010670 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:01:31.010996 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:01:31.030637 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 01:01:31.048052 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:01:31.057772 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 01:01:31.112592 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1382)
Mar 7 01:01:31.168502 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 7 01:01:31.213502 kernel: ACPI: button: Power Button [PWRF]
Mar 7 01:01:31.480627 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:01:31.590393 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 7 01:01:31.591733 systemd-networkd[1412]: lo: Link UP
Mar 7 01:01:31.591744 systemd-networkd[1412]: lo: Gained carrier
Mar 7 01:01:31.603330 systemd[1]: Reached target time-set.target - System Time Set.
Mar 7 01:01:31.603616 systemd-networkd[1412]: Enumeration completed
Mar 7 01:01:31.610323 systemd-networkd[1412]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:01:31.610334 systemd-networkd[1412]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 01:01:31.611391 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 01:01:31.618265 systemd-networkd[1412]: eth0: Link UP
Mar 7 01:01:31.618286 systemd-networkd[1412]: eth0: Gained carrier
Mar 7 01:01:31.618321 systemd-networkd[1412]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:01:31.621491 systemd[1]: Reached target network.target - Network.
Mar 7 01:01:31.710361 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 7 01:01:31.718426 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 7 01:01:31.737183 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 7 01:01:31.746544 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 7 01:01:31.749805 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 7 01:01:31.754373 systemd-networkd[1412]: eth0: DHCPv4 address 10.0.0.13/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 7 01:01:31.756093 systemd-timesyncd[1413]: Network configuration changed, trying to establish connection.
Mar 7 01:01:32.183090 systemd-resolved[1354]: Clock change detected. Flushing caches.
Mar 7 01:01:32.183258 systemd-timesyncd[1413]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 7 01:01:32.183425 systemd-timesyncd[1413]: Initial clock synchronization to Sat 2026-03-07 01:01:32.182994 UTC.
Mar 7 01:01:32.260992 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 7 01:01:32.311711 kernel: mousedev: PS/2 mouse device common for all mice
Mar 7 01:01:32.301000 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 7 01:01:32.778651 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 7 01:01:33.169892 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:01:33.338409 kernel: kvm_amd: TSC scaling supported
Mar 7 01:01:33.341347 kernel: kvm_amd: Nested Virtualization enabled
Mar 7 01:01:33.341405 kernel: kvm_amd: Nested Paging enabled
Mar 7 01:01:33.341465 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 7 01:01:33.342573 kernel: kvm_amd: PMU virtualization is disabled
Mar 7 01:01:33.683945 systemd-networkd[1412]: eth0: Gained IPv6LL
Mar 7 01:01:33.708052 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 7 01:01:33.716136 systemd[1]: Reached target network-online.target - Network is Online.
Mar 7 01:01:33.849030 kernel: EDAC MC: Ver: 3.0.0
Mar 7 01:01:34.292068 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 7 01:01:34.326639 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 7 01:01:34.383530 lvm[1444]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 7 01:01:34.453276 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 7 01:01:34.465288 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:01:34.480871 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 01:01:34.492297 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 7 01:01:34.504129 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 7 01:01:34.519581 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 7 01:01:34.527287 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 7 01:01:34.534153 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 7 01:01:34.540573 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 7 01:01:34.540669 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:01:34.544281 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:01:34.559785 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 7 01:01:34.586240 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 7 01:01:34.603610 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 7 01:01:34.611127 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 7 01:01:34.619794 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 7 01:01:34.630017 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:01:34.636155 systemd[1]: Reached target basic.target - Basic System.
Mar 7 01:01:34.636360 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 7 01:01:34.636429 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 7 01:01:34.639681 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 7 01:01:34.653134 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Mar 7 01:01:34.667052 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 7 01:01:34.682026 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 7 01:01:34.697776 lvm[1448]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 7 01:01:34.716168 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 7 01:01:34.726134 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 7 01:01:34.734016 jq[1452]: false
Mar 7 01:01:34.735958 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:01:34.756934 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 7 01:01:34.770984 extend-filesystems[1453]: Found loop3
Mar 7 01:01:34.770984 extend-filesystems[1453]: Found loop4
Mar 7 01:01:34.770984 extend-filesystems[1453]: Found loop5
Mar 7 01:01:34.770984 extend-filesystems[1453]: Found sr0
Mar 7 01:01:34.770984 extend-filesystems[1453]: Found vda
Mar 7 01:01:34.818301 extend-filesystems[1453]: Found vda1
Mar 7 01:01:34.818301 extend-filesystems[1453]: Found vda2
Mar 7 01:01:34.818301 extend-filesystems[1453]: Found vda3
Mar 7 01:01:34.818301 extend-filesystems[1453]: Found usr
Mar 7 01:01:34.818301 extend-filesystems[1453]: Found vda4
Mar 7 01:01:34.818301 extend-filesystems[1453]: Found vda6
Mar 7 01:01:34.818301 extend-filesystems[1453]: Found vda7
Mar 7 01:01:34.818301 extend-filesystems[1453]: Found vda9
Mar 7 01:01:34.818301 extend-filesystems[1453]: Checking size of /dev/vda9
Mar 7 01:01:34.908878 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1388)
Mar 7 01:01:34.908929 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 7 01:01:34.775863 dbus-daemon[1451]: [system] SELinux support is enabled
Mar 7 01:01:34.770982 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 7 01:01:34.909489 extend-filesystems[1453]: Resized partition /dev/vda9
Mar 7 01:01:34.819519 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 7 01:01:34.915631 extend-filesystems[1469]: resize2fs 1.47.1 (20-May-2024)
Mar 7 01:01:34.877426 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 7 01:01:34.905942 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 7 01:01:34.957869 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 7 01:01:34.968493 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 7 01:01:34.971009 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 7 01:01:34.984278 systemd[1]: Starting update-engine.service - Update Engine...
Mar 7 01:01:34.999055 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 7 01:01:35.010404 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 7 01:01:35.018248 jq[1482]: true
Mar 7 01:01:35.026186 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 7 01:01:35.045004 update_engine[1479]: I20260307 01:01:35.042793 1479 main.cc:92] Flatcar Update Engine starting
Mar 7 01:01:35.048136 update_engine[1479]: I20260307 01:01:35.046010 1479 update_check_scheduler.cc:74] Next update check in 5m49s
Mar 7 01:01:35.060854 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 7 01:01:35.061248 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 7 01:01:35.072594 systemd[1]: motdgen.service: Deactivated successfully.
Mar 7 01:01:35.072999 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 7 01:01:35.083560 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 7 01:01:35.110556 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 7 01:01:35.107413 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 7 01:01:35.107949 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 7 01:01:35.175585 systemd-logind[1475]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 7 01:01:35.175634 systemd-logind[1475]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 7 01:01:35.178192 systemd-logind[1475]: New seat seat0.
Mar 7 01:01:35.195519 extend-filesystems[1469]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 7 01:01:35.195519 extend-filesystems[1469]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 7 01:01:35.195519 extend-filesystems[1469]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 7 01:01:35.260098 extend-filesystems[1453]: Resized filesystem in /dev/vda9
Mar 7 01:01:35.202315 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 7 01:01:35.203851 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 7 01:01:35.276598 jq[1487]: true
Mar 7 01:01:35.242544 (ntainerd)[1488]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 7 01:01:35.244011 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 7 01:01:35.316931 systemd[1]: coreos-metadata.service: Deactivated successfully.
Mar 7 01:01:35.317390 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Mar 7 01:01:35.341609 dbus-daemon[1451]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 7 01:01:35.367938 tar[1486]: linux-amd64/LICENSE
Mar 7 01:01:35.368937 tar[1486]: linux-amd64/helm
Mar 7 01:01:35.427697 systemd[1]: Started update-engine.service - Update Engine.
Mar 7 01:01:35.435418 sshd_keygen[1481]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 7 01:01:35.440811 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 7 01:01:35.441317 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 7 01:01:35.441582 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 7 01:01:35.452493 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 7 01:01:35.452711 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 7 01:01:35.538305 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 7 01:01:36.060918 bash[1526]: Updated "/home/core/.ssh/authorized_keys"
Mar 7 01:01:36.077293 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 7 01:01:36.096130 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 7 01:01:36.136920 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 7 01:01:36.144840 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 7 01:01:36.156007 locksmithd[1525]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 7 01:01:36.310001 systemd[1]: issuegen.service: Deactivated successfully.
Mar 7 01:01:36.310582 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 7 01:01:36.364245 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 7 01:01:36.588898 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 7 01:01:36.696606 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 7 01:01:36.735024 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 7 01:01:36.741402 systemd[1]: Reached target getty.target - Login Prompts.
Mar 7 01:01:38.678063 containerd[1488]: time="2026-03-07T01:01:38.661846447Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 7 01:01:39.257343 containerd[1488]: time="2026-03-07T01:01:39.257101086Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:01:39.274828 containerd[1488]: time="2026-03-07T01:01:39.272286366Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:01:39.274828 containerd[1488]: time="2026-03-07T01:01:39.274306508Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 7 01:01:39.274828 containerd[1488]: time="2026-03-07T01:01:39.274347053Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 7 01:01:39.275085 containerd[1488]: time="2026-03-07T01:01:39.275056087Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 7 01:01:39.275210 containerd[1488]: time="2026-03-07T01:01:39.275185459Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 7 01:01:39.277947 containerd[1488]: time="2026-03-07T01:01:39.275407874Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:01:39.277947 containerd[1488]: time="2026-03-07T01:01:39.275439613Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:01:39.277947 containerd[1488]: time="2026-03-07T01:01:39.275916263Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:01:39.277947 containerd[1488]: time="2026-03-07T01:01:39.275941991Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 7 01:01:39.277947 containerd[1488]: time="2026-03-07T01:01:39.275963491Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:01:39.277947 containerd[1488]: time="2026-03-07T01:01:39.275978960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 7 01:01:39.277947 containerd[1488]: time="2026-03-07T01:01:39.276121687Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:01:39.277947 containerd[1488]: time="2026-03-07T01:01:39.276668107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:01:39.278921 containerd[1488]: time="2026-03-07T01:01:39.278881949Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:01:39.279025 containerd[1488]: time="2026-03-07T01:01:39.279001102Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 7 01:01:39.280100 containerd[1488]: time="2026-03-07T01:01:39.280071520Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 7 01:01:39.280427 containerd[1488]: time="2026-03-07T01:01:39.280401977Z" level=info msg="metadata content store policy set" policy=shared
Mar 7 01:01:39.308963 containerd[1488]: time="2026-03-07T01:01:39.306499394Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 7 01:01:39.308963 containerd[1488]: time="2026-03-07T01:01:39.306682757Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 7 01:01:39.308963 containerd[1488]: time="2026-03-07T01:01:39.306755743Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 7 01:01:39.308963 containerd[1488]: time="2026-03-07T01:01:39.306784266Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 7 01:01:39.308963 containerd[1488]: time="2026-03-07T01:01:39.306810485Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 7 01:01:39.308963 containerd[1488]: time="2026-03-07T01:01:39.307075019Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 7 01:01:39.311686 containerd[1488]: time="2026-03-07T01:01:39.310709825Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 7 01:01:39.311686 containerd[1488]: time="2026-03-07T01:01:39.311116775Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 7 01:01:39.311686 containerd[1488]: time="2026-03-07T01:01:39.311145449Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 7 01:01:39.311686 containerd[1488]: time="2026-03-07T01:01:39.311167459Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 7 01:01:39.311686 containerd[1488]: time="2026-03-07T01:01:39.311188449Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 7 01:01:39.311686 containerd[1488]: time="2026-03-07T01:01:39.311241127Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 7 01:01:39.311686 containerd[1488]: time="2026-03-07T01:01:39.311286201Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 7 01:01:39.311686 containerd[1488]: time="2026-03-07T01:01:39.311310797Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 7 01:01:39.311686 containerd[1488]: time="2026-03-07T01:01:39.311332508Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 7 01:01:39.311686 containerd[1488]: time="2026-03-07T01:01:39.311353136Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 7 01:01:39.311686 containerd[1488]: time="2026-03-07T01:01:39.311384986Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 7 01:01:39.311686 containerd[1488]: time="2026-03-07T01:01:39.311407378Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 7 01:01:39.311686 containerd[1488]: time="2026-03-07T01:01:39.311492546Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 7 01:01:39.311686 containerd[1488]: time="2026-03-07T01:01:39.311520128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 7 01:01:39.312244 containerd[1488]: time="2026-03-07T01:01:39.311541508Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 7 01:01:39.313950 containerd[1488]: time="2026-03-07T01:01:39.313343392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 7 01:01:39.314595 containerd[1488]: time="2026-03-07T01:01:39.314385207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 7 01:01:39.314595 containerd[1488]: time="2026-03-07T01:01:39.314504911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 7 01:01:39.314595 containerd[1488]: time="2026-03-07T01:01:39.314529506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 7 01:01:39.315177 containerd[1488]: time="2026-03-07T01:01:39.314550545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 7 01:01:39.315177 containerd[1488]: time="2026-03-07T01:01:39.315090423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 7 01:01:39.315244 containerd[1488]: time="2026-03-07T01:01:39.315205148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 7 01:01:39.315364 containerd[1488]: time="2026-03-07T01:01:39.315304984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 7 01:01:39.315364 containerd[1488]: time="2026-03-07T01:01:39.315330201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 7 01:01:39.315485 containerd[1488]: time="2026-03-07T01:01:39.315410611Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 7 01:01:39.315519 containerd[1488]: time="2026-03-07T01:01:39.315491953Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 7 01:01:39.316261 containerd[1488]: time="2026-03-07T01:01:39.315605996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 7 01:01:39.316261 containerd[1488]: time="2026-03-07T01:01:39.315632937Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 7 01:01:39.316261 containerd[1488]: time="2026-03-07T01:01:39.315649026Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 7 01:01:39.316261 containerd[1488]: time="2026-03-07T01:01:39.315814184Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 7 01:01:39.318022 containerd[1488]: time="2026-03-07T01:01:39.317924154Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 7 01:01:39.318022 containerd[1488]: time="2026-03-07T01:01:39.317988404Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 7 01:01:39.318138 containerd[1488]: time="2026-03-07T01:01:39.318032737Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 7 01:01:39.318138 containerd[1488]: time="2026-03-07T01:01:39.318054668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 7 01:01:39.318138 containerd[1488]: time="2026-03-07T01:01:39.318084934Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 7 01:01:39.318138 containerd[1488]: time="2026-03-07T01:01:39.318128826Z" level=info msg="NRI interface is disabled by configuration."
Mar 7 01:01:39.318230 containerd[1488]: time="2026-03-07T01:01:39.318147491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 7 01:01:39.321694 containerd[1488]: time="2026-03-07T01:01:39.320787419Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 7 01:01:39.321694 containerd[1488]: time="2026-03-07T01:01:39.320920568Z" level=info msg="Connect containerd service"
Mar 7 01:01:39.321694 containerd[1488]: time="2026-03-07T01:01:39.321022128Z" level=info msg="using legacy CRI server"
Mar 7 01:01:39.321694 containerd[1488]: time="2026-03-07T01:01:39.321063876Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 7 01:01:39.321694 containerd[1488]: time="2026-03-07T01:01:39.321274939Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 7 01:01:39.333072 containerd[1488]: time="2026-03-07T01:01:39.329285505Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 7 01:01:39.333072 containerd[1488]: time="2026-03-07T01:01:39.329944571Z" level=info msg="Start subscribing containerd event"
Mar 7 01:01:39.333072 containerd[1488]: time="2026-03-07T01:01:39.330217841Z" level=info msg="Start recovering state"
Mar 7 01:01:39.333072 containerd[1488]: time="2026-03-07T01:01:39.330627826Z" level=info msg="Start event monitor"
Mar 7 01:01:39.333072 containerd[1488]: time="2026-03-07T01:01:39.330708626Z" level=info msg="Start snapshots syncer"
Mar 7 01:01:39.333072 containerd[1488]: time="2026-03-07T01:01:39.330799025Z" level=info msg="Start cni network conf syncer for default"
Mar 7 01:01:39.333072 containerd[1488]: time="2026-03-07T01:01:39.330810848Z" level=info msg="Start streaming server"
Mar 7 01:01:39.333072 containerd[1488]: time="2026-03-07T01:01:39.331996025Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 7 01:01:39.333072 containerd[1488]: time="2026-03-07T01:01:39.332081514Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 7 01:01:39.333582 systemd[1]: Started containerd.service - containerd container runtime.
Mar 7 01:01:39.348164 containerd[1488]: time="2026-03-07T01:01:39.333912072Z" level=info msg="containerd successfully booted in 0.681711s"
Mar 7 01:01:40.024288 tar[1486]: linux-amd64/README.md
Mar 7 01:01:40.109423 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 7 01:01:42.581316 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 7 01:01:42.596427 systemd[1]: Started sshd@0-10.0.0.13:22-10.0.0.1:48848.service - OpenSSH per-connection server daemon (10.0.0.1:48848). Mar 7 01:01:42.922268 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 48848 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:01:42.933097 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:01:43.058642 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 7 01:01:43.073309 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 7 01:01:43.086648 systemd-logind[1475]: New session 1 of user core. Mar 7 01:01:43.678408 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 7 01:01:43.944898 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 7 01:01:44.542958 (systemd)[1566]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 7 01:01:45.688693 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:01:45.690934 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 7 01:01:45.693907 (kubelet)[1577]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:01:45.779217 systemd[1566]: Queued start job for default target default.target. Mar 7 01:01:45.800313 systemd[1566]: Created slice app.slice - User Application Slice. Mar 7 01:01:45.800369 systemd[1566]: Reached target paths.target - Paths. Mar 7 01:01:45.800396 systemd[1566]: Reached target timers.target - Timers. Mar 7 01:01:45.820813 systemd[1566]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 7 01:01:46.106918 systemd[1566]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 7 01:01:46.107191 systemd[1566]: Reached target sockets.target - Sockets. 
Mar 7 01:01:46.107216 systemd[1566]: Reached target basic.target - Basic System. Mar 7 01:01:46.107298 systemd[1566]: Reached target default.target - Main User Target. Mar 7 01:01:46.107366 systemd[1566]: Startup finished in 1.327s. Mar 7 01:01:46.107520 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 7 01:01:46.126829 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 7 01:01:46.131132 systemd[1]: Startup finished in 5.508s (kernel) + 26.130s (initrd) + 31.337s (userspace) = 1min 2.977s. Mar 7 01:01:46.487385 systemd[1]: Started sshd@1-10.0.0.13:22-10.0.0.1:48850.service - OpenSSH per-connection server daemon (10.0.0.1:48850). Mar 7 01:01:47.117441 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 48850 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:01:47.151330 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:01:47.529821 systemd-logind[1475]: New session 2 of user core. Mar 7 01:01:47.548843 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 7 01:01:47.795570 sshd[1583]: pam_unix(sshd:session): session closed for user core Mar 7 01:01:47.844360 systemd[1]: Started sshd@2-10.0.0.13:22-10.0.0.1:48878.service - OpenSSH per-connection server daemon (10.0.0.1:48878). Mar 7 01:01:47.845607 systemd[1]: sshd@1-10.0.0.13:22-10.0.0.1:48850.service: Deactivated successfully. Mar 7 01:01:47.858017 systemd[1]: session-2.scope: Deactivated successfully. Mar 7 01:01:47.881258 systemd-logind[1475]: Session 2 logged out. Waiting for processes to exit. Mar 7 01:01:47.891751 systemd-logind[1475]: Removed session 2. Mar 7 01:01:47.956100 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 48878 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:01:47.969298 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:01:47.988808 systemd-logind[1475]: New session 3 of user core. 
Mar 7 01:01:48.005769 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 7 01:01:48.650349 sshd[1593]: pam_unix(sshd:session): session closed for user core Mar 7 01:01:48.679236 systemd[1]: sshd@2-10.0.0.13:22-10.0.0.1:48878.service: Deactivated successfully. Mar 7 01:01:48.690641 systemd[1]: session-3.scope: Deactivated successfully. Mar 7 01:01:48.707636 systemd-logind[1475]: Session 3 logged out. Waiting for processes to exit. Mar 7 01:01:48.738612 systemd[1]: Started sshd@3-10.0.0.13:22-10.0.0.1:48888.service - OpenSSH per-connection server daemon (10.0.0.1:48888). Mar 7 01:01:48.752852 systemd-logind[1475]: Removed session 3. Mar 7 01:01:48.854910 sshd[1602]: Accepted publickey for core from 10.0.0.1 port 48888 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:01:48.953244 sshd[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:01:49.254029 systemd-logind[1475]: New session 4 of user core. Mar 7 01:01:49.290892 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 7 01:01:49.435587 sshd[1602]: pam_unix(sshd:session): session closed for user core Mar 7 01:01:50.013381 systemd[1]: sshd@3-10.0.0.13:22-10.0.0.1:48888.service: Deactivated successfully. Mar 7 01:01:50.023916 systemd[1]: session-4.scope: Deactivated successfully. Mar 7 01:01:50.032766 systemd-logind[1475]: Session 4 logged out. Waiting for processes to exit. Mar 7 01:01:50.055244 systemd[1]: Started sshd@4-10.0.0.13:22-10.0.0.1:48898.service - OpenSSH per-connection server daemon (10.0.0.1:48898). Mar 7 01:01:50.074082 systemd-logind[1475]: Removed session 4. Mar 7 01:01:50.441659 sshd[1610]: Accepted publickey for core from 10.0.0.1 port 48898 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:01:50.457268 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:01:50.498549 systemd-logind[1475]: New session 5 of user core. 
Mar 7 01:01:50.513109 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 7 01:01:51.999856 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 7 01:01:52.001143 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:01:52.102639 sudo[1613]: pam_unix(sudo:session): session closed for user root Mar 7 01:01:52.109212 sshd[1610]: pam_unix(sshd:session): session closed for user core Mar 7 01:01:52.196403 systemd[1]: sshd@4-10.0.0.13:22-10.0.0.1:48898.service: Deactivated successfully. Mar 7 01:01:52.206127 systemd[1]: session-5.scope: Deactivated successfully. Mar 7 01:01:52.220577 systemd-logind[1475]: Session 5 logged out. Waiting for processes to exit. Mar 7 01:01:52.253777 systemd[1]: Started sshd@5-10.0.0.13:22-10.0.0.1:47046.service - OpenSSH per-connection server daemon (10.0.0.1:47046). Mar 7 01:01:52.256180 systemd-logind[1475]: Removed session 5. Mar 7 01:01:52.297479 kubelet[1577]: E0307 01:01:52.296685 1577 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:01:52.306279 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:01:52.306638 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:01:52.307255 systemd[1]: kubelet.service: Consumed 8.827s CPU time. Mar 7 01:01:52.377175 sshd[1619]: Accepted publickey for core from 10.0.0.1 port 47046 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:01:52.382094 sshd[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:01:52.412321 systemd-logind[1475]: New session 6 of user core. 
Mar 7 01:01:52.431135 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 7 01:01:53.141870 sudo[1625]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 7 01:01:53.142797 sudo[1625]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:01:53.192937 sudo[1625]: pam_unix(sudo:session): session closed for user root Mar 7 01:01:53.216334 sudo[1624]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 7 01:01:53.219012 sudo[1624]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:01:53.275286 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 7 01:01:53.289158 auditctl[1628]: No rules Mar 7 01:01:53.295042 systemd[1]: audit-rules.service: Deactivated successfully. Mar 7 01:01:53.295659 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 7 01:01:53.321255 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 7 01:01:53.435181 augenrules[1646]: No rules Mar 7 01:01:53.440074 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 7 01:01:53.458701 sudo[1624]: pam_unix(sudo:session): session closed for user root Mar 7 01:01:53.552431 sshd[1619]: pam_unix(sshd:session): session closed for user core Mar 7 01:01:53.590029 systemd[1]: sshd@5-10.0.0.13:22-10.0.0.1:47046.service: Deactivated successfully. Mar 7 01:01:53.601304 systemd[1]: session-6.scope: Deactivated successfully. Mar 7 01:01:53.646788 systemd-logind[1475]: Session 6 logged out. Waiting for processes to exit. Mar 7 01:01:53.674167 systemd[1]: Started sshd@6-10.0.0.13:22-10.0.0.1:47058.service - OpenSSH per-connection server daemon (10.0.0.1:47058). Mar 7 01:01:53.678955 systemd-logind[1475]: Removed session 6. 
Mar 7 01:01:53.777334 sshd[1654]: Accepted publickey for core from 10.0.0.1 port 47058 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:01:53.785151 sshd[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:01:53.814920 systemd-logind[1475]: New session 7 of user core. Mar 7 01:01:53.824143 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 7 01:01:53.982468 sudo[1657]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 7 01:01:53.985498 sudo[1657]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:02:00.173068 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 7 01:02:00.174311 (dockerd)[1675]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 7 01:02:02.575254 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 7 01:02:02.655216 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:02:06.638019 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 7 01:02:06.638148 (kubelet)[1693]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:02:06.691989 dockerd[1675]: time="2026-03-07T01:02:06.691000877Z" level=info msg="Starting up" Mar 7 01:02:06.977982 kubelet[1693]: E0307 01:02:06.977607 1693 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:02:06.991805 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:02:06.992408 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:02:06.993911 systemd[1]: kubelet.service: Consumed 2.246s CPU time. Mar 7 01:02:08.164974 dockerd[1675]: time="2026-03-07T01:02:08.163789975Z" level=info msg="Loading containers: start." Mar 7 01:02:09.040849 kernel: Initializing XFRM netlink socket Mar 7 01:02:09.611393 systemd-networkd[1412]: docker0: Link UP Mar 7 01:02:09.694581 dockerd[1675]: time="2026-03-07T01:02:09.693933120Z" level=info msg="Loading containers: done." 
Mar 7 01:02:09.920532 dockerd[1675]: time="2026-03-07T01:02:09.918647506Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 7 01:02:09.920532 dockerd[1675]: time="2026-03-07T01:02:09.919010595Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 7 01:02:09.921209 dockerd[1675]: time="2026-03-07T01:02:09.921133649Z" level=info msg="Daemon has completed initialization" Mar 7 01:02:10.254564 dockerd[1675]: time="2026-03-07T01:02:10.252800223Z" level=info msg="API listen on /run/docker.sock" Mar 7 01:02:10.256704 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 7 01:02:15.423776 containerd[1488]: time="2026-03-07T01:02:15.421965021Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\"" Mar 7 01:02:17.184796 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 7 01:02:17.211225 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:02:18.038669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount667614543.mount: Deactivated successfully. Mar 7 01:02:18.072247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 7 01:02:18.080208 (kubelet)[1851]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:02:18.803031 kubelet[1851]: E0307 01:02:18.802224 1851 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:02:18.812473 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:02:18.812924 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:02:18.813618 systemd[1]: kubelet.service: Consumed 1.018s CPU time. Mar 7 01:02:20.167851 update_engine[1479]: I20260307 01:02:20.160195 1479 update_attempter.cc:509] Updating boot flags... Mar 7 01:02:20.726479 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1877) Mar 7 01:02:28.952664 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 7 01:02:29.002218 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 7 01:02:29.696971 containerd[1488]: time="2026-03-07T01:02:29.683655679Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:02:29.790927 containerd[1488]: time="2026-03-07T01:02:29.789461755Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=27696467" Mar 7 01:02:29.808885 containerd[1488]: time="2026-03-07T01:02:29.805788571Z" level=info msg="ImageCreate event name:\"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:02:29.828340 containerd[1488]: time="2026-03-07T01:02:29.828255818Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:02:29.875527 containerd[1488]: time="2026-03-07T01:02:29.872246600Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"27693066\" in 14.450127457s" Mar 7 01:02:29.875527 containerd[1488]: time="2026-03-07T01:02:29.872389334Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\"" Mar 7 01:02:29.893194 containerd[1488]: time="2026-03-07T01:02:29.890374525Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\"" Mar 7 01:02:30.334081 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 7 01:02:30.339013 (kubelet)[1935]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:02:30.900375 kubelet[1935]: E0307 01:02:30.897867 1935 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:02:30.919847 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:02:30.920171 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:02:30.922929 systemd[1]: kubelet.service: Consumed 1.182s CPU time. Mar 7 01:02:38.540047 containerd[1488]: time="2026-03-07T01:02:38.536500946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:02:38.540047 containerd[1488]: time="2026-03-07T01:02:38.539440781Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=21450700" Mar 7 01:02:38.542931 containerd[1488]: time="2026-03-07T01:02:38.541008068Z" level=info msg="ImageCreate event name:\"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:02:38.661501 containerd[1488]: time="2026-03-07T01:02:38.659510808Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:02:38.667636 containerd[1488]: time="2026-03-07T01:02:38.663477595Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id 
\"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"23142311\" in 8.773044171s" Mar 7 01:02:38.667636 containerd[1488]: time="2026-03-07T01:02:38.663524843Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\"" Mar 7 01:02:38.676888 containerd[1488]: time="2026-03-07T01:02:38.674176603Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\"" Mar 7 01:02:40.985989 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 7 01:02:41.005207 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:02:41.709144 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:02:41.764649 (kubelet)[1959]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:02:42.086104 kubelet[1959]: E0307 01:02:42.083869 1959 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:02:42.089037 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:02:42.091258 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 7 01:02:46.031286 containerd[1488]: time="2026-03-07T01:02:46.027323736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:02:46.037361 containerd[1488]: time="2026-03-07T01:02:46.037252144Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=15548429" Mar 7 01:02:46.040764 containerd[1488]: time="2026-03-07T01:02:46.040517637Z" level=info msg="ImageCreate event name:\"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:02:46.062011 containerd[1488]: time="2026-03-07T01:02:46.052835313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:02:46.062011 containerd[1488]: time="2026-03-07T01:02:46.054638240Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"17240058\" in 7.380232931s" Mar 7 01:02:46.062011 containerd[1488]: time="2026-03-07T01:02:46.058770846Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\"" Mar 7 01:02:46.079426 containerd[1488]: time="2026-03-07T01:02:46.067232984Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\"" Mar 7 01:02:52.196639 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Mar 7 01:02:52.299663 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 7 01:02:53.052086 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:02:53.117367 (kubelet)[1979]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:02:53.178452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3384568414.mount: Deactivated successfully. Mar 7 01:02:53.557221 kubelet[1979]: E0307 01:02:53.542168 1979 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:02:53.570317 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:02:53.570640 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:02:57.240279 containerd[1488]: time="2026-03-07T01:02:57.239195340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:02:57.289862 containerd[1488]: time="2026-03-07T01:02:57.289466605Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=25685312" Mar 7 01:02:57.346508 containerd[1488]: time="2026-03-07T01:02:57.343366388Z" level=info msg="ImageCreate event name:\"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:02:57.524643 containerd[1488]: time="2026-03-07T01:02:57.521003269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:02:57.542323 containerd[1488]: time="2026-03-07T01:02:57.534608057Z" level=info msg="Pulled 
image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"25684331\" in 11.463384862s" Mar 7 01:02:57.542323 containerd[1488]: time="2026-03-07T01:02:57.534787251Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\"" Mar 7 01:02:57.551380 containerd[1488]: time="2026-03-07T01:02:57.549575255Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Mar 7 01:02:59.346273 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1149062456.mount: Deactivated successfully. Mar 7 01:03:03.736605 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Mar 7 01:03:03.771243 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:03:05.716104 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:03:05.731634 (kubelet)[2056]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:03:06.459198 kubelet[2056]: E0307 01:03:06.458214 2056 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:03:06.489339 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:03:06.489901 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:03:06.492910 systemd[1]: kubelet.service: Consumed 1.437s CPU time. 
Mar 7 01:03:09.663375 containerd[1488]: time="2026-03-07T01:03:09.661217022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:03:09.675159 containerd[1488]: time="2026-03-07T01:03:09.674622522Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556542" Mar 7 01:03:09.675904 containerd[1488]: time="2026-03-07T01:03:09.675609812Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:03:09.693944 containerd[1488]: time="2026-03-07T01:03:09.693435986Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:03:09.699469 containerd[1488]: time="2026-03-07T01:03:09.697146675Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 12.147477596s" Mar 7 01:03:09.699469 containerd[1488]: time="2026-03-07T01:03:09.697261728Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\"" Mar 7 01:03:09.706917 containerd[1488]: time="2026-03-07T01:03:09.704409451Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 7 01:03:11.104560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount106691304.mount: Deactivated successfully. 
Mar 7 01:03:11.130585 containerd[1488]: time="2026-03-07T01:03:11.128899843Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:03:11.130585 containerd[1488]: time="2026-03-07T01:03:11.129451331Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Mar 7 01:03:11.134427 containerd[1488]: time="2026-03-07T01:03:11.133283312Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:03:11.142694 containerd[1488]: time="2026-03-07T01:03:11.142401602Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:03:11.146417 containerd[1488]: time="2026-03-07T01:03:11.146289961Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 1.441814037s" Mar 7 01:03:11.146417 containerd[1488]: time="2026-03-07T01:03:11.146371752Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 7 01:03:11.156326 containerd[1488]: time="2026-03-07T01:03:11.156195182Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Mar 7 01:03:12.387541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount236714454.mount: Deactivated successfully. Mar 7 01:03:16.692331 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. 
Mar 7 01:03:16.733555 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:03:17.465092 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:03:17.486475 (kubelet)[2128]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:03:17.985450 kubelet[2128]: E0307 01:03:17.985391 2128 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:03:17.995014 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:03:17.995531 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:03:23.249352 containerd[1488]: time="2026-03-07T01:03:23.248549683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:03:23.271430 containerd[1488]: time="2026-03-07T01:03:23.271199707Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23630322"
Mar 7 01:03:23.274807 containerd[1488]: time="2026-03-07T01:03:23.274482351Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:03:23.286990 containerd[1488]: time="2026-03-07T01:03:23.286694552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:03:23.295315 containerd[1488]: time="2026-03-07T01:03:23.289400333Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 12.133119621s"
Mar 7 01:03:23.295315 containerd[1488]: time="2026-03-07T01:03:23.294705434Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\""
Mar 7 01:03:27.589212 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:03:27.615255 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:03:27.774594 systemd[1]: Reloading requested from client PID 2181 ('systemctl') (unit session-7.scope)...
Mar 7 01:03:27.774642 systemd[1]: Reloading...
Mar 7 01:03:28.016605 zram_generator::config[2216]: No configuration found.
Mar 7 01:03:28.601377 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:03:28.832339 systemd[1]: Reloading finished in 1056 ms.
Mar 7 01:03:29.006370 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:03:29.021377 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:03:29.023967 systemd[1]: kubelet.service: Deactivated successfully.
Mar 7 01:03:29.024573 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:03:29.036999 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:03:30.068633 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:03:30.117855 (kubelet)[2269]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 7 01:03:30.324863 kubelet[2269]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 7 01:03:31.395524 kubelet[2269]: I0307 01:03:31.389478 2269 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Mar 7 01:03:31.395524 kubelet[2269]: I0307 01:03:31.393648 2269 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 7 01:03:31.395524 kubelet[2269]: I0307 01:03:31.393813 2269 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 7 01:03:31.395524 kubelet[2269]: I0307 01:03:31.393825 2269 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 7 01:03:31.395524 kubelet[2269]: I0307 01:03:31.394472 2269 server.go:951] "Client rotation is on, will bootstrap in background"
Mar 7 01:03:31.546821 kubelet[2269]: I0307 01:03:31.546683 2269 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 7 01:03:31.548984 kubelet[2269]: E0307 01:03:31.548704 2269 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 7 01:03:31.575786 kubelet[2269]: E0307 01:03:31.574247 2269 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 7 01:03:31.575786 kubelet[2269]: I0307 01:03:31.574332 2269 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 7 01:03:31.602254 kubelet[2269]: I0307 01:03:31.602151 2269 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 7 01:03:31.609614 kubelet[2269]: I0307 01:03:31.609490 2269 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 7 01:03:31.610119 kubelet[2269]: I0307 01:03:31.609578 2269 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 7 01:03:31.610683 kubelet[2269]: I0307 01:03:31.610154 2269 topology_manager.go:143] "Creating topology manager with none policy"
Mar 7 01:03:31.610683 kubelet[2269]: I0307 01:03:31.610174 2269 container_manager_linux.go:308] "Creating device plugin manager"
Mar 7 01:03:31.610683 kubelet[2269]: I0307 01:03:31.610338 2269 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 7 01:03:31.619900 kubelet[2269]: I0307 01:03:31.619709 2269 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Mar 7 01:03:31.620766 kubelet[2269]: I0307 01:03:31.620685 2269 kubelet.go:482] "Attempting to sync node with API server"
Mar 7 01:03:31.620865 kubelet[2269]: I0307 01:03:31.620796 2269 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 7 01:03:31.620916 kubelet[2269]: I0307 01:03:31.620884 2269 kubelet.go:394] "Adding apiserver pod source"
Mar 7 01:03:31.620960 kubelet[2269]: I0307 01:03:31.620941 2269 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 7 01:03:31.631324 kubelet[2269]: I0307 01:03:31.631130 2269 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 7 01:03:31.637103 kubelet[2269]: I0307 01:03:31.635895 2269 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 7 01:03:31.640489 kubelet[2269]: I0307 01:03:31.638047 2269 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 7 01:03:31.640489 kubelet[2269]: W0307 01:03:31.638466 2269 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 7 01:03:31.654611 kubelet[2269]: I0307 01:03:31.651035 2269 server.go:1257] "Started kubelet"
Mar 7 01:03:31.654611 kubelet[2269]: I0307 01:03:31.653849 2269 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Mar 7 01:03:31.667609 kubelet[2269]: I0307 01:03:31.667114 2269 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 7 01:03:31.667609 kubelet[2269]: I0307 01:03:31.667391 2269 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 7 01:03:31.672562 kubelet[2269]: I0307 01:03:31.670990 2269 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 7 01:03:31.672562 kubelet[2269]: I0307 01:03:31.671352 2269 server.go:317] "Adding debug handlers to kubelet server"
Mar 7 01:03:31.712484 kubelet[2269]: I0307 01:03:31.709030 2269 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Mar 7 01:03:31.715876 kubelet[2269]: I0307 01:03:31.715811 2269 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 7 01:03:31.719276 kubelet[2269]: E0307 01:03:31.719205 2269 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:03:31.727952 kubelet[2269]: E0307 01:03:31.712459 2269 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.13:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.13:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189a69824b23408f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-07 01:03:31.650977935 +0000 UTC m=+1.514867817,LastTimestamp:2026-03-07 01:03:31.650977935 +0000 UTC m=+1.514867817,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 7 01:03:31.730195 kubelet[2269]: E0307 01:03:31.720030 2269 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="200ms"
Mar 7 01:03:31.730195 kubelet[2269]: I0307 01:03:31.720132 2269 volume_manager.go:311] "Starting Kubelet Volume Manager"
Mar 7 01:03:31.730195 kubelet[2269]: I0307 01:03:31.720198 2269 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 7 01:03:31.730195 kubelet[2269]: E0307 01:03:31.728101 2269 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 7 01:03:31.730195 kubelet[2269]: I0307 01:03:31.728512 2269 reconciler.go:29] "Reconciler: start to sync state"
Mar 7 01:03:31.743314 kubelet[2269]: I0307 01:03:31.741437 2269 factory.go:223] Registration of the containerd container factory successfully
Mar 7 01:03:31.743314 kubelet[2269]: I0307 01:03:31.741491 2269 factory.go:223] Registration of the systemd container factory successfully
Mar 7 01:03:31.743314 kubelet[2269]: I0307 01:03:31.741926 2269 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 7 01:03:31.812088 kubelet[2269]: I0307 01:03:31.810112 2269 cpu_manager.go:225] "Starting" policy="none"
Mar 7 01:03:31.812088 kubelet[2269]: I0307 01:03:31.810144 2269 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 7 01:03:31.812088 kubelet[2269]: I0307 01:03:31.810196 2269 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Mar 7 01:03:31.829339 kubelet[2269]: E0307 01:03:31.828926 2269 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:03:31.831603 kubelet[2269]: I0307 01:03:31.830279 2269 policy_none.go:50] "Start"
Mar 7 01:03:31.831603 kubelet[2269]: I0307 01:03:31.830328 2269 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 7 01:03:31.831603 kubelet[2269]: I0307 01:03:31.830650 2269 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 7 01:03:31.842315 kubelet[2269]: I0307 01:03:31.842267 2269 policy_none.go:44] "Start"
Mar 7 01:03:31.872857 kubelet[2269]: I0307 01:03:31.872062 2269 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 7 01:03:31.885526 kubelet[2269]: I0307 01:03:31.882848 2269 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 7 01:03:31.885526 kubelet[2269]: I0307 01:03:31.882984 2269 status_manager.go:249] "Starting to sync pod status with apiserver"
Mar 7 01:03:31.885526 kubelet[2269]: I0307 01:03:31.883162 2269 kubelet.go:2501] "Starting kubelet main sync loop"
Mar 7 01:03:31.885526 kubelet[2269]: E0307 01:03:31.883490 2269 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 7 01:03:31.915562 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 7 01:03:31.930831 kubelet[2269]: E0307 01:03:31.930605 2269 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:03:31.931324 kubelet[2269]: E0307 01:03:31.931205 2269 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="400ms"
Mar 7 01:03:31.959440 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 7 01:03:31.968355 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 7 01:03:31.984489 kubelet[2269]: E0307 01:03:31.984265 2269 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 7 01:03:31.992609 kubelet[2269]: E0307 01:03:31.989329 2269 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 7 01:03:31.992609 kubelet[2269]: I0307 01:03:31.989845 2269 eviction_manager.go:194] "Eviction manager: starting control loop"
Mar 7 01:03:31.992609 kubelet[2269]: I0307 01:03:31.989890 2269 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 7 01:03:31.994392 kubelet[2269]: I0307 01:03:31.993304 2269 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Mar 7 01:03:31.995669 kubelet[2269]: E0307 01:03:31.995191 2269 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 7 01:03:31.995669 kubelet[2269]: E0307 01:03:31.995286 2269 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 7 01:03:32.097613 kubelet[2269]: I0307 01:03:32.097052 2269 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 7 01:03:32.097613 kubelet[2269]: E0307 01:03:32.097564 2269 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost"
Mar 7 01:03:32.234510 kubelet[2269]: I0307 01:03:32.232990 2269 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9de956bf47baae4bcbff7d8bd8da672-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e9de956bf47baae4bcbff7d8bd8da672\") " pod="kube-system/kube-apiserver-localhost"
Mar 7 01:03:32.234510 kubelet[2269]: I0307 01:03:32.233240 2269 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9de956bf47baae4bcbff7d8bd8da672-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e9de956bf47baae4bcbff7d8bd8da672\") " pod="kube-system/kube-apiserver-localhost"
Mar 7 01:03:32.234510 kubelet[2269]: I0307 01:03:32.233279 2269 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 7 01:03:32.234510 kubelet[2269]: I0307 01:03:32.233308 2269 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 7 01:03:32.234510 kubelet[2269]: I0307 01:03:32.233338 2269 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 7 01:03:32.237249 kubelet[2269]: I0307 01:03:32.233364 2269 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 7 01:03:32.237249 kubelet[2269]: I0307 01:03:32.233561 2269 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost"
Mar 7 01:03:32.237249 kubelet[2269]: I0307 01:03:32.233588 2269 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9de956bf47baae4bcbff7d8bd8da672-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e9de956bf47baae4bcbff7d8bd8da672\") " pod="kube-system/kube-apiserver-localhost"
Mar 7 01:03:32.237249 kubelet[2269]: I0307 01:03:32.233614 2269 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 7 01:03:32.248929 systemd[1]: Created slice kubepods-burstable-pode9de956bf47baae4bcbff7d8bd8da672.slice - libcontainer container kubepods-burstable-pode9de956bf47baae4bcbff7d8bd8da672.slice.
Mar 7 01:03:32.265688 kubelet[2269]: E0307 01:03:32.264591 2269 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 01:03:32.276626 systemd[1]: Created slice kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice - libcontainer container kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice.
Mar 7 01:03:32.283322 kubelet[2269]: E0307 01:03:32.283180 2269 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 01:03:32.290679 systemd[1]: Created slice kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice - libcontainer container kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice.
Mar 7 01:03:32.300293 kubelet[2269]: E0307 01:03:32.300183 2269 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 01:03:32.301017 kubelet[2269]: I0307 01:03:32.300928 2269 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 7 01:03:32.302278 kubelet[2269]: E0307 01:03:32.302083 2269 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost"
Mar 7 01:03:32.333132 kubelet[2269]: E0307 01:03:32.333003 2269 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="800ms"
Mar 7 01:03:32.682635 kubelet[2269]: E0307 01:03:32.651333 2269 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:03:32.706110 containerd[1488]: time="2026-03-07T01:03:32.704444013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,}"
Mar 7 01:03:32.707095 kubelet[2269]: E0307 01:03:32.705299 2269 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:03:32.707095 kubelet[2269]: I0307 01:03:32.705825 2269 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 7 01:03:32.707095 kubelet[2269]: E0307 01:03:32.706380 2269 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost"
Mar 7 01:03:32.708084 containerd[1488]: time="2026-03-07T01:03:32.707642105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e9de956bf47baae4bcbff7d8bd8da672,Namespace:kube-system,Attempt:0,}"
Mar 7 01:03:32.711853 kubelet[2269]: E0307 01:03:32.711235 2269 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:03:32.713968 containerd[1488]: time="2026-03-07T01:03:32.712160576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,}"
Mar 7 01:03:33.353285 kubelet[2269]: E0307 01:03:33.349880 2269 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="1.6s"
Mar 7 01:03:33.517096 kubelet[2269]: I0307 01:03:33.516664 2269 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 7 01:03:33.518264 kubelet[2269]: E0307 01:03:33.517546 2269 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost"
Mar 7 01:03:33.668164 kubelet[2269]: E0307 01:03:33.659374 2269 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 7 01:03:34.589594 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3535043764.mount: Deactivated successfully.
Mar 7 01:03:34.647085 containerd[1488]: time="2026-03-07T01:03:34.640066323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 01:03:34.657672 containerd[1488]: time="2026-03-07T01:03:34.656316903Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 7 01:03:34.660801 containerd[1488]: time="2026-03-07T01:03:34.660302514Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 01:03:34.669228 containerd[1488]: time="2026-03-07T01:03:34.669089660Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 01:03:34.723874 containerd[1488]: time="2026-03-07T01:03:34.722791233Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 01:03:34.736645 containerd[1488]: time="2026-03-07T01:03:34.725285010Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Mar 7 01:03:34.748868 containerd[1488]: time="2026-03-07T01:03:34.748143382Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 7 01:03:34.757143 containerd[1488]: time="2026-03-07T01:03:34.756976016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 01:03:34.758269 containerd[1488]: time="2026-03-07T01:03:34.757967678Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.052059882s"
Mar 7 01:03:34.761060 containerd[1488]: time="2026-03-07T01:03:34.760978047Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.048478161s"
Mar 7 01:03:34.770925 containerd[1488]: time="2026-03-07T01:03:34.770703070Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.062858699s"
Mar 7 01:03:34.987858 kubelet[2269]: E0307 01:03:34.986871 2269 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="3.2s"
Mar 7 01:03:35.293409 kubelet[2269]: I0307 01:03:35.291001 2269 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 7 01:03:35.293409 kubelet[2269]: E0307 01:03:35.292470 2269 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost"
Mar 7 01:03:35.943363 containerd[1488]: time="2026-03-07T01:03:35.941811673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:03:35.943363 containerd[1488]: time="2026-03-07T01:03:35.941967222Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:03:35.943363 containerd[1488]: time="2026-03-07T01:03:35.941992518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:03:35.943363 containerd[1488]: time="2026-03-07T01:03:35.942206916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:03:35.989097 containerd[1488]: time="2026-03-07T01:03:35.986017504Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:03:35.989097 containerd[1488]: time="2026-03-07T01:03:35.986231321Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:03:35.989097 containerd[1488]: time="2026-03-07T01:03:35.986358408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:03:35.989097 containerd[1488]: time="2026-03-07T01:03:35.987050593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:03:36.004647 containerd[1488]: time="2026-03-07T01:03:36.003321085Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:03:36.004647 containerd[1488]: time="2026-03-07T01:03:36.003645177Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:03:36.004647 containerd[1488]: time="2026-03-07T01:03:36.003670874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:03:36.004647 containerd[1488]: time="2026-03-07T01:03:36.003844226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:03:36.975122 systemd[1]: run-containerd-runc-k8s.io-c54fef11d14a2931dfe61ce8be47d56163079d94c011d5e9a518639d381f1d5d-runc.KdkkTB.mount: Deactivated successfully.
Mar 7 01:03:37.020188 systemd[1]: Started cri-containerd-c54fef11d14a2931dfe61ce8be47d56163079d94c011d5e9a518639d381f1d5d.scope - libcontainer container c54fef11d14a2931dfe61ce8be47d56163079d94c011d5e9a518639d381f1d5d.
Mar 7 01:03:37.032949 systemd[1]: Started cri-containerd-ff18e1c2d12e4c0d91ab41b70fc27776edae7b732428f678477cc5d707ada38e.scope - libcontainer container ff18e1c2d12e4c0d91ab41b70fc27776edae7b732428f678477cc5d707ada38e.
Mar 7 01:03:37.060342 systemd[1]: Started cri-containerd-8d5a9702fbfc4fbf3014510e620c5fb6df5ee699f179dc179dc0af77a4519295.scope - libcontainer container 8d5a9702fbfc4fbf3014510e620c5fb6df5ee699f179dc179dc0af77a4519295.
Mar 7 01:03:37.740670 containerd[1488]: time="2026-03-07T01:03:37.740259170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,} returns sandbox id \"c54fef11d14a2931dfe61ce8be47d56163079d94c011d5e9a518639d381f1d5d\""
Mar 7 01:03:37.753523 kubelet[2269]: E0307 01:03:37.753444 2269 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:03:37.763023 containerd[1488]: time="2026-03-07T01:03:37.762941729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e9de956bf47baae4bcbff7d8bd8da672,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d5a9702fbfc4fbf3014510e620c5fb6df5ee699f179dc179dc0af77a4519295\""
Mar 7 01:03:37.776005 kubelet[2269]: E0307 01:03:37.766575 2269 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:03:38.207478 kubelet[2269]: E0307 01:03:38.207298 2269 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 7 01:03:38.211331 kubelet[2269]: E0307 01:03:38.211192 2269 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="6.4s"
Mar 7 01:03:38.214310 containerd[1488]: time="2026-03-07T01:03:38.214201436Z" level=info msg="CreateContainer within sandbox \"c54fef11d14a2931dfe61ce8be47d56163079d94c011d5e9a518639d381f1d5d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 7 01:03:38.230631 containerd[1488]: time="2026-03-07T01:03:38.229670637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff18e1c2d12e4c0d91ab41b70fc27776edae7b732428f678477cc5d707ada38e\""
Mar 7 01:03:38.235883 kubelet[2269]: E0307 01:03:38.232238 2269 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:03:38.239503 containerd[1488]: time="2026-03-07T01:03:38.239449611Z" level=info msg="CreateContainer within sandbox \"8d5a9702fbfc4fbf3014510e620c5fb6df5ee699f179dc179dc0af77a4519295\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 7 01:03:38.254678 containerd[1488]: time="2026-03-07T01:03:38.254580914Z" level=info msg="CreateContainer within sandbox \"ff18e1c2d12e4c0d91ab41b70fc27776edae7b732428f678477cc5d707ada38e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 7 01:03:38.294603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1867948766.mount: Deactivated successfully.
Mar 7 01:03:38.360608 containerd[1488]: time="2026-03-07T01:03:38.357158918Z" level=info msg="CreateContainer within sandbox \"c54fef11d14a2931dfe61ce8be47d56163079d94c011d5e9a518639d381f1d5d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5b2b6feac1dd035656ccafc804de4f1773436a5b50171d940a4e15ec188d3f24\""
Mar 7 01:03:38.363515 containerd[1488]: time="2026-03-07T01:03:38.361400748Z" level=info msg="CreateContainer within sandbox \"8d5a9702fbfc4fbf3014510e620c5fb6df5ee699f179dc179dc0af77a4519295\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3752669fe9d4c8424299e55e17f6cf30957a09687280fcc27d65a06d256a929e\""
Mar 7 01:03:38.363598 containerd[1488]: time="2026-03-07T01:03:38.363545061Z" level=info msg="StartContainer for \"5b2b6feac1dd035656ccafc804de4f1773436a5b50171d940a4e15ec188d3f24\""
Mar 7 01:03:38.368546 containerd[1488]: time="2026-03-07T01:03:38.363976330Z" level=info msg="StartContainer for \"3752669fe9d4c8424299e55e17f6cf30957a09687280fcc27d65a06d256a929e\""
Mar 7 01:03:38.415990 containerd[1488]: time="2026-03-07T01:03:38.415901106Z" level=info msg="CreateContainer within sandbox \"ff18e1c2d12e4c0d91ab41b70fc27776edae7b732428f678477cc5d707ada38e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"edcd7e65edd1c28e711c1218e50aa556c16d6baf71028e3a7b09699a935f42ec\""
Mar 7 01:03:38.426831 containerd[1488]: time="2026-03-07T01:03:38.425806743Z" level=info msg="StartContainer for \"edcd7e65edd1c28e711c1218e50aa556c16d6baf71028e3a7b09699a935f42ec\""
Mar 7 01:03:38.621803 kubelet[2269]: I0307 01:03:38.594454 2269 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 7 01:03:38.621803 kubelet[2269]: E0307 01:03:38.605610 2269 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost"
Mar 7 01:03:38.784707 systemd[1]: Started cri-containerd-3752669fe9d4c8424299e55e17f6cf30957a09687280fcc27d65a06d256a929e.scope - libcontainer container 3752669fe9d4c8424299e55e17f6cf30957a09687280fcc27d65a06d256a929e.
Mar 7 01:03:38.791908 systemd[1]: Started cri-containerd-5b2b6feac1dd035656ccafc804de4f1773436a5b50171d940a4e15ec188d3f24.scope - libcontainer container 5b2b6feac1dd035656ccafc804de4f1773436a5b50171d940a4e15ec188d3f24.
Mar 7 01:03:38.827867 systemd[1]: Started cri-containerd-edcd7e65edd1c28e711c1218e50aa556c16d6baf71028e3a7b09699a935f42ec.scope - libcontainer container edcd7e65edd1c28e711c1218e50aa556c16d6baf71028e3a7b09699a935f42ec.
Mar 7 01:03:40.138075 containerd[1488]: time="2026-03-07T01:03:40.125075969Z" level=info msg="StartContainer for \"3752669fe9d4c8424299e55e17f6cf30957a09687280fcc27d65a06d256a929e\" returns successfully"
Mar 7 01:03:40.162836 containerd[1488]: time="2026-03-07T01:03:40.162711115Z" level=info msg="StartContainer for \"5b2b6feac1dd035656ccafc804de4f1773436a5b50171d940a4e15ec188d3f24\" returns successfully"
Mar 7 01:03:40.260684 containerd[1488]: time="2026-03-07T01:03:40.258708766Z" level=info msg="StartContainer for \"edcd7e65edd1c28e711c1218e50aa556c16d6baf71028e3a7b09699a935f42ec\" returns successfully"
Mar 7 01:03:41.394432 kubelet[2269]: E0307 01:03:41.391252 2269 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 01:03:41.394432 kubelet[2269]: E0307 01:03:41.393097 2269 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:03:41.401288 kubelet[2269]: E0307 01:03:41.399878 2269 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 01:03:41.401288 kubelet[2269]: E0307 01:03:41.400106 2269 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:03:41.403838 kubelet[2269]: E0307 01:03:41.403699 2269 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 01:03:41.404069 kubelet[2269]: E0307 01:03:41.403935 2269 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:03:42.047891 kubelet[2269]: E0307 01:03:42.045295 2269 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 7 01:03:42.415257 kubelet[2269]: E0307 01:03:42.414651 2269 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 01:03:42.415257 kubelet[2269]: E0307 01:03:42.415076 2269 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:03:42.419232 kubelet[2269]: E0307 01:03:42.417700 2269 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 01:03:42.424636 kubelet[2269]: E0307 01:03:42.421276 2269 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 01:03:42.424636 kubelet[2269]: E0307 01:03:42.423913 2269 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:03:42.426918 kubelet[2269]: E0307 01:03:42.426855 2269 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:03:43.548308 kubelet[2269]: E0307 01:03:43.547655 2269 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 01:03:43.548308 kubelet[2269]: E0307 01:03:43.548187 2269 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:03:43.548308 kubelet[2269]: E0307 01:03:43.554005 2269 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 01:03:43.548308 kubelet[2269]: E0307 01:03:43.554568 2269 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:03:44.554093 kubelet[2269]: E0307 01:03:44.553684 2269 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 01:03:44.575386 kubelet[2269]: E0307 01:03:44.558649 2269 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:03:45.031973 kubelet[2269]: I0307 01:03:45.027884 2269 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 7 01:03:50.664852 kubelet[2269]: E0307 01:03:50.663224 2269 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 01:03:50.664852 kubelet[2269]: E0307 01:03:50.663408 2269 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 01:03:50.664852 kubelet[2269]: E0307 01:03:50.665195 2269 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:03:50.687192 kubelet[2269]: E0307 01:03:50.671966 2269 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:03:51.473214 kubelet[2269]: E0307 01:03:51.471937 2269 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.13:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.189a69824b23408f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-07 01:03:31.650977935 +0000 UTC m=+1.514867817,LastTimestamp:2026-03-07 01:03:31.650977935 +0000 UTC m=+1.514867817,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 7 01:03:51.798063 kubelet[2269]: E0307 01:03:51.784601 2269 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 01:03:51.798063 kubelet[2269]: E0307 01:03:51.785250 2269 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:03:52.394505 kubelet[2269]: E0307 01:03:52.393049 2269 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 7 01:03:52.455606 kubelet[2269]: E0307 01:03:52.454764 2269 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Mar 7 01:03:52.683640 kubelet[2269]: I0307 01:03:52.677248 2269 kubelet_node_status.go:77] "Successfully registered node" node="localhost"
Mar 7 01:03:52.722647 kubelet[2269]: I0307 01:03:52.722601 2269 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 7 01:03:52.802383 kubelet[2269]: E0307 01:03:52.802329 2269 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Mar 7 01:03:52.802383 kubelet[2269]: I0307 01:03:52.802380 2269 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 7 01:03:52.812634 kubelet[2269]: E0307 01:03:52.812548 2269 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Mar 7 01:03:52.812634 kubelet[2269]: I0307 01:03:52.812611 2269 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 7 01:03:52.819846 kubelet[2269]: E0307 01:03:52.818035 2269 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Mar 7 01:03:53.452296 kubelet[2269]: I0307 01:03:53.438688 2269 apiserver.go:52] "Watching apiserver"
Mar 7 01:03:53.530901 kubelet[2269]: I0307 01:03:53.530848 2269 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 7 01:03:54.423873 kubelet[2269]: I0307 01:03:54.417188 2269 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 7 01:03:54.456518 kubelet[2269]: E0307 01:03:54.456433 2269 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:03:54.843911 kubelet[2269]: E0307 01:03:54.843421 2269 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:04:00.134167 systemd[1]: Reloading requested from client PID 2564 ('systemctl') (unit session-7.scope)...
Mar 7 01:04:00.134230 systemd[1]: Reloading...
Mar 7 01:04:00.588802 zram_generator::config[2600]: No configuration found.
Mar 7 01:04:00.677612 kubelet[2269]: I0307 01:04:00.676608 2269 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 7 01:04:00.739870 kubelet[2269]: E0307 01:04:00.739051 2269 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:04:01.105678 kubelet[2269]: I0307 01:04:01.104990 2269 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=7.104597906 podStartE2EDuration="7.104597906s" podCreationTimestamp="2026-03-07 01:03:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:04:00.949145557 +0000 UTC m=+30.813035469" watchObservedRunningTime="2026-03-07 01:04:01.104597906 +0000 UTC m=+30.968487790"
Mar 7 01:04:01.127455 kubelet[2269]: E0307 01:04:01.127001 2269 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:04:01.181392 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:04:01.657132 systemd[1]: Reloading finished in 1521 ms.
Mar 7 01:04:02.151315 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:04:02.221219 systemd[1]: kubelet.service: Deactivated successfully.
Mar 7 01:04:02.221656 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:04:02.221801 systemd[1]: kubelet.service: Consumed 10.091s CPU time, 131.0M memory peak, 0B memory swap peak.
Mar 7 01:04:02.251686 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:04:02.709937 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:04:03.620428 (kubelet)[2648]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 7 01:04:03.885908 kubelet[2648]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 7 01:04:03.929181 kubelet[2648]: I0307 01:04:03.929090 2648 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Mar 7 01:04:03.929486 kubelet[2648]: I0307 01:04:03.929461 2648 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 7 01:04:03.929599 kubelet[2648]: I0307 01:04:03.929580 2648 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 7 01:04:03.929681 kubelet[2648]: I0307 01:04:03.929663 2648 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 7 01:04:03.930534 kubelet[2648]: I0307 01:04:03.930507 2648 server.go:951] "Client rotation is on, will bootstrap in background" Mar 7 01:04:03.939533 kubelet[2648]: I0307 01:04:03.939486 2648 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 7 01:04:03.952844 kubelet[2648]: I0307 01:04:03.952796 2648 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 01:04:03.971857 kubelet[2648]: E0307 01:04:03.968287 2648 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 01:04:03.971857 kubelet[2648]: I0307 01:04:03.968382 2648 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 7 01:04:03.987024 kubelet[2648]: I0307 01:04:03.986909 2648 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 7 01:04:03.987804 kubelet[2648]: I0307 01:04:03.987701 2648 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 01:04:03.988156 kubelet[2648]: I0307 01:04:03.987913 2648 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 7 01:04:03.988406 kubelet[2648]: I0307 01:04:03.988384 2648 topology_manager.go:143] "Creating topology manager with none policy" Mar 7 01:04:03.988493 
kubelet[2648]: I0307 01:04:03.988479 2648 container_manager_linux.go:308] "Creating device plugin manager" Mar 7 01:04:03.988609 kubelet[2648]: I0307 01:04:03.988591 2648 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 7 01:04:03.989413 kubelet[2648]: I0307 01:04:03.989364 2648 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 7 01:04:03.991879 kubelet[2648]: I0307 01:04:03.991855 2648 kubelet.go:482] "Attempting to sync node with API server" Mar 7 01:04:03.992023 kubelet[2648]: I0307 01:04:03.992003 2648 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 01:04:03.995113 kubelet[2648]: I0307 01:04:03.995025 2648 kubelet.go:394] "Adding apiserver pod source" Mar 7 01:04:03.995414 kubelet[2648]: I0307 01:04:03.995330 2648 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 01:04:04.004499 kubelet[2648]: I0307 01:04:04.004452 2648 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 01:04:04.005932 kubelet[2648]: I0307 01:04:04.005896 2648 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 01:04:04.006153 kubelet[2648]: I0307 01:04:04.006128 2648 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 7 01:04:04.082926 kubelet[2648]: I0307 01:04:04.082707 2648 server.go:1257] "Started kubelet" Mar 7 01:04:04.084132 kubelet[2648]: I0307 01:04:04.084018 2648 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 01:04:04.089324 kubelet[2648]: I0307 01:04:04.085662 2648 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 01:04:04.089572 kubelet[2648]: I0307 01:04:04.089542 2648 server_v1.go:49] 
"podresources" method="list" useActivePods=true Mar 7 01:04:04.091987 kubelet[2648]: I0307 01:04:04.091957 2648 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 01:04:04.302853 kubelet[2648]: I0307 01:04:04.301424 2648 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 01:04:04.318786 kubelet[2648]: I0307 01:04:04.303407 2648 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 7 01:04:04.318786 kubelet[2648]: I0307 01:04:04.303817 2648 server.go:317] "Adding debug handlers to kubelet server" Mar 7 01:04:04.318786 kubelet[2648]: I0307 01:04:04.305387 2648 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 7 01:04:04.318786 kubelet[2648]: I0307 01:04:04.305688 2648 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 7 01:04:04.318786 kubelet[2648]: I0307 01:04:04.306349 2648 reconciler.go:29] "Reconciler: start to sync state" Mar 7 01:04:04.318786 kubelet[2648]: E0307 01:04:04.306395 2648 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:04:04.326022 kubelet[2648]: I0307 01:04:04.322526 2648 factory.go:223] Registration of the systemd container factory successfully Mar 7 01:04:04.326022 kubelet[2648]: I0307 01:04:04.323072 2648 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 01:04:04.329676 kubelet[2648]: I0307 01:04:04.329480 2648 factory.go:223] Registration of the containerd container factory successfully Mar 7 01:04:04.410522 kubelet[2648]: E0307 01:04:04.407489 2648 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:04:04.510854 kubelet[2648]: E0307 
01:04:04.510683 2648 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:04:04.599419 kubelet[2648]: I0307 01:04:04.598025 2648 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 7 01:04:04.753798 kubelet[2648]: I0307 01:04:04.753474 2648 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 7 01:04:04.753798 kubelet[2648]: I0307 01:04:04.753540 2648 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 7 01:04:04.753798 kubelet[2648]: I0307 01:04:04.753586 2648 kubelet.go:2501] "Starting kubelet main sync loop" Mar 7 01:04:04.758349 kubelet[2648]: E0307 01:04:04.753699 2648 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 01:04:04.861557 kubelet[2648]: E0307 01:04:04.859788 2648 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 7 01:04:05.051927 kubelet[2648]: I0307 01:04:05.048251 2648 apiserver.go:52] "Watching apiserver" Mar 7 01:04:05.239482 kubelet[2648]: E0307 01:04:05.095403 2648 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 7 01:04:05.291800 kubelet[2648]: I0307 01:04:05.290782 2648 cpu_manager.go:225] "Starting" policy="none" Mar 7 01:04:05.291800 kubelet[2648]: I0307 01:04:05.290809 2648 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 7 01:04:05.291800 kubelet[2648]: I0307 01:04:05.290838 2648 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 7 01:04:05.291800 kubelet[2648]: I0307 01:04:05.291102 2648 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Mar 7 01:04:05.291800 kubelet[2648]: I0307 01:04:05.291116 2648 state_mem.go:102] "Updated CPUSet 
assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Mar 7 01:04:05.291800 kubelet[2648]: I0307 01:04:05.291138 2648 policy_none.go:50] "Start" Mar 7 01:04:05.291800 kubelet[2648]: I0307 01:04:05.291149 2648 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 7 01:04:05.291800 kubelet[2648]: I0307 01:04:05.291162 2648 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 7 01:04:05.291800 kubelet[2648]: I0307 01:04:05.291450 2648 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 7 01:04:05.291800 kubelet[2648]: I0307 01:04:05.291463 2648 policy_none.go:44] "Start" Mar 7 01:04:05.314183 kubelet[2648]: E0307 01:04:05.313528 2648 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 01:04:05.314183 kubelet[2648]: I0307 01:04:05.313856 2648 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 7 01:04:05.314183 kubelet[2648]: I0307 01:04:05.313874 2648 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 01:04:05.317587 kubelet[2648]: I0307 01:04:05.316081 2648 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 7 01:04:05.326037 kubelet[2648]: E0307 01:04:05.325996 2648 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 7 01:04:05.610446 kubelet[2648]: I0307 01:04:05.599021 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9de956bf47baae4bcbff7d8bd8da672-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e9de956bf47baae4bcbff7d8bd8da672\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:04:05.610446 kubelet[2648]: I0307 01:04:05.599088 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9de956bf47baae4bcbff7d8bd8da672-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e9de956bf47baae4bcbff7d8bd8da672\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:04:05.610446 kubelet[2648]: I0307 01:04:05.599128 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9de956bf47baae4bcbff7d8bd8da672-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e9de956bf47baae4bcbff7d8bd8da672\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:04:05.610446 kubelet[2648]: I0307 01:04:05.600159 2648 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 7 01:04:05.637107 kubelet[2648]: I0307 01:04:05.636339 2648 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 7 01:04:05.706425 kubelet[2648]: I0307 01:04:05.705839 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:04:05.711971 kubelet[2648]: I0307 
01:04:05.708818 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 7 01:04:05.711971 kubelet[2648]: I0307 01:04:05.708853 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 7 01:04:05.711971 kubelet[2648]: I0307 01:04:05.708875 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 7 01:04:05.711971 kubelet[2648]: I0307 01:04:05.708896 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 7 01:04:05.711971 kubelet[2648]: I0307 01:04:05.709056 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost"
Mar 7 01:04:05.711971 kubelet[2648]: I0307 01:04:05.709938 2648 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 7 01:04:06.172586 kubelet[2648]: E0307 01:04:05.911209 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:04:06.268939 kubelet[2648]: I0307 01:04:06.188188 2648 kubelet_node_status.go:123] "Node was previously registered" node="localhost"
Mar 7 01:04:06.268939 kubelet[2648]: I0307 01:04:06.190275 2648 kubelet_node_status.go:77] "Successfully registered node" node="localhost"
Mar 7 01:04:06.268939 kubelet[2648]: I0307 01:04:06.190317 2648 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 7 01:04:06.292641 kubelet[2648]: E0307 01:04:06.283411 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:04:06.292641 kubelet[2648]: I0307 01:04:06.285978 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5e888a75-9252-456b-a231-d302108951e4-kube-proxy\") pod \"kube-proxy-wxwn7\" (UID: \"5e888a75-9252-456b-a231-d302108951e4\") " pod="kube-system/kube-proxy-wxwn7"
Mar 7 01:04:06.292641 kubelet[2648]: I0307 01:04:06.286023 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h2bp\" (UniqueName: \"kubernetes.io/projected/5e888a75-9252-456b-a231-d302108951e4-kube-api-access-9h2bp\") pod \"kube-proxy-wxwn7\" (UID: \"5e888a75-9252-456b-a231-d302108951e4\") " pod="kube-system/kube-proxy-wxwn7"
Mar 7 01:04:06.292641 kubelet[2648]: I0307 01:04:06.286061 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e888a75-9252-456b-a231-d302108951e4-xtables-lock\") pod \"kube-proxy-wxwn7\" (UID: \"5e888a75-9252-456b-a231-d302108951e4\") " pod="kube-system/kube-proxy-wxwn7"
Mar 7 01:04:06.292641 kubelet[2648]: I0307 01:04:06.286086 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e888a75-9252-456b-a231-d302108951e4-lib-modules\") pod \"kube-proxy-wxwn7\" (UID: \"5e888a75-9252-456b-a231-d302108951e4\") " pod="kube-system/kube-proxy-wxwn7"
Mar 7 01:04:06.292641 kubelet[2648]: E0307 01:04:06.288055 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:04:06.328434 systemd[1]: Created slice kubepods-besteffort-pod5e888a75_9252_456b_a231_d302108951e4.slice - libcontainer container kubepods-besteffort-pod5e888a75_9252_456b_a231_d302108951e4.slice.
Mar 7 01:04:06.408982 containerd[1488]: time="2026-03-07T01:04:06.405438814Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 7 01:04:06.425595 kubelet[2648]: I0307 01:04:06.416952 2648 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 7 01:04:06.589537 kubelet[2648]: I0307 01:04:06.585821 2648 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.585799723 podStartE2EDuration="1.585799723s" podCreationTimestamp="2026-03-07 01:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:04:06.468525156 +0000 UTC m=+2.827966316" watchObservedRunningTime="2026-03-07 01:04:06.585799723 +0000 UTC m=+2.945240873"
Mar 7 01:04:06.771364 kubelet[2648]: E0307 01:04:06.767119 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:04:06.778146 containerd[1488]: time="2026-03-07T01:04:06.777166072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wxwn7,Uid:5e888a75-9252-456b-a231-d302108951e4,Namespace:kube-system,Attempt:0,}"
Mar 7 01:04:06.829581 kubelet[2648]: E0307 01:04:06.828706 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:04:06.829581 kubelet[2648]: E0307 01:04:06.828774 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:04:06.830849 kubelet[2648]: E0307 01:04:06.830821 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:04:07.230341 containerd[1488]: time="2026-03-07T01:04:07.227895198Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:04:07.230341 containerd[1488]: time="2026-03-07T01:04:07.228079330Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:04:07.230341 containerd[1488]: time="2026-03-07T01:04:07.228100509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:04:07.230341 containerd[1488]: time="2026-03-07T01:04:07.228319737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:04:07.472646 systemd[1]: Started cri-containerd-3eb63180627f34f8b4948fe54f6fba25562d7adc8462cd91f1fcfb3b639d608e.scope - libcontainer container 3eb63180627f34f8b4948fe54f6fba25562d7adc8462cd91f1fcfb3b639d608e.
Mar 7 01:04:07.598788 containerd[1488]: time="2026-03-07T01:04:07.598591200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wxwn7,Uid:5e888a75-9252-456b-a231-d302108951e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"3eb63180627f34f8b4948fe54f6fba25562d7adc8462cd91f1fcfb3b639d608e\""
Mar 7 01:04:07.603549 kubelet[2648]: E0307 01:04:07.602096 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:04:07.773126 containerd[1488]: time="2026-03-07T01:04:07.770436078Z" level=info msg="CreateContainer within sandbox \"3eb63180627f34f8b4948fe54f6fba25562d7adc8462cd91f1fcfb3b639d608e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 7 01:04:07.867102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount628549450.mount: Deactivated successfully.
Mar 7 01:04:07.878082 containerd[1488]: time="2026-03-07T01:04:07.877897779Z" level=info msg="CreateContainer within sandbox \"3eb63180627f34f8b4948fe54f6fba25562d7adc8462cd91f1fcfb3b639d608e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"620aea6f08236b2f66103f5466adcb5f32f7c2591d03ad993a26c998fc9935af\""
Mar 7 01:04:07.881930 containerd[1488]: time="2026-03-07T01:04:07.881803521Z" level=info msg="StartContainer for \"620aea6f08236b2f66103f5466adcb5f32f7c2591d03ad993a26c998fc9935af\""
Mar 7 01:04:08.300558 systemd[1]: Started cri-containerd-620aea6f08236b2f66103f5466adcb5f32f7c2591d03ad993a26c998fc9935af.scope - libcontainer container 620aea6f08236b2f66103f5466adcb5f32f7c2591d03ad993a26c998fc9935af.
Mar 7 01:04:09.672938 containerd[1488]: time="2026-03-07T01:04:09.670835072Z" level=info msg="StartContainer for \"620aea6f08236b2f66103f5466adcb5f32f7c2591d03ad993a26c998fc9935af\" returns successfully"
Mar 7 01:04:10.558779 kubelet[2648]: E0307 01:04:10.556439 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:04:10.833060 kubelet[2648]: I0307 01:04:10.818265 2648 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-wxwn7" podStartSLOduration=5.8181950780000005 podStartE2EDuration="5.818195078s" podCreationTimestamp="2026-03-07 01:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:04:10.81807812 +0000 UTC m=+7.177519280" watchObservedRunningTime="2026-03-07 01:04:10.818195078 +0000 UTC m=+7.177636207"
Mar 7 01:04:11.631394 kubelet[2648]: E0307 01:04:11.630986 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:04:12.643020 kubelet[2648]: I0307 01:04:12.638226 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrvk2\" (UniqueName: \"kubernetes.io/projected/07b06cef-fb5b-4181-8d19-8ab0534508ff-kube-api-access-hrvk2\") pod \"tigera-operator-6cf4cccc57-xq58h\" (UID: \"07b06cef-fb5b-4181-8d19-8ab0534508ff\") " pod="tigera-operator/tigera-operator-6cf4cccc57-xq58h"
Mar 7 01:04:12.643020 kubelet[2648]: I0307 01:04:12.642289 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/07b06cef-fb5b-4181-8d19-8ab0534508ff-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-xq58h\" (UID: \"07b06cef-fb5b-4181-8d19-8ab0534508ff\") " pod="tigera-operator/tigera-operator-6cf4cccc57-xq58h"
Mar 7 01:04:12.734698 systemd[1]: Created slice kubepods-besteffort-pod07b06cef_fb5b_4181_8d19_8ab0534508ff.slice - libcontainer container kubepods-besteffort-pod07b06cef_fb5b_4181_8d19_8ab0534508ff.slice.
Mar 7 01:04:13.250398 containerd[1488]: time="2026-03-07T01:04:13.247923855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-xq58h,Uid:07b06cef-fb5b-4181-8d19-8ab0534508ff,Namespace:tigera-operator,Attempt:0,}"
Mar 7 01:04:14.098610 containerd[1488]: time="2026-03-07T01:04:14.091535190Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:04:14.098610 containerd[1488]: time="2026-03-07T01:04:14.091796277Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:04:14.098610 containerd[1488]: time="2026-03-07T01:04:14.091816875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:04:14.098610 containerd[1488]: time="2026-03-07T01:04:14.091985889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:04:14.189618 systemd[1]: run-containerd-runc-k8s.io-a4d4059cb2498a41ca7ce75503b74d72fcf51885f94e9275ea97dd84a97b92f3-runc.8ZTIqJ.mount: Deactivated successfully.
Mar 7 01:04:14.222051 systemd[1]: Started cri-containerd-a4d4059cb2498a41ca7ce75503b74d72fcf51885f94e9275ea97dd84a97b92f3.scope - libcontainer container a4d4059cb2498a41ca7ce75503b74d72fcf51885f94e9275ea97dd84a97b92f3.
Mar 7 01:04:14.465400 containerd[1488]: time="2026-03-07T01:04:14.464810830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-xq58h,Uid:07b06cef-fb5b-4181-8d19-8ab0534508ff,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a4d4059cb2498a41ca7ce75503b74d72fcf51885f94e9275ea97dd84a97b92f3\""
Mar 7 01:04:15.259480 containerd[1488]: time="2026-03-07T01:04:15.255870650Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Mar 7 01:04:18.445119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1440934306.mount: Deactivated successfully.
Mar 7 01:04:28.691198 containerd[1488]: time="2026-03-07T01:04:28.690054172Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:04:28.708221 containerd[1488]: time="2026-03-07T01:04:28.706668719Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Mar 7 01:04:28.736830 containerd[1488]: time="2026-03-07T01:04:28.736041366Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:04:28.824501 containerd[1488]: time="2026-03-07T01:04:28.822252538Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:04:28.824501 containerd[1488]: time="2026-03-07T01:04:28.823582528Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 13.56765911s"
Mar 7 01:04:28.824501 containerd[1488]: time="2026-03-07T01:04:28.823616781Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Mar 7 01:04:28.874279 containerd[1488]: time="2026-03-07T01:04:28.873816805Z" level=info msg="CreateContainer within sandbox \"a4d4059cb2498a41ca7ce75503b74d72fcf51885f94e9275ea97dd84a97b92f3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Mar 7 01:04:28.963665 containerd[1488]: time="2026-03-07T01:04:28.958522432Z" level=info msg="CreateContainer within sandbox \"a4d4059cb2498a41ca7ce75503b74d72fcf51885f94e9275ea97dd84a97b92f3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c6c1f79ccbeae568d929413d66248b3541fdb90991d7b73527f9b68407655295\""
Mar 7 01:04:29.028341 containerd[1488]: time="2026-03-07T01:04:29.023954198Z" level=info msg="StartContainer for \"c6c1f79ccbeae568d929413d66248b3541fdb90991d7b73527f9b68407655295\""
Mar 7 01:04:29.539176 systemd[1]: Started cri-containerd-c6c1f79ccbeae568d929413d66248b3541fdb90991d7b73527f9b68407655295.scope - libcontainer container c6c1f79ccbeae568d929413d66248b3541fdb90991d7b73527f9b68407655295.
Mar 7 01:04:29.917101 containerd[1488]: time="2026-03-07T01:04:29.916236586Z" level=info msg="StartContainer for \"c6c1f79ccbeae568d929413d66248b3541fdb90991d7b73527f9b68407655295\" returns successfully"
Mar 7 01:04:30.617220 kubelet[2648]: I0307 01:04:30.616366 2648 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-xq58h" podStartSLOduration=5.038003886 podStartE2EDuration="18.616346184s" podCreationTimestamp="2026-03-07 01:04:12 +0000 UTC" firstStartedPulling="2026-03-07 01:04:15.249709456 +0000 UTC m=+11.609150586" lastFinishedPulling="2026-03-07 01:04:28.828051744 +0000 UTC m=+25.187492884" observedRunningTime="2026-03-07 01:04:30.613044625 +0000 UTC m=+26.972485755" watchObservedRunningTime="2026-03-07 01:04:30.616346184 +0000 UTC m=+26.975787345"
Mar 7 01:04:45.710622 sudo[1657]: pam_unix(sudo:session): session closed for user root
Mar 7 01:04:45.726005 sshd[1654]: pam_unix(sshd:session): session closed for user core
Mar 7 01:04:45.744050 systemd[1]: sshd@6-10.0.0.13:22-10.0.0.1:47058.service: Deactivated successfully.
Mar 7 01:04:45.757205 systemd[1]: session-7.scope: Deactivated successfully.
Mar 7 01:04:45.757852 systemd[1]: session-7.scope: Consumed 20.847s CPU time, 161.4M memory peak, 0B memory swap peak.
Mar 7 01:04:45.761177 systemd-logind[1475]: Session 7 logged out. Waiting for processes to exit.
Mar 7 01:04:45.767946 systemd-logind[1475]: Removed session 7.
Mar 7 01:05:04.992887 systemd[1]: Created slice kubepods-besteffort-pod596ee093_ce59_4fc1_9699_37d54a297dd7.slice - libcontainer container kubepods-besteffort-pod596ee093_ce59_4fc1_9699_37d54a297dd7.slice.
Mar 7 01:05:05.047333 kubelet[2648]: I0307 01:05:05.046652 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9qhj\" (UniqueName: \"kubernetes.io/projected/596ee093-ce59-4fc1-9699-37d54a297dd7-kube-api-access-w9qhj\") pod \"calico-typha-549658f756-klb7h\" (UID: \"596ee093-ce59-4fc1-9699-37d54a297dd7\") " pod="calico-system/calico-typha-549658f756-klb7h"
Mar 7 01:05:05.047333 kubelet[2648]: I0307 01:05:05.046799 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/596ee093-ce59-4fc1-9699-37d54a297dd7-typha-certs\") pod \"calico-typha-549658f756-klb7h\" (UID: \"596ee093-ce59-4fc1-9699-37d54a297dd7\") " pod="calico-system/calico-typha-549658f756-klb7h"
Mar 7 01:05:05.047333 kubelet[2648]: I0307 01:05:05.046827 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/596ee093-ce59-4fc1-9699-37d54a297dd7-tigera-ca-bundle\") pod \"calico-typha-549658f756-klb7h\" (UID: \"596ee093-ce59-4fc1-9699-37d54a297dd7\") " pod="calico-system/calico-typha-549658f756-klb7h"
Mar 7 01:05:05.631143 kubelet[2648]: E0307 01:05:05.630313 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:05:05.638661 containerd[1488]: time="2026-03-07T01:05:05.638605098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-549658f756-klb7h,Uid:596ee093-ce59-4fc1-9699-37d54a297dd7,Namespace:calico-system,Attempt:0,}"
Mar 7 01:05:05.638864 systemd[1]: Created slice kubepods-besteffort-poddd88f83c_7f94_41cf_8a43_76356403a154.slice - libcontainer container kubepods-besteffort-poddd88f83c_7f94_41cf_8a43_76356403a154.slice.
Mar 7 01:05:05.689998 kubelet[2648]: I0307 01:05:05.689649 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/dd88f83c-7f94-41cf-8a43-76356403a154-cni-bin-dir\") pod \"calico-node-4vgcl\" (UID: \"dd88f83c-7f94-41cf-8a43-76356403a154\") " pod="calico-system/calico-node-4vgcl"
Mar 7 01:05:05.689998 kubelet[2648]: I0307 01:05:05.689834 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd88f83c-7f94-41cf-8a43-76356403a154-tigera-ca-bundle\") pod \"calico-node-4vgcl\" (UID: \"dd88f83c-7f94-41cf-8a43-76356403a154\") " pod="calico-system/calico-node-4vgcl"
Mar 7 01:05:05.689998 kubelet[2648]: I0307 01:05:05.689869 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dd88f83c-7f94-41cf-8a43-76356403a154-var-lib-calico\") pod \"calico-node-4vgcl\" (UID: \"dd88f83c-7f94-41cf-8a43-76356403a154\") " pod="calico-system/calico-node-4vgcl"
Mar 7 01:05:05.689998 kubelet[2648]: I0307 01:05:05.689898 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/dd88f83c-7f94-41cf-8a43-76356403a154-cni-net-dir\") pod \"calico-node-4vgcl\" (UID: \"dd88f83c-7f94-41cf-8a43-76356403a154\") " pod="calico-system/calico-node-4vgcl"
Mar 7 01:05:05.689998 kubelet[2648]: I0307 01:05:05.689929 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/dd88f83c-7f94-41cf-8a43-76356403a154-bpffs\") pod \"calico-node-4vgcl\" (UID: \"dd88f83c-7f94-41cf-8a43-76356403a154\") " pod="calico-system/calico-node-4vgcl"
Mar 7 01:05:05.691177 kubelet[2648]: I0307 01:05:05.689960 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd88f83c-7f94-41cf-8a43-76356403a154-lib-modules\") pod \"calico-node-4vgcl\" (UID: \"dd88f83c-7f94-41cf-8a43-76356403a154\") " pod="calico-system/calico-node-4vgcl"
Mar 7 01:05:05.691177 kubelet[2648]: I0307 01:05:05.689986 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/dd88f83c-7f94-41cf-8a43-76356403a154-sys-fs\") pod \"calico-node-4vgcl\" (UID: \"dd88f83c-7f94-41cf-8a43-76356403a154\") " pod="calico-system/calico-node-4vgcl"
Mar 7 01:05:05.691177 kubelet[2648]: I0307 01:05:05.690009 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/dd88f83c-7f94-41cf-8a43-76356403a154-policysync\") pod \"calico-node-4vgcl\" (UID: \"dd88f83c-7f94-41cf-8a43-76356403a154\") " pod="calico-system/calico-node-4vgcl"
Mar 7 01:05:05.691177 kubelet[2648]: I0307 01:05:05.690034 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dd88f83c-7f94-41cf-8a43-76356403a154-xtables-lock\") pod \"calico-node-4vgcl\" (UID: \"dd88f83c-7f94-41cf-8a43-76356403a154\") " pod="calico-system/calico-node-4vgcl"
Mar 7 01:05:05.691177 kubelet[2648]: I0307 01:05:05.690059 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/dd88f83c-7f94-41cf-8a43-76356403a154-cni-log-dir\") pod \"calico-node-4vgcl\" (UID: \"dd88f83c-7f94-41cf-8a43-76356403a154\") " pod="calico-system/calico-node-4vgcl"
Mar 7 01:05:05.691551 kubelet[2648]: I0307 01:05:05.690162 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/dd88f83c-7f94-41cf-8a43-76356403a154-flexvol-driver-host\") pod \"calico-node-4vgcl\" (UID: \"dd88f83c-7f94-41cf-8a43-76356403a154\") " pod="calico-system/calico-node-4vgcl"
Mar 7 01:05:05.691551 kubelet[2648]: I0307 01:05:05.690200 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/dd88f83c-7f94-41cf-8a43-76356403a154-nodeproc\") pod \"calico-node-4vgcl\" (UID: \"dd88f83c-7f94-41cf-8a43-76356403a154\") " pod="calico-system/calico-node-4vgcl"
Mar 7 01:05:05.691551 kubelet[2648]: I0307 01:05:05.690225 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/dd88f83c-7f94-41cf-8a43-76356403a154-var-run-calico\") pod \"calico-node-4vgcl\" (UID: \"dd88f83c-7f94-41cf-8a43-76356403a154\") " pod="calico-system/calico-node-4vgcl"
Mar 7 01:05:05.691551 kubelet[2648]: I0307 01:05:05.690247 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/dd88f83c-7f94-41cf-8a43-76356403a154-node-certs\") pod \"calico-node-4vgcl\" (UID: \"dd88f83c-7f94-41cf-8a43-76356403a154\") " pod="calico-system/calico-node-4vgcl"
Mar 7 01:05:05.691551 kubelet[2648]: I0307 01:05:05.690272 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxbm5\" (UniqueName: \"kubernetes.io/projected/dd88f83c-7f94-41cf-8a43-76356403a154-kube-api-access-sxbm5\") pod \"calico-node-4vgcl\" (UID: \"dd88f83c-7f94-41cf-8a43-76356403a154\") " pod="calico-system/calico-node-4vgcl"
Mar 7 01:05:05.753168 kubelet[2648]: E0307 01:05:05.743358 2648 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfb8z" podUID="3e8af9de-d045-4f61-b8c9-f94c3122f507"
Mar 7 01:05:05.790708 kubelet[2648]: I0307 01:05:05.790570 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3e8af9de-d045-4f61-b8c9-f94c3122f507-kubelet-dir\") pod \"csi-node-driver-gfb8z\" (UID: \"3e8af9de-d045-4f61-b8c9-f94c3122f507\") " pod="calico-system/csi-node-driver-gfb8z"
Mar 7 01:05:05.790708 kubelet[2648]: I0307 01:05:05.790672 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3e8af9de-d045-4f61-b8c9-f94c3122f507-varrun\") pod \"csi-node-driver-gfb8z\" (UID: \"3e8af9de-d045-4f61-b8c9-f94c3122f507\") " pod="calico-system/csi-node-driver-gfb8z"
Mar 7 01:05:05.797976 kubelet[2648]: I0307 01:05:05.792506 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3e8af9de-d045-4f61-b8c9-f94c3122f507-socket-dir\") pod \"csi-node-driver-gfb8z\" (UID: \"3e8af9de-d045-4f61-b8c9-f94c3122f507\") " pod="calico-system/csi-node-driver-gfb8z"
Mar 7 01:05:05.797976 kubelet[2648]: I0307 01:05:05.792540 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xvf5\" (UniqueName: \"kubernetes.io/projected/3e8af9de-d045-4f61-b8c9-f94c3122f507-kube-api-access-6xvf5\") pod \"csi-node-driver-gfb8z\" (UID: \"3e8af9de-d045-4f61-b8c9-f94c3122f507\") " pod="calico-system/csi-node-driver-gfb8z"
Mar 7 01:05:05.797976 kubelet[2648]: I0307 01:05:05.792656 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3e8af9de-d045-4f61-b8c9-f94c3122f507-registration-dir\") pod \"csi-node-driver-gfb8z\" (UID: \"3e8af9de-d045-4f61-b8c9-f94c3122f507\") " pod="calico-system/csi-node-driver-gfb8z"
Mar 7 01:05:05.806265 kubelet[2648]: E0307 01:05:05.805916 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:05.806265 kubelet[2648]: W0307 01:05:05.805973 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:05.806265 kubelet[2648]: E0307 01:05:05.806030 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:05.816354 kubelet[2648]: E0307 01:05:05.813865 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:05.816354 kubelet[2648]: W0307 01:05:05.813904 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:05.816354 kubelet[2648]: E0307 01:05:05.813939 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:05.822272 kubelet[2648]: E0307 01:05:05.821844 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:05.822272 kubelet[2648]: W0307 01:05:05.821874 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:05.822272 kubelet[2648]: E0307 01:05:05.821905 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:05.822898 kubelet[2648]: E0307 01:05:05.822876 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:05.823015 kubelet[2648]: W0307 01:05:05.822991 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:05.823160 kubelet[2648]: E0307 01:05:05.823134 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:05.825902 kubelet[2648]: E0307 01:05:05.825813 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:05.825902 kubelet[2648]: W0307 01:05:05.825832 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:05.826454 kubelet[2648]: E0307 01:05:05.826324 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:05.827491 kubelet[2648]: E0307 01:05:05.827433 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:05.827491 kubelet[2648]: W0307 01:05:05.827452 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:05.827491 kubelet[2648]: E0307 01:05:05.827471 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:05.832711 kubelet[2648]: E0307 01:05:05.832313 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:05.833243 kubelet[2648]: W0307 01:05:05.833120 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:05.837488 kubelet[2648]: E0307 01:05:05.833563 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:05.838619 kubelet[2648]: E0307 01:05:05.838440 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:05.838619 kubelet[2648]: W0307 01:05:05.838541 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:05.839098 kubelet[2648]: E0307 01:05:05.838879 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:05.840265 kubelet[2648]: E0307 01:05:05.839966 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:05.840265 kubelet[2648]: W0307 01:05:05.839985 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:05.840265 kubelet[2648]: E0307 01:05:05.840005 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:05.845239 kubelet[2648]: E0307 01:05:05.843621 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:05.845239 kubelet[2648]: W0307 01:05:05.843645 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:05.845239 kubelet[2648]: E0307 01:05:05.843670 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:05.845239 kubelet[2648]: E0307 01:05:05.844886 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:05.845239 kubelet[2648]: W0307 01:05:05.844900 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:05.845239 kubelet[2648]: E0307 01:05:05.844919 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:05.849706 kubelet[2648]: E0307 01:05:05.845800 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:05.849706 kubelet[2648]: W0307 01:05:05.845819 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:05.849706 kubelet[2648]: E0307 01:05:05.845840 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:05.849917 containerd[1488]: time="2026-03-07T01:05:05.845955227Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:05:05.849917 containerd[1488]: time="2026-03-07T01:05:05.846041067Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:05:05.849917 containerd[1488]: time="2026-03-07T01:05:05.846061946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:05:05.849917 containerd[1488]: time="2026-03-07T01:05:05.846273561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:05:05.851242 kubelet[2648]: E0307 01:05:05.850929 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:05.851242 kubelet[2648]: W0307 01:05:05.850950 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:05.851242 kubelet[2648]: E0307 01:05:05.850973 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:05.852675 kubelet[2648]: E0307 01:05:05.852612 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:05.852675 kubelet[2648]: W0307 01:05:05.852648 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:05.852675 kubelet[2648]: E0307 01:05:05.852668 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:05.853678 kubelet[2648]: E0307 01:05:05.853154 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:05.853678 kubelet[2648]: W0307 01:05:05.853188 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:05.853678 kubelet[2648]: E0307 01:05:05.853205 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:05.855297 kubelet[2648]: E0307 01:05:05.853973 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:05.855297 kubelet[2648]: W0307 01:05:05.853986 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:05.855297 kubelet[2648]: E0307 01:05:05.854002 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Mar 7 01:05:05.855297 kubelet[2648]: E0307 01:05:05.854422 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:05.855297 kubelet[2648]: W0307 01:05:05.854434 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:05.855297 kubelet[2648]: E0307 01:05:05.854448 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:05:05.855297 kubelet[2648]: E0307 01:05:05.854855 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:05.855297 kubelet[2648]: W0307 01:05:05.854866 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:05.855297 kubelet[2648]: E0307 01:05:05.854879 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:05:05.881017 kubelet[2648]: E0307 01:05:05.864400 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:05.881017 kubelet[2648]: W0307 01:05:05.864428 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:05.881017 kubelet[2648]: E0307 01:05:05.864457 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:05:05.881017 kubelet[2648]: E0307 01:05:05.865019 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:05.881017 kubelet[2648]: W0307 01:05:05.865036 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:05.881017 kubelet[2648]: E0307 01:05:05.865058 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:05:05.881017 kubelet[2648]: E0307 01:05:05.867647 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:05.881017 kubelet[2648]: W0307 01:05:05.867666 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:05.881017 kubelet[2648]: E0307 01:05:05.867687 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:05:05.881017 kubelet[2648]: E0307 01:05:05.868177 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:05.884216 kubelet[2648]: W0307 01:05:05.868196 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:05.884216 kubelet[2648]: E0307 01:05:05.868217 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:05:05.884216 kubelet[2648]: E0307 01:05:05.868659 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:05.884216 kubelet[2648]: W0307 01:05:05.868673 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:05.884216 kubelet[2648]: E0307 01:05:05.868693 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:05:05.884216 kubelet[2648]: E0307 01:05:05.869249 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:05.884216 kubelet[2648]: W0307 01:05:05.869264 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:05.884216 kubelet[2648]: E0307 01:05:05.869281 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:05:05.884216 kubelet[2648]: E0307 01:05:05.881767 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:05.884216 kubelet[2648]: W0307 01:05:05.881802 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:05.884629 kubelet[2648]: E0307 01:05:05.881871 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:05:05.895481 kubelet[2648]: E0307 01:05:05.895334 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:05.895481 kubelet[2648]: W0307 01:05:05.895371 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:05.895481 kubelet[2648]: E0307 01:05:05.895407 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:05:05.897785 kubelet[2648]: E0307 01:05:05.897695 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:05.897982 kubelet[2648]: W0307 01:05:05.897858 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:05.897982 kubelet[2648]: E0307 01:05:05.897886 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:05:05.898575 kubelet[2648]: E0307 01:05:05.898405 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:05.898575 kubelet[2648]: W0307 01:05:05.898421 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:05.898575 kubelet[2648]: E0307 01:05:05.898434 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:05:05.903611 kubelet[2648]: E0307 01:05:05.903579 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:05.904405 kubelet[2648]: W0307 01:05:05.903877 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:05.904405 kubelet[2648]: E0307 01:05:05.903949 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:05:05.906531 kubelet[2648]: E0307 01:05:05.906476 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:05.907987 kubelet[2648]: W0307 01:05:05.906803 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:05.907987 kubelet[2648]: E0307 01:05:05.906832 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:05:05.911184 kubelet[2648]: E0307 01:05:05.911165 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:05.911279 kubelet[2648]: W0307 01:05:05.911263 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:05.911401 kubelet[2648]: E0307 01:05:05.911383 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:05:05.930136 kubelet[2648]: E0307 01:05:05.928778 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:05.930136 kubelet[2648]: W0307 01:05:05.928832 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:05.930136 kubelet[2648]: E0307 01:05:05.928869 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:05:05.933202 kubelet[2648]: E0307 01:05:05.930644 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:05.933202 kubelet[2648]: W0307 01:05:05.930661 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:05.933202 kubelet[2648]: E0307 01:05:05.930686 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:05:05.933202 kubelet[2648]: E0307 01:05:05.931201 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:05.933202 kubelet[2648]: W0307 01:05:05.931216 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:05.933202 kubelet[2648]: E0307 01:05:05.931237 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:05:05.933202 kubelet[2648]: E0307 01:05:05.931691 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:05.933202 kubelet[2648]: W0307 01:05:05.931708 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:05.933202 kubelet[2648]: E0307 01:05:05.931796 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:05:05.945213 kubelet[2648]: E0307 01:05:05.935316 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:05.945213 kubelet[2648]: W0307 01:05:05.935345 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:05.945213 kubelet[2648]: E0307 01:05:05.935367 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:05:05.945213 kubelet[2648]: E0307 01:05:05.936497 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:05.945213 kubelet[2648]: W0307 01:05:05.936510 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:05.945213 kubelet[2648]: E0307 01:05:05.936528 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:05:05.945213 kubelet[2648]: E0307 01:05:05.936978 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:05.945213 kubelet[2648]: W0307 01:05:05.936994 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:05.945213 kubelet[2648]: E0307 01:05:05.937015 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:05:05.957381 kubelet[2648]: E0307 01:05:05.950463 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:05.957381 kubelet[2648]: W0307 01:05:05.950523 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:05.957381 kubelet[2648]: E0307 01:05:05.950603 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:05:05.957381 kubelet[2648]: E0307 01:05:05.952336 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:05.957381 kubelet[2648]: W0307 01:05:05.952358 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:05.957381 kubelet[2648]: E0307 01:05:05.952388 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:05:05.957381 kubelet[2648]: E0307 01:05:05.955155 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:05.957381 kubelet[2648]: W0307 01:05:05.955285 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:05.957381 kubelet[2648]: E0307 01:05:05.955321 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:05:05.957950 containerd[1488]: time="2026-03-07T01:05:05.957690637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4vgcl,Uid:dd88f83c-7f94-41cf-8a43-76356403a154,Namespace:calico-system,Attempt:0,}" Mar 7 01:05:05.958788 kubelet[2648]: E0307 01:05:05.958663 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:05.958788 kubelet[2648]: W0307 01:05:05.958711 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:05.958872 kubelet[2648]: E0307 01:05:05.958791 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:05:05.962234 kubelet[2648]: E0307 01:05:05.960678 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:05.962234 kubelet[2648]: W0307 01:05:05.960958 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:05.962234 kubelet[2648]: E0307 01:05:05.960989 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:05:05.962486 kubelet[2648]: E0307 01:05:05.962463 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:05.962601 kubelet[2648]: W0307 01:05:05.962577 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:05.962711 kubelet[2648]: E0307 01:05:05.962688 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:05:05.978947 systemd[1]: Started cri-containerd-7967520e4eaddc7fc09a95cb7d48c61f9cfcb1a8fa05df624fb757fc1d02ce6c.scope - libcontainer container 7967520e4eaddc7fc09a95cb7d48c61f9cfcb1a8fa05df624fb757fc1d02ce6c. 
Mar 7 01:05:05.990441 kubelet[2648]: E0307 01:05:05.990397 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:05.993566 kubelet[2648]: W0307 01:05:05.990688 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:05.993566 kubelet[2648]: E0307 01:05:05.993193 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:05:05.995698 kubelet[2648]: E0307 01:05:05.995675 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:05.995870 kubelet[2648]: W0307 01:05:05.995846 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:05.995999 kubelet[2648]: E0307 01:05:05.995976 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:05:05.999977 kubelet[2648]: E0307 01:05:05.999950 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:06.000174 kubelet[2648]: W0307 01:05:06.000150 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:06.000360 kubelet[2648]: E0307 01:05:06.000256 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:05:06.003431 kubelet[2648]: E0307 01:05:06.003409 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:06.006195 kubelet[2648]: W0307 01:05:06.006165 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:06.006348 kubelet[2648]: E0307 01:05:06.006325 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:05:06.008521 kubelet[2648]: E0307 01:05:06.008501 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:06.008639 kubelet[2648]: W0307 01:05:06.008617 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:06.008789 kubelet[2648]: E0307 01:05:06.008699 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:05:06.013255 kubelet[2648]: E0307 01:05:06.013222 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:06.013527 kubelet[2648]: W0307 01:05:06.013502 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:06.013699 kubelet[2648]: E0307 01:05:06.013674 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:05:06.026832 kubelet[2648]: E0307 01:05:06.026778 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:06.027046 kubelet[2648]: W0307 01:05:06.027019 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:06.029053 kubelet[2648]: E0307 01:05:06.028965 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:05:06.035479 kubelet[2648]: E0307 01:05:06.035448 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:06.035823 kubelet[2648]: W0307 01:05:06.035796 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:06.036120 kubelet[2648]: E0307 01:05:06.036052 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:05:06.104002 kubelet[2648]: E0307 01:05:06.103924 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:06.104002 kubelet[2648]: W0307 01:05:06.103985 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:06.104860 kubelet[2648]: E0307 01:05:06.104021 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:05:06.129049 containerd[1488]: time="2026-03-07T01:05:06.128420470Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:05:06.129049 containerd[1488]: time="2026-03-07T01:05:06.128537949Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:05:06.129049 containerd[1488]: time="2026-03-07T01:05:06.128565530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:05:06.129049 containerd[1488]: time="2026-03-07T01:05:06.128793445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:05:06.217527 systemd[1]: Started cri-containerd-e568dd4baa3e659efc52cdcfb26dcf1f5e244b9c5948ed83893c2b1bfad845d5.scope - libcontainer container e568dd4baa3e659efc52cdcfb26dcf1f5e244b9c5948ed83893c2b1bfad845d5.
Mar 7 01:05:06.224502 containerd[1488]: time="2026-03-07T01:05:06.224460679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-549658f756-klb7h,Uid:596ee093-ce59-4fc1-9699-37d54a297dd7,Namespace:calico-system,Attempt:0,} returns sandbox id \"7967520e4eaddc7fc09a95cb7d48c61f9cfcb1a8fa05df624fb757fc1d02ce6c\""
Mar 7 01:05:06.226981 kubelet[2648]: E0307 01:05:06.225552 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:05:06.232286 containerd[1488]: time="2026-03-07T01:05:06.231498940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\""
Mar 7 01:05:06.317538 containerd[1488]: time="2026-03-07T01:05:06.317394902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4vgcl,Uid:dd88f83c-7f94-41cf-8a43-76356403a154,Namespace:calico-system,Attempt:0,} returns sandbox id \"e568dd4baa3e659efc52cdcfb26dcf1f5e244b9c5948ed83893c2b1bfad845d5\""
Mar 7 01:05:07.758534 kubelet[2648]: E0307 01:05:07.754165 2648 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfb8z" podUID="3e8af9de-d045-4f61-b8c9-f94c3122f507"
Mar 7 01:05:08.758851 kubelet[2648]: E0307 01:05:08.758205 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:05:08.855962 kubelet[2648]: E0307 01:05:08.855867 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:08.855962 kubelet[2648]: W0307 01:05:08.855940 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:08.855962 kubelet[2648]: E0307 01:05:08.855976 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:08.871822 kubelet[2648]: E0307 01:05:08.871520 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:08.871822 kubelet[2648]: W0307 01:05:08.871601 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:08.871822 kubelet[2648]: E0307 01:05:08.871640 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:08.877342 kubelet[2648]: E0307 01:05:08.876028 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:08.877342 kubelet[2648]: W0307 01:05:08.876056 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:08.877342 kubelet[2648]: E0307 01:05:08.876180 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:08.881493 kubelet[2648]: E0307 01:05:08.881376 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:08.881493 kubelet[2648]: W0307 01:05:08.881417 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:08.881493 kubelet[2648]: E0307 01:05:08.881445 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:08.884155 kubelet[2648]: E0307 01:05:08.884100 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:08.884155 kubelet[2648]: W0307 01:05:08.884126 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:08.884155 kubelet[2648]: E0307 01:05:08.884154 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:08.907975 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2827714586.mount: Deactivated successfully.
Mar 7 01:05:09.761342 kubelet[2648]: E0307 01:05:09.757954 2648 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfb8z" podUID="3e8af9de-d045-4f61-b8c9-f94c3122f507"
Mar 7 01:05:11.754597 kubelet[2648]: E0307 01:05:11.754479 2648 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfb8z" podUID="3e8af9de-d045-4f61-b8c9-f94c3122f507"
Mar 7 01:05:13.757501 kubelet[2648]: E0307 01:05:13.755430 2648 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfb8z" podUID="3e8af9de-d045-4f61-b8c9-f94c3122f507"
Mar 7 01:05:13.757501 kubelet[2648]: E0307 01:05:13.756974 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:05:13.811492 kubelet[2648]: E0307 01:05:13.810854 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:13.811492 kubelet[2648]: W0307 01:05:13.810895 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:13.811492 kubelet[2648]: E0307 01:05:13.810930 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:13.814046 kubelet[2648]: E0307 01:05:13.813940 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:13.814574 kubelet[2648]: W0307 01:05:13.814457 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:13.814915 kubelet[2648]: E0307 01:05:13.814809 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:13.846991 kubelet[2648]: E0307 01:05:13.845687 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:13.886417 kubelet[2648]: W0307 01:05:13.846557 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:13.886417 kubelet[2648]: E0307 01:05:13.876312 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:13.889763 kubelet[2648]: E0307 01:05:13.889662 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:13.889960 kubelet[2648]: W0307 01:05:13.889875 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:13.889960 kubelet[2648]: E0307 01:05:13.889948 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:13.901551 kubelet[2648]: E0307 01:05:13.901482 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:13.901551 kubelet[2648]: W0307 01:05:13.901534 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:13.901874 kubelet[2648]: E0307 01:05:13.901564 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:13.909546 kubelet[2648]: E0307 01:05:13.909379 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:13.909546 kubelet[2648]: W0307 01:05:13.909432 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:13.909546 kubelet[2648]: E0307 01:05:13.909471 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:13.909955 kubelet[2648]: E0307 01:05:13.909899 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:13.909955 kubelet[2648]: W0307 01:05:13.909951 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:13.910127 kubelet[2648]: E0307 01:05:13.909976 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:13.913037 kubelet[2648]: E0307 01:05:13.912581 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:13.913037 kubelet[2648]: W0307 01:05:13.912607 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:13.913037 kubelet[2648]: E0307 01:05:13.912633 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:13.913037 kubelet[2648]: E0307 01:05:13.913063 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:13.913037 kubelet[2648]: W0307 01:05:13.913110 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:13.913037 kubelet[2648]: E0307 01:05:13.913131 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:13.913594 kubelet[2648]: E0307 01:05:13.913450 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:13.913594 kubelet[2648]: W0307 01:05:13.913462 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:13.913594 kubelet[2648]: E0307 01:05:13.913478 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:13.917143 kubelet[2648]: E0307 01:05:13.913961 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:13.917143 kubelet[2648]: W0307 01:05:13.913997 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:13.917143 kubelet[2648]: E0307 01:05:13.914018 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:13.917910 kubelet[2648]: E0307 01:05:13.917584 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:13.917910 kubelet[2648]: W0307 01:05:13.917604 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:13.917910 kubelet[2648]: E0307 01:05:13.917624 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:13.918365 kubelet[2648]: E0307 01:05:13.918344 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:13.918448 kubelet[2648]: W0307 01:05:13.918433 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:13.921806 kubelet[2648]: E0307 01:05:13.918923 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:13.926185 kubelet[2648]: E0307 01:05:13.925934 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:13.926185 kubelet[2648]: W0307 01:05:13.925965 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:13.926185 kubelet[2648]: E0307 01:05:13.925997 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:13.930174 kubelet[2648]: E0307 01:05:13.929263 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:13.930174 kubelet[2648]: W0307 01:05:13.929285 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:13.930174 kubelet[2648]: E0307 01:05:13.929308 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:14.233919 containerd[1488]: time="2026-03-07T01:05:14.232313945Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:05:14.239568 containerd[1488]: time="2026-03-07T01:05:14.238420581Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596"
Mar 7 01:05:14.241405 containerd[1488]: time="2026-03-07T01:05:14.241097790Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:05:14.256301 containerd[1488]: time="2026-03-07T01:05:14.254872643Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:05:14.276930 containerd[1488]: time="2026-03-07T01:05:14.259706101Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 8.028133835s"
Mar 7 01:05:14.276930 containerd[1488]: time="2026-03-07T01:05:14.259818301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\""
Mar 7 01:05:14.290616 containerd[1488]: time="2026-03-07T01:05:14.290539857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\""
Mar 7 01:05:14.384048 containerd[1488]: time="2026-03-07T01:05:14.380503240Z" level=info msg="CreateContainer within sandbox \"7967520e4eaddc7fc09a95cb7d48c61f9cfcb1a8fa05df624fb757fc1d02ce6c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Mar 7 01:05:14.573983 containerd[1488]: time="2026-03-07T01:05:14.573046203Z" level=info msg="CreateContainer within sandbox \"7967520e4eaddc7fc09a95cb7d48c61f9cfcb1a8fa05df624fb757fc1d02ce6c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e8bb97d367b7e56b3c64068932d92fa974dc814eca39f1ed8634e87ec8d487be\""
Mar 7 01:05:14.587848 containerd[1488]: time="2026-03-07T01:05:14.587160307Z" level=info msg="StartContainer for \"e8bb97d367b7e56b3c64068932d92fa974dc814eca39f1ed8634e87ec8d487be\""
Mar 7 01:05:14.886202 systemd[1]: Started cri-containerd-e8bb97d367b7e56b3c64068932d92fa974dc814eca39f1ed8634e87ec8d487be.scope - libcontainer container e8bb97d367b7e56b3c64068932d92fa974dc814eca39f1ed8634e87ec8d487be.
Mar 7 01:05:15.214938 containerd[1488]: time="2026-03-07T01:05:15.214614493Z" level=info msg="StartContainer for \"e8bb97d367b7e56b3c64068932d92fa974dc814eca39f1ed8634e87ec8d487be\" returns successfully"
Mar 7 01:05:15.592528 kubelet[2648]: E0307 01:05:15.592257 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:05:15.706427 kubelet[2648]: E0307 01:05:15.686161 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:15.706427 kubelet[2648]: W0307 01:05:15.686204 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:15.706427 kubelet[2648]: E0307 01:05:15.686326 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:15.706427 kubelet[2648]: E0307 01:05:15.689691 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:15.706427 kubelet[2648]: W0307 01:05:15.689771 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:15.706427 kubelet[2648]: E0307 01:05:15.689806 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:15.706427 kubelet[2648]: E0307 01:05:15.695871 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:15.706427 kubelet[2648]: W0307 01:05:15.695897 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:15.706427 kubelet[2648]: E0307 01:05:15.695929 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:15.706427 kubelet[2648]: E0307 01:05:15.696807 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:15.708093 kubelet[2648]: W0307 01:05:15.696825 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:15.708093 kubelet[2648]: E0307 01:05:15.696847 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:15.708093 kubelet[2648]: E0307 01:05:15.700833 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:15.708093 kubelet[2648]: W0307 01:05:15.701506 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:15.714463 kubelet[2648]: E0307 01:05:15.714236 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:15.731823 kubelet[2648]: E0307 01:05:15.717549 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:15.731823 kubelet[2648]: W0307 01:05:15.717667 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:15.731823 kubelet[2648]: E0307 01:05:15.717920 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:15.731823 kubelet[2648]: E0307 01:05:15.726327 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:15.731823 kubelet[2648]: W0307 01:05:15.726355 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:15.731823 kubelet[2648]: E0307 01:05:15.726386 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:15.746470 kubelet[2648]: E0307 01:05:15.745679 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:15.746470 kubelet[2648]: W0307 01:05:15.745775 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:15.746470 kubelet[2648]: E0307 01:05:15.745818 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:15.768798 kubelet[2648]: E0307 01:05:15.762210 2648 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfb8z" podUID="3e8af9de-d045-4f61-b8c9-f94c3122f507"
Mar 7 01:05:15.782302 kubelet[2648]: E0307 01:05:15.774201 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:15.782302 kubelet[2648]: W0307 01:05:15.774333 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:15.782302 kubelet[2648]: E0307 01:05:15.774442 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:15.823218 kubelet[2648]: E0307 01:05:15.806835 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:15.823218 kubelet[2648]: W0307 01:05:15.806872 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:15.823218 kubelet[2648]: E0307 01:05:15.806902 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:15.823218 kubelet[2648]: E0307 01:05:15.820063 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:15.823218 kubelet[2648]: W0307 01:05:15.822270 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:15.823218 kubelet[2648]: E0307 01:05:15.822309 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:15.855469 kubelet[2648]: E0307 01:05:15.833319 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:15.855469 kubelet[2648]: W0307 01:05:15.833357 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:15.855469 kubelet[2648]: E0307 01:05:15.833393 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:15.864297 kubelet[2648]: E0307 01:05:15.864258 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:15.864830 kubelet[2648]: W0307 01:05:15.864558 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:15.864830 kubelet[2648]: E0307 01:05:15.864634 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:15.866362 kubelet[2648]: E0307 01:05:15.866251 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:15.866362 kubelet[2648]: W0307 01:05:15.866272 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:15.866362 kubelet[2648]: E0307 01:05:15.866291 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:15.871900 kubelet[2648]: E0307 01:05:15.871307 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:15.871900 kubelet[2648]: W0307 01:05:15.871343 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:15.871900 kubelet[2648]: E0307 01:05:15.871374 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:15.872591 kubelet[2648]: E0307 01:05:15.872510 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:15.872591 kubelet[2648]: W0307 01:05:15.872528 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:15.872591 kubelet[2648]: E0307 01:05:15.872547 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:15.881873 kubelet[2648]: E0307 01:05:15.874572 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:15.881873 kubelet[2648]: W0307 01:05:15.874593 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:15.881873 kubelet[2648]: E0307 01:05:15.874614 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:15.915425 kubelet[2648]: E0307 01:05:15.903545 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:15.949639 kubelet[2648]: W0307 01:05:15.913790 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:15.973139 kubelet[2648]: E0307 01:05:15.919104 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:15.979915 kubelet[2648]: E0307 01:05:15.979875 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:15.980907 kubelet[2648]: W0307 01:05:15.980230 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:15.980907 kubelet[2648]: E0307 01:05:15.980636 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:15.981619 kubelet[2648]: E0307 01:05:15.981600 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:15.982501 kubelet[2648]: W0307 01:05:15.982482 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:15.982955 kubelet[2648]: E0307 01:05:15.982700 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:15.989555 kubelet[2648]: E0307 01:05:15.989469 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:15.989555 kubelet[2648]: W0307 01:05:15.989500 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:15.989555 kubelet[2648]: E0307 01:05:15.989528 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:16.008490 kubelet[2648]: E0307 01:05:16.001676 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:16.008490 kubelet[2648]: W0307 01:05:16.001702 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:16.008490 kubelet[2648]: E0307 01:05:16.001778 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:16.742059 kubelet[2648]: E0307 01:05:16.717686 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:16.742059 kubelet[2648]: W0307 01:05:16.736194 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:16.742059 kubelet[2648]: E0307 01:05:16.736917 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:16.742059 kubelet[2648]: E0307 01:05:16.739958 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:16.742059 kubelet[2648]: W0307 01:05:16.739981 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:16.742059 kubelet[2648]: E0307 01:05:16.740007 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:16.742059 kubelet[2648]: E0307 01:05:16.740857 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:16.742059 kubelet[2648]: W0307 01:05:16.740881 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:16.742059 kubelet[2648]: E0307 01:05:16.740906 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:16.742059 kubelet[2648]: E0307 01:05:16.742507 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:17.578561 kubelet[2648]: W0307 01:05:16.742525 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:17.578561 kubelet[2648]: E0307 01:05:16.742544 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:05:17.578561 kubelet[2648]: E0307 01:05:16.743033 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:17.578561 kubelet[2648]: W0307 01:05:16.743047 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:17.578561 kubelet[2648]: E0307 01:05:16.743062 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:05:17.578561 kubelet[2648]: E0307 01:05:16.743539 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:17.578561 kubelet[2648]: W0307 01:05:16.743552 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:17.578561 kubelet[2648]: E0307 01:05:16.743566 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:05:17.578561 kubelet[2648]: E0307 01:05:16.744703 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:17.578561 kubelet[2648]: W0307 01:05:16.744772 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:17.590258 kubelet[2648]: E0307 01:05:16.744793 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:05:17.590258 kubelet[2648]: E0307 01:05:16.746191 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:17.590258 kubelet[2648]: W0307 01:05:16.746204 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:17.590258 kubelet[2648]: E0307 01:05:16.746220 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:05:17.590258 kubelet[2648]: E0307 01:05:16.746491 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:17.590258 kubelet[2648]: W0307 01:05:16.746501 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:17.590258 kubelet[2648]: E0307 01:05:16.746513 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:05:17.590258 kubelet[2648]: E0307 01:05:16.746857 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:17.590258 kubelet[2648]: W0307 01:05:16.746868 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:17.590258 kubelet[2648]: E0307 01:05:16.746881 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:05:17.590588 kubelet[2648]: E0307 01:05:16.750954 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:17.590588 kubelet[2648]: W0307 01:05:16.750969 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:17.590588 kubelet[2648]: E0307 01:05:16.750986 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:05:17.590588 kubelet[2648]: E0307 01:05:16.766486 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:05:17.638864 kubelet[2648]: E0307 01:05:17.633253 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:17.638864 kubelet[2648]: W0307 01:05:17.633287 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:17.638864 kubelet[2648]: E0307 01:05:17.633345 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:05:17.644990 kubelet[2648]: E0307 01:05:17.644967 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:17.654311 kubelet[2648]: W0307 01:05:17.654284 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:17.654420 kubelet[2648]: E0307 01:05:17.654397 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:05:17.655949 kubelet[2648]: E0307 01:05:17.655651 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:17.655949 kubelet[2648]: W0307 01:05:17.655773 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:17.655949 kubelet[2648]: E0307 01:05:17.655794 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:05:17.664280 kubelet[2648]: E0307 01:05:17.664258 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:17.664460 kubelet[2648]: W0307 01:05:17.664381 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:17.664460 kubelet[2648]: E0307 01:05:17.664407 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:05:17.665467 kubelet[2648]: E0307 01:05:17.665333 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:17.665467 kubelet[2648]: W0307 01:05:17.665350 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:17.665467 kubelet[2648]: E0307 01:05:17.665364 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:05:17.669351 kubelet[2648]: E0307 01:05:17.666058 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:17.669351 kubelet[2648]: W0307 01:05:17.669179 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:17.669351 kubelet[2648]: E0307 01:05:17.669198 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:05:17.678002 kubelet[2648]: E0307 01:05:17.677951 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:17.684549 kubelet[2648]: W0307 01:05:17.683542 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:17.684549 kubelet[2648]: E0307 01:05:17.683595 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:05:17.692529 kubelet[2648]: E0307 01:05:17.691550 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:17.692529 kubelet[2648]: W0307 01:05:17.691575 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:17.692529 kubelet[2648]: E0307 01:05:17.691607 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:05:17.692529 kubelet[2648]: E0307 01:05:17.692299 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:17.692529 kubelet[2648]: W0307 01:05:17.692316 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:17.692529 kubelet[2648]: E0307 01:05:17.692337 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:05:17.719050 kubelet[2648]: E0307 01:05:17.705680 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:17.719050 kubelet[2648]: W0307 01:05:17.705756 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:17.719050 kubelet[2648]: E0307 01:05:17.705799 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:05:17.719050 kubelet[2648]: E0307 01:05:17.707184 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:17.719050 kubelet[2648]: W0307 01:05:17.707201 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:17.719050 kubelet[2648]: E0307 01:05:17.707225 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:05:17.719050 kubelet[2648]: E0307 01:05:17.707883 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:17.719050 kubelet[2648]: W0307 01:05:17.707899 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:17.719050 kubelet[2648]: E0307 01:05:17.707922 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:05:17.719561 kubelet[2648]: E0307 01:05:17.719514 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:17.723330 kubelet[2648]: W0307 01:05:17.719844 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:17.723330 kubelet[2648]: E0307 01:05:17.719890 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:05:17.735871 kubelet[2648]: E0307 01:05:17.735493 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:17.735871 kubelet[2648]: W0307 01:05:17.735529 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:17.735871 kubelet[2648]: E0307 01:05:17.735560 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:05:17.751257 kubelet[2648]: E0307 01:05:17.750932 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:17.751257 kubelet[2648]: W0307 01:05:17.750975 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:17.751257 kubelet[2648]: E0307 01:05:17.751009 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:05:17.752143 kubelet[2648]: E0307 01:05:17.751843 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:17.752143 kubelet[2648]: W0307 01:05:17.751861 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:17.752143 kubelet[2648]: E0307 01:05:17.751884 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:05:17.763289 kubelet[2648]: E0307 01:05:17.761241 2648 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfb8z" podUID="3e8af9de-d045-4f61-b8c9-f94c3122f507" Mar 7 01:05:17.801896 kubelet[2648]: E0307 01:05:17.795984 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:17.801896 kubelet[2648]: W0307 01:05:17.796044 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:17.826966 kubelet[2648]: E0307 01:05:17.826821 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:05:17.834109 kubelet[2648]: E0307 01:05:17.833890 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:17.834109 kubelet[2648]: W0307 01:05:17.834006 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:17.834109 kubelet[2648]: E0307 01:05:17.834037 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:05:17.838250 kubelet[2648]: E0307 01:05:17.835972 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:17.838250 kubelet[2648]: W0307 01:05:17.835992 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:17.838250 kubelet[2648]: E0307 01:05:17.836015 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:05:17.843132 kubelet[2648]: E0307 01:05:17.839932 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:17.843132 kubelet[2648]: W0307 01:05:17.840045 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:17.843132 kubelet[2648]: E0307 01:05:17.840171 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:05:17.856392 kubelet[2648]: E0307 01:05:17.847873 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:17.856392 kubelet[2648]: W0307 01:05:17.847904 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:17.856392 kubelet[2648]: E0307 01:05:17.847929 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:05:17.856392 kubelet[2648]: E0307 01:05:17.853564 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:17.856392 kubelet[2648]: W0307 01:05:17.853587 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:17.856392 kubelet[2648]: E0307 01:05:17.853683 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:05:17.861251 kubelet[2648]: E0307 01:05:17.860299 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:17.861251 kubelet[2648]: W0307 01:05:17.860325 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:17.861251 kubelet[2648]: E0307 01:05:17.860425 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:05:17.862979 kubelet[2648]: E0307 01:05:17.862390 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:17.862979 kubelet[2648]: W0307 01:05:17.862410 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:17.862979 kubelet[2648]: E0307 01:05:17.862485 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:05:17.864590 kubelet[2648]: E0307 01:05:17.864565 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:17.865163 kubelet[2648]: W0307 01:05:17.864807 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:17.865163 kubelet[2648]: E0307 01:05:17.864840 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:05:17.877547 kubelet[2648]: E0307 01:05:17.872897 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:17.877547 kubelet[2648]: W0307 01:05:17.872943 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:17.877547 kubelet[2648]: E0307 01:05:17.872983 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:05:17.879585 kubelet[2648]: E0307 01:05:17.879550 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:17.895824 kubelet[2648]: W0307 01:05:17.881316 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:17.895824 kubelet[2648]: E0307 01:05:17.881369 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:05:17.897423 kubelet[2648]: E0307 01:05:17.897116 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:17.897423 kubelet[2648]: W0307 01:05:17.897214 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:17.897423 kubelet[2648]: E0307 01:05:17.897297 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:05:17.905045 kubelet[2648]: E0307 01:05:17.904702 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:17.905045 kubelet[2648]: W0307 01:05:17.904808 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:17.905045 kubelet[2648]: E0307 01:05:17.904849 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:05:17.914594 kubelet[2648]: E0307 01:05:17.914544 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:17.915169 kubelet[2648]: W0307 01:05:17.914865 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:17.915169 kubelet[2648]: E0307 01:05:17.914910 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:05:17.915821 kubelet[2648]: E0307 01:05:17.915471 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:05:17.915821 kubelet[2648]: W0307 01:05:17.915513 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:05:17.915821 kubelet[2648]: E0307 01:05:17.915551 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Mar 7 01:05:17.921147 kubelet[2648]: E0307 01:05:17.918511 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:17.921147 kubelet[2648]: W0307 01:05:17.918536 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:17.921147 kubelet[2648]: E0307 01:05:17.918555 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:17.931387 kubelet[2648]: E0307 01:05:17.931277 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:17.931387 kubelet[2648]: W0307 01:05:17.931327 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:17.931387 kubelet[2648]: E0307 01:05:17.931351 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:18.415809 containerd[1488]: time="2026-03-07T01:05:18.415164432Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:05:18.432617 containerd[1488]: time="2026-03-07T01:05:18.430480019Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250"
Mar 7 01:05:18.437250 containerd[1488]: time="2026-03-07T01:05:18.434958028Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:05:18.459969 containerd[1488]: time="2026-03-07T01:05:18.458625624Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:05:18.459969 containerd[1488]: time="2026-03-07T01:05:18.459769616Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 4.169152185s"
Mar 7 01:05:18.459969 containerd[1488]: time="2026-03-07T01:05:18.459811164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\""
Mar 7 01:05:18.503289 containerd[1488]: time="2026-03-07T01:05:18.502629972Z" level=info msg="CreateContainer within sandbox \"e568dd4baa3e659efc52cdcfb26dcf1f5e244b9c5948ed83893c2b1bfad845d5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Mar 7 01:05:18.567559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount216355187.mount: Deactivated successfully.
Mar 7 01:05:18.607185 containerd[1488]: time="2026-03-07T01:05:18.598524568Z" level=info msg="CreateContainer within sandbox \"e568dd4baa3e659efc52cdcfb26dcf1f5e244b9c5948ed83893c2b1bfad845d5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"009dc7b6bf03f61511b665be8c4e4e6dc6a585e11b954666372aa0278cf02c8c\""
Mar 7 01:05:18.607185 containerd[1488]: time="2026-03-07T01:05:18.604407065Z" level=info msg="StartContainer for \"009dc7b6bf03f61511b665be8c4e4e6dc6a585e11b954666372aa0278cf02c8c\""
Mar 7 01:05:18.760226 kubelet[2648]: E0307 01:05:18.759863 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:05:18.804689 kubelet[2648]: E0307 01:05:18.804416 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:18.804689 kubelet[2648]: W0307 01:05:18.804447 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:18.804689 kubelet[2648]: E0307 01:05:18.804481 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:18.808430 kubelet[2648]: E0307 01:05:18.808346 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:18.808430 kubelet[2648]: W0307 01:05:18.808396 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:18.808430 kubelet[2648]: E0307 01:05:18.808429 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:18.811930 kubelet[2648]: E0307 01:05:18.811557 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:18.811930 kubelet[2648]: W0307 01:05:18.811582 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:18.811930 kubelet[2648]: E0307 01:05:18.811605 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:18.813123 kubelet[2648]: E0307 01:05:18.813058 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:18.814305 kubelet[2648]: W0307 01:05:18.813216 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:18.814305 kubelet[2648]: E0307 01:05:18.813249 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:18.821435 kubelet[2648]: E0307 01:05:18.821361 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:18.821435 kubelet[2648]: W0307 01:05:18.821420 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:18.822014 kubelet[2648]: E0307 01:05:18.821456 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Mar 7 01:05:18.823274 kubelet[2648]: E0307 01:05:18.823249 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:18.823483 kubelet[2648]: W0307 01:05:18.823377 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:18.823483 kubelet[2648]: E0307 01:05:18.823429 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:18.831520 kubelet[2648]: E0307 01:05:18.831482 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:18.835628 kubelet[2648]: W0307 01:05:18.835578 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:18.836931 kubelet[2648]: E0307 01:05:18.836845 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:18.857212 kubelet[2648]: E0307 01:05:18.857170 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:18.857445 kubelet[2648]: W0307 01:05:18.857417 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:18.857573 kubelet[2648]: E0307 01:05:18.857550 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:18.871660 systemd[1]: Started cri-containerd-009dc7b6bf03f61511b665be8c4e4e6dc6a585e11b954666372aa0278cf02c8c.scope - libcontainer container 009dc7b6bf03f61511b665be8c4e4e6dc6a585e11b954666372aa0278cf02c8c.
Mar 7 01:05:18.885465 kubelet[2648]: E0307 01:05:18.883655 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:18.885705 kubelet[2648]: W0307 01:05:18.885673 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:18.885983 kubelet[2648]: E0307 01:05:18.885906 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:18.908302 kubelet[2648]: E0307 01:05:18.908227 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:18.908564 kubelet[2648]: W0307 01:05:18.908527 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:18.908693 kubelet[2648]: E0307 01:05:18.908670 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:18.934781 kubelet[2648]: E0307 01:05:18.934386 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:18.950448 kubelet[2648]: W0307 01:05:18.949248 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:18.950448 kubelet[2648]: E0307 01:05:18.949322 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:18.968258 kubelet[2648]: E0307 01:05:18.961707 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:18.968258 kubelet[2648]: W0307 01:05:18.961817 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:18.968258 kubelet[2648]: E0307 01:05:18.961847 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:18.969201 kubelet[2648]: E0307 01:05:18.969170 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:18.969357 kubelet[2648]: W0307 01:05:18.969331 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:18.969465 kubelet[2648]: E0307 01:05:18.969445 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:18.977550 kubelet[2648]: E0307 01:05:18.977514 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:18.977990 kubelet[2648]: W0307 01:05:18.977703 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:18.977990 kubelet[2648]: E0307 01:05:18.977803 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:18.980386 kubelet[2648]: E0307 01:05:18.980365 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:18.980794 kubelet[2648]: W0307 01:05:18.980489 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:18.980794 kubelet[2648]: E0307 01:05:18.980528 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Mar 7 01:05:18.985429 kubelet[2648]: E0307 01:05:18.985395 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:18.985691 kubelet[2648]: W0307 01:05:18.985574 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:18.985691 kubelet[2648]: E0307 01:05:18.985613 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:18.998636 kubelet[2648]: E0307 01:05:18.998396 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:18.998636 kubelet[2648]: W0307 01:05:18.998425 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:18.998636 kubelet[2648]: E0307 01:05:18.998460 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:19.000806 kubelet[2648]: E0307 01:05:19.000515 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:19.000806 kubelet[2648]: W0307 01:05:19.000542 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:19.000806 kubelet[2648]: E0307 01:05:19.000570 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:19.011011 kubelet[2648]: E0307 01:05:19.010542 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:19.011011 kubelet[2648]: W0307 01:05:19.010591 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:19.011011 kubelet[2648]: E0307 01:05:19.010627 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:19.019760 kubelet[2648]: E0307 01:05:19.019415 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:19.019760 kubelet[2648]: W0307 01:05:19.019450 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:19.019760 kubelet[2648]: E0307 01:05:19.019487 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:19.027432 kubelet[2648]: E0307 01:05:19.027391 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:19.027612 kubelet[2648]: W0307 01:05:19.027586 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:19.027823 kubelet[2648]: E0307 01:05:19.027799 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:19.036808 kubelet[2648]: E0307 01:05:19.036442 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:19.036808 kubelet[2648]: W0307 01:05:19.036477 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:19.036808 kubelet[2648]: E0307 01:05:19.036508 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:19.042906 kubelet[2648]: E0307 01:05:19.039489 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:19.042906 kubelet[2648]: W0307 01:05:19.039514 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:19.042906 kubelet[2648]: E0307 01:05:19.039543 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:19.043968 kubelet[2648]: E0307 01:05:19.043656 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:19.044148 kubelet[2648]: W0307 01:05:19.044114 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:19.044296 kubelet[2648]: E0307 01:05:19.044269 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:19.052790 kubelet[2648]: E0307 01:05:19.051263 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:05:19.052790 kubelet[2648]: W0307 01:05:19.051295 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:05:19.052790 kubelet[2648]: E0307 01:05:19.051331 2648 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:05:19.149190 containerd[1488]: time="2026-03-07T01:05:19.148209695Z" level=info msg="StartContainer for \"009dc7b6bf03f61511b665be8c4e4e6dc6a585e11b954666372aa0278cf02c8c\" returns successfully"
Mar 7 01:05:19.219508 systemd[1]: cri-containerd-009dc7b6bf03f61511b665be8c4e4e6dc6a585e11b954666372aa0278cf02c8c.scope: Deactivated successfully.
Mar 7 01:05:19.430172 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-009dc7b6bf03f61511b665be8c4e4e6dc6a585e11b954666372aa0278cf02c8c-rootfs.mount: Deactivated successfully.
Mar 7 01:05:19.756480 kubelet[2648]: E0307 01:05:19.754261 2648 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfb8z" podUID="3e8af9de-d045-4f61-b8c9-f94c3122f507"
Mar 7 01:05:19.829346 containerd[1488]: time="2026-03-07T01:05:19.827891428Z" level=info msg="shim disconnected" id=009dc7b6bf03f61511b665be8c4e4e6dc6a585e11b954666372aa0278cf02c8c namespace=k8s.io
Mar 7 01:05:19.829346 containerd[1488]: time="2026-03-07T01:05:19.828001714Z" level=warning msg="cleaning up after shim disconnected" id=009dc7b6bf03f61511b665be8c4e4e6dc6a585e11b954666372aa0278cf02c8c namespace=k8s.io
Mar 7 01:05:19.829346 containerd[1488]: time="2026-03-07T01:05:19.828046447Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:05:20.050830 kubelet[2648]: I0307 01:05:20.049417 2648 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-549658f756-klb7h" podStartSLOduration=7.999814985 podStartE2EDuration="16.049398961s" podCreationTimestamp="2026-03-07 01:05:04 +0000 UTC" firstStartedPulling="2026-03-07 01:05:06.230807071 +0000 UTC m=+62.590248200" lastFinishedPulling="2026-03-07 01:05:14.280391006 +0000 UTC m=+70.639832176" observedRunningTime="2026-03-07 01:05:15.805937196 +0000 UTC m=+72.165378367" watchObservedRunningTime="2026-03-07 01:05:20.049398961 +0000 UTC m=+76.408840141"
Mar 7 01:05:20.918833 containerd[1488]: time="2026-03-07T01:05:20.916290103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\""
Mar 7 01:05:21.771570 kubelet[2648]: E0307 01:05:21.756507 2648 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfb8z" podUID="3e8af9de-d045-4f61-b8c9-f94c3122f507"
Mar 7 01:05:24.044083 kubelet[2648]: E0307 01:05:24.041779 2648 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfb8z" podUID="3e8af9de-d045-4f61-b8c9-f94c3122f507"
Mar 7 01:05:25.844653 kubelet[2648]: E0307 01:05:25.844278 2648 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfb8z" podUID="3e8af9de-d045-4f61-b8c9-f94c3122f507"
Mar 7 01:05:27.758961 kubelet[2648]: E0307 01:05:27.758493 2648 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfb8z" podUID="3e8af9de-d045-4f61-b8c9-f94c3122f507"
Mar 7 01:05:29.786829 kubelet[2648]: E0307 01:05:29.784345 2648 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfb8z" podUID="3e8af9de-d045-4f61-b8c9-f94c3122f507"
Mar 7 01:05:31.756644 kubelet[2648]: E0307 01:05:31.755356 2648 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfb8z" podUID="3e8af9de-d045-4f61-b8c9-f94c3122f507"
Mar 7 01:05:33.758256 kubelet[2648]: E0307 01:05:33.757486 2648 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfb8z" podUID="3e8af9de-d045-4f61-b8c9-f94c3122f507"
Mar 7 01:05:35.762868 kubelet[2648]: E0307 01:05:35.759675 2648 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfb8z" podUID="3e8af9de-d045-4f61-b8c9-f94c3122f507"
Mar 7 01:05:37.764883 kubelet[2648]: E0307 01:05:37.757457 2648 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfb8z" podUID="3e8af9de-d045-4f61-b8c9-f94c3122f507"
Mar 7 01:05:39.754957 kubelet[2648]: E0307 01:05:39.754315 2648 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfb8z" podUID="3e8af9de-d045-4f61-b8c9-f94c3122f507"
Mar 7 01:05:41.770549 kubelet[2648]: E0307 01:05:41.766907 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:05:41.770549 kubelet[2648]: E0307 01:05:41.769979 2648 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfb8z" podUID="3e8af9de-d045-4f61-b8c9-f94c3122f507"
Mar 7 01:05:43.765361 kubelet[2648]: E0307 01:05:43.754140 2648 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfb8z" podUID="3e8af9de-d045-4f61-b8c9-f94c3122f507"
Mar 7 01:05:45.763044 kubelet[2648]: E0307 01:05:45.758571 2648 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfb8z" podUID="3e8af9de-d045-4f61-b8c9-f94c3122f507"
Mar 7 01:05:46.793347 kubelet[2648]: E0307 01:05:46.789780 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:05:47.757988 kubelet[2648]: E0307 01:05:47.754078 2648 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfb8z" podUID="3e8af9de-d045-4f61-b8c9-f94c3122f507"
Mar 7 01:05:49.763637 kubelet[2648]: E0307 01:05:49.759597 2648 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfb8z" podUID="3e8af9de-d045-4f61-b8c9-f94c3122f507"
Mar 7 01:05:51.280035 kubelet[2648]: E0307 01:05:51.277932 2648 pod_workers.go:1324] "Error syncing
pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfb8z" podUID="3e8af9de-d045-4f61-b8c9-f94c3122f507"
Mar 7 01:05:52.768441 kubelet[2648]: E0307 01:05:52.765536 2648 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfb8z" podUID="3e8af9de-d045-4f61-b8c9-f94c3122f507"
Mar 7 01:05:54.758349 kubelet[2648]: E0307 01:05:54.758278 2648 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfb8z" podUID="3e8af9de-d045-4f61-b8c9-f94c3122f507"
Mar 7 01:05:56.899032 kubelet[2648]: E0307 01:05:56.888673 2648 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfb8z" podUID="3e8af9de-d045-4f61-b8c9-f94c3122f507"
Mar 7 01:05:58.756911 kubelet[2648]: E0307 01:05:58.756590 2648 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfb8z" podUID="3e8af9de-d045-4f61-b8c9-f94c3122f507"
Mar 7 01:06:00.775552 kubelet[2648]: E0307 01:06:00.759639 2648 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfb8z" podUID="3e8af9de-d045-4f61-b8c9-f94c3122f507"
Mar 7 01:06:02.794170 kubelet[2648]: E0307 01:06:02.793403 2648 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfb8z" podUID="3e8af9de-d045-4f61-b8c9-f94c3122f507"
Mar 7 01:06:04.361036 kubelet[2648]: E0307 01:06:04.359531 2648 kubelet_node_status.go:386] "Node not becoming ready in time after startup"
Mar 7 01:06:04.778269 kubelet[2648]: E0307 01:06:04.778036 2648 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfb8z" podUID="3e8af9de-d045-4f61-b8c9-f94c3122f507"
Mar 7 01:06:06.366085 kubelet[2648]: E0307 01:06:06.365920 2648 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 7 01:06:06.769958 kubelet[2648]: E0307 01:06:06.766364 2648 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfb8z" podUID="3e8af9de-d045-4f61-b8c9-f94c3122f507"
Mar 7 01:06:07.964370 kubelet[2648]: E0307 01:06:07.962141 2648 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfb8z" podUID="3e8af9de-d045-4f61-b8c9-f94c3122f507"
Mar 7 01:06:09.848514 kubelet[2648]: E0307 01:06:09.841687 2648 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfb8z" podUID="3e8af9de-d045-4f61-b8c9-f94c3122f507"
Mar 7 01:06:11.649935 kubelet[2648]: E0307 01:06:11.649876 2648 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 7 01:06:11.756894 kubelet[2648]: E0307 01:06:11.754857 2648 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfb8z" podUID="3e8af9de-d045-4f61-b8c9-f94c3122f507"
Mar 7 01:06:11.803076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount809433812.mount: Deactivated successfully.
Mar 7 01:06:12.115523 containerd[1488]: time="2026-03-07T01:06:12.114825002Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:06:12.126652 containerd[1488]: time="2026-03-07T01:06:12.125461807Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564"
Mar 7 01:06:12.133385 containerd[1488]: time="2026-03-07T01:06:12.131203805Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:06:12.141362 containerd[1488]: time="2026-03-07T01:06:12.139441563Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:06:12.152358 containerd[1488]: time="2026-03-07T01:06:12.150617517Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 51.234262393s"
Mar 7 01:06:12.152358 containerd[1488]: time="2026-03-07T01:06:12.150702837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\""
Mar 7 01:06:12.196923 containerd[1488]: time="2026-03-07T01:06:12.194417628Z" level=info msg="CreateContainer within sandbox \"e568dd4baa3e659efc52cdcfb26dcf1f5e244b9c5948ed83893c2b1bfad845d5\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Mar 7 01:06:12.381488 containerd[1488]: time="2026-03-07T01:06:12.381156593Z" level=info msg="CreateContainer within sandbox \"e568dd4baa3e659efc52cdcfb26dcf1f5e244b9c5948ed83893c2b1bfad845d5\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"c2de4e11e12304599645af4f417a36aacbf461c91c9d634795f39735c7ff8a35\""
Mar 7 01:06:12.394008 containerd[1488]: time="2026-03-07T01:06:12.389378326Z" level=info msg="StartContainer for \"c2de4e11e12304599645af4f417a36aacbf461c91c9d634795f39735c7ff8a35\""
Mar 7 01:06:12.789422 systemd[1]: Started cri-containerd-c2de4e11e12304599645af4f417a36aacbf461c91c9d634795f39735c7ff8a35.scope - libcontainer container c2de4e11e12304599645af4f417a36aacbf461c91c9d634795f39735c7ff8a35.
Mar 7 01:06:13.154694 systemd[1]: cri-containerd-c2de4e11e12304599645af4f417a36aacbf461c91c9d634795f39735c7ff8a35.scope: Deactivated successfully.
Mar 7 01:06:13.218475 containerd[1488]: time="2026-03-07T01:06:13.218357472Z" level=info msg="StartContainer for \"c2de4e11e12304599645af4f417a36aacbf461c91c9d634795f39735c7ff8a35\" returns successfully"
Mar 7 01:06:13.356560 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c2de4e11e12304599645af4f417a36aacbf461c91c9d634795f39735c7ff8a35-rootfs.mount: Deactivated successfully.
Mar 7 01:06:13.404363 containerd[1488]: time="2026-03-07T01:06:13.403819951Z" level=info msg="shim disconnected" id=c2de4e11e12304599645af4f417a36aacbf461c91c9d634795f39735c7ff8a35 namespace=k8s.io
Mar 7 01:06:13.404363 containerd[1488]: time="2026-03-07T01:06:13.404008062Z" level=warning msg="cleaning up after shim disconnected" id=c2de4e11e12304599645af4f417a36aacbf461c91c9d634795f39735c7ff8a35 namespace=k8s.io
Mar 7 01:06:13.404363 containerd[1488]: time="2026-03-07T01:06:13.404032889Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:06:13.756011 kubelet[2648]: E0307 01:06:13.754078 2648 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfb8z" podUID="3e8af9de-d045-4f61-b8c9-f94c3122f507"
Mar 7 01:06:14.364346 containerd[1488]: time="2026-03-07T01:06:14.363432153Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\""
Mar 7 01:06:15.754544 kubelet[2648]: E0307 01:06:15.754212 2648 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfb8z" podUID="3e8af9de-d045-4f61-b8c9-f94c3122f507"
Mar 7 01:06:16.674099 kubelet[2648]: E0307 01:06:16.674028 2648 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 7 01:06:16.773460 kubelet[2648]: E0307 01:06:16.764460 2648 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfb8z" podUID="3e8af9de-d045-4f61-b8c9-f94c3122f507"
Mar 7 01:06:18.761442 kubelet[2648]: E0307 01:06:18.758623 2648 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfb8z" podUID="3e8af9de-d045-4f61-b8c9-f94c3122f507"
Mar 7 01:06:20.763632 kubelet[2648]: E0307 01:06:20.761871 2648 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfb8z" podUID="3e8af9de-d045-4f61-b8c9-f94c3122f507"
Mar 7 01:06:21.687532 kubelet[2648]: E0307 01:06:21.684601 2648 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 7 01:06:22.765780 kubelet[2648]: E0307 01:06:22.755067 2648 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfb8z" podUID="3e8af9de-d045-4f61-b8c9-f94c3122f507"
Mar 7 01:06:22.765780 kubelet[2648]: E0307 01:06:22.756210 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:06:24.759013 kubelet[2648]: E0307 01:06:24.758596 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:06:24.759013 kubelet[2648]: E0307 01:06:24.779385 2648 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfb8z" podUID="3e8af9de-d045-4f61-b8c9-f94c3122f507"
Mar 7 01:06:26.724829 kubelet[2648]: E0307 01:06:26.721128 2648 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 7 01:06:26.762109 kubelet[2648]: E0307 01:06:26.761598 2648 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfb8z" podUID="3e8af9de-d045-4f61-b8c9-f94c3122f507"
Mar 7 01:06:27.576545 containerd[1488]: time="2026-03-07T01:06:27.576435646Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:06:27.588052 containerd[1488]: time="2026-03-07T01:06:27.587564296Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671"
Mar 7 01:06:27.598744 containerd[1488]: time="2026-03-07T01:06:27.598250000Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:06:27.620167 containerd[1488]: time="2026-03-07T01:06:27.619946374Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:06:27.625921 containerd[1488]: time="2026-03-07T01:06:27.625258444Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 13.261767801s"
Mar 7 01:06:27.625921 containerd[1488]: time="2026-03-07T01:06:27.625367088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\""
Mar 7 01:06:27.702418 containerd[1488]: time="2026-03-07T01:06:27.701905086Z" level=info msg="CreateContainer within sandbox \"e568dd4baa3e659efc52cdcfb26dcf1f5e244b9c5948ed83893c2b1bfad845d5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Mar 7 01:06:27.827897 containerd[1488]: time="2026-03-07T01:06:27.824192558Z" level=info msg="CreateContainer within sandbox \"e568dd4baa3e659efc52cdcfb26dcf1f5e244b9c5948ed83893c2b1bfad845d5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d909be7a7b6b0af773e33ed2dc5755e1f9a81f4fcfd2ee6b1fc43ecd027d6655\""
Mar 7 01:06:27.830323 containerd[1488]: time="2026-03-07T01:06:27.830223030Z" level=info msg="StartContainer for \"d909be7a7b6b0af773e33ed2dc5755e1f9a81f4fcfd2ee6b1fc43ecd027d6655\""
Mar 7 01:06:28.105611 systemd[1]: Started cri-containerd-d909be7a7b6b0af773e33ed2dc5755e1f9a81f4fcfd2ee6b1fc43ecd027d6655.scope - libcontainer container d909be7a7b6b0af773e33ed2dc5755e1f9a81f4fcfd2ee6b1fc43ecd027d6655.
Mar 7 01:06:28.335022 containerd[1488]: time="2026-03-07T01:06:28.334799133Z" level=info msg="StartContainer for \"d909be7a7b6b0af773e33ed2dc5755e1f9a81f4fcfd2ee6b1fc43ecd027d6655\" returns successfully"
Mar 7 01:06:28.756458 kubelet[2648]: E0307 01:06:28.755626 2648 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfb8z" podUID="3e8af9de-d045-4f61-b8c9-f94c3122f507"
Mar 7 01:06:30.778477 kubelet[2648]: E0307 01:06:30.778215 2648 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfb8z" podUID="3e8af9de-d045-4f61-b8c9-f94c3122f507"
Mar 7 01:06:31.740446 kubelet[2648]: E0307 01:06:31.736690 2648 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 7 01:06:32.830892 systemd[1]: cri-containerd-d909be7a7b6b0af773e33ed2dc5755e1f9a81f4fcfd2ee6b1fc43ecd027d6655.scope: Deactivated successfully.
Mar 7 01:06:32.837976 systemd[1]: cri-containerd-d909be7a7b6b0af773e33ed2dc5755e1f9a81f4fcfd2ee6b1fc43ecd027d6655.scope: Consumed 1.896s CPU time.
Mar 7 01:06:32.851631 kubelet[2648]: E0307 01:06:32.842346 2648 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfb8z" podUID="3e8af9de-d045-4f61-b8c9-f94c3122f507"
Mar 7 01:06:33.053814 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d909be7a7b6b0af773e33ed2dc5755e1f9a81f4fcfd2ee6b1fc43ecd027d6655-rootfs.mount: Deactivated successfully.
Mar 7 01:06:33.090079 containerd[1488]: time="2026-03-07T01:06:33.089219949Z" level=info msg="shim disconnected" id=d909be7a7b6b0af773e33ed2dc5755e1f9a81f4fcfd2ee6b1fc43ecd027d6655 namespace=k8s.io
Mar 7 01:06:33.090079 containerd[1488]: time="2026-03-07T01:06:33.089344211Z" level=warning msg="cleaning up after shim disconnected" id=d909be7a7b6b0af773e33ed2dc5755e1f9a81f4fcfd2ee6b1fc43ecd027d6655 namespace=k8s.io
Mar 7 01:06:33.090079 containerd[1488]: time="2026-03-07T01:06:33.089364929Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:06:34.384418 containerd[1488]: time="2026-03-07T01:06:34.384050602Z" level=info msg="CreateContainer within sandbox \"e568dd4baa3e659efc52cdcfb26dcf1f5e244b9c5948ed83893c2b1bfad845d5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Mar 7 01:06:34.656395 containerd[1488]: time="2026-03-07T01:06:34.652825600Z" level=info msg="CreateContainer within sandbox \"e568dd4baa3e659efc52cdcfb26dcf1f5e244b9c5948ed83893c2b1bfad845d5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"23a5bccb39efd246970799aee1a0248df1bdbf62c7f1d22bb61e46bff8fadc5c\""
Mar 7 01:06:34.656395 containerd[1488]: time="2026-03-07T01:06:34.654173498Z" level=info msg="StartContainer for \"23a5bccb39efd246970799aee1a0248df1bdbf62c7f1d22bb61e46bff8fadc5c\""
Mar 7 01:06:34.782372 kubelet[2648]: E0307 01:06:34.781109 2648 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfb8z" podUID="3e8af9de-d045-4f61-b8c9-f94c3122f507"
Mar 7 01:06:34.813161 systemd[1]: Started cri-containerd-23a5bccb39efd246970799aee1a0248df1bdbf62c7f1d22bb61e46bff8fadc5c.scope - libcontainer container 23a5bccb39efd246970799aee1a0248df1bdbf62c7f1d22bb61e46bff8fadc5c.
Mar 7 01:06:35.012238 containerd[1488]: time="2026-03-07T01:06:35.008029085Z" level=info msg="StartContainer for \"23a5bccb39efd246970799aee1a0248df1bdbf62c7f1d22bb61e46bff8fadc5c\" returns successfully"
Mar 7 01:06:35.386599 kubelet[2648]: I0307 01:06:35.384593 2648 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-4vgcl" podStartSLOduration=2.486028854 podStartE2EDuration="1m30.384569733s" podCreationTimestamp="2026-03-07 01:05:05 +0000 UTC" firstStartedPulling="2026-03-07 01:05:06.321778884 +0000 UTC m=+62.681220014" lastFinishedPulling="2026-03-07 01:06:34.220319764 +0000 UTC m=+150.579760893" observedRunningTime="2026-03-07 01:06:35.363537232 +0000 UTC m=+151.722978372" watchObservedRunningTime="2026-03-07 01:06:35.384569733 +0000 UTC m=+151.744010863"
Mar 7 01:06:35.495180 systemd[1]: run-containerd-runc-k8s.io-23a5bccb39efd246970799aee1a0248df1bdbf62c7f1d22bb61e46bff8fadc5c-runc.yLeQGF.mount: Deactivated successfully.
Mar 7 01:06:36.819270 systemd[1]: Created slice kubepods-besteffort-pod3e8af9de_d045_4f61_b8c9_f94c3122f507.slice - libcontainer container kubepods-besteffort-pod3e8af9de_d045_4f61_b8c9_f94c3122f507.slice.
Mar 7 01:06:36.876677 containerd[1488]: time="2026-03-07T01:06:36.876607627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gfb8z,Uid:3e8af9de-d045-4f61-b8c9-f94c3122f507,Namespace:calico-system,Attempt:0,}"
Mar 7 01:06:38.566579 systemd-networkd[1412]: calid5f484f8cf5: Link UP
Mar 7 01:06:38.567078 systemd-networkd[1412]: calid5f484f8cf5: Gained carrier
Mar 7 01:06:38.720085 containerd[1488]: 2026-03-07 01:06:37.530 [ERROR][3712] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Mar 7 01:06:38.720085 containerd[1488]: 2026-03-07 01:06:37.757 [INFO][3712] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--gfb8z-eth0 csi-node-driver- calico-system 3e8af9de-d045-4f61-b8c9-f94c3122f507 792 0 2026-03-07 01:05:05 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-gfb8z eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid5f484f8cf5 [] [] }} ContainerID="569cc9d7e4c4e125b19a4c442602f6db0c9ff14abb70691b69bce8095349a4fe" Namespace="calico-system" Pod="csi-node-driver-gfb8z" WorkloadEndpoint="localhost-k8s-csi--node--driver--gfb8z-"
Mar 7 01:06:38.720085 containerd[1488]: 2026-03-07 01:06:37.759 [INFO][3712] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="569cc9d7e4c4e125b19a4c442602f6db0c9ff14abb70691b69bce8095349a4fe" Namespace="calico-system" Pod="csi-node-driver-gfb8z" WorkloadEndpoint="localhost-k8s-csi--node--driver--gfb8z-eth0"
Mar 7 01:06:38.720085 containerd[1488]: 2026-03-07 01:06:37.961 [INFO][3731] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="569cc9d7e4c4e125b19a4c442602f6db0c9ff14abb70691b69bce8095349a4fe" HandleID="k8s-pod-network.569cc9d7e4c4e125b19a4c442602f6db0c9ff14abb70691b69bce8095349a4fe" Workload="localhost-k8s-csi--node--driver--gfb8z-eth0"
Mar 7 01:06:38.720085 containerd[1488]: 2026-03-07 01:06:38.009 [INFO][3731] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="569cc9d7e4c4e125b19a4c442602f6db0c9ff14abb70691b69bce8095349a4fe" HandleID="k8s-pod-network.569cc9d7e4c4e125b19a4c442602f6db0c9ff14abb70691b69bce8095349a4fe" Workload="localhost-k8s-csi--node--driver--gfb8z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002774b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-gfb8z", "timestamp":"2026-03-07 01:06:37.960953007 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001122c0)}
Mar 7 01:06:38.720085 containerd[1488]: 2026-03-07 01:06:38.009 [INFO][3731] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:06:38.720085 containerd[1488]: 2026-03-07 01:06:38.009 [INFO][3731] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:06:38.720085 containerd[1488]: 2026-03-07 01:06:38.009 [INFO][3731] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Mar 7 01:06:38.720085 containerd[1488]: 2026-03-07 01:06:38.018 [INFO][3731] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.569cc9d7e4c4e125b19a4c442602f6db0c9ff14abb70691b69bce8095349a4fe" host="localhost"
Mar 7 01:06:38.720085 containerd[1488]: 2026-03-07 01:06:38.051 [INFO][3731] ipam/ipam.go 409: Looking up existing affinities for host host="localhost"
Mar 7 01:06:38.720085 containerd[1488]: 2026-03-07 01:06:38.146 [INFO][3731] ipam/ipam.go 558: Ran out of existing affine blocks for host host="localhost"
Mar 7 01:06:38.720085 containerd[1488]: 2026-03-07 01:06:38.161 [INFO][3731] ipam/ipam.go 575: Tried all affine blocks. Looking for an affine block with space, or a new unclaimed block host="localhost"
Mar 7 01:06:38.720085 containerd[1488]: 2026-03-07 01:06:38.201 [INFO][3731] ipam/ipam_block_reader_writer.go 158: Found free block: 192.168.88.128/26
Mar 7 01:06:38.720085 containerd[1488]: 2026-03-07 01:06:38.206 [INFO][3731] ipam/ipam.go 588: Found unclaimed block in 44.908776ms host="localhost" subnet=192.168.88.128/26
Mar 7 01:06:38.720085 containerd[1488]: 2026-03-07 01:06:38.211 [INFO][3731] ipam/ipam_block_reader_writer.go 175: Trying to create affinity in pending state host="localhost" subnet=192.168.88.128/26
Mar 7 01:06:38.720085 containerd[1488]: 2026-03-07 01:06:38.240 [INFO][3731] ipam/ipam_block_reader_writer.go 205: Successfully created pending affinity for block host="localhost" subnet=192.168.88.128/26
Mar 7 01:06:38.720085 containerd[1488]: 2026-03-07 01:06:38.240 [INFO][3731] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Mar 7 01:06:38.720085 containerd[1488]: 2026-03-07 01:06:38.261 [INFO][3731] ipam/ipam.go 165: The referenced block doesn't exist, trying to create it cidr=192.168.88.128/26 host="localhost"
Mar 7 01:06:38.720085 containerd[1488]: 2026-03-07 01:06:38.314 [INFO][3731] ipam/ipam.go 172: Wrote affinity as pending cidr=192.168.88.128/26 host="localhost"
Mar 7 01:06:38.720085 containerd[1488]: 2026-03-07 01:06:38.324 [INFO][3731] ipam/ipam.go 181: Attempting to claim the block cidr=192.168.88.128/26 host="localhost"
Mar 7 01:06:38.720085 containerd[1488]: 2026-03-07 01:06:38.325 [INFO][3731] ipam/ipam_block_reader_writer.go 226: Attempting to create a new block affinityType="host" host="localhost" subnet=192.168.88.128/26
Mar 7 01:06:38.720085 containerd[1488]: 2026-03-07 01:06:38.347 [INFO][3731] ipam/ipam_block_reader_writer.go 267: Successfully created block
Mar 7 01:06:38.720085 containerd[1488]: 2026-03-07 01:06:38.347 [INFO][3731] ipam/ipam_block_reader_writer.go 283: Confirming affinity host="localhost" subnet=192.168.88.128/26
Mar 7 01:06:38.720085 containerd[1488]: 2026-03-07 01:06:38.386 [INFO][3731] ipam/ipam_block_reader_writer.go 298: Successfully confirmed affinity host="localhost" subnet=192.168.88.128/26
Mar 7 01:06:38.720085 containerd[1488]: 2026-03-07 01:06:38.386 [INFO][3731] ipam/ipam.go 623: Block '192.168.88.128/26' has 64 free ips which is more than 1 ips required. host="localhost" subnet=192.168.88.128/26
Mar 7 01:06:38.720085 containerd[1488]: 2026-03-07 01:06:38.386 [INFO][3731] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.569cc9d7e4c4e125b19a4c442602f6db0c9ff14abb70691b69bce8095349a4fe" host="localhost"
Mar 7 01:06:38.720085 containerd[1488]: 2026-03-07 01:06:38.395 [INFO][3731] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.569cc9d7e4c4e125b19a4c442602f6db0c9ff14abb70691b69bce8095349a4fe
Mar 7 01:06:38.720085 containerd[1488]: 2026-03-07 01:06:38.410 [INFO][3731] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.569cc9d7e4c4e125b19a4c442602f6db0c9ff14abb70691b69bce8095349a4fe" host="localhost"
Mar 7 01:06:38.720085 containerd[1488]: 2026-03-07 01:06:38.438 [INFO][3731] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.128/26] block=192.168.88.128/26 handle="k8s-pod-network.569cc9d7e4c4e125b19a4c442602f6db0c9ff14abb70691b69bce8095349a4fe" host="localhost"
Mar 7 01:06:38.722440 containerd[1488]: 2026-03-07 01:06:38.439 [INFO][3731] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.128/26] handle="k8s-pod-network.569cc9d7e4c4e125b19a4c442602f6db0c9ff14abb70691b69bce8095349a4fe" host="localhost"
Mar 7 01:06:38.722440 containerd[1488]: 2026-03-07 01:06:38.439 [INFO][3731] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:06:38.722440 containerd[1488]: 2026-03-07 01:06:38.439 [INFO][3731] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.128/26] IPv6=[] ContainerID="569cc9d7e4c4e125b19a4c442602f6db0c9ff14abb70691b69bce8095349a4fe" HandleID="k8s-pod-network.569cc9d7e4c4e125b19a4c442602f6db0c9ff14abb70691b69bce8095349a4fe" Workload="localhost-k8s-csi--node--driver--gfb8z-eth0"
Mar 7 01:06:38.722440 containerd[1488]: 2026-03-07 01:06:38.451 [INFO][3712] cni-plugin/k8s.go 418: Populated endpoint ContainerID="569cc9d7e4c4e125b19a4c442602f6db0c9ff14abb70691b69bce8095349a4fe" Namespace="calico-system" Pod="csi-node-driver-gfb8z" WorkloadEndpoint="localhost-k8s-csi--node--driver--gfb8z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gfb8z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3e8af9de-d045-4f61-b8c9-f94c3122f507", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 5, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-gfb8z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.128/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid5f484f8cf5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 01:06:38.722440 containerd[1488]: 2026-03-07 01:06:38.452 [INFO][3712] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.128/32] ContainerID="569cc9d7e4c4e125b19a4c442602f6db0c9ff14abb70691b69bce8095349a4fe" Namespace="calico-system" Pod="csi-node-driver-gfb8z" WorkloadEndpoint="localhost-k8s-csi--node--driver--gfb8z-eth0"
Mar 7 01:06:38.722440 containerd[1488]: 2026-03-07 01:06:38.452 [INFO][3712] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid5f484f8cf5 ContainerID="569cc9d7e4c4e125b19a4c442602f6db0c9ff14abb70691b69bce8095349a4fe" Namespace="calico-system" Pod="csi-node-driver-gfb8z" WorkloadEndpoint="localhost-k8s-csi--node--driver--gfb8z-eth0"
Mar 7 01:06:38.722440 containerd[1488]: 2026-03-07 01:06:38.578 [INFO][3712] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="569cc9d7e4c4e125b19a4c442602f6db0c9ff14abb70691b69bce8095349a4fe" Namespace="calico-system" Pod="csi-node-driver-gfb8z" WorkloadEndpoint="localhost-k8s-csi--node--driver--gfb8z-eth0"
Mar 7 01:06:38.722440 containerd[1488]: 2026-03-07 01:06:38.578 [INFO][3712] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="569cc9d7e4c4e125b19a4c442602f6db0c9ff14abb70691b69bce8095349a4fe" Namespace="calico-system" Pod="csi-node-driver-gfb8z" WorkloadEndpoint="localhost-k8s-csi--node--driver--gfb8z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gfb8z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3e8af9de-d045-4f61-b8c9-f94c3122f507", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 5, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"569cc9d7e4c4e125b19a4c442602f6db0c9ff14abb70691b69bce8095349a4fe", Pod:"csi-node-driver-gfb8z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.128/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid5f484f8cf5", MAC:"b6:34:f7:cf:8f:3b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 01:06:38.722440 containerd[1488]: 2026-03-07 01:06:38.686 [INFO][3712] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="569cc9d7e4c4e125b19a4c442602f6db0c9ff14abb70691b69bce8095349a4fe" Namespace="calico-system" Pod="csi-node-driver-gfb8z" WorkloadEndpoint="localhost-k8s-csi--node--driver--gfb8z-eth0"
Mar 7 01:06:38.934858 containerd[1488]: time="2026-03-07T01:06:38.933962722Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:06:38.934858 containerd[1488]: time="2026-03-07T01:06:38.934098896Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:06:38.934858 containerd[1488]: time="2026-03-07T01:06:38.934119134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:06:38.934858 containerd[1488]: time="2026-03-07T01:06:38.934318937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:06:39.025985 systemd[1]: run-containerd-runc-k8s.io-569cc9d7e4c4e125b19a4c442602f6db0c9ff14abb70691b69bce8095349a4fe-runc.jHop7s.mount: Deactivated successfully.
Mar 7 01:06:39.053086 systemd[1]: Started cri-containerd-569cc9d7e4c4e125b19a4c442602f6db0c9ff14abb70691b69bce8095349a4fe.scope - libcontainer container 569cc9d7e4c4e125b19a4c442602f6db0c9ff14abb70691b69bce8095349a4fe.
Mar 7 01:06:39.101098 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 7 01:06:39.182187 containerd[1488]: time="2026-03-07T01:06:39.182052854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gfb8z,Uid:3e8af9de-d045-4f61-b8c9-f94c3122f507,Namespace:calico-system,Attempt:0,} returns sandbox id \"569cc9d7e4c4e125b19a4c442602f6db0c9ff14abb70691b69bce8095349a4fe\""
Mar 7 01:06:39.193897 containerd[1488]: time="2026-03-07T01:06:39.190433360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\""
Mar 7 01:06:39.597151 systemd-networkd[1412]: calid5f484f8cf5: Gained IPv6LL
Mar 7 01:06:40.882815 systemd[1]: Created slice kubepods-besteffort-pod1006b8ab_f55e_4786_b9b3_bccfdf6a83f2.slice - libcontainer container kubepods-besteffort-pod1006b8ab_f55e_4786_b9b3_bccfdf6a83f2.slice.
Mar 7 01:06:40.999419 kubelet[2648]: I0307 01:06:40.998890 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8ww4\" (UniqueName: \"kubernetes.io/projected/50430acc-71a2-4da9-9907-c7bac0a8a3e4-kube-api-access-h8ww4\") pod \"coredns-7d764666f9-msqcb\" (UID: \"50430acc-71a2-4da9-9907-c7bac0a8a3e4\") " pod="kube-system/coredns-7d764666f9-msqcb"
Mar 7 01:06:40.999419 kubelet[2648]: I0307 01:06:40.999025 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbmll\" (UniqueName: \"kubernetes.io/projected/96fca932-ead7-4c21-85cc-a13f1ef84f2f-kube-api-access-xbmll\") pod \"calico-apiserver-c65f4d59c-5hmjn\" (UID: \"96fca932-ead7-4c21-85cc-a13f1ef84f2f\") " pod="calico-system/calico-apiserver-c65f4d59c-5hmjn"
Mar 7 01:06:40.999419 kubelet[2648]: I0307 01:06:40.999078 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/654bfb29-b510-4d6f-8e3b-f63f1e64b596-calico-apiserver-certs\") pod \"calico-apiserver-c65f4d59c-lmhzz\" (UID: \"654bfb29-b510-4d6f-8e3b-f63f1e64b596\") " pod="calico-system/calico-apiserver-c65f4d59c-lmhzz"
Mar 7 01:06:40.999419 kubelet[2648]: I0307 01:06:40.999108 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/96fca932-ead7-4c21-85cc-a13f1ef84f2f-calico-apiserver-certs\") pod \"calico-apiserver-c65f4d59c-5hmjn\" (UID: \"96fca932-ead7-4c21-85cc-a13f1ef84f2f\") " pod="calico-system/calico-apiserver-c65f4d59c-5hmjn"
Mar 7 01:06:40.999419 kubelet[2648]: I0307 01:06:40.999147 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hbzw\" (UniqueName: \"kubernetes.io/projected/1006b8ab-f55e-4786-b9b3-bccfdf6a83f2-kube-api-access-4hbzw\") pod \"calico-kube-controllers-7fb9fb46d-4xvsw\" (UID: \"1006b8ab-f55e-4786-b9b3-bccfdf6a83f2\") " pod="calico-system/calico-kube-controllers-7fb9fb46d-4xvsw"
Mar 7 01:06:41.000264 kubelet[2648]: I0307 01:06:40.999188 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgf74\" (UniqueName: \"kubernetes.io/projected/654bfb29-b510-4d6f-8e3b-f63f1e64b596-kube-api-access-lgf74\") pod \"calico-apiserver-c65f4d59c-lmhzz\" (UID: \"654bfb29-b510-4d6f-8e3b-f63f1e64b596\") " pod="calico-system/calico-apiserver-c65f4d59c-lmhzz"
Mar 7 01:06:41.000264 kubelet[2648]: I0307 01:06:40.999214 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1006b8ab-f55e-4786-b9b3-bccfdf6a83f2-tigera-ca-bundle\") pod \"calico-kube-controllers-7fb9fb46d-4xvsw\" (UID: \"1006b8ab-f55e-4786-b9b3-bccfdf6a83f2\") " pod="calico-system/calico-kube-controllers-7fb9fb46d-4xvsw"
Mar 7 01:06:41.000264 kubelet[2648]: I0307 01:06:40.999237 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50430acc-71a2-4da9-9907-c7bac0a8a3e4-config-volume\") pod \"coredns-7d764666f9-msqcb\" (UID: \"50430acc-71a2-4da9-9907-c7bac0a8a3e4\") " pod="kube-system/coredns-7d764666f9-msqcb"
Mar 7 01:06:41.016220 systemd[1]: Created slice kubepods-besteffort-pod654bfb29_b510_4d6f_8e3b_f63f1e64b596.slice - libcontainer container kubepods-besteffort-pod654bfb29_b510_4d6f_8e3b_f63f1e64b596.slice.
Mar 7 01:06:41.092843 systemd[1]: Created slice kubepods-burstable-pod50430acc_71a2_4da9_9907_c7bac0a8a3e4.slice - libcontainer container kubepods-burstable-pod50430acc_71a2_4da9_9907_c7bac0a8a3e4.slice.
Mar 7 01:06:41.113182 kubelet[2648]: I0307 01:06:41.113022 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6105a484-0b75-4a75-94f2-b23cf024997c-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-sjd5q\" (UID: \"6105a484-0b75-4a75-94f2-b23cf024997c\") " pod="calico-system/goldmane-9f7667bb8-sjd5q" Mar 7 01:06:41.116988 kubelet[2648]: I0307 01:06:41.116955 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/6105a484-0b75-4a75-94f2-b23cf024997c-goldmane-key-pair\") pod \"goldmane-9f7667bb8-sjd5q\" (UID: \"6105a484-0b75-4a75-94f2-b23cf024997c\") " pod="calico-system/goldmane-9f7667bb8-sjd5q" Mar 7 01:06:41.117198 kubelet[2648]: I0307 01:06:41.117176 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8hcn\" (UniqueName: \"kubernetes.io/projected/6105a484-0b75-4a75-94f2-b23cf024997c-kube-api-access-h8hcn\") pod \"goldmane-9f7667bb8-sjd5q\" (UID: \"6105a484-0b75-4a75-94f2-b23cf024997c\") " pod="calico-system/goldmane-9f7667bb8-sjd5q" Mar 7 01:06:41.117357 kubelet[2648]: I0307 01:06:41.117337 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/33181e76-a4a7-4c67-ac8b-561804e85a35-whisker-backend-key-pair\") pod \"whisker-7764966df5-bspf4\" (UID: \"33181e76-a4a7-4c67-ac8b-561804e85a35\") " pod="calico-system/whisker-7764966df5-bspf4" Mar 7 01:06:41.118895 kubelet[2648]: I0307 01:06:41.117518 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8p98\" (UniqueName: \"kubernetes.io/projected/8a88a6aa-d46d-4b55-b537-0a47ef29126b-kube-api-access-w8p98\") pod \"coredns-7d764666f9-wczgt\" (UID: 
\"8a88a6aa-d46d-4b55-b537-0a47ef29126b\") " pod="kube-system/coredns-7d764666f9-wczgt" Mar 7 01:06:41.118895 kubelet[2648]: I0307 01:06:41.117547 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/33181e76-a4a7-4c67-ac8b-561804e85a35-nginx-config\") pod \"whisker-7764966df5-bspf4\" (UID: \"33181e76-a4a7-4c67-ac8b-561804e85a35\") " pod="calico-system/whisker-7764966df5-bspf4" Mar 7 01:06:41.118895 kubelet[2648]: I0307 01:06:41.117569 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33181e76-a4a7-4c67-ac8b-561804e85a35-whisker-ca-bundle\") pod \"whisker-7764966df5-bspf4\" (UID: \"33181e76-a4a7-4c67-ac8b-561804e85a35\") " pod="calico-system/whisker-7764966df5-bspf4" Mar 7 01:06:41.118895 kubelet[2648]: I0307 01:06:41.117636 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a88a6aa-d46d-4b55-b537-0a47ef29126b-config-volume\") pod \"coredns-7d764666f9-wczgt\" (UID: \"8a88a6aa-d46d-4b55-b537-0a47ef29126b\") " pod="kube-system/coredns-7d764666f9-wczgt" Mar 7 01:06:41.118895 kubelet[2648]: I0307 01:06:41.117659 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srfnz\" (UniqueName: \"kubernetes.io/projected/33181e76-a4a7-4c67-ac8b-561804e85a35-kube-api-access-srfnz\") pod \"whisker-7764966df5-bspf4\" (UID: \"33181e76-a4a7-4c67-ac8b-561804e85a35\") " pod="calico-system/whisker-7764966df5-bspf4" Mar 7 01:06:41.119133 kubelet[2648]: I0307 01:06:41.117683 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6105a484-0b75-4a75-94f2-b23cf024997c-config\") pod \"goldmane-9f7667bb8-sjd5q\" (UID: 
\"6105a484-0b75-4a75-94f2-b23cf024997c\") " pod="calico-system/goldmane-9f7667bb8-sjd5q" Mar 7 01:06:41.404538 systemd[1]: Created slice kubepods-besteffort-pod96fca932_ead7_4c21_85cc_a13f1ef84f2f.slice - libcontainer container kubepods-besteffort-pod96fca932_ead7_4c21_85cc_a13f1ef84f2f.slice. Mar 7 01:06:41.628172 containerd[1488]: time="2026-03-07T01:06:41.628114962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fb9fb46d-4xvsw,Uid:1006b8ab-f55e-4786-b9b3-bccfdf6a83f2,Namespace:calico-system,Attempt:0,}" Mar 7 01:06:41.684008 systemd[1]: Created slice kubepods-besteffort-pod33181e76_a4a7_4c67_ac8b_561804e85a35.slice - libcontainer container kubepods-besteffort-pod33181e76_a4a7_4c67_ac8b_561804e85a35.slice. Mar 7 01:06:41.755160 containerd[1488]: time="2026-03-07T01:06:41.752978638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c65f4d59c-lmhzz,Uid:654bfb29-b510-4d6f-8e3b-f63f1e64b596,Namespace:calico-system,Attempt:0,}" Mar 7 01:06:41.755509 systemd[1]: Created slice kubepods-burstable-pod8a88a6aa_d46d_4b55_b537_0a47ef29126b.slice - libcontainer container kubepods-burstable-pod8a88a6aa_d46d_4b55_b537_0a47ef29126b.slice. Mar 7 01:06:41.804587 systemd[1]: Created slice kubepods-besteffort-pod6105a484_0b75_4a75_94f2_b23cf024997c.slice - libcontainer container kubepods-besteffort-pod6105a484_0b75_4a75_94f2_b23cf024997c.slice. 
Mar 7 01:06:41.877678 kubelet[2648]: E0307 01:06:41.857070 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:06:41.880087 containerd[1488]: time="2026-03-07T01:06:41.879881725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-msqcb,Uid:50430acc-71a2-4da9-9907-c7bac0a8a3e4,Namespace:kube-system,Attempt:0,}" Mar 7 01:06:41.906508 containerd[1488]: time="2026-03-07T01:06:41.906320378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c65f4d59c-5hmjn,Uid:96fca932-ead7-4c21-85cc-a13f1ef84f2f,Namespace:calico-system,Attempt:0,}" Mar 7 01:06:42.007229 containerd[1488]: time="2026-03-07T01:06:41.995632623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-sjd5q,Uid:6105a484-0b75-4a75-94f2-b23cf024997c,Namespace:calico-system,Attempt:0,}" Mar 7 01:06:42.056404 containerd[1488]: time="2026-03-07T01:06:42.055872354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7764966df5-bspf4,Uid:33181e76-a4a7-4c67-ac8b-561804e85a35,Namespace:calico-system,Attempt:0,}" Mar 7 01:06:42.106330 kubelet[2648]: E0307 01:06:42.101527 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:06:42.121171 containerd[1488]: time="2026-03-07T01:06:42.121075654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-wczgt,Uid:8a88a6aa-d46d-4b55-b537-0a47ef29126b,Namespace:kube-system,Attempt:0,}" Mar 7 01:06:42.854123 containerd[1488]: time="2026-03-07T01:06:42.854068685Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:06:42.878468 containerd[1488]: time="2026-03-07T01:06:42.877828162Z" level=info msg="stop pulling 
image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 7 01:06:42.897963 containerd[1488]: time="2026-03-07T01:06:42.897904734Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:06:43.056098 containerd[1488]: time="2026-03-07T01:06:43.055943254Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:06:43.123590 containerd[1488]: time="2026-03-07T01:06:43.114390439Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 3.923896607s" Mar 7 01:06:43.123590 containerd[1488]: time="2026-03-07T01:06:43.114461211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 7 01:06:43.187574 containerd[1488]: time="2026-03-07T01:06:43.167276079Z" level=info msg="CreateContainer within sandbox \"569cc9d7e4c4e125b19a4c442602f6db0c9ff14abb70691b69bce8095349a4fe\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 7 01:06:44.216098 containerd[1488]: time="2026-03-07T01:06:44.186806048Z" level=info msg="CreateContainer within sandbox \"569cc9d7e4c4e125b19a4c442602f6db0c9ff14abb70691b69bce8095349a4fe\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"beb54b76eee74737467c17fc04755aff7645bb5aa4ca368d33565a0eb08b441b\"" Mar 7 01:06:44.239415 containerd[1488]: time="2026-03-07T01:06:44.236873458Z" level=info msg="StartContainer for 
\"beb54b76eee74737467c17fc04755aff7645bb5aa4ca368d33565a0eb08b441b\"" Mar 7 01:06:44.815209 kubelet[2648]: E0307 01:06:44.810775 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:06:44.867648 systemd[1]: run-containerd-runc-k8s.io-beb54b76eee74737467c17fc04755aff7645bb5aa4ca368d33565a0eb08b441b-runc.GmJppS.mount: Deactivated successfully. Mar 7 01:06:44.890622 systemd[1]: Started cri-containerd-beb54b76eee74737467c17fc04755aff7645bb5aa4ca368d33565a0eb08b441b.scope - libcontainer container beb54b76eee74737467c17fc04755aff7645bb5aa4ca368d33565a0eb08b441b. Mar 7 01:06:44.952574 systemd-networkd[1412]: cali957c0323262: Link UP Mar 7 01:06:44.960059 systemd-networkd[1412]: cali957c0323262: Gained carrier Mar 7 01:06:45.097169 containerd[1488]: 2026-03-07 01:06:42.342 [ERROR][3910] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:06:45.097169 containerd[1488]: 2026-03-07 01:06:42.521 [INFO][3910] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7fb9fb46d--4xvsw-eth0 calico-kube-controllers-7fb9fb46d- calico-system 1006b8ab-f55e-4786-b9b3-bccfdf6a83f2 1127 0 2026-03-07 01:05:05 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7fb9fb46d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7fb9fb46d-4xvsw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali957c0323262 [] [] }} ContainerID="b9bab95b32004840e60b88a83b22b072df30d01f98b815451a9dd75ba5ed28a8" 
Namespace="calico-system" Pod="calico-kube-controllers-7fb9fb46d-4xvsw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fb9fb46d--4xvsw-" Mar 7 01:06:45.097169 containerd[1488]: 2026-03-07 01:06:42.521 [INFO][3910] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b9bab95b32004840e60b88a83b22b072df30d01f98b815451a9dd75ba5ed28a8" Namespace="calico-system" Pod="calico-kube-controllers-7fb9fb46d-4xvsw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fb9fb46d--4xvsw-eth0" Mar 7 01:06:45.097169 containerd[1488]: 2026-03-07 01:06:43.844 [INFO][3980] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b9bab95b32004840e60b88a83b22b072df30d01f98b815451a9dd75ba5ed28a8" HandleID="k8s-pod-network.b9bab95b32004840e60b88a83b22b072df30d01f98b815451a9dd75ba5ed28a8" Workload="localhost-k8s-calico--kube--controllers--7fb9fb46d--4xvsw-eth0" Mar 7 01:06:45.097169 containerd[1488]: 2026-03-07 01:06:43.899 [INFO][3980] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b9bab95b32004840e60b88a83b22b072df30d01f98b815451a9dd75ba5ed28a8" HandleID="k8s-pod-network.b9bab95b32004840e60b88a83b22b072df30d01f98b815451a9dd75ba5ed28a8" Workload="localhost-k8s-calico--kube--controllers--7fb9fb46d--4xvsw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004c8800), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7fb9fb46d-4xvsw", "timestamp":"2026-03-07 01:06:43.844435386 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000fe9a0)} Mar 7 01:06:45.097169 containerd[1488]: 2026-03-07 01:06:43.899 [INFO][3980] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 7 01:06:45.097169 containerd[1488]: 2026-03-07 01:06:43.899 [INFO][3980] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:06:45.097169 containerd[1488]: 2026-03-07 01:06:43.899 [INFO][3980] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 01:06:45.097169 containerd[1488]: 2026-03-07 01:06:43.936 [INFO][3980] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b9bab95b32004840e60b88a83b22b072df30d01f98b815451a9dd75ba5ed28a8" host="localhost" Mar 7 01:06:45.097169 containerd[1488]: 2026-03-07 01:06:44.222 [INFO][3980] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 01:06:45.097169 containerd[1488]: 2026-03-07 01:06:44.327 [INFO][3980] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 01:06:45.097169 containerd[1488]: 2026-03-07 01:06:44.396 [INFO][3980] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 01:06:45.097169 containerd[1488]: 2026-03-07 01:06:44.463 [INFO][3980] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 01:06:45.097169 containerd[1488]: 2026-03-07 01:06:44.463 [INFO][3980] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b9bab95b32004840e60b88a83b22b072df30d01f98b815451a9dd75ba5ed28a8" host="localhost" Mar 7 01:06:45.097169 containerd[1488]: 2026-03-07 01:06:44.527 [INFO][3980] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b9bab95b32004840e60b88a83b22b072df30d01f98b815451a9dd75ba5ed28a8 Mar 7 01:06:45.097169 containerd[1488]: 2026-03-07 01:06:44.608 [INFO][3980] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b9bab95b32004840e60b88a83b22b072df30d01f98b815451a9dd75ba5ed28a8" host="localhost" Mar 7 01:06:45.097169 containerd[1488]: 2026-03-07 01:06:44.756 [INFO][3980] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.b9bab95b32004840e60b88a83b22b072df30d01f98b815451a9dd75ba5ed28a8" host="localhost" Mar 7 01:06:45.097169 containerd[1488]: 2026-03-07 01:06:44.756 [INFO][3980] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.b9bab95b32004840e60b88a83b22b072df30d01f98b815451a9dd75ba5ed28a8" host="localhost" Mar 7 01:06:45.097169 containerd[1488]: 2026-03-07 01:06:44.756 [INFO][3980] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:06:45.097169 containerd[1488]: 2026-03-07 01:06:44.757 [INFO][3980] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="b9bab95b32004840e60b88a83b22b072df30d01f98b815451a9dd75ba5ed28a8" HandleID="k8s-pod-network.b9bab95b32004840e60b88a83b22b072df30d01f98b815451a9dd75ba5ed28a8" Workload="localhost-k8s-calico--kube--controllers--7fb9fb46d--4xvsw-eth0" Mar 7 01:06:45.100065 containerd[1488]: 2026-03-07 01:06:44.893 [INFO][3910] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b9bab95b32004840e60b88a83b22b072df30d01f98b815451a9dd75ba5ed28a8" Namespace="calico-system" Pod="calico-kube-controllers-7fb9fb46d-4xvsw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fb9fb46d--4xvsw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7fb9fb46d--4xvsw-eth0", GenerateName:"calico-kube-controllers-7fb9fb46d-", Namespace:"calico-system", SelfLink:"", UID:"1006b8ab-f55e-4786-b9b3-bccfdf6a83f2", ResourceVersion:"1127", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 5, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fb9fb46d", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7fb9fb46d-4xvsw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali957c0323262", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:06:45.100065 containerd[1488]: 2026-03-07 01:06:44.894 [INFO][3910] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="b9bab95b32004840e60b88a83b22b072df30d01f98b815451a9dd75ba5ed28a8" Namespace="calico-system" Pod="calico-kube-controllers-7fb9fb46d-4xvsw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fb9fb46d--4xvsw-eth0" Mar 7 01:06:45.100065 containerd[1488]: 2026-03-07 01:06:44.894 [INFO][3910] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali957c0323262 ContainerID="b9bab95b32004840e60b88a83b22b072df30d01f98b815451a9dd75ba5ed28a8" Namespace="calico-system" Pod="calico-kube-controllers-7fb9fb46d-4xvsw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fb9fb46d--4xvsw-eth0" Mar 7 01:06:45.100065 containerd[1488]: 2026-03-07 01:06:44.965 [INFO][3910] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b9bab95b32004840e60b88a83b22b072df30d01f98b815451a9dd75ba5ed28a8" Namespace="calico-system" Pod="calico-kube-controllers-7fb9fb46d-4xvsw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fb9fb46d--4xvsw-eth0" Mar 7 
01:06:45.100065 containerd[1488]: 2026-03-07 01:06:44.979 [INFO][3910] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b9bab95b32004840e60b88a83b22b072df30d01f98b815451a9dd75ba5ed28a8" Namespace="calico-system" Pod="calico-kube-controllers-7fb9fb46d-4xvsw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fb9fb46d--4xvsw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7fb9fb46d--4xvsw-eth0", GenerateName:"calico-kube-controllers-7fb9fb46d-", Namespace:"calico-system", SelfLink:"", UID:"1006b8ab-f55e-4786-b9b3-bccfdf6a83f2", ResourceVersion:"1127", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 5, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fb9fb46d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b9bab95b32004840e60b88a83b22b072df30d01f98b815451a9dd75ba5ed28a8", Pod:"calico-kube-controllers-7fb9fb46d-4xvsw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali957c0323262", MAC:"46:8c:97:74:06:d6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 
01:06:45.100065 containerd[1488]: 2026-03-07 01:06:45.031 [INFO][3910] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b9bab95b32004840e60b88a83b22b072df30d01f98b815451a9dd75ba5ed28a8" Namespace="calico-system" Pod="calico-kube-controllers-7fb9fb46d-4xvsw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fb9fb46d--4xvsw-eth0" Mar 7 01:06:45.247149 systemd-networkd[1412]: calif6d0b42f0cf: Link UP Mar 7 01:06:45.253527 systemd-networkd[1412]: calif6d0b42f0cf: Gained carrier Mar 7 01:06:46.092115 containerd[1488]: time="2026-03-07T01:06:46.089986965Z" level=info msg="StartContainer for \"beb54b76eee74737467c17fc04755aff7645bb5aa4ca368d33565a0eb08b441b\" returns successfully" Mar 7 01:06:46.147768 containerd[1488]: time="2026-03-07T01:06:46.139328979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 7 01:06:46.147768 containerd[1488]: 2026-03-07 01:06:42.531 [ERROR][3924] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:06:46.147768 containerd[1488]: 2026-03-07 01:06:42.880 [INFO][3924] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--c65f4d59c--lmhzz-eth0 calico-apiserver-c65f4d59c- calico-system 654bfb29-b510-4d6f-8e3b-f63f1e64b596 1131 0 2026-03-07 01:05:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c65f4d59c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-c65f4d59c-lmhzz eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calif6d0b42f0cf [] [] }} ContainerID="e459ffc8474870699f27fbfdc2fadd3e749dcfba2e1d90cf95c7c8073fb04557" 
Namespace="calico-system" Pod="calico-apiserver-c65f4d59c-lmhzz" WorkloadEndpoint="localhost-k8s-calico--apiserver--c65f4d59c--lmhzz-" Mar 7 01:06:46.147768 containerd[1488]: 2026-03-07 01:06:42.880 [INFO][3924] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e459ffc8474870699f27fbfdc2fadd3e749dcfba2e1d90cf95c7c8073fb04557" Namespace="calico-system" Pod="calico-apiserver-c65f4d59c-lmhzz" WorkloadEndpoint="localhost-k8s-calico--apiserver--c65f4d59c--lmhzz-eth0" Mar 7 01:06:46.147768 containerd[1488]: 2026-03-07 01:06:44.341 [INFO][4030] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e459ffc8474870699f27fbfdc2fadd3e749dcfba2e1d90cf95c7c8073fb04557" HandleID="k8s-pod-network.e459ffc8474870699f27fbfdc2fadd3e749dcfba2e1d90cf95c7c8073fb04557" Workload="localhost-k8s-calico--apiserver--c65f4d59c--lmhzz-eth0" Mar 7 01:06:46.147768 containerd[1488]: 2026-03-07 01:06:44.438 [INFO][4030] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e459ffc8474870699f27fbfdc2fadd3e749dcfba2e1d90cf95c7c8073fb04557" HandleID="k8s-pod-network.e459ffc8474870699f27fbfdc2fadd3e749dcfba2e1d90cf95c7c8073fb04557" Workload="localhost-k8s-calico--apiserver--c65f4d59c--lmhzz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f1c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-c65f4d59c-lmhzz", "timestamp":"2026-03-07 01:06:44.341368018 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001994a0)} Mar 7 01:06:46.147768 containerd[1488]: 2026-03-07 01:06:44.438 [INFO][4030] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 7 01:06:46.147768 containerd[1488]: 2026-03-07 01:06:44.779 [INFO][4030] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:06:46.147768 containerd[1488]: 2026-03-07 01:06:44.779 [INFO][4030] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 01:06:46.147768 containerd[1488]: 2026-03-07 01:06:44.810 [INFO][4030] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e459ffc8474870699f27fbfdc2fadd3e749dcfba2e1d90cf95c7c8073fb04557" host="localhost" Mar 7 01:06:46.147768 containerd[1488]: 2026-03-07 01:06:44.866 [INFO][4030] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 01:06:46.147768 containerd[1488]: 2026-03-07 01:06:44.950 [INFO][4030] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 01:06:46.147768 containerd[1488]: 2026-03-07 01:06:44.981 [INFO][4030] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 01:06:46.147768 containerd[1488]: 2026-03-07 01:06:45.012 [INFO][4030] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 01:06:46.147768 containerd[1488]: 2026-03-07 01:06:45.012 [INFO][4030] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e459ffc8474870699f27fbfdc2fadd3e749dcfba2e1d90cf95c7c8073fb04557" host="localhost" Mar 7 01:06:46.147768 containerd[1488]: 2026-03-07 01:06:45.052 [INFO][4030] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e459ffc8474870699f27fbfdc2fadd3e749dcfba2e1d90cf95c7c8073fb04557 Mar 7 01:06:46.147768 containerd[1488]: 2026-03-07 01:06:45.107 [INFO][4030] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e459ffc8474870699f27fbfdc2fadd3e749dcfba2e1d90cf95c7c8073fb04557" host="localhost" Mar 7 01:06:46.147768 containerd[1488]: 2026-03-07 01:06:45.143 [INFO][4030] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.e459ffc8474870699f27fbfdc2fadd3e749dcfba2e1d90cf95c7c8073fb04557" host="localhost" Mar 7 01:06:46.147768 containerd[1488]: 2026-03-07 01:06:45.144 [INFO][4030] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.e459ffc8474870699f27fbfdc2fadd3e749dcfba2e1d90cf95c7c8073fb04557" host="localhost" Mar 7 01:06:46.147768 containerd[1488]: 2026-03-07 01:06:45.144 [INFO][4030] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:06:46.147768 containerd[1488]: 2026-03-07 01:06:45.144 [INFO][4030] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="e459ffc8474870699f27fbfdc2fadd3e749dcfba2e1d90cf95c7c8073fb04557" HandleID="k8s-pod-network.e459ffc8474870699f27fbfdc2fadd3e749dcfba2e1d90cf95c7c8073fb04557" Workload="localhost-k8s-calico--apiserver--c65f4d59c--lmhzz-eth0" Mar 7 01:06:46.155978 containerd[1488]: 2026-03-07 01:06:45.157 [INFO][3924] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e459ffc8474870699f27fbfdc2fadd3e749dcfba2e1d90cf95c7c8073fb04557" Namespace="calico-system" Pod="calico-apiserver-c65f4d59c-lmhzz" WorkloadEndpoint="localhost-k8s-calico--apiserver--c65f4d59c--lmhzz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c65f4d59c--lmhzz-eth0", GenerateName:"calico-apiserver-c65f4d59c-", Namespace:"calico-system", SelfLink:"", UID:"654bfb29-b510-4d6f-8e3b-f63f1e64b596", ResourceVersion:"1131", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 5, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c65f4d59c", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-c65f4d59c-lmhzz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif6d0b42f0cf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:06:46.155978 containerd[1488]: 2026-03-07 01:06:45.157 [INFO][3924] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="e459ffc8474870699f27fbfdc2fadd3e749dcfba2e1d90cf95c7c8073fb04557" Namespace="calico-system" Pod="calico-apiserver-c65f4d59c-lmhzz" WorkloadEndpoint="localhost-k8s-calico--apiserver--c65f4d59c--lmhzz-eth0" Mar 7 01:06:46.155978 containerd[1488]: 2026-03-07 01:06:45.157 [INFO][3924] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif6d0b42f0cf ContainerID="e459ffc8474870699f27fbfdc2fadd3e749dcfba2e1d90cf95c7c8073fb04557" Namespace="calico-system" Pod="calico-apiserver-c65f4d59c-lmhzz" WorkloadEndpoint="localhost-k8s-calico--apiserver--c65f4d59c--lmhzz-eth0" Mar 7 01:06:46.155978 containerd[1488]: 2026-03-07 01:06:45.296 [INFO][3924] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e459ffc8474870699f27fbfdc2fadd3e749dcfba2e1d90cf95c7c8073fb04557" Namespace="calico-system" Pod="calico-apiserver-c65f4d59c-lmhzz" WorkloadEndpoint="localhost-k8s-calico--apiserver--c65f4d59c--lmhzz-eth0" Mar 7 01:06:46.155978 containerd[1488]: 2026-03-07 01:06:45.323 [INFO][3924] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e459ffc8474870699f27fbfdc2fadd3e749dcfba2e1d90cf95c7c8073fb04557" Namespace="calico-system" Pod="calico-apiserver-c65f4d59c-lmhzz" WorkloadEndpoint="localhost-k8s-calico--apiserver--c65f4d59c--lmhzz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c65f4d59c--lmhzz-eth0", GenerateName:"calico-apiserver-c65f4d59c-", Namespace:"calico-system", SelfLink:"", UID:"654bfb29-b510-4d6f-8e3b-f63f1e64b596", ResourceVersion:"1131", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 5, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c65f4d59c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e459ffc8474870699f27fbfdc2fadd3e749dcfba2e1d90cf95c7c8073fb04557", Pod:"calico-apiserver-c65f4d59c-lmhzz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif6d0b42f0cf", MAC:"7a:90:28:1c:2a:f1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:06:46.155978 containerd[1488]: 2026-03-07 01:06:46.053 [INFO][3924] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="e459ffc8474870699f27fbfdc2fadd3e749dcfba2e1d90cf95c7c8073fb04557" Namespace="calico-system" Pod="calico-apiserver-c65f4d59c-lmhzz" WorkloadEndpoint="localhost-k8s-calico--apiserver--c65f4d59c--lmhzz-eth0" Mar 7 01:06:46.364925 kernel: calico-node[3898]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 7 01:06:46.422096 containerd[1488]: time="2026-03-07T01:06:46.419791095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:06:46.422096 containerd[1488]: time="2026-03-07T01:06:46.420132001Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:06:46.422096 containerd[1488]: time="2026-03-07T01:06:46.420161757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:06:46.422096 containerd[1488]: time="2026-03-07T01:06:46.421114616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:06:46.509812 containerd[1488]: time="2026-03-07T01:06:46.505109236Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:06:46.509812 containerd[1488]: time="2026-03-07T01:06:46.507222241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:06:46.509812 containerd[1488]: time="2026-03-07T01:06:46.507436000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:06:46.544078 containerd[1488]: time="2026-03-07T01:06:46.518041358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:06:46.666555 systemd-networkd[1412]: cali7436dcf88b9: Link UP Mar 7 01:06:46.677058 systemd-networkd[1412]: cali7436dcf88b9: Gained carrier Mar 7 01:06:46.842546 systemd[1]: Started cri-containerd-e459ffc8474870699f27fbfdc2fadd3e749dcfba2e1d90cf95c7c8073fb04557.scope - libcontainer container e459ffc8474870699f27fbfdc2fadd3e749dcfba2e1d90cf95c7c8073fb04557. Mar 7 01:06:46.897490 systemd-networkd[1412]: cali957c0323262: Gained IPv6LL Mar 7 01:06:47.086493 containerd[1488]: 2026-03-07 01:06:43.904 [ERROR][3996] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:06:47.086493 containerd[1488]: 2026-03-07 01:06:44.390 [INFO][3996] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7764966df5--bspf4-eth0 whisker-7764966df5- calico-system 33181e76-a4a7-4c67-ac8b-561804e85a35 1143 0 2026-03-07 01:06:37 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7764966df5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7764966df5-bspf4 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali7436dcf88b9 [] [] }} ContainerID="85745c8ad5511ef1ee25527c2e5559368ed89f0c601c17a6928346813318ed25" Namespace="calico-system" Pod="whisker-7764966df5-bspf4" WorkloadEndpoint="localhost-k8s-whisker--7764966df5--bspf4-" Mar 7 01:06:47.086493 containerd[1488]: 2026-03-07 01:06:44.390 [INFO][3996] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="85745c8ad5511ef1ee25527c2e5559368ed89f0c601c17a6928346813318ed25" Namespace="calico-system" Pod="whisker-7764966df5-bspf4" WorkloadEndpoint="localhost-k8s-whisker--7764966df5--bspf4-eth0" Mar 7 
01:06:47.086493 containerd[1488]: 2026-03-07 01:06:44.929 [INFO][4095] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="85745c8ad5511ef1ee25527c2e5559368ed89f0c601c17a6928346813318ed25" HandleID="k8s-pod-network.85745c8ad5511ef1ee25527c2e5559368ed89f0c601c17a6928346813318ed25" Workload="localhost-k8s-whisker--7764966df5--bspf4-eth0" Mar 7 01:06:47.086493 containerd[1488]: 2026-03-07 01:06:45.022 [INFO][4095] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="85745c8ad5511ef1ee25527c2e5559368ed89f0c601c17a6928346813318ed25" HandleID="k8s-pod-network.85745c8ad5511ef1ee25527c2e5559368ed89f0c601c17a6928346813318ed25" Workload="localhost-k8s-whisker--7764966df5--bspf4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00070e390), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7764966df5-bspf4", "timestamp":"2026-03-07 01:06:44.929620714 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001f0000)} Mar 7 01:06:47.086493 containerd[1488]: 2026-03-07 01:06:45.022 [INFO][4095] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:06:47.086493 containerd[1488]: 2026-03-07 01:06:45.145 [INFO][4095] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:06:47.086493 containerd[1488]: 2026-03-07 01:06:45.145 [INFO][4095] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 01:06:47.086493 containerd[1488]: 2026-03-07 01:06:45.261 [INFO][4095] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.85745c8ad5511ef1ee25527c2e5559368ed89f0c601c17a6928346813318ed25" host="localhost" Mar 7 01:06:47.086493 containerd[1488]: 2026-03-07 01:06:45.939 [INFO][4095] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 01:06:47.086493 containerd[1488]: 2026-03-07 01:06:46.141 [INFO][4095] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 01:06:47.086493 containerd[1488]: 2026-03-07 01:06:46.217 [INFO][4095] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 01:06:47.086493 containerd[1488]: 2026-03-07 01:06:46.324 [INFO][4095] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 01:06:47.086493 containerd[1488]: 2026-03-07 01:06:46.324 [INFO][4095] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.85745c8ad5511ef1ee25527c2e5559368ed89f0c601c17a6928346813318ed25" host="localhost" Mar 7 01:06:47.086493 containerd[1488]: 2026-03-07 01:06:46.396 [INFO][4095] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.85745c8ad5511ef1ee25527c2e5559368ed89f0c601c17a6928346813318ed25 Mar 7 01:06:47.086493 containerd[1488]: 2026-03-07 01:06:46.437 [INFO][4095] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.85745c8ad5511ef1ee25527c2e5559368ed89f0c601c17a6928346813318ed25" host="localhost" Mar 7 01:06:47.086493 containerd[1488]: 2026-03-07 01:06:46.601 [INFO][4095] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.85745c8ad5511ef1ee25527c2e5559368ed89f0c601c17a6928346813318ed25" host="localhost" Mar 7 01:06:47.086493 containerd[1488]: 2026-03-07 01:06:46.601 [INFO][4095] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.85745c8ad5511ef1ee25527c2e5559368ed89f0c601c17a6928346813318ed25" host="localhost" Mar 7 01:06:47.086493 containerd[1488]: 2026-03-07 01:06:46.601 [INFO][4095] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:06:47.086493 containerd[1488]: 2026-03-07 01:06:46.601 [INFO][4095] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="85745c8ad5511ef1ee25527c2e5559368ed89f0c601c17a6928346813318ed25" HandleID="k8s-pod-network.85745c8ad5511ef1ee25527c2e5559368ed89f0c601c17a6928346813318ed25" Workload="localhost-k8s-whisker--7764966df5--bspf4-eth0" Mar 7 01:06:47.121784 containerd[1488]: 2026-03-07 01:06:46.617 [INFO][3996] cni-plugin/k8s.go 418: Populated endpoint ContainerID="85745c8ad5511ef1ee25527c2e5559368ed89f0c601c17a6928346813318ed25" Namespace="calico-system" Pod="whisker-7764966df5-bspf4" WorkloadEndpoint="localhost-k8s-whisker--7764966df5--bspf4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7764966df5--bspf4-eth0", GenerateName:"whisker-7764966df5-", Namespace:"calico-system", SelfLink:"", UID:"33181e76-a4a7-4c67-ac8b-561804e85a35", ResourceVersion:"1143", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 6, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7764966df5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7764966df5-bspf4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7436dcf88b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:06:47.121784 containerd[1488]: 2026-03-07 01:06:46.618 [INFO][3996] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="85745c8ad5511ef1ee25527c2e5559368ed89f0c601c17a6928346813318ed25" Namespace="calico-system" Pod="whisker-7764966df5-bspf4" WorkloadEndpoint="localhost-k8s-whisker--7764966df5--bspf4-eth0" Mar 7 01:06:47.121784 containerd[1488]: 2026-03-07 01:06:46.618 [INFO][3996] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7436dcf88b9 ContainerID="85745c8ad5511ef1ee25527c2e5559368ed89f0c601c17a6928346813318ed25" Namespace="calico-system" Pod="whisker-7764966df5-bspf4" WorkloadEndpoint="localhost-k8s-whisker--7764966df5--bspf4-eth0" Mar 7 01:06:47.121784 containerd[1488]: 2026-03-07 01:06:46.684 [INFO][3996] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="85745c8ad5511ef1ee25527c2e5559368ed89f0c601c17a6928346813318ed25" Namespace="calico-system" Pod="whisker-7764966df5-bspf4" WorkloadEndpoint="localhost-k8s-whisker--7764966df5--bspf4-eth0" Mar 7 01:06:47.121784 containerd[1488]: 2026-03-07 01:06:46.685 [INFO][3996] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="85745c8ad5511ef1ee25527c2e5559368ed89f0c601c17a6928346813318ed25" Namespace="calico-system" Pod="whisker-7764966df5-bspf4" 
WorkloadEndpoint="localhost-k8s-whisker--7764966df5--bspf4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7764966df5--bspf4-eth0", GenerateName:"whisker-7764966df5-", Namespace:"calico-system", SelfLink:"", UID:"33181e76-a4a7-4c67-ac8b-561804e85a35", ResourceVersion:"1143", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 6, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7764966df5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"85745c8ad5511ef1ee25527c2e5559368ed89f0c601c17a6928346813318ed25", Pod:"whisker-7764966df5-bspf4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7436dcf88b9", MAC:"0a:7f:ab:84:46:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:06:47.121784 containerd[1488]: 2026-03-07 01:06:46.917 [INFO][3996] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="85745c8ad5511ef1ee25527c2e5559368ed89f0c601c17a6928346813318ed25" Namespace="calico-system" Pod="whisker-7764966df5-bspf4" WorkloadEndpoint="localhost-k8s-whisker--7764966df5--bspf4-eth0" Mar 7 01:06:47.190772 systemd-networkd[1412]: calif6d0b42f0cf: Gained IPv6LL Mar 7 01:06:47.541884 systemd[1]: Started 
cri-containerd-b9bab95b32004840e60b88a83b22b072df30d01f98b815451a9dd75ba5ed28a8.scope - libcontainer container b9bab95b32004840e60b88a83b22b072df30d01f98b815451a9dd75ba5ed28a8. Mar 7 01:06:47.623797 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 01:06:47.784484 kubelet[2648]: E0307 01:06:47.782916 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:06:47.832449 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 01:06:47.934026 systemd-networkd[1412]: cali7436dcf88b9: Gained IPv6LL Mar 7 01:06:48.013530 containerd[1488]: time="2026-03-07T01:06:48.005897803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:06:48.013530 containerd[1488]: time="2026-03-07T01:06:48.006028728Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:06:48.013530 containerd[1488]: time="2026-03-07T01:06:48.006044908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:06:48.013530 containerd[1488]: time="2026-03-07T01:06:48.006171554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:06:48.074938 systemd-networkd[1412]: calide867e38aa8: Link UP Mar 7 01:06:48.077624 systemd-networkd[1412]: calide867e38aa8: Gained carrier Mar 7 01:06:48.302254 containerd[1488]: time="2026-03-07T01:06:48.302134801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c65f4d59c-lmhzz,Uid:654bfb29-b510-4d6f-8e3b-f63f1e64b596,Namespace:calico-system,Attempt:0,} returns sandbox id \"e459ffc8474870699f27fbfdc2fadd3e749dcfba2e1d90cf95c7c8073fb04557\"" Mar 7 01:06:48.433489 containerd[1488]: 2026-03-07 01:06:43.864 [ERROR][4009] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:06:48.433489 containerd[1488]: 2026-03-07 01:06:43.926 [INFO][4009] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--9f7667bb8--sjd5q-eth0 goldmane-9f7667bb8- calico-system 6105a484-0b75-4a75-94f2-b23cf024997c 1145 0 2026-03-07 01:05:02 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-9f7667bb8-sjd5q eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calide867e38aa8 [] [] }} ContainerID="47fa83a8b92761d64502a8c925b6f7f38e795ae58bedaaef619bf3cda583a157" Namespace="calico-system" Pod="goldmane-9f7667bb8-sjd5q" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--sjd5q-" Mar 7 01:06:48.433489 containerd[1488]: 2026-03-07 01:06:43.927 [INFO][4009] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="47fa83a8b92761d64502a8c925b6f7f38e795ae58bedaaef619bf3cda583a157" Namespace="calico-system" Pod="goldmane-9f7667bb8-sjd5q" 
WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--sjd5q-eth0" Mar 7 01:06:48.433489 containerd[1488]: 2026-03-07 01:06:45.047 [INFO][4067] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="47fa83a8b92761d64502a8c925b6f7f38e795ae58bedaaef619bf3cda583a157" HandleID="k8s-pod-network.47fa83a8b92761d64502a8c925b6f7f38e795ae58bedaaef619bf3cda583a157" Workload="localhost-k8s-goldmane--9f7667bb8--sjd5q-eth0" Mar 7 01:06:48.433489 containerd[1488]: 2026-03-07 01:06:45.250 [INFO][4067] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="47fa83a8b92761d64502a8c925b6f7f38e795ae58bedaaef619bf3cda583a157" HandleID="k8s-pod-network.47fa83a8b92761d64502a8c925b6f7f38e795ae58bedaaef619bf3cda583a157" Workload="localhost-k8s-goldmane--9f7667bb8--sjd5q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003e55e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-9f7667bb8-sjd5q", "timestamp":"2026-03-07 01:06:45.047192426 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000378000)} Mar 7 01:06:48.433489 containerd[1488]: 2026-03-07 01:06:45.250 [INFO][4067] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:06:48.433489 containerd[1488]: 2026-03-07 01:06:46.830 [INFO][4067] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:06:48.433489 containerd[1488]: 2026-03-07 01:06:46.863 [INFO][4067] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 01:06:48.433489 containerd[1488]: 2026-03-07 01:06:47.237 [INFO][4067] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.47fa83a8b92761d64502a8c925b6f7f38e795ae58bedaaef619bf3cda583a157" host="localhost" Mar 7 01:06:48.433489 containerd[1488]: 2026-03-07 01:06:47.312 [INFO][4067] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 01:06:48.433489 containerd[1488]: 2026-03-07 01:06:47.502 [INFO][4067] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 01:06:48.433489 containerd[1488]: 2026-03-07 01:06:47.542 [INFO][4067] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 01:06:48.433489 containerd[1488]: 2026-03-07 01:06:47.625 [INFO][4067] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 01:06:48.433489 containerd[1488]: 2026-03-07 01:06:47.629 [INFO][4067] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.47fa83a8b92761d64502a8c925b6f7f38e795ae58bedaaef619bf3cda583a157" host="localhost" Mar 7 01:06:48.433489 containerd[1488]: 2026-03-07 01:06:47.734 [INFO][4067] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.47fa83a8b92761d64502a8c925b6f7f38e795ae58bedaaef619bf3cda583a157 Mar 7 01:06:48.433489 containerd[1488]: 2026-03-07 01:06:47.832 [INFO][4067] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.47fa83a8b92761d64502a8c925b6f7f38e795ae58bedaaef619bf3cda583a157" host="localhost" Mar 7 01:06:48.433489 containerd[1488]: 2026-03-07 01:06:47.990 [INFO][4067] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.47fa83a8b92761d64502a8c925b6f7f38e795ae58bedaaef619bf3cda583a157" host="localhost" Mar 7 01:06:48.433489 containerd[1488]: 2026-03-07 01:06:47.993 [INFO][4067] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.47fa83a8b92761d64502a8c925b6f7f38e795ae58bedaaef619bf3cda583a157" host="localhost" Mar 7 01:06:48.433489 containerd[1488]: 2026-03-07 01:06:47.996 [INFO][4067] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:06:48.433489 containerd[1488]: 2026-03-07 01:06:48.001 [INFO][4067] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="47fa83a8b92761d64502a8c925b6f7f38e795ae58bedaaef619bf3cda583a157" HandleID="k8s-pod-network.47fa83a8b92761d64502a8c925b6f7f38e795ae58bedaaef619bf3cda583a157" Workload="localhost-k8s-goldmane--9f7667bb8--sjd5q-eth0" Mar 7 01:06:48.443853 containerd[1488]: 2026-03-07 01:06:48.031 [INFO][4009] cni-plugin/k8s.go 418: Populated endpoint ContainerID="47fa83a8b92761d64502a8c925b6f7f38e795ae58bedaaef619bf3cda583a157" Namespace="calico-system" Pod="goldmane-9f7667bb8-sjd5q" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--sjd5q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--sjd5q-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"6105a484-0b75-4a75-94f2-b23cf024997c", ResourceVersion:"1145", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 5, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-9f7667bb8-sjd5q", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calide867e38aa8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:06:48.443853 containerd[1488]: 2026-03-07 01:06:48.031 [INFO][4009] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="47fa83a8b92761d64502a8c925b6f7f38e795ae58bedaaef619bf3cda583a157" Namespace="calico-system" Pod="goldmane-9f7667bb8-sjd5q" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--sjd5q-eth0" Mar 7 01:06:48.443853 containerd[1488]: 2026-03-07 01:06:48.031 [INFO][4009] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calide867e38aa8 ContainerID="47fa83a8b92761d64502a8c925b6f7f38e795ae58bedaaef619bf3cda583a157" Namespace="calico-system" Pod="goldmane-9f7667bb8-sjd5q" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--sjd5q-eth0" Mar 7 01:06:48.443853 containerd[1488]: 2026-03-07 01:06:48.099 [INFO][4009] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="47fa83a8b92761d64502a8c925b6f7f38e795ae58bedaaef619bf3cda583a157" Namespace="calico-system" Pod="goldmane-9f7667bb8-sjd5q" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--sjd5q-eth0" Mar 7 01:06:48.443853 containerd[1488]: 2026-03-07 01:06:48.106 [INFO][4009] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="47fa83a8b92761d64502a8c925b6f7f38e795ae58bedaaef619bf3cda583a157" Namespace="calico-system" Pod="goldmane-9f7667bb8-sjd5q" 
WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--sjd5q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--sjd5q-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"6105a484-0b75-4a75-94f2-b23cf024997c", ResourceVersion:"1145", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 5, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"47fa83a8b92761d64502a8c925b6f7f38e795ae58bedaaef619bf3cda583a157", Pod:"goldmane-9f7667bb8-sjd5q", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calide867e38aa8", MAC:"d2:01:18:e6:e9:e4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:06:48.443853 containerd[1488]: 2026-03-07 01:06:48.279 [INFO][4009] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="47fa83a8b92761d64502a8c925b6f7f38e795ae58bedaaef619bf3cda583a157" Namespace="calico-system" Pod="goldmane-9f7667bb8-sjd5q" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--sjd5q-eth0" Mar 7 01:06:48.543665 systemd[1]: Started cri-containerd-85745c8ad5511ef1ee25527c2e5559368ed89f0c601c17a6928346813318ed25.scope 
- libcontainer container 85745c8ad5511ef1ee25527c2e5559368ed89f0c601c17a6928346813318ed25. Mar 7 01:06:48.698587 containerd[1488]: time="2026-03-07T01:06:48.698368015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fb9fb46d-4xvsw,Uid:1006b8ab-f55e-4786-b9b3-bccfdf6a83f2,Namespace:calico-system,Attempt:0,} returns sandbox id \"b9bab95b32004840e60b88a83b22b072df30d01f98b815451a9dd75ba5ed28a8\"" Mar 7 01:06:48.793925 systemd-networkd[1412]: cali8f8ed25dd5c: Link UP Mar 7 01:06:48.796853 systemd-networkd[1412]: cali8f8ed25dd5c: Gained carrier Mar 7 01:06:48.900006 containerd[1488]: time="2026-03-07T01:06:48.892067273Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:06:48.900006 containerd[1488]: time="2026-03-07T01:06:48.892152612Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:06:48.900006 containerd[1488]: time="2026-03-07T01:06:48.892169022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:06:49.066051 containerd[1488]: time="2026-03-07T01:06:49.057934852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:06:49.152004 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 01:06:49.228434 containerd[1488]: 2026-03-07 01:06:43.429 [ERROR][3951] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:06:49.228434 containerd[1488]: 2026-03-07 01:06:43.938 [INFO][3951] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--c65f4d59c--5hmjn-eth0 calico-apiserver-c65f4d59c- calico-system 96fca932-ead7-4c21-85cc-a13f1ef84f2f 1141 0 2026-03-07 01:05:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c65f4d59c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-c65f4d59c-5hmjn eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali8f8ed25dd5c [] [] }} ContainerID="fb413bdba32d068ddb7e497021eef5d50664703da79740a6d704d21ec5ea2076" Namespace="calico-system" Pod="calico-apiserver-c65f4d59c-5hmjn" WorkloadEndpoint="localhost-k8s-calico--apiserver--c65f4d59c--5hmjn-" Mar 7 01:06:49.228434 containerd[1488]: 2026-03-07 01:06:43.938 [INFO][3951] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fb413bdba32d068ddb7e497021eef5d50664703da79740a6d704d21ec5ea2076" Namespace="calico-system" Pod="calico-apiserver-c65f4d59c-5hmjn" WorkloadEndpoint="localhost-k8s-calico--apiserver--c65f4d59c--5hmjn-eth0" Mar 7 01:06:49.228434 containerd[1488]: 2026-03-07 01:06:45.940 [INFO][4080] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="fb413bdba32d068ddb7e497021eef5d50664703da79740a6d704d21ec5ea2076" HandleID="k8s-pod-network.fb413bdba32d068ddb7e497021eef5d50664703da79740a6d704d21ec5ea2076" Workload="localhost-k8s-calico--apiserver--c65f4d59c--5hmjn-eth0" Mar 7 01:06:49.228434 containerd[1488]: 2026-03-07 01:06:46.224 [INFO][4080] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="fb413bdba32d068ddb7e497021eef5d50664703da79740a6d704d21ec5ea2076" HandleID="k8s-pod-network.fb413bdba32d068ddb7e497021eef5d50664703da79740a6d704d21ec5ea2076" Workload="localhost-k8s-calico--apiserver--c65f4d59c--5hmjn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00063cd70), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-c65f4d59c-5hmjn", "timestamp":"2026-03-07 01:06:45.940376944 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000b4c60)} Mar 7 01:06:49.228434 containerd[1488]: 2026-03-07 01:06:46.224 [INFO][4080] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:06:49.228434 containerd[1488]: 2026-03-07 01:06:47.995 [INFO][4080] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:06:49.228434 containerd[1488]: 2026-03-07 01:06:47.995 [INFO][4080] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 01:06:49.228434 containerd[1488]: 2026-03-07 01:06:48.112 [INFO][4080] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.fb413bdba32d068ddb7e497021eef5d50664703da79740a6d704d21ec5ea2076" host="localhost" Mar 7 01:06:49.228434 containerd[1488]: 2026-03-07 01:06:48.260 [INFO][4080] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 01:06:49.228434 containerd[1488]: 2026-03-07 01:06:48.495 [INFO][4080] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 01:06:49.228434 containerd[1488]: 2026-03-07 01:06:48.564 [INFO][4080] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 01:06:49.228434 containerd[1488]: 2026-03-07 01:06:48.589 [INFO][4080] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 01:06:49.228434 containerd[1488]: 2026-03-07 01:06:48.589 [INFO][4080] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fb413bdba32d068ddb7e497021eef5d50664703da79740a6d704d21ec5ea2076" host="localhost" Mar 7 01:06:49.228434 containerd[1488]: 2026-03-07 01:06:48.614 [INFO][4080] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.fb413bdba32d068ddb7e497021eef5d50664703da79740a6d704d21ec5ea2076 Mar 7 01:06:49.228434 containerd[1488]: 2026-03-07 01:06:48.634 [INFO][4080] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fb413bdba32d068ddb7e497021eef5d50664703da79740a6d704d21ec5ea2076" host="localhost" Mar 7 01:06:49.228434 containerd[1488]: 2026-03-07 01:06:48.736 [INFO][4080] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.fb413bdba32d068ddb7e497021eef5d50664703da79740a6d704d21ec5ea2076" host="localhost" Mar 7 01:06:49.228434 containerd[1488]: 2026-03-07 01:06:48.739 [INFO][4080] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.fb413bdba32d068ddb7e497021eef5d50664703da79740a6d704d21ec5ea2076" host="localhost" Mar 7 01:06:49.228434 containerd[1488]: 2026-03-07 01:06:48.739 [INFO][4080] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:06:49.228434 containerd[1488]: 2026-03-07 01:06:48.739 [INFO][4080] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="fb413bdba32d068ddb7e497021eef5d50664703da79740a6d704d21ec5ea2076" HandleID="k8s-pod-network.fb413bdba32d068ddb7e497021eef5d50664703da79740a6d704d21ec5ea2076" Workload="localhost-k8s-calico--apiserver--c65f4d59c--5hmjn-eth0" Mar 7 01:06:49.229467 containerd[1488]: 2026-03-07 01:06:48.768 [INFO][3951] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fb413bdba32d068ddb7e497021eef5d50664703da79740a6d704d21ec5ea2076" Namespace="calico-system" Pod="calico-apiserver-c65f4d59c-5hmjn" WorkloadEndpoint="localhost-k8s-calico--apiserver--c65f4d59c--5hmjn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c65f4d59c--5hmjn-eth0", GenerateName:"calico-apiserver-c65f4d59c-", Namespace:"calico-system", SelfLink:"", UID:"96fca932-ead7-4c21-85cc-a13f1ef84f2f", ResourceVersion:"1141", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 5, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c65f4d59c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-c65f4d59c-5hmjn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8f8ed25dd5c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:06:49.229467 containerd[1488]: 2026-03-07 01:06:48.769 [INFO][3951] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="fb413bdba32d068ddb7e497021eef5d50664703da79740a6d704d21ec5ea2076" Namespace="calico-system" Pod="calico-apiserver-c65f4d59c-5hmjn" WorkloadEndpoint="localhost-k8s-calico--apiserver--c65f4d59c--5hmjn-eth0" Mar 7 01:06:49.229467 containerd[1488]: 2026-03-07 01:06:48.769 [INFO][3951] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8f8ed25dd5c ContainerID="fb413bdba32d068ddb7e497021eef5d50664703da79740a6d704d21ec5ea2076" Namespace="calico-system" Pod="calico-apiserver-c65f4d59c-5hmjn" WorkloadEndpoint="localhost-k8s-calico--apiserver--c65f4d59c--5hmjn-eth0" Mar 7 01:06:49.229467 containerd[1488]: 2026-03-07 01:06:48.848 [INFO][3951] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fb413bdba32d068ddb7e497021eef5d50664703da79740a6d704d21ec5ea2076" Namespace="calico-system" Pod="calico-apiserver-c65f4d59c-5hmjn" WorkloadEndpoint="localhost-k8s-calico--apiserver--c65f4d59c--5hmjn-eth0" Mar 7 01:06:49.229467 containerd[1488]: 2026-03-07 01:06:48.902 [INFO][3951] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="fb413bdba32d068ddb7e497021eef5d50664703da79740a6d704d21ec5ea2076" Namespace="calico-system" Pod="calico-apiserver-c65f4d59c-5hmjn" WorkloadEndpoint="localhost-k8s-calico--apiserver--c65f4d59c--5hmjn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c65f4d59c--5hmjn-eth0", GenerateName:"calico-apiserver-c65f4d59c-", Namespace:"calico-system", SelfLink:"", UID:"96fca932-ead7-4c21-85cc-a13f1ef84f2f", ResourceVersion:"1141", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 5, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c65f4d59c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fb413bdba32d068ddb7e497021eef5d50664703da79740a6d704d21ec5ea2076", Pod:"calico-apiserver-c65f4d59c-5hmjn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8f8ed25dd5c", MAC:"12:ea:e9:96:22:45", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:06:49.229467 containerd[1488]: 2026-03-07 01:06:49.035 [INFO][3951] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fb413bdba32d068ddb7e497021eef5d50664703da79740a6d704d21ec5ea2076" 
Namespace="calico-system" Pod="calico-apiserver-c65f4d59c-5hmjn" WorkloadEndpoint="localhost-k8s-calico--apiserver--c65f4d59c--5hmjn-eth0" Mar 7 01:06:49.332986 systemd[1]: Started cri-containerd-47fa83a8b92761d64502a8c925b6f7f38e795ae58bedaaef619bf3cda583a157.scope - libcontainer container 47fa83a8b92761d64502a8c925b6f7f38e795ae58bedaaef619bf3cda583a157. Mar 7 01:06:49.442409 containerd[1488]: time="2026-03-07T01:06:49.435846829Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:06:49.442409 containerd[1488]: time="2026-03-07T01:06:49.435993724Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:06:49.442409 containerd[1488]: time="2026-03-07T01:06:49.436011567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:06:49.442409 containerd[1488]: time="2026-03-07T01:06:49.436181855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:06:49.496585 containerd[1488]: time="2026-03-07T01:06:49.493282574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7764966df5-bspf4,Uid:33181e76-a4a7-4c67-ac8b-561804e85a35,Namespace:calico-system,Attempt:0,} returns sandbox id \"85745c8ad5511ef1ee25527c2e5559368ed89f0c601c17a6928346813318ed25\"" Mar 7 01:06:49.513518 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 01:06:49.655689 systemd-networkd[1412]: calide867e38aa8: Gained IPv6LL Mar 7 01:06:49.672465 systemd[1]: Started cri-containerd-fb413bdba32d068ddb7e497021eef5d50664703da79740a6d704d21ec5ea2076.scope - libcontainer container fb413bdba32d068ddb7e497021eef5d50664703da79740a6d704d21ec5ea2076. 
Mar 7 01:06:49.732972 systemd-networkd[1412]: cali2d65225ff3a: Link UP Mar 7 01:06:49.739359 systemd-networkd[1412]: cali2d65225ff3a: Gained carrier Mar 7 01:06:49.817676 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 01:06:49.884781 containerd[1488]: 2026-03-07 01:06:44.092 [ERROR][3968] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:06:49.884781 containerd[1488]: 2026-03-07 01:06:44.346 [INFO][3968] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--msqcb-eth0 coredns-7d764666f9- kube-system 50430acc-71a2-4da9-9907-c7bac0a8a3e4 1133 0 2026-03-07 01:04:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-msqcb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2d65225ff3a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="cb2881e8d5e1898bc0c123c25bc81d6a713650b32e5144835b87f001f00b4dc8" Namespace="kube-system" Pod="coredns-7d764666f9-msqcb" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--msqcb-" Mar 7 01:06:49.884781 containerd[1488]: 2026-03-07 01:06:44.346 [INFO][3968] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cb2881e8d5e1898bc0c123c25bc81d6a713650b32e5144835b87f001f00b4dc8" Namespace="kube-system" Pod="coredns-7d764666f9-msqcb" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--msqcb-eth0" Mar 7 01:06:49.884781 containerd[1488]: 2026-03-07 01:06:46.138 [INFO][4094] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="cb2881e8d5e1898bc0c123c25bc81d6a713650b32e5144835b87f001f00b4dc8" HandleID="k8s-pod-network.cb2881e8d5e1898bc0c123c25bc81d6a713650b32e5144835b87f001f00b4dc8" Workload="localhost-k8s-coredns--7d764666f9--msqcb-eth0" Mar 7 01:06:49.884781 containerd[1488]: 2026-03-07 01:06:46.356 [INFO][4094] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="cb2881e8d5e1898bc0c123c25bc81d6a713650b32e5144835b87f001f00b4dc8" HandleID="k8s-pod-network.cb2881e8d5e1898bc0c123c25bc81d6a713650b32e5144835b87f001f00b4dc8" Workload="localhost-k8s-coredns--7d764666f9--msqcb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005349d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-msqcb", "timestamp":"2026-03-07 01:06:46.138239483 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001c02c0)} Mar 7 01:06:49.884781 containerd[1488]: 2026-03-07 01:06:46.356 [INFO][4094] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:06:49.884781 containerd[1488]: 2026-03-07 01:06:48.740 [INFO][4094] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:06:49.884781 containerd[1488]: 2026-03-07 01:06:48.740 [INFO][4094] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 01:06:49.884781 containerd[1488]: 2026-03-07 01:06:48.819 [INFO][4094] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.cb2881e8d5e1898bc0c123c25bc81d6a713650b32e5144835b87f001f00b4dc8" host="localhost" Mar 7 01:06:49.884781 containerd[1488]: 2026-03-07 01:06:49.086 [INFO][4094] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 01:06:49.884781 containerd[1488]: 2026-03-07 01:06:49.255 [INFO][4094] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 01:06:49.884781 containerd[1488]: 2026-03-07 01:06:49.302 [INFO][4094] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 01:06:49.884781 containerd[1488]: 2026-03-07 01:06:49.337 [INFO][4094] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 01:06:49.884781 containerd[1488]: 2026-03-07 01:06:49.337 [INFO][4094] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cb2881e8d5e1898bc0c123c25bc81d6a713650b32e5144835b87f001f00b4dc8" host="localhost" Mar 7 01:06:49.884781 containerd[1488]: 2026-03-07 01:06:49.392 [INFO][4094] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.cb2881e8d5e1898bc0c123c25bc81d6a713650b32e5144835b87f001f00b4dc8 Mar 7 01:06:49.884781 containerd[1488]: 2026-03-07 01:06:49.471 [INFO][4094] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cb2881e8d5e1898bc0c123c25bc81d6a713650b32e5144835b87f001f00b4dc8" host="localhost" Mar 7 01:06:49.884781 containerd[1488]: 2026-03-07 01:06:49.536 [INFO][4094] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.cb2881e8d5e1898bc0c123c25bc81d6a713650b32e5144835b87f001f00b4dc8" host="localhost" Mar 7 01:06:49.884781 containerd[1488]: 2026-03-07 01:06:49.543 [INFO][4094] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.cb2881e8d5e1898bc0c123c25bc81d6a713650b32e5144835b87f001f00b4dc8" host="localhost" Mar 7 01:06:49.884781 containerd[1488]: 2026-03-07 01:06:49.549 [INFO][4094] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:06:49.884781 containerd[1488]: 2026-03-07 01:06:49.550 [INFO][4094] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="cb2881e8d5e1898bc0c123c25bc81d6a713650b32e5144835b87f001f00b4dc8" HandleID="k8s-pod-network.cb2881e8d5e1898bc0c123c25bc81d6a713650b32e5144835b87f001f00b4dc8" Workload="localhost-k8s-coredns--7d764666f9--msqcb-eth0" Mar 7 01:06:49.885879 containerd[1488]: 2026-03-07 01:06:49.598 [INFO][3968] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cb2881e8d5e1898bc0c123c25bc81d6a713650b32e5144835b87f001f00b4dc8" Namespace="kube-system" Pod="coredns-7d764666f9-msqcb" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--msqcb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--msqcb-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"50430acc-71a2-4da9-9907-c7bac0a8a3e4", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 4, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-msqcb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2d65225ff3a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:06:49.885879 containerd[1488]: 2026-03-07 01:06:49.599 [INFO][3968] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="cb2881e8d5e1898bc0c123c25bc81d6a713650b32e5144835b87f001f00b4dc8" Namespace="kube-system" Pod="coredns-7d764666f9-msqcb" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--msqcb-eth0" Mar 7 01:06:49.885879 containerd[1488]: 2026-03-07 01:06:49.599 [INFO][3968] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2d65225ff3a ContainerID="cb2881e8d5e1898bc0c123c25bc81d6a713650b32e5144835b87f001f00b4dc8" Namespace="kube-system" Pod="coredns-7d764666f9-msqcb" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--msqcb-eth0" Mar 7 
01:06:49.885879 containerd[1488]: 2026-03-07 01:06:49.737 [INFO][3968] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cb2881e8d5e1898bc0c123c25bc81d6a713650b32e5144835b87f001f00b4dc8" Namespace="kube-system" Pod="coredns-7d764666f9-msqcb" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--msqcb-eth0" Mar 7 01:06:49.885879 containerd[1488]: 2026-03-07 01:06:49.742 [INFO][3968] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cb2881e8d5e1898bc0c123c25bc81d6a713650b32e5144835b87f001f00b4dc8" Namespace="kube-system" Pod="coredns-7d764666f9-msqcb" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--msqcb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--msqcb-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"50430acc-71a2-4da9-9907-c7bac0a8a3e4", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 4, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cb2881e8d5e1898bc0c123c25bc81d6a713650b32e5144835b87f001f00b4dc8", Pod:"coredns-7d764666f9-msqcb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2d65225ff3a", MAC:"0e:a4:7e:74:04:42", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:06:49.885879 containerd[1488]: 2026-03-07 01:06:49.834 [INFO][3968] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cb2881e8d5e1898bc0c123c25bc81d6a713650b32e5144835b87f001f00b4dc8" Namespace="kube-system" Pod="coredns-7d764666f9-msqcb" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--msqcb-eth0" Mar 7 01:06:49.902167 containerd[1488]: time="2026-03-07T01:06:49.902026164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-sjd5q,Uid:6105a484-0b75-4a75-94f2-b23cf024997c,Namespace:calico-system,Attempt:0,} returns sandbox id \"47fa83a8b92761d64502a8c925b6f7f38e795ae58bedaaef619bf3cda583a157\"" Mar 7 01:06:50.098410 systemd-networkd[1412]: cali87c146eb6aa: Link UP Mar 7 01:06:50.103797 containerd[1488]: time="2026-03-07T01:06:50.101151645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c65f4d59c-5hmjn,Uid:96fca932-ead7-4c21-85cc-a13f1ef84f2f,Namespace:calico-system,Attempt:0,} returns sandbox id \"fb413bdba32d068ddb7e497021eef5d50664703da79740a6d704d21ec5ea2076\"" Mar 7 01:06:50.099890 systemd-networkd[1412]: cali87c146eb6aa: Gained carrier Mar 7 
01:06:50.159485 systemd-networkd[1412]: cali8f8ed25dd5c: Gained IPv6LL Mar 7 01:06:50.256793 containerd[1488]: time="2026-03-07T01:06:50.246800135Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:06:50.256793 containerd[1488]: time="2026-03-07T01:06:50.246971214Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:06:50.256793 containerd[1488]: time="2026-03-07T01:06:50.247013543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:06:50.290391 containerd[1488]: time="2026-03-07T01:06:50.261082427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:06:50.308507 containerd[1488]: 2026-03-07 01:06:43.893 [ERROR][3987] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:06:50.308507 containerd[1488]: 2026-03-07 01:06:44.412 [INFO][3987] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--wczgt-eth0 coredns-7d764666f9- kube-system 8a88a6aa-d46d-4b55-b537-0a47ef29126b 1146 0 2026-03-07 01:04:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-wczgt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali87c146eb6aa [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="b9ef2c9b0644bae53949c16a1a2185184f11470ccb90b2fe6f6347d6c1e1e665" 
Namespace="kube-system" Pod="coredns-7d764666f9-wczgt" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--wczgt-" Mar 7 01:06:50.308507 containerd[1488]: 2026-03-07 01:06:44.413 [INFO][3987] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b9ef2c9b0644bae53949c16a1a2185184f11470ccb90b2fe6f6347d6c1e1e665" Namespace="kube-system" Pod="coredns-7d764666f9-wczgt" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--wczgt-eth0" Mar 7 01:06:50.308507 containerd[1488]: 2026-03-07 01:06:46.244 [INFO][4107] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b9ef2c9b0644bae53949c16a1a2185184f11470ccb90b2fe6f6347d6c1e1e665" HandleID="k8s-pod-network.b9ef2c9b0644bae53949c16a1a2185184f11470ccb90b2fe6f6347d6c1e1e665" Workload="localhost-k8s-coredns--7d764666f9--wczgt-eth0" Mar 7 01:06:50.308507 containerd[1488]: 2026-03-07 01:06:46.366 [INFO][4107] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b9ef2c9b0644bae53949c16a1a2185184f11470ccb90b2fe6f6347d6c1e1e665" HandleID="k8s-pod-network.b9ef2c9b0644bae53949c16a1a2185184f11470ccb90b2fe6f6347d6c1e1e665" Workload="localhost-k8s-coredns--7d764666f9--wczgt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00042dd20), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-wczgt", "timestamp":"2026-03-07 01:06:46.244151215 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000566dc0)} Mar 7 01:06:50.308507 containerd[1488]: 2026-03-07 01:06:46.378 [INFO][4107] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:06:50.308507 containerd[1488]: 2026-03-07 01:06:49.550 [INFO][4107] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:06:50.308507 containerd[1488]: 2026-03-07 01:06:49.550 [INFO][4107] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 01:06:50.308507 containerd[1488]: 2026-03-07 01:06:49.602 [INFO][4107] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b9ef2c9b0644bae53949c16a1a2185184f11470ccb90b2fe6f6347d6c1e1e665" host="localhost" Mar 7 01:06:50.308507 containerd[1488]: 2026-03-07 01:06:49.655 [INFO][4107] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 01:06:50.308507 containerd[1488]: 2026-03-07 01:06:49.819 [INFO][4107] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 01:06:50.308507 containerd[1488]: 2026-03-07 01:06:49.855 [INFO][4107] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 01:06:50.308507 containerd[1488]: 2026-03-07 01:06:49.867 [INFO][4107] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 01:06:50.308507 containerd[1488]: 2026-03-07 01:06:49.867 [INFO][4107] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b9ef2c9b0644bae53949c16a1a2185184f11470ccb90b2fe6f6347d6c1e1e665" host="localhost" Mar 7 01:06:50.308507 containerd[1488]: 2026-03-07 01:06:49.891 [INFO][4107] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b9ef2c9b0644bae53949c16a1a2185184f11470ccb90b2fe6f6347d6c1e1e665 Mar 7 01:06:50.308507 containerd[1488]: 2026-03-07 01:06:49.927 [INFO][4107] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b9ef2c9b0644bae53949c16a1a2185184f11470ccb90b2fe6f6347d6c1e1e665" host="localhost" Mar 7 01:06:50.308507 containerd[1488]: 2026-03-07 01:06:50.003 [INFO][4107] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.b9ef2c9b0644bae53949c16a1a2185184f11470ccb90b2fe6f6347d6c1e1e665" host="localhost" Mar 7 01:06:50.308507 containerd[1488]: 2026-03-07 01:06:50.003 [INFO][4107] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.b9ef2c9b0644bae53949c16a1a2185184f11470ccb90b2fe6f6347d6c1e1e665" host="localhost" Mar 7 01:06:50.308507 containerd[1488]: 2026-03-07 01:06:50.003 [INFO][4107] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:06:50.308507 containerd[1488]: 2026-03-07 01:06:50.003 [INFO][4107] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="b9ef2c9b0644bae53949c16a1a2185184f11470ccb90b2fe6f6347d6c1e1e665" HandleID="k8s-pod-network.b9ef2c9b0644bae53949c16a1a2185184f11470ccb90b2fe6f6347d6c1e1e665" Workload="localhost-k8s-coredns--7d764666f9--wczgt-eth0" Mar 7 01:06:50.309417 containerd[1488]: 2026-03-07 01:06:50.090 [INFO][3987] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b9ef2c9b0644bae53949c16a1a2185184f11470ccb90b2fe6f6347d6c1e1e665" Namespace="kube-system" Pod="coredns-7d764666f9-wczgt" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--wczgt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--wczgt-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"8a88a6aa-d46d-4b55-b537-0a47ef29126b", ResourceVersion:"1146", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 4, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-wczgt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali87c146eb6aa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:06:50.309417 containerd[1488]: 2026-03-07 01:06:50.090 [INFO][3987] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="b9ef2c9b0644bae53949c16a1a2185184f11470ccb90b2fe6f6347d6c1e1e665" Namespace="kube-system" Pod="coredns-7d764666f9-wczgt" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--wczgt-eth0" Mar 7 01:06:50.309417 containerd[1488]: 2026-03-07 01:06:50.090 [INFO][3987] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali87c146eb6aa ContainerID="b9ef2c9b0644bae53949c16a1a2185184f11470ccb90b2fe6f6347d6c1e1e665" Namespace="kube-system" Pod="coredns-7d764666f9-wczgt" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--wczgt-eth0" Mar 7 
01:06:50.309417 containerd[1488]: 2026-03-07 01:06:50.097 [INFO][3987] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b9ef2c9b0644bae53949c16a1a2185184f11470ccb90b2fe6f6347d6c1e1e665" Namespace="kube-system" Pod="coredns-7d764666f9-wczgt" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--wczgt-eth0" Mar 7 01:06:50.309417 containerd[1488]: 2026-03-07 01:06:50.097 [INFO][3987] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b9ef2c9b0644bae53949c16a1a2185184f11470ccb90b2fe6f6347d6c1e1e665" Namespace="kube-system" Pod="coredns-7d764666f9-wczgt" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--wczgt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--wczgt-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"8a88a6aa-d46d-4b55-b537-0a47ef29126b", ResourceVersion:"1146", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 4, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b9ef2c9b0644bae53949c16a1a2185184f11470ccb90b2fe6f6347d6c1e1e665", Pod:"coredns-7d764666f9-wczgt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali87c146eb6aa", MAC:"36:b7:46:42:89:fe", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:06:50.309417 containerd[1488]: 2026-03-07 01:06:50.225 [INFO][3987] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b9ef2c9b0644bae53949c16a1a2185184f11470ccb90b2fe6f6347d6c1e1e665" Namespace="kube-system" Pod="coredns-7d764666f9-wczgt" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--wczgt-eth0" Mar 7 01:06:50.594624 systemd[1]: Started cri-containerd-cb2881e8d5e1898bc0c123c25bc81d6a713650b32e5144835b87f001f00b4dc8.scope - libcontainer container cb2881e8d5e1898bc0c123c25bc81d6a713650b32e5144835b87f001f00b4dc8. Mar 7 01:06:50.646774 containerd[1488]: time="2026-03-07T01:06:50.644198815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:06:50.654434 containerd[1488]: time="2026-03-07T01:06:50.647796170Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:06:50.654434 containerd[1488]: time="2026-03-07T01:06:50.647839270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:06:50.654434 containerd[1488]: time="2026-03-07T01:06:50.648097512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:06:50.658675 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 01:06:50.829220 systemd[1]: Started cri-containerd-b9ef2c9b0644bae53949c16a1a2185184f11470ccb90b2fe6f6347d6c1e1e665.scope - libcontainer container b9ef2c9b0644bae53949c16a1a2185184f11470ccb90b2fe6f6347d6c1e1e665. Mar 7 01:06:50.844831 containerd[1488]: time="2026-03-07T01:06:50.842964857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-msqcb,Uid:50430acc-71a2-4da9-9907-c7bac0a8a3e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb2881e8d5e1898bc0c123c25bc81d6a713650b32e5144835b87f001f00b4dc8\"" Mar 7 01:06:50.851820 kubelet[2648]: E0307 01:06:50.851382 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:06:50.878506 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 01:06:50.881265 containerd[1488]: time="2026-03-07T01:06:50.881001603Z" level=info msg="CreateContainer within sandbox \"cb2881e8d5e1898bc0c123c25bc81d6a713650b32e5144835b87f001f00b4dc8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 01:06:51.224972 containerd[1488]: time="2026-03-07T01:06:51.222699067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-wczgt,Uid:8a88a6aa-d46d-4b55-b537-0a47ef29126b,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9ef2c9b0644bae53949c16a1a2185184f11470ccb90b2fe6f6347d6c1e1e665\"" Mar 7 01:06:51.227098 kubelet[2648]: E0307 01:06:51.226975 2648 dns.go:154] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:06:51.249608 containerd[1488]: time="2026-03-07T01:06:51.248401044Z" level=info msg="CreateContainer within sandbox \"cb2881e8d5e1898bc0c123c25bc81d6a713650b32e5144835b87f001f00b4dc8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d62c894ddd85d622b9aacde4ce538f81f4fd29af61cee380c0eaccea09e058cd\"" Mar 7 01:06:51.254800 containerd[1488]: time="2026-03-07T01:06:51.250665242Z" level=info msg="StartContainer for \"d62c894ddd85d622b9aacde4ce538f81f4fd29af61cee380c0eaccea09e058cd\"" Mar 7 01:06:51.257508 containerd[1488]: time="2026-03-07T01:06:51.257321327Z" level=info msg="CreateContainer within sandbox \"b9ef2c9b0644bae53949c16a1a2185184f11470ccb90b2fe6f6347d6c1e1e665\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 01:06:51.315914 systemd-networkd[1412]: cali2d65225ff3a: Gained IPv6LL Mar 7 01:06:51.409370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount355676988.mount: Deactivated successfully. Mar 7 01:06:51.477971 containerd[1488]: time="2026-03-07T01:06:51.477495633Z" level=info msg="CreateContainer within sandbox \"b9ef2c9b0644bae53949c16a1a2185184f11470ccb90b2fe6f6347d6c1e1e665\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5545093f88a6c176e3c2fc97d664be70f094ab8da3ab7d00bd04893df3e41931\"" Mar 7 01:06:51.490253 systemd[1]: Started cri-containerd-d62c894ddd85d622b9aacde4ce538f81f4fd29af61cee380c0eaccea09e058cd.scope - libcontainer container d62c894ddd85d622b9aacde4ce538f81f4fd29af61cee380c0eaccea09e058cd. 
Mar 7 01:06:51.523017 containerd[1488]: time="2026-03-07T01:06:51.521942414Z" level=info msg="StartContainer for \"5545093f88a6c176e3c2fc97d664be70f094ab8da3ab7d00bd04893df3e41931\"" Mar 7 01:06:51.860071 containerd[1488]: time="2026-03-07T01:06:51.859878505Z" level=info msg="StartContainer for \"d62c894ddd85d622b9aacde4ce538f81f4fd29af61cee380c0eaccea09e058cd\" returns successfully" Mar 7 01:06:51.937954 systemd[1]: Started cri-containerd-5545093f88a6c176e3c2fc97d664be70f094ab8da3ab7d00bd04893df3e41931.scope - libcontainer container 5545093f88a6c176e3c2fc97d664be70f094ab8da3ab7d00bd04893df3e41931. Mar 7 01:06:52.215692 systemd-networkd[1412]: cali87c146eb6aa: Gained IPv6LL Mar 7 01:06:52.321642 containerd[1488]: time="2026-03-07T01:06:52.319626783Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:06:52.321642 containerd[1488]: time="2026-03-07T01:06:52.321383483Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 7 01:06:52.343345 containerd[1488]: time="2026-03-07T01:06:52.340948224Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:06:52.424492 containerd[1488]: time="2026-03-07T01:06:52.408175316Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 6.268762701s" Mar 7 01:06:52.424492 containerd[1488]: time="2026-03-07T01:06:52.408271215Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 7 01:06:52.424492 containerd[1488]: time="2026-03-07T01:06:52.408592194Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:06:52.453995 containerd[1488]: time="2026-03-07T01:06:52.448192586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 7 01:06:52.486807 systemd-networkd[1412]: vxlan.calico: Link UP Mar 7 01:06:52.486821 systemd-networkd[1412]: vxlan.calico: Gained carrier Mar 7 01:06:52.537398 containerd[1488]: time="2026-03-07T01:06:52.537339799Z" level=info msg="CreateContainer within sandbox \"569cc9d7e4c4e125b19a4c442602f6db0c9ff14abb70691b69bce8095349a4fe\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 7 01:06:52.662689 kubelet[2648]: E0307 01:06:52.662646 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:06:52.750572 containerd[1488]: time="2026-03-07T01:06:52.750110412Z" level=info msg="CreateContainer within sandbox \"569cc9d7e4c4e125b19a4c442602f6db0c9ff14abb70691b69bce8095349a4fe\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"63d0b4e88b74e7b2b5533b2dd48d74d9f896d5af6f53f567e9f3a342ae34965f\"" Mar 7 01:06:52.750929 containerd[1488]: time="2026-03-07T01:06:52.750897391Z" level=info msg="StartContainer for \"5545093f88a6c176e3c2fc97d664be70f094ab8da3ab7d00bd04893df3e41931\" returns successfully" Mar 7 01:06:52.792887 containerd[1488]: time="2026-03-07T01:06:52.792825449Z" level=info msg="StartContainer for \"63d0b4e88b74e7b2b5533b2dd48d74d9f896d5af6f53f567e9f3a342ae34965f\"" Mar 7 
01:06:53.204038 kubelet[2648]: I0307 01:06:53.203473 2648 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-msqcb" podStartSLOduration=167.203449098 podStartE2EDuration="2m47.203449098s" podCreationTimestamp="2026-03-07 01:04:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:06:53.135252766 +0000 UTC m=+169.494693916" watchObservedRunningTime="2026-03-07 01:06:53.203449098 +0000 UTC m=+169.562890248" Mar 7 01:06:53.422916 systemd[1]: run-containerd-runc-k8s.io-63d0b4e88b74e7b2b5533b2dd48d74d9f896d5af6f53f567e9f3a342ae34965f-runc.dPrC3J.mount: Deactivated successfully. Mar 7 01:06:53.502887 systemd[1]: Started cri-containerd-63d0b4e88b74e7b2b5533b2dd48d74d9f896d5af6f53f567e9f3a342ae34965f.scope - libcontainer container 63d0b4e88b74e7b2b5533b2dd48d74d9f896d5af6f53f567e9f3a342ae34965f. Mar 7 01:06:53.634005 systemd-networkd[1412]: vxlan.calico: Gained IPv6LL Mar 7 01:06:53.965752 kubelet[2648]: E0307 01:06:53.964232 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:06:53.971104 kubelet[2648]: E0307 01:06:53.966836 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:06:54.104167 containerd[1488]: time="2026-03-07T01:06:54.104010902Z" level=info msg="StartContainer for \"63d0b4e88b74e7b2b5533b2dd48d74d9f896d5af6f53f567e9f3a342ae34965f\" returns successfully" Mar 7 01:06:54.935852 kubelet[2648]: I0307 01:06:54.935798 2648 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 7 01:06:54.940550 kubelet[2648]: I0307 01:06:54.940511 
2648 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 7 01:06:55.008288 kubelet[2648]: E0307 01:06:55.008248 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:06:55.160253 kubelet[2648]: I0307 01:06:55.160166 2648 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-gfb8z" podStartSLOduration=96.920088727 podStartE2EDuration="1m50.160142573s" podCreationTimestamp="2026-03-07 01:05:05 +0000 UTC" firstStartedPulling="2026-03-07 01:06:39.188446771 +0000 UTC m=+155.547887901" lastFinishedPulling="2026-03-07 01:06:52.428500617 +0000 UTC m=+168.787941747" observedRunningTime="2026-03-07 01:06:55.153274862 +0000 UTC m=+171.512716012" watchObservedRunningTime="2026-03-07 01:06:55.160142573 +0000 UTC m=+171.519583704" Mar 7 01:06:55.177876 kubelet[2648]: I0307 01:06:55.176246 2648 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-wczgt" podStartSLOduration=169.176225255 podStartE2EDuration="2m49.176225255s" podCreationTimestamp="2026-03-07 01:04:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:06:54.087674059 +0000 UTC m=+170.447115189" watchObservedRunningTime="2026-03-07 01:06:55.176225255 +0000 UTC m=+171.535666395" Mar 7 01:06:56.213574 kubelet[2648]: E0307 01:06:56.209612 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:07:02.758384 kubelet[2648]: E0307 01:07:02.758297 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:07:03.986466 kubelet[2648]: E0307 01:07:03.985615 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:07:04.125148 containerd[1488]: time="2026-03-07T01:07:04.123081794Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:07:04.140122 containerd[1488]: time="2026-03-07T01:07:04.139697377Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 7 01:07:04.155190 containerd[1488]: time="2026-03-07T01:07:04.153465315Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:07:04.189496 containerd[1488]: time="2026-03-07T01:07:04.189236550Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:07:04.191511 containerd[1488]: time="2026-03-07T01:07:04.191218672Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 11.742961735s" Mar 7 01:07:04.191511 containerd[1488]: time="2026-03-07T01:07:04.191284084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 7 01:07:04.207440 containerd[1488]: 
time="2026-03-07T01:07:04.206532541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 7 01:07:04.492855 containerd[1488]: time="2026-03-07T01:07:04.492539420Z" level=info msg="CreateContainer within sandbox \"e459ffc8474870699f27fbfdc2fadd3e749dcfba2e1d90cf95c7c8073fb04557\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 7 01:07:04.667101 containerd[1488]: time="2026-03-07T01:07:04.661615463Z" level=info msg="CreateContainer within sandbox \"e459ffc8474870699f27fbfdc2fadd3e749dcfba2e1d90cf95c7c8073fb04557\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3f81b296f7fcd1920147887adaccb31745b8719a528b6776611d9f123fe1e3a4\"" Mar 7 01:07:04.688917 containerd[1488]: time="2026-03-07T01:07:04.688519419Z" level=info msg="StartContainer for \"3f81b296f7fcd1920147887adaccb31745b8719a528b6776611d9f123fe1e3a4\"" Mar 7 01:07:04.863258 systemd[1]: Started cri-containerd-3f81b296f7fcd1920147887adaccb31745b8719a528b6776611d9f123fe1e3a4.scope - libcontainer container 3f81b296f7fcd1920147887adaccb31745b8719a528b6776611d9f123fe1e3a4. 
Mar 7 01:07:05.118048 containerd[1488]: time="2026-03-07T01:07:05.117778592Z" level=info msg="StartContainer for \"3f81b296f7fcd1920147887adaccb31745b8719a528b6776611d9f123fe1e3a4\" returns successfully" Mar 7 01:07:07.813825 kubelet[2648]: I0307 01:07:07.812415 2648 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-c65f4d59c-lmhzz" podStartSLOduration=110.933083342 podStartE2EDuration="2m6.812196442s" podCreationTimestamp="2026-03-07 01:05:01 +0000 UTC" firstStartedPulling="2026-03-07 01:06:48.324400376 +0000 UTC m=+164.683841506" lastFinishedPulling="2026-03-07 01:07:04.203513477 +0000 UTC m=+180.562954606" observedRunningTime="2026-03-07 01:07:06.015568376 +0000 UTC m=+182.375009527" watchObservedRunningTime="2026-03-07 01:07:07.812196442 +0000 UTC m=+184.171637592" Mar 7 01:07:09.964840 kubelet[2648]: E0307 01:07:09.927012 2648 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.04s" Mar 7 01:07:19.849985 systemd[1]: Started sshd@7-10.0.0.13:22-10.0.0.1:52190.service - OpenSSH per-connection server daemon (10.0.0.1:52190). Mar 7 01:07:20.461392 sshd[4883]: Accepted publickey for core from 10.0.0.1 port 52190 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:07:20.484327 sshd[4883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:07:20.524955 systemd-logind[1475]: New session 8 of user core. Mar 7 01:07:20.537058 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 7 01:07:23.056818 sshd[4883]: pam_unix(sshd:session): session closed for user core Mar 7 01:07:23.098146 systemd-logind[1475]: Session 8 logged out. Waiting for processes to exit. Mar 7 01:07:23.109634 systemd[1]: sshd@7-10.0.0.13:22-10.0.0.1:52190.service: Deactivated successfully. Mar 7 01:07:23.127565 systemd[1]: session-8.scope: Deactivated successfully. 
Mar 7 01:07:23.133913 systemd-logind[1475]: Removed session 8. Mar 7 01:07:23.500319 containerd[1488]: time="2026-03-07T01:07:23.500229951Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:07:23.505504 containerd[1488]: time="2026-03-07T01:07:23.505435656Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 7 01:07:23.510242 containerd[1488]: time="2026-03-07T01:07:23.509847900Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:07:23.532887 containerd[1488]: time="2026-03-07T01:07:23.532566437Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:07:23.537837 containerd[1488]: time="2026-03-07T01:07:23.536816339Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 19.32808365s" Mar 7 01:07:23.537837 containerd[1488]: time="2026-03-07T01:07:23.537636669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 7 01:07:23.543188 containerd[1488]: time="2026-03-07T01:07:23.542785298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 7 01:07:23.652979 containerd[1488]: time="2026-03-07T01:07:23.652528763Z" 
level=info msg="CreateContainer within sandbox \"b9bab95b32004840e60b88a83b22b072df30d01f98b815451a9dd75ba5ed28a8\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 7 01:07:23.751161 containerd[1488]: time="2026-03-07T01:07:23.750957551Z" level=info msg="CreateContainer within sandbox \"b9bab95b32004840e60b88a83b22b072df30d01f98b815451a9dd75ba5ed28a8\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"07f95685efd7944cd92b41bf15331c8481428eb4cc5b6b2b0b3f4c2e10e7ce8b\"" Mar 7 01:07:23.766011 containerd[1488]: time="2026-03-07T01:07:23.763465624Z" level=info msg="StartContainer for \"07f95685efd7944cd92b41bf15331c8481428eb4cc5b6b2b0b3f4c2e10e7ce8b\"" Mar 7 01:07:24.460304 update_engine[1479]: I20260307 01:07:24.450253 1479 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Mar 7 01:07:24.460304 update_engine[1479]: I20260307 01:07:24.453477 1479 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Mar 7 01:07:24.534296 update_engine[1479]: I20260307 01:07:24.476131 1479 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Mar 7 01:07:24.534296 update_engine[1479]: I20260307 01:07:24.491019 1479 omaha_request_params.cc:62] Current group set to lts Mar 7 01:07:24.534296 update_engine[1479]: I20260307 01:07:24.491221 1479 update_attempter.cc:499] Already updated boot flags. Skipping. Mar 7 01:07:24.534296 update_engine[1479]: I20260307 01:07:24.491247 1479 update_attempter.cc:643] Scheduling an action processor start. 
Mar 7 01:07:24.534296 update_engine[1479]: I20260307 01:07:24.491397 1479 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 7 01:07:24.534296 update_engine[1479]: I20260307 01:07:24.491477 1479 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Mar 7 01:07:24.534296 update_engine[1479]: I20260307 01:07:24.491572 1479 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 7 01:07:24.534296 update_engine[1479]: I20260307 01:07:24.491587 1479 omaha_request_action.cc:272] Request: Mar 7 01:07:24.534296 update_engine[1479]: Mar 7 01:07:24.534296 update_engine[1479]: Mar 7 01:07:24.534296 update_engine[1479]: Mar 7 01:07:24.534296 update_engine[1479]: Mar 7 01:07:24.534296 update_engine[1479]: Mar 7 01:07:24.534296 update_engine[1479]: Mar 7 01:07:24.534296 update_engine[1479]: Mar 7 01:07:24.534296 update_engine[1479]: Mar 7 01:07:24.534296 update_engine[1479]: I20260307 01:07:24.491601 1479 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 01:07:24.534296 update_engine[1479]: I20260307 01:07:24.522707 1479 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 01:07:24.534296 update_engine[1479]: I20260307 01:07:24.523267 1479 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 7 01:07:24.541175 systemd[1]: Started cri-containerd-07f95685efd7944cd92b41bf15331c8481428eb4cc5b6b2b0b3f4c2e10e7ce8b.scope - libcontainer container 07f95685efd7944cd92b41bf15331c8481428eb4cc5b6b2b0b3f4c2e10e7ce8b. 
Mar 7 01:07:24.554255 update_engine[1479]: E20260307 01:07:24.554142 1479 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 01:07:24.554448 update_engine[1479]: I20260307 01:07:24.554325 1479 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Mar 7 01:07:24.721685 locksmithd[1525]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Mar 7 01:07:25.200092 containerd[1488]: time="2026-03-07T01:07:25.199923067Z" level=info msg="StartContainer for \"07f95685efd7944cd92b41bf15331c8481428eb4cc5b6b2b0b3f4c2e10e7ce8b\" returns successfully" Mar 7 01:07:27.121980 containerd[1488]: time="2026-03-07T01:07:27.110858342Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 7 01:07:27.152987 containerd[1488]: time="2026-03-07T01:07:27.151248498Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:07:27.160818 containerd[1488]: time="2026-03-07T01:07:27.153003172Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:07:27.160818 containerd[1488]: time="2026-03-07T01:07:27.157228186Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:07:27.160944 containerd[1488]: time="2026-03-07T01:07:27.160864742Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 3.618021766s" Mar 7 01:07:27.160944 containerd[1488]: time="2026-03-07T01:07:27.160910638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 7 01:07:27.198292 containerd[1488]: time="2026-03-07T01:07:27.191801776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 7 01:07:27.236487 containerd[1488]: time="2026-03-07T01:07:27.236260303Z" level=info msg="CreateContainer within sandbox \"85745c8ad5511ef1ee25527c2e5559368ed89f0c601c17a6928346813318ed25\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 7 01:07:27.794559 containerd[1488]: time="2026-03-07T01:07:27.794075889Z" level=info msg="CreateContainer within sandbox \"85745c8ad5511ef1ee25527c2e5559368ed89f0c601c17a6928346813318ed25\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"b70aca8a65229003fc0a68c6f22d992cf628ebcfeb36e4e1bfbc91e0ee10466c\"" Mar 7 01:07:27.803459 containerd[1488]: time="2026-03-07T01:07:27.799996661Z" level=info msg="StartContainer for \"b70aca8a65229003fc0a68c6f22d992cf628ebcfeb36e4e1bfbc91e0ee10466c\"" Mar 7 01:07:27.913791 kubelet[2648]: I0307 01:07:27.892233 2648 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7fb9fb46d-4xvsw" podStartSLOduration=108.065596137 podStartE2EDuration="2m22.892215476s" podCreationTimestamp="2026-03-07 01:05:05 +0000 UTC" firstStartedPulling="2026-03-07 01:06:48.713458586 +0000 UTC m=+165.072899736" lastFinishedPulling="2026-03-07 01:07:23.540077945 +0000 UTC m=+199.899519075" observedRunningTime="2026-03-07 01:07:26.258174197 +0000 UTC m=+202.617615347" watchObservedRunningTime="2026-03-07 01:07:27.892215476 +0000 UTC m=+204.251656616" Mar 7 01:07:28.200446 systemd[1]: Started 
sshd@8-10.0.0.13:22-10.0.0.1:55588.service - OpenSSH per-connection server daemon (10.0.0.1:55588). Mar 7 01:07:28.429823 systemd[1]: Started cri-containerd-b70aca8a65229003fc0a68c6f22d992cf628ebcfeb36e4e1bfbc91e0ee10466c.scope - libcontainer container b70aca8a65229003fc0a68c6f22d992cf628ebcfeb36e4e1bfbc91e0ee10466c. Mar 7 01:07:28.825786 sshd[5026]: Accepted publickey for core from 10.0.0.1 port 55588 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:07:28.842141 sshd[5026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:07:28.927870 systemd-logind[1475]: New session 9 of user core. Mar 7 01:07:28.947690 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 7 01:07:28.953014 containerd[1488]: time="2026-03-07T01:07:28.952459142Z" level=info msg="StartContainer for \"b70aca8a65229003fc0a68c6f22d992cf628ebcfeb36e4e1bfbc91e0ee10466c\" returns successfully" Mar 7 01:07:30.331766 sshd[5026]: pam_unix(sshd:session): session closed for user core Mar 7 01:07:30.377417 systemd[1]: sshd@8-10.0.0.13:22-10.0.0.1:55588.service: Deactivated successfully. Mar 7 01:07:30.377835 systemd-logind[1475]: Session 9 logged out. Waiting for processes to exit. Mar 7 01:07:30.397382 systemd[1]: session-9.scope: Deactivated successfully. Mar 7 01:07:30.414888 systemd-logind[1475]: Removed session 9. Mar 7 01:07:35.160948 update_engine[1479]: I20260307 01:07:35.159703 1479 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 01:07:35.160948 update_engine[1479]: I20260307 01:07:35.160492 1479 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 01:07:35.163119 update_engine[1479]: I20260307 01:07:35.163074 1479 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 7 01:07:35.190684 update_engine[1479]: E20260307 01:07:35.189126 1479 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 01:07:35.190684 update_engine[1479]: I20260307 01:07:35.189639 1479 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Mar 7 01:07:35.450864 systemd[1]: Started sshd@9-10.0.0.13:22-10.0.0.1:35286.service - OpenSSH per-connection server daemon (10.0.0.1:35286). Mar 7 01:07:35.622800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2426605243.mount: Deactivated successfully. Mar 7 01:07:35.810266 kubelet[2648]: E0307 01:07:35.805182 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:07:35.949477 sshd[5070]: Accepted publickey for core from 10.0.0.1 port 35286 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:07:35.961987 sshd[5070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:07:36.010164 systemd-logind[1475]: New session 10 of user core. Mar 7 01:07:36.029672 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 7 01:07:36.827876 systemd[1]: run-containerd-runc-k8s.io-23a5bccb39efd246970799aee1a0248df1bdbf62c7f1d22bb61e46bff8fadc5c-runc.QA8tuo.mount: Deactivated successfully. Mar 7 01:07:37.539859 sshd[5070]: pam_unix(sshd:session): session closed for user core Mar 7 01:07:37.562054 systemd-logind[1475]: Session 10 logged out. Waiting for processes to exit. Mar 7 01:07:37.564050 systemd[1]: sshd@9-10.0.0.13:22-10.0.0.1:35286.service: Deactivated successfully. Mar 7 01:07:37.586693 systemd[1]: session-10.scope: Deactivated successfully. Mar 7 01:07:37.601169 systemd-logind[1475]: Removed session 10. 
Mar 7 01:07:41.236992 containerd[1488]: time="2026-03-07T01:07:41.232149073Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:07:41.255830 containerd[1488]: time="2026-03-07T01:07:41.255699119Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386"
Mar 7 01:07:41.498980 containerd[1488]: time="2026-03-07T01:07:41.466161663Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:07:41.564700 containerd[1488]: time="2026-03-07T01:07:41.564601623Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:07:41.600449 containerd[1488]: time="2026-03-07T01:07:41.600144076Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 14.408280784s"
Mar 7 01:07:41.600774 containerd[1488]: time="2026-03-07T01:07:41.600652475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\""
Mar 7 01:07:41.610774 containerd[1488]: time="2026-03-07T01:07:41.608232209Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\""
Mar 7 01:07:41.634021 containerd[1488]: time="2026-03-07T01:07:41.633897678Z" level=info msg="CreateContainer within sandbox \"47fa83a8b92761d64502a8c925b6f7f38e795ae58bedaaef619bf3cda583a157\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Mar 7 01:07:41.731143 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3241587670.mount: Deactivated successfully.
Mar 7 01:07:41.793027 containerd[1488]: time="2026-03-07T01:07:41.792635598Z" level=info msg="CreateContainer within sandbox \"47fa83a8b92761d64502a8c925b6f7f38e795ae58bedaaef619bf3cda583a157\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"22759cf2d7b1cf35c608c10aca1716b75c0bd7683a6f8a0b8c6650a44614e56a\""
Mar 7 01:07:41.798519 containerd[1488]: time="2026-03-07T01:07:41.796613487Z" level=info msg="StartContainer for \"22759cf2d7b1cf35c608c10aca1716b75c0bd7683a6f8a0b8c6650a44614e56a\""
Mar 7 01:07:42.197438 systemd[1]: Started cri-containerd-22759cf2d7b1cf35c608c10aca1716b75c0bd7683a6f8a0b8c6650a44614e56a.scope - libcontainer container 22759cf2d7b1cf35c608c10aca1716b75c0bd7683a6f8a0b8c6650a44614e56a.
Mar 7 01:07:42.205899 containerd[1488]: time="2026-03-07T01:07:42.205842161Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:07:42.227806 containerd[1488]: time="2026-03-07T01:07:42.227589101Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77"
Mar 7 01:07:42.248504 containerd[1488]: time="2026-03-07T01:07:42.246999818Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 638.726853ms"
Mar 7 01:07:42.248504 containerd[1488]: time="2026-03-07T01:07:42.247086039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\""
Mar 7 01:07:42.262176 containerd[1488]: time="2026-03-07T01:07:42.261288300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\""
Mar 7 01:07:42.299309 containerd[1488]: time="2026-03-07T01:07:42.296180880Z" level=info msg="CreateContainer within sandbox \"fb413bdba32d068ddb7e497021eef5d50664703da79740a6d704d21ec5ea2076\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Mar 7 01:07:42.647879 systemd[1]: Started sshd@10-10.0.0.13:22-10.0.0.1:50646.service - OpenSSH per-connection server daemon (10.0.0.1:50646).
Mar 7 01:07:42.816586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1645891212.mount: Deactivated successfully.
Mar 7 01:07:43.035703 containerd[1488]: time="2026-03-07T01:07:43.033272888Z" level=info msg="CreateContainer within sandbox \"fb413bdba32d068ddb7e497021eef5d50664703da79740a6d704d21ec5ea2076\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3226ca0fce67efc2c1a12ef1127e3334e8690611fb3181f0dacdec4d37819972\""
Mar 7 01:07:43.099113 containerd[1488]: time="2026-03-07T01:07:43.089638038Z" level=info msg="StartContainer for \"22759cf2d7b1cf35c608c10aca1716b75c0bd7683a6f8a0b8c6650a44614e56a\" returns successfully"
Mar 7 01:07:43.099113 containerd[1488]: time="2026-03-07T01:07:43.090069172Z" level=info msg="StartContainer for \"3226ca0fce67efc2c1a12ef1127e3334e8690611fb3181f0dacdec4d37819972\""
Mar 7 01:07:43.099469 sshd[5179]: Accepted publickey for core from 10.0.0.1 port 50646 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:07:43.122993 sshd[5179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:07:43.181145 systemd-logind[1475]: New session 11 of user core.
Mar 7 01:07:43.204979 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 7 01:07:43.450296 kubelet[2648]: I0307 01:07:43.441211 2648 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-sjd5q" podStartSLOduration=109.76742089 podStartE2EDuration="2m41.441186075s" podCreationTimestamp="2026-03-07 01:05:02 +0000 UTC" firstStartedPulling="2026-03-07 01:06:49.933009292 +0000 UTC m=+166.292450423" lastFinishedPulling="2026-03-07 01:07:41.606774478 +0000 UTC m=+217.966215608" observedRunningTime="2026-03-07 01:07:43.423885914 +0000 UTC m=+219.783327075" watchObservedRunningTime="2026-03-07 01:07:43.441186075 +0000 UTC m=+219.800627206"
Mar 7 01:07:43.537637 systemd[1]: Started cri-containerd-3226ca0fce67efc2c1a12ef1127e3334e8690611fb3181f0dacdec4d37819972.scope - libcontainer container 3226ca0fce67efc2c1a12ef1127e3334e8690611fb3181f0dacdec4d37819972.
Mar 7 01:07:44.244920 containerd[1488]: time="2026-03-07T01:07:44.244521635Z" level=info msg="StartContainer for \"3226ca0fce67efc2c1a12ef1127e3334e8690611fb3181f0dacdec4d37819972\" returns successfully"
Mar 7 01:07:45.126218 sshd[5179]: pam_unix(sshd:session): session closed for user core
Mar 7 01:07:45.157198 update_engine[1479]: I20260307 01:07:45.154795 1479 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 7 01:07:45.157198 update_engine[1479]: I20260307 01:07:45.155220 1479 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 7 01:07:45.157198 update_engine[1479]: I20260307 01:07:45.155511 1479 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 7 01:07:45.186782 update_engine[1479]: E20260307 01:07:45.186039 1479 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 7 01:07:45.186782 update_engine[1479]: I20260307 01:07:45.186172 1479 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Mar 7 01:07:45.215425 systemd[1]: sshd@10-10.0.0.13:22-10.0.0.1:50646.service: Deactivated successfully.
Mar 7 01:07:45.259362 systemd[1]: session-11.scope: Deactivated successfully.
Mar 7 01:07:45.308137 systemd-logind[1475]: Session 11 logged out. Waiting for processes to exit.
Mar 7 01:07:45.329634 systemd-logind[1475]: Removed session 11.
Mar 7 01:07:49.282319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3929019785.mount: Deactivated successfully.
Mar 7 01:07:49.610940 containerd[1488]: time="2026-03-07T01:07:49.608479661Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475"
Mar 7 01:07:49.610940 containerd[1488]: time="2026-03-07T01:07:49.608685364Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:07:49.619964 containerd[1488]: time="2026-03-07T01:07:49.619581010Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:07:49.654784 containerd[1488]: time="2026-03-07T01:07:49.653467310Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:07:49.655629 containerd[1488]: time="2026-03-07T01:07:49.655364079Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 7.393947079s"
Mar 7 01:07:49.655629 containerd[1488]: time="2026-03-07T01:07:49.655449028Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\""
Mar 7 01:07:49.717507 containerd[1488]: time="2026-03-07T01:07:49.716905251Z" level=info msg="CreateContainer within sandbox \"85745c8ad5511ef1ee25527c2e5559368ed89f0c601c17a6928346813318ed25\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Mar 7 01:07:49.955226 containerd[1488]: time="2026-03-07T01:07:49.955041029Z" level=info msg="CreateContainer within sandbox \"85745c8ad5511ef1ee25527c2e5559368ed89f0c601c17a6928346813318ed25\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"c4b40e0f48314add8dec5041b1dc89788dc8d167f6b0ff396748b9b591bd61d5\""
Mar 7 01:07:49.974885 containerd[1488]: time="2026-03-07T01:07:49.965087436Z" level=info msg="StartContainer for \"c4b40e0f48314add8dec5041b1dc89788dc8d167f6b0ff396748b9b591bd61d5\""
Mar 7 01:07:50.288167 systemd[1]: Started sshd@11-10.0.0.13:22-10.0.0.1:52192.service - OpenSSH per-connection server daemon (10.0.0.1:52192).
Mar 7 01:07:50.409976 systemd[1]: Started cri-containerd-c4b40e0f48314add8dec5041b1dc89788dc8d167f6b0ff396748b9b591bd61d5.scope - libcontainer container c4b40e0f48314add8dec5041b1dc89788dc8d167f6b0ff396748b9b591bd61d5.
Mar 7 01:07:50.754009 sshd[5322]: Accepted publickey for core from 10.0.0.1 port 52192 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:07:50.763532 kubelet[2648]: E0307 01:07:50.759764 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:07:50.781535 sshd[5322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:07:50.809133 systemd-logind[1475]: New session 12 of user core.
Mar 7 01:07:50.825621 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 7 01:07:50.930766 containerd[1488]: time="2026-03-07T01:07:50.930598425Z" level=info msg="StartContainer for \"c4b40e0f48314add8dec5041b1dc89788dc8d167f6b0ff396748b9b591bd61d5\" returns successfully"
Mar 7 01:07:51.903332 kubelet[2648]: I0307 01:07:51.894175 2648 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-c65f4d59c-5hmjn" podStartSLOduration=118.758605806 podStartE2EDuration="2m50.894150662s" podCreationTimestamp="2026-03-07 01:05:01 +0000 UTC" firstStartedPulling="2026-03-07 01:06:50.117266848 +0000 UTC m=+166.476707978" lastFinishedPulling="2026-03-07 01:07:42.252811703 +0000 UTC m=+218.612252834" observedRunningTime="2026-03-07 01:07:44.563243936 +0000 UTC m=+220.922685066" watchObservedRunningTime="2026-03-07 01:07:51.894150662 +0000 UTC m=+228.253591802"
Mar 7 01:07:52.812862 sshd[5322]: pam_unix(sshd:session): session closed for user core
Mar 7 01:07:52.830427 systemd[1]: sshd@11-10.0.0.13:22-10.0.0.1:52192.service: Deactivated successfully.
Mar 7 01:07:52.844659 systemd[1]: session-12.scope: Deactivated successfully.
Mar 7 01:07:52.860978 systemd-logind[1475]: Session 12 logged out. Waiting for processes to exit.
Mar 7 01:07:52.895545 systemd-logind[1475]: Removed session 12.
Mar 7 01:07:55.159193 update_engine[1479]: I20260307 01:07:55.157660 1479 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 7 01:07:55.159193 update_engine[1479]: I20260307 01:07:55.158155 1479 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 7 01:07:55.159193 update_engine[1479]: I20260307 01:07:55.158936 1479 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 7 01:07:55.210482 update_engine[1479]: E20260307 01:07:55.208000 1479 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 7 01:07:55.210482 update_engine[1479]: I20260307 01:07:55.208165 1479 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 7 01:07:55.210482 update_engine[1479]: I20260307 01:07:55.208194 1479 omaha_request_action.cc:617] Omaha request response:
Mar 7 01:07:55.214084 update_engine[1479]: E20260307 01:07:55.213020 1479 omaha_request_action.cc:636] Omaha request network transfer failed.
Mar 7 01:07:55.214084 update_engine[1479]: I20260307 01:07:55.213110 1479 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Mar 7 01:07:55.214084 update_engine[1479]: I20260307 01:07:55.213128 1479 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 7 01:07:55.214084 update_engine[1479]: I20260307 01:07:55.213142 1479 update_attempter.cc:306] Processing Done.
Mar 7 01:07:55.214084 update_engine[1479]: E20260307 01:07:55.213169 1479 update_attempter.cc:619] Update failed.
Mar 7 01:07:55.214084 update_engine[1479]: I20260307 01:07:55.213181 1479 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Mar 7 01:07:55.214084 update_engine[1479]: I20260307 01:07:55.213193 1479 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Mar 7 01:07:55.214084 update_engine[1479]: I20260307 01:07:55.213205 1479 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Mar 7 01:07:55.214084 update_engine[1479]: I20260307 01:07:55.213346 1479 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 7 01:07:55.214084 update_engine[1479]: I20260307 01:07:55.213423 1479 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 7 01:07:55.214084 update_engine[1479]: I20260307 01:07:55.213441 1479 omaha_request_action.cc:272] Request:
Mar 7 01:07:55.214084 update_engine[1479]:
Mar 7 01:07:55.214084 update_engine[1479]:
Mar 7 01:07:55.214084 update_engine[1479]:
Mar 7 01:07:55.214084 update_engine[1479]:
Mar 7 01:07:55.214084 update_engine[1479]:
Mar 7 01:07:55.214084 update_engine[1479]:
Mar 7 01:07:55.214084 update_engine[1479]: I20260307 01:07:55.213454 1479 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 7 01:07:55.218808 update_engine[1479]: I20260307 01:07:55.218276 1479 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 7 01:07:55.225250 locksmithd[1525]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Mar 7 01:07:55.225896 update_engine[1479]: I20260307 01:07:55.225304 1479 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 7 01:07:55.260449 update_engine[1479]: E20260307 01:07:55.258800 1479 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 7 01:07:55.260449 update_engine[1479]: I20260307 01:07:55.258956 1479 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 7 01:07:55.260449 update_engine[1479]: I20260307 01:07:55.258986 1479 omaha_request_action.cc:617] Omaha request response:
Mar 7 01:07:55.260449 update_engine[1479]: I20260307 01:07:55.259004 1479 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 7 01:07:55.260449 update_engine[1479]: I20260307 01:07:55.259018 1479 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 7 01:07:55.260449 update_engine[1479]: I20260307 01:07:55.259032 1479 update_attempter.cc:306] Processing Done.
Mar 7 01:07:55.260449 update_engine[1479]: I20260307 01:07:55.259048 1479 update_attempter.cc:310] Error event sent.
Mar 7 01:07:55.260449 update_engine[1479]: I20260307 01:07:55.259071 1479 update_check_scheduler.cc:74] Next update check in 45m44s
Mar 7 01:07:55.260996 locksmithd[1525]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Mar 7 01:07:57.934200 systemd[1]: Started sshd@12-10.0.0.13:22-10.0.0.1:52228.service - OpenSSH per-connection server daemon (10.0.0.1:52228).
Mar 7 01:07:58.289778 sshd[5381]: Accepted publickey for core from 10.0.0.1 port 52228 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:07:58.298930 sshd[5381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:07:58.339935 systemd-logind[1475]: New session 13 of user core.
Mar 7 01:07:58.369959 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 7 01:07:59.463173 sshd[5381]: pam_unix(sshd:session): session closed for user core
Mar 7 01:07:59.524869 systemd[1]: sshd@12-10.0.0.13:22-10.0.0.1:52228.service: Deactivated successfully.
Mar 7 01:07:59.562609 systemd[1]: session-13.scope: Deactivated successfully.
Mar 7 01:07:59.583064 systemd-logind[1475]: Session 13 logged out. Waiting for processes to exit.
Mar 7 01:07:59.592098 systemd-logind[1475]: Removed session 13.
Mar 7 01:08:01.065330 kubelet[2648]: E0307 01:08:01.065119 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:08:03.848352 kubelet[2648]: E0307 01:08:03.822576 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:08:04.537013 systemd[1]: Started sshd@13-10.0.0.13:22-10.0.0.1:38524.service - OpenSSH per-connection server daemon (10.0.0.1:38524).
Mar 7 01:08:05.191951 sshd[5396]: Accepted publickey for core from 10.0.0.1 port 38524 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:08:05.218483 sshd[5396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:08:05.252510 systemd-logind[1475]: New session 14 of user core.
Mar 7 01:08:05.293330 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 7 01:08:06.127876 sshd[5396]: pam_unix(sshd:session): session closed for user core
Mar 7 01:08:06.157320 systemd[1]: sshd@13-10.0.0.13:22-10.0.0.1:38524.service: Deactivated successfully.
Mar 7 01:08:06.196615 systemd[1]: session-14.scope: Deactivated successfully.
Mar 7 01:08:06.212871 systemd-logind[1475]: Session 14 logged out. Waiting for processes to exit.
Mar 7 01:08:06.224474 systemd-logind[1475]: Removed session 14.
Mar 7 01:08:06.836193 kubelet[2648]: I0307 01:08:06.834274 2648 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-7764966df5-bspf4" podStartSLOduration=29.691226102 podStartE2EDuration="1m29.834253734s" podCreationTimestamp="2026-03-07 01:06:37 +0000 UTC" firstStartedPulling="2026-03-07 01:06:49.528631633 +0000 UTC m=+165.888072762" lastFinishedPulling="2026-03-07 01:07:49.671659265 +0000 UTC m=+226.031100394" observedRunningTime="2026-03-07 01:07:51.917878153 +0000 UTC m=+228.277319303" watchObservedRunningTime="2026-03-07 01:08:06.834253734 +0000 UTC m=+243.193694895"
Mar 7 01:08:11.270678 systemd[1]: Started sshd@14-10.0.0.13:22-10.0.0.1:51954.service - OpenSSH per-connection server daemon (10.0.0.1:51954).
Mar 7 01:08:11.590703 sshd[5439]: Accepted publickey for core from 10.0.0.1 port 51954 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:08:11.607563 sshd[5439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:08:11.668017 systemd-logind[1475]: New session 15 of user core.
Mar 7 01:08:11.705922 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 7 01:08:13.201113 sshd[5439]: pam_unix(sshd:session): session closed for user core
Mar 7 01:08:13.226164 systemd[1]: sshd@14-10.0.0.13:22-10.0.0.1:51954.service: Deactivated successfully.
Mar 7 01:08:13.233257 systemd[1]: session-15.scope: Deactivated successfully.
Mar 7 01:08:13.246858 systemd-logind[1475]: Session 15 logged out. Waiting for processes to exit.
Mar 7 01:08:13.260254 systemd-logind[1475]: Removed session 15.
Mar 7 01:08:17.902634 kubelet[2648]: E0307 01:08:17.858557 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:08:18.406277 systemd[1]: Started sshd@15-10.0.0.13:22-10.0.0.1:51962.service - OpenSSH per-connection server daemon (10.0.0.1:51962).
Mar 7 01:08:19.010497 sshd[5484]: Accepted publickey for core from 10.0.0.1 port 51962 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:08:19.018512 sshd[5484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:08:19.085152 systemd-logind[1475]: New session 16 of user core.
Mar 7 01:08:19.106956 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 7 01:08:19.761534 kubelet[2648]: E0307 01:08:19.761352 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:08:21.212122 sshd[5484]: pam_unix(sshd:session): session closed for user core
Mar 7 01:08:21.233976 systemd-logind[1475]: Session 16 logged out. Waiting for processes to exit.
Mar 7 01:08:21.253002 systemd[1]: sshd@15-10.0.0.13:22-10.0.0.1:51962.service: Deactivated successfully.
Mar 7 01:08:21.280659 systemd[1]: session-16.scope: Deactivated successfully.
Mar 7 01:08:21.294570 systemd-logind[1475]: Removed session 16.
Mar 7 01:08:25.634506 kubelet[2648]: E0307 01:08:25.630710 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:08:26.305050 systemd[1]: Started sshd@16-10.0.0.13:22-10.0.0.1:52620.service - OpenSSH per-connection server daemon (10.0.0.1:52620).
Mar 7 01:08:26.488581 sshd[5540]: Accepted publickey for core from 10.0.0.1 port 52620 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:08:26.497915 sshd[5540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:08:26.538089 systemd-logind[1475]: New session 17 of user core.
Mar 7 01:08:26.594067 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 7 01:08:27.934856 sshd[5540]: pam_unix(sshd:session): session closed for user core
Mar 7 01:08:28.016485 systemd[1]: sshd@16-10.0.0.13:22-10.0.0.1:52620.service: Deactivated successfully.
Mar 7 01:08:28.026973 systemd-logind[1475]: Session 17 logged out. Waiting for processes to exit.
Mar 7 01:08:28.048676 systemd[1]: session-17.scope: Deactivated successfully.
Mar 7 01:08:28.061450 systemd-logind[1475]: Removed session 17.
Mar 7 01:08:33.031037 systemd[1]: Started sshd@17-10.0.0.13:22-10.0.0.1:55498.service - OpenSSH per-connection server daemon (10.0.0.1:55498).
Mar 7 01:08:33.244930 sshd[5578]: Accepted publickey for core from 10.0.0.1 port 55498 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:08:33.263388 sshd[5578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:08:33.347001 systemd-logind[1475]: New session 18 of user core.
Mar 7 01:08:33.395013 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 7 01:08:33.921241 sshd[5578]: pam_unix(sshd:session): session closed for user core
Mar 7 01:08:34.002305 systemd[1]: sshd@17-10.0.0.13:22-10.0.0.1:55498.service: Deactivated successfully.
Mar 7 01:08:34.020918 systemd[1]: session-18.scope: Deactivated successfully.
Mar 7 01:08:34.030534 systemd-logind[1475]: Session 18 logged out. Waiting for processes to exit.
Mar 7 01:08:34.038164 systemd-logind[1475]: Removed session 18.
Mar 7 01:08:37.948043 systemd[1]: run-containerd-runc-k8s.io-23a5bccb39efd246970799aee1a0248df1bdbf62c7f1d22bb61e46bff8fadc5c-runc.5F1pnL.mount: Deactivated successfully.
Mar 7 01:08:40.933262 systemd[1]: Started sshd@18-10.0.0.13:22-10.0.0.1:55536.service - OpenSSH per-connection server daemon (10.0.0.1:55536).
Mar 7 01:08:42.115997 sshd[5615]: Accepted publickey for core from 10.0.0.1 port 55536 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:08:42.171263 sshd[5615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:08:42.241826 systemd-logind[1475]: New session 19 of user core.
Mar 7 01:08:42.290140 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 7 01:08:47.438391 sshd[5615]: pam_unix(sshd:session): session closed for user core
Mar 7 01:08:47.519784 systemd[1]: sshd@18-10.0.0.13:22-10.0.0.1:55536.service: Deactivated successfully.
Mar 7 01:08:47.599240 systemd[1]: session-19.scope: Deactivated successfully.
Mar 7 01:08:47.605475 systemd[1]: session-19.scope: Consumed 1.049s CPU time.
Mar 7 01:08:47.612478 systemd-logind[1475]: Session 19 logged out. Waiting for processes to exit.
Mar 7 01:08:47.633243 systemd-logind[1475]: Removed session 19.
Mar 7 01:08:50.913943 kubelet[2648]: E0307 01:08:50.880634 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:08:52.523877 systemd[1]: Started sshd@19-10.0.0.13:22-10.0.0.1:52440.service - OpenSSH per-connection server daemon (10.0.0.1:52440).
Mar 7 01:08:52.725596 sshd[5697]: Accepted publickey for core from 10.0.0.1 port 52440 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:08:52.740109 sshd[5697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:08:52.771364 kubelet[2648]: E0307 01:08:52.771089 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:08:52.814644 systemd-logind[1475]: New session 20 of user core.
Mar 7 01:08:52.822622 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 7 01:08:53.570659 sshd[5697]: pam_unix(sshd:session): session closed for user core
Mar 7 01:08:53.599616 systemd[1]: sshd@19-10.0.0.13:22-10.0.0.1:52440.service: Deactivated successfully.
Mar 7 01:08:53.616558 systemd[1]: session-20.scope: Deactivated successfully.
Mar 7 01:08:53.621184 systemd-logind[1475]: Session 20 logged out. Waiting for processes to exit.
Mar 7 01:08:53.631019 systemd-logind[1475]: Removed session 20.
Mar 7 01:08:58.667767 systemd[1]: Started sshd@20-10.0.0.13:22-10.0.0.1:52472.service - OpenSSH per-connection server daemon (10.0.0.1:52472).
Mar 7 01:08:58.928928 sshd[5735]: Accepted publickey for core from 10.0.0.1 port 52472 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:08:58.941376 sshd[5735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:08:59.001176 systemd-logind[1475]: New session 21 of user core.
Mar 7 01:08:59.018029 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 7 01:08:59.573324 sshd[5735]: pam_unix(sshd:session): session closed for user core
Mar 7 01:08:59.609148 systemd[1]: sshd@20-10.0.0.13:22-10.0.0.1:52472.service: Deactivated successfully.
Mar 7 01:08:59.627022 systemd[1]: session-21.scope: Deactivated successfully.
Mar 7 01:08:59.633584 systemd-logind[1475]: Session 21 logged out. Waiting for processes to exit.
Mar 7 01:08:59.639662 systemd-logind[1475]: Removed session 21.
Mar 7 01:09:05.351068 systemd[1]: Started sshd@21-10.0.0.13:22-10.0.0.1:42992.service - OpenSSH per-connection server daemon (10.0.0.1:42992).
Mar 7 01:09:05.887237 sshd[5752]: Accepted publickey for core from 10.0.0.1 port 42992 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:09:05.907069 sshd[5752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:09:05.994287 systemd-logind[1475]: New session 22 of user core.
Mar 7 01:09:06.019684 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 7 01:09:06.713849 sshd[5752]: pam_unix(sshd:session): session closed for user core
Mar 7 01:09:06.781337 systemd[1]: sshd@21-10.0.0.13:22-10.0.0.1:42992.service: Deactivated successfully.
Mar 7 01:09:06.794045 systemd[1]: session-22.scope: Deactivated successfully.
Mar 7 01:09:06.801031 systemd-logind[1475]: Session 22 logged out. Waiting for processes to exit.
Mar 7 01:09:06.817985 systemd-logind[1475]: Removed session 22.
Mar 7 01:09:10.785972 kubelet[2648]: E0307 01:09:10.761011 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:09:11.795348 systemd[1]: Started sshd@22-10.0.0.13:22-10.0.0.1:54230.service - OpenSSH per-connection server daemon (10.0.0.1:54230).
Mar 7 01:09:12.022808 sshd[5791]: Accepted publickey for core from 10.0.0.1 port 54230 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:09:12.034189 sshd[5791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:09:12.054838 systemd-logind[1475]: New session 23 of user core.
Mar 7 01:09:12.076239 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 7 01:09:12.731016 sshd[5791]: pam_unix(sshd:session): session closed for user core
Mar 7 01:09:12.754295 systemd[1]: sshd@22-10.0.0.13:22-10.0.0.1:54230.service: Deactivated successfully.
Mar 7 01:09:12.797121 systemd[1]: session-23.scope: Deactivated successfully.
Mar 7 01:09:12.827302 systemd-logind[1475]: Session 23 logged out. Waiting for processes to exit.
Mar 7 01:09:12.841305 systemd-logind[1475]: Removed session 23.
Mar 7 01:09:17.887403 systemd[1]: Started sshd@23-10.0.0.13:22-10.0.0.1:54282.service - OpenSSH per-connection server daemon (10.0.0.1:54282).
Mar 7 01:09:18.200912 sshd[5829]: Accepted publickey for core from 10.0.0.1 port 54282 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:09:18.208200 sshd[5829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:09:18.250111 systemd-logind[1475]: New session 24 of user core.
Mar 7 01:09:18.278901 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 7 01:09:19.615943 sshd[5829]: pam_unix(sshd:session): session closed for user core
Mar 7 01:09:19.655900 systemd[1]: sshd@23-10.0.0.13:22-10.0.0.1:54282.service: Deactivated successfully.
Mar 7 01:09:19.668045 systemd[1]: session-24.scope: Deactivated successfully.
Mar 7 01:09:19.689415 systemd-logind[1475]: Session 24 logged out. Waiting for processes to exit.
Mar 7 01:09:19.695379 systemd-logind[1475]: Removed session 24.
Mar 7 01:09:24.714090 systemd[1]: Started sshd@24-10.0.0.13:22-10.0.0.1:39980.service - OpenSSH per-connection server daemon (10.0.0.1:39980).
Mar 7 01:09:24.765208 kubelet[2648]: E0307 01:09:24.765160 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:09:25.309070 sshd[5844]: Accepted publickey for core from 10.0.0.1 port 39980 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:09:25.314188 sshd[5844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:09:25.361024 systemd-logind[1475]: New session 25 of user core.
Mar 7 01:09:25.386847 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 7 01:09:26.233230 sshd[5844]: pam_unix(sshd:session): session closed for user core
Mar 7 01:09:26.250197 systemd[1]: sshd@24-10.0.0.13:22-10.0.0.1:39980.service: Deactivated successfully.
Mar 7 01:09:26.263657 systemd[1]: session-25.scope: Deactivated successfully.
Mar 7 01:09:26.267238 systemd-logind[1475]: Session 25 logged out. Waiting for processes to exit.
Mar 7 01:09:26.284103 systemd-logind[1475]: Removed session 25.
Mar 7 01:09:31.319119 systemd[1]: Started sshd@25-10.0.0.13:22-10.0.0.1:49364.service - OpenSSH per-connection server daemon (10.0.0.1:49364).
Mar 7 01:09:31.465602 sshd[5880]: Accepted publickey for core from 10.0.0.1 port 49364 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:09:31.475390 sshd[5880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:09:31.514504 systemd-logind[1475]: New session 26 of user core.
Mar 7 01:09:31.532685 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 7 01:09:32.289169 sshd[5880]: pam_unix(sshd:session): session closed for user core
Mar 7 01:09:32.318160 systemd-logind[1475]: Session 26 logged out. Waiting for processes to exit.
Mar 7 01:09:32.323357 systemd[1]: sshd@25-10.0.0.13:22-10.0.0.1:49364.service: Deactivated successfully.
Mar 7 01:09:32.343901 systemd[1]: session-26.scope: Deactivated successfully.
Mar 7 01:09:32.355380 systemd-logind[1475]: Removed session 26.
Mar 7 01:09:33.764335 kubelet[2648]: E0307 01:09:33.759446 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:09:37.383951 systemd[1]: Started sshd@26-10.0.0.13:22-10.0.0.1:49412.service - OpenSSH per-connection server daemon (10.0.0.1:49412).
Mar 7 01:09:37.530618 sshd[5919]: Accepted publickey for core from 10.0.0.1 port 49412 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:09:37.540292 sshd[5919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:09:37.565594 systemd-logind[1475]: New session 27 of user core.
Mar 7 01:09:37.589695 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 7 01:09:38.289165 sshd[5919]: pam_unix(sshd:session): session closed for user core
Mar 7 01:09:38.309592 systemd[1]: sshd@26-10.0.0.13:22-10.0.0.1:49412.service: Deactivated successfully.
Mar 7 01:09:38.313951 systemd[1]: session-27.scope: Deactivated successfully.
Mar 7 01:09:38.325121 systemd-logind[1475]: Session 27 logged out. Waiting for processes to exit.
Mar 7 01:09:38.327872 systemd-logind[1475]: Removed session 27.
Mar 7 01:09:43.378395 systemd[1]: Started sshd@27-10.0.0.13:22-10.0.0.1:41298.service - OpenSSH per-connection server daemon (10.0.0.1:41298).
Mar 7 01:09:43.604532 sshd[5989]: Accepted publickey for core from 10.0.0.1 port 41298 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:09:43.603966 sshd[5989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:09:43.649634 systemd-logind[1475]: New session 28 of user core.
Mar 7 01:09:43.661071 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 7 01:09:44.441148 sshd[5989]: pam_unix(sshd:session): session closed for user core
Mar 7 01:09:44.459449 systemd-logind[1475]: Session 28 logged out. Waiting for processes to exit.
Mar 7 01:09:44.463390 systemd[1]: sshd@27-10.0.0.13:22-10.0.0.1:41298.service: Deactivated successfully.
Mar 7 01:09:44.495402 systemd[1]: session-28.scope: Deactivated successfully.
Mar 7 01:09:44.502377 systemd-logind[1475]: Removed session 28.
Mar 7 01:09:44.797519 kubelet[2648]: E0307 01:09:44.788985 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:09:46.781107 kubelet[2648]: E0307 01:09:46.779203 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:09:49.528185 systemd[1]: Started sshd@28-10.0.0.13:22-10.0.0.1:41300.service - OpenSSH per-connection server daemon (10.0.0.1:41300).
Mar 7 01:09:49.810851 sshd[6027]: Accepted publickey for core from 10.0.0.1 port 41300 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:09:49.812683 sshd[6027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:09:49.919080 systemd-logind[1475]: New session 29 of user core.
Mar 7 01:09:49.960953 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 7 01:09:51.321639 sshd[6027]: pam_unix(sshd:session): session closed for user core
Mar 7 01:09:51.368762 systemd[1]: sshd@28-10.0.0.13:22-10.0.0.1:41300.service: Deactivated successfully.
Mar 7 01:09:51.406891 systemd[1]: session-29.scope: Deactivated successfully.
Mar 7 01:09:51.438598 systemd-logind[1475]: Session 29 logged out. Waiting for processes to exit.
Mar 7 01:09:51.453549 systemd-logind[1475]: Removed session 29.
Mar 7 01:09:56.453042 systemd[1]: Started sshd@29-10.0.0.13:22-10.0.0.1:55234.service - OpenSSH per-connection server daemon (10.0.0.1:55234).
Mar 7 01:09:56.814275 sshd[6042]: Accepted publickey for core from 10.0.0.1 port 55234 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:09:56.830281 sshd[6042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:09:56.883186 systemd-logind[1475]: New session 30 of user core.
Mar 7 01:09:56.943313 systemd[1]: Started session-30.scope - Session 30 of User core.
Mar 7 01:09:57.804013 sshd[6042]: pam_unix(sshd:session): session closed for user core
Mar 7 01:09:57.832288 systemd[1]: sshd@29-10.0.0.13:22-10.0.0.1:55234.service: Deactivated successfully.
Mar 7 01:09:57.845347 systemd[1]: session-30.scope: Deactivated successfully.
Mar 7 01:09:57.852981 systemd-logind[1475]: Session 30 logged out. Waiting for processes to exit.
Mar 7 01:09:57.864858 systemd-logind[1475]: Removed session 30.
Mar 7 01:10:02.838267 systemd[1]: Started sshd@30-10.0.0.13:22-10.0.0.1:59288.service - OpenSSH per-connection server daemon (10.0.0.1:59288).
Mar 7 01:10:03.189798 sshd[6112]: Accepted publickey for core from 10.0.0.1 port 59288 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:10:03.204999 sshd[6112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:10:03.237781 systemd-logind[1475]: New session 31 of user core.
Mar 7 01:10:03.269256 systemd[1]: Started session-31.scope - Session 31 of User core.
Mar 7 01:10:04.115453 sshd[6112]: pam_unix(sshd:session): session closed for user core
Mar 7 01:10:04.131300 systemd[1]: sshd@30-10.0.0.13:22-10.0.0.1:59288.service: Deactivated successfully.
Mar 7 01:10:04.146017 systemd[1]: session-31.scope: Deactivated successfully.
Mar 7 01:10:04.150995 systemd-logind[1475]: Session 31 logged out. Waiting for processes to exit.
Mar 7 01:10:04.165102 systemd-logind[1475]: Removed session 31.
Mar 7 01:10:05.792907 kubelet[2648]: E0307 01:10:05.792864 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:10:06.803656 systemd[1]: run-containerd-runc-k8s.io-23a5bccb39efd246970799aee1a0248df1bdbf62c7f1d22bb61e46bff8fadc5c-runc.PC9h2x.mount: Deactivated successfully.
Mar 7 01:10:09.276324 systemd[1]: Started sshd@31-10.0.0.13:22-10.0.0.1:59294.service - OpenSSH per-connection server daemon (10.0.0.1:59294).
Mar 7 01:10:10.329625 sshd[6151]: Accepted publickey for core from 10.0.0.1 port 59294 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:10:10.402910 sshd[6151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:10:10.453339 systemd-logind[1475]: New session 32 of user core.
Mar 7 01:10:10.541123 systemd[1]: Started session-32.scope - Session 32 of User core.
Mar 7 01:10:13.778151 kubelet[2648]: E0307 01:10:13.761901 2648 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.006s"
Mar 7 01:10:13.955091 sshd[6151]: pam_unix(sshd:session): session closed for user core
Mar 7 01:10:13.987419 systemd[1]: sshd@31-10.0.0.13:22-10.0.0.1:59294.service: Deactivated successfully.
Mar 7 01:10:14.029481 systemd[1]: session-32.scope: Deactivated successfully.
Mar 7 01:10:14.038700 systemd-logind[1475]: Session 32 logged out. Waiting for processes to exit.
Mar 7 01:10:14.054267 systemd-logind[1475]: Removed session 32.
Mar 7 01:10:19.127187 systemd[1]: Started sshd@32-10.0.0.13:22-10.0.0.1:41126.service - OpenSSH per-connection server daemon (10.0.0.1:41126).
Mar 7 01:10:19.386900 sshd[6189]: Accepted publickey for core from 10.0.0.1 port 41126 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:10:19.391813 sshd[6189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:10:19.446706 systemd-logind[1475]: New session 33 of user core.
Mar 7 01:10:19.466447 systemd[1]: Started session-33.scope - Session 33 of User core.
Mar 7 01:10:20.180227 sshd[6189]: pam_unix(sshd:session): session closed for user core
Mar 7 01:10:20.201710 systemd[1]: sshd@32-10.0.0.13:22-10.0.0.1:41126.service: Deactivated successfully.
Mar 7 01:10:20.208950 systemd[1]: session-33.scope: Deactivated successfully.
Mar 7 01:10:20.215838 systemd-logind[1475]: Session 33 logged out. Waiting for processes to exit.
Mar 7 01:10:20.222614 systemd-logind[1475]: Removed session 33.
Mar 7 01:10:20.809265 kubelet[2648]: E0307 01:10:20.809160 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:10:25.333782 systemd[1]: Started sshd@33-10.0.0.13:22-10.0.0.1:46820.service - OpenSSH per-connection server daemon (10.0.0.1:46820).
Mar 7 01:10:25.545851 sshd[6205]: Accepted publickey for core from 10.0.0.1 port 46820 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:10:25.552341 sshd[6205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:10:25.590095 systemd-logind[1475]: New session 34 of user core.
Mar 7 01:10:25.621925 systemd[1]: Started session-34.scope - Session 34 of User core.
Mar 7 01:10:26.450975 sshd[6205]: pam_unix(sshd:session): session closed for user core
Mar 7 01:10:26.477245 systemd[1]: sshd@33-10.0.0.13:22-10.0.0.1:46820.service: Deactivated successfully.
Mar 7 01:10:26.497412 systemd[1]: session-34.scope: Deactivated successfully.
Mar 7 01:10:26.516883 systemd-logind[1475]: Session 34 logged out. Waiting for processes to exit.
Mar 7 01:10:26.532490 systemd-logind[1475]: Removed session 34.
Mar 7 01:10:31.545904 systemd[1]: Started sshd@34-10.0.0.13:22-10.0.0.1:35646.service - OpenSSH per-connection server daemon (10.0.0.1:35646).
Mar 7 01:10:31.682876 sshd[6238]: Accepted publickey for core from 10.0.0.1 port 35646 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:10:31.695330 sshd[6238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:10:31.760641 systemd-logind[1475]: New session 35 of user core.
Mar 7 01:10:31.793054 systemd[1]: Started session-35.scope - Session 35 of User core.
Mar 7 01:10:32.355208 sshd[6238]: pam_unix(sshd:session): session closed for user core
Mar 7 01:10:32.377104 systemd[1]: sshd@34-10.0.0.13:22-10.0.0.1:35646.service: Deactivated successfully.
Mar 7 01:10:32.394792 systemd[1]: session-35.scope: Deactivated successfully.
Mar 7 01:10:32.404068 systemd-logind[1475]: Session 35 logged out. Waiting for processes to exit.
Mar 7 01:10:32.409090 systemd-logind[1475]: Removed session 35.
Mar 7 01:10:35.186987 kubelet[2648]: E0307 01:10:35.168489 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:10:35.755815 kubelet[2648]: E0307 01:10:35.754575 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:10:37.448861 systemd[1]: Started sshd@35-10.0.0.13:22-10.0.0.1:35652.service - OpenSSH per-connection server daemon (10.0.0.1:35652).
Mar 7 01:10:37.683833 sshd[6278]: Accepted publickey for core from 10.0.0.1 port 35652 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:10:37.697114 sshd[6278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:10:37.806934 systemd-logind[1475]: New session 36 of user core.
Mar 7 01:10:37.848870 systemd[1]: Started session-36.scope - Session 36 of User core.
Mar 7 01:10:38.800844 kubelet[2648]: E0307 01:10:38.792826 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:10:39.343223 sshd[6278]: pam_unix(sshd:session): session closed for user core
Mar 7 01:10:39.365394 systemd[1]: sshd@35-10.0.0.13:22-10.0.0.1:35652.service: Deactivated successfully.
Mar 7 01:10:39.391279 systemd[1]: session-36.scope: Deactivated successfully.
Mar 7 01:10:39.404157 systemd-logind[1475]: Session 36 logged out. Waiting for processes to exit.
Mar 7 01:10:39.414034 systemd-logind[1475]: Removed session 36.
Mar 7 01:10:44.413002 systemd[1]: Started sshd@36-10.0.0.13:22-10.0.0.1:60594.service - OpenSSH per-connection server daemon (10.0.0.1:60594).
Mar 7 01:10:44.859647 sshd[6336]: Accepted publickey for core from 10.0.0.1 port 60594 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:10:44.896661 systemd[1]: run-containerd-runc-k8s.io-22759cf2d7b1cf35c608c10aca1716b75c0bd7683a6f8a0b8c6650a44614e56a-runc.RixShm.mount: Deactivated successfully.
Mar 7 01:10:44.909995 sshd[6336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:10:45.016615 systemd-logind[1475]: New session 37 of user core.
Mar 7 01:10:45.038126 systemd[1]: Started session-37.scope - Session 37 of User core.
Mar 7 01:10:45.950952 sshd[6336]: pam_unix(sshd:session): session closed for user core
Mar 7 01:10:45.972457 systemd[1]: sshd@36-10.0.0.13:22-10.0.0.1:60594.service: Deactivated successfully.
Mar 7 01:10:45.990672 systemd[1]: session-37.scope: Deactivated successfully.
Mar 7 01:10:45.995955 systemd-logind[1475]: Session 37 logged out. Waiting for processes to exit.
Mar 7 01:10:46.001840 systemd-logind[1475]: Removed session 37.
Mar 7 01:10:51.050970 systemd[1]: Started sshd@37-10.0.0.13:22-10.0.0.1:53414.service - OpenSSH per-connection server daemon (10.0.0.1:53414).
Mar 7 01:10:51.217584 sshd[6371]: Accepted publickey for core from 10.0.0.1 port 53414 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:10:51.225487 sshd[6371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:10:51.254126 systemd-logind[1475]: New session 38 of user core.
Mar 7 01:10:51.278026 systemd[1]: Started session-38.scope - Session 38 of User core.
Mar 7 01:10:51.969124 sshd[6371]: pam_unix(sshd:session): session closed for user core
Mar 7 01:10:52.011886 systemd-logind[1475]: Session 38 logged out. Waiting for processes to exit.
Mar 7 01:10:52.021028 systemd[1]: sshd@37-10.0.0.13:22-10.0.0.1:53414.service: Deactivated successfully.
Mar 7 01:10:52.044396 systemd[1]: session-38.scope: Deactivated successfully.
Mar 7 01:10:52.051413 systemd-logind[1475]: Removed session 38.
Mar 7 01:10:57.064073 systemd[1]: Started sshd@38-10.0.0.13:22-10.0.0.1:53426.service - OpenSSH per-connection server daemon (10.0.0.1:53426).
Mar 7 01:10:57.321991 sshd[6386]: Accepted publickey for core from 10.0.0.1 port 53426 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:10:57.356935 sshd[6386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:10:57.462286 systemd-logind[1475]: New session 39 of user core.
Mar 7 01:10:57.497479 systemd[1]: Started session-39.scope - Session 39 of User core.
Mar 7 01:10:58.117573 sshd[6386]: pam_unix(sshd:session): session closed for user core
Mar 7 01:10:58.135003 systemd[1]: sshd@38-10.0.0.13:22-10.0.0.1:53426.service: Deactivated successfully.
Mar 7 01:10:58.146898 systemd[1]: session-39.scope: Deactivated successfully.
Mar 7 01:10:58.177464 systemd-logind[1475]: Session 39 logged out. Waiting for processes to exit.
Mar 7 01:10:58.192047 systemd-logind[1475]: Removed session 39.
Mar 7 01:10:59.801487 kubelet[2648]: E0307 01:10:59.800509 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:11:03.243575 systemd[1]: Started sshd@39-10.0.0.13:22-10.0.0.1:47354.service - OpenSSH per-connection server daemon (10.0.0.1:47354).
Mar 7 01:11:03.460517 sshd[6422]: Accepted publickey for core from 10.0.0.1 port 47354 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:11:03.485145 sshd[6422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:11:03.534313 systemd-logind[1475]: New session 40 of user core.
Mar 7 01:11:03.562102 systemd[1]: Started session-40.scope - Session 40 of User core.
Mar 7 01:11:04.747511 sshd[6422]: pam_unix(sshd:session): session closed for user core
Mar 7 01:11:04.775952 systemd-logind[1475]: Session 40 logged out. Waiting for processes to exit.
Mar 7 01:11:04.795956 systemd[1]: sshd@39-10.0.0.13:22-10.0.0.1:47354.service: Deactivated successfully.
Mar 7 01:11:04.809911 systemd[1]: session-40.scope: Deactivated successfully.
Mar 7 01:11:04.821047 systemd-logind[1475]: Removed session 40.
Mar 7 01:11:05.758046 kubelet[2648]: E0307 01:11:05.756643 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:11:09.863605 systemd[1]: Started sshd@40-10.0.0.13:22-10.0.0.1:47370.service - OpenSSH per-connection server daemon (10.0.0.1:47370).
Mar 7 01:11:10.044975 sshd[6463]: Accepted publickey for core from 10.0.0.1 port 47370 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:11:10.060816 sshd[6463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:11:10.135640 systemd-logind[1475]: New session 41 of user core.
Mar 7 01:11:10.189512 systemd[1]: Started session-41.scope - Session 41 of User core.
Mar 7 01:11:11.152621 sshd[6463]: pam_unix(sshd:session): session closed for user core
Mar 7 01:11:11.170434 systemd[1]: sshd@40-10.0.0.13:22-10.0.0.1:47370.service: Deactivated successfully.
Mar 7 01:11:11.182176 systemd[1]: session-41.scope: Deactivated successfully.
Mar 7 01:11:11.187192 systemd-logind[1475]: Session 41 logged out. Waiting for processes to exit.
Mar 7 01:11:11.199876 systemd-logind[1475]: Removed session 41.
Mar 7 01:11:15.767672 kubelet[2648]: E0307 01:11:15.757073 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:11:16.311536 systemd[1]: Started sshd@41-10.0.0.13:22-10.0.0.1:46424.service - OpenSSH per-connection server daemon (10.0.0.1:46424).
Mar 7 01:11:16.517273 sshd[6504]: Accepted publickey for core from 10.0.0.1 port 46424 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:11:16.518550 sshd[6504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:11:16.541276 systemd-logind[1475]: New session 42 of user core.
Mar 7 01:11:16.566146 systemd[1]: Started session-42.scope - Session 42 of User core.
Mar 7 01:11:17.936378 kubelet[2648]: E0307 01:11:17.936175 2648 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.166s"
Mar 7 01:11:18.305190 sshd[6504]: pam_unix(sshd:session): session closed for user core
Mar 7 01:11:18.337110 systemd[1]: sshd@41-10.0.0.13:22-10.0.0.1:46424.service: Deactivated successfully.
Mar 7 01:11:18.353814 systemd[1]: session-42.scope: Deactivated successfully.
Mar 7 01:11:18.367443 systemd-logind[1475]: Session 42 logged out. Waiting for processes to exit.
Mar 7 01:11:18.382825 systemd-logind[1475]: Removed session 42.
Mar 7 01:11:23.353382 systemd[1]: Started sshd@42-10.0.0.13:22-10.0.0.1:55032.service - OpenSSH per-connection server daemon (10.0.0.1:55032).
Mar 7 01:11:23.548041 sshd[6520]: Accepted publickey for core from 10.0.0.1 port 55032 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:11:23.564320 sshd[6520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:11:23.614489 systemd-logind[1475]: New session 43 of user core.
Mar 7 01:11:23.630097 systemd[1]: Started session-43.scope - Session 43 of User core.
Mar 7 01:11:24.310919 sshd[6520]: pam_unix(sshd:session): session closed for user core
Mar 7 01:11:24.353308 systemd[1]: sshd@42-10.0.0.13:22-10.0.0.1:55032.service: Deactivated successfully.
Mar 7 01:11:24.362885 systemd[1]: session-43.scope: Deactivated successfully.
Mar 7 01:11:24.389900 systemd-logind[1475]: Session 43 logged out. Waiting for processes to exit.
Mar 7 01:11:24.407428 systemd[1]: Started sshd@43-10.0.0.13:22-10.0.0.1:55042.service - OpenSSH per-connection server daemon (10.0.0.1:55042).
Mar 7 01:11:24.425309 systemd-logind[1475]: Removed session 43.
Mar 7 01:11:24.571920 sshd[6536]: Accepted publickey for core from 10.0.0.1 port 55042 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:11:24.588291 sshd[6536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:11:24.616479 systemd-logind[1475]: New session 44 of user core.
Mar 7 01:11:24.641395 systemd[1]: Started session-44.scope - Session 44 of User core.
Mar 7 01:11:25.581406 sshd[6536]: pam_unix(sshd:session): session closed for user core
Mar 7 01:11:25.617238 systemd[1]: sshd@43-10.0.0.13:22-10.0.0.1:55042.service: Deactivated successfully.
Mar 7 01:11:25.630120 systemd[1]: session-44.scope: Deactivated successfully.
Mar 7 01:11:25.662239 systemd-logind[1475]: Session 44 logged out. Waiting for processes to exit.
Mar 7 01:11:25.705945 systemd[1]: Started sshd@44-10.0.0.13:22-10.0.0.1:55044.service - OpenSSH per-connection server daemon (10.0.0.1:55044).
Mar 7 01:11:25.739994 systemd-logind[1475]: Removed session 44.
Mar 7 01:11:26.007909 sshd[6549]: Accepted publickey for core from 10.0.0.1 port 55044 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:11:26.023839 sshd[6549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:11:26.060414 systemd-logind[1475]: New session 45 of user core.
Mar 7 01:11:26.087661 systemd[1]: Started session-45.scope - Session 45 of User core.
Mar 7 01:11:26.861934 sshd[6549]: pam_unix(sshd:session): session closed for user core
Mar 7 01:11:26.879246 systemd-logind[1475]: Session 45 logged out. Waiting for processes to exit.
Mar 7 01:11:26.887342 systemd[1]: sshd@44-10.0.0.13:22-10.0.0.1:55044.service: Deactivated successfully.
Mar 7 01:11:26.897199 systemd[1]: session-45.scope: Deactivated successfully.
Mar 7 01:11:26.900133 systemd-logind[1475]: Removed session 45.
Mar 7 01:11:31.891252 systemd[1]: Started sshd@45-10.0.0.13:22-10.0.0.1:52520.service - OpenSSH per-connection server daemon (10.0.0.1:52520).
Mar 7 01:11:32.063229 sshd[6583]: Accepted publickey for core from 10.0.0.1 port 52520 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:11:32.077474 sshd[6583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:11:32.099380 systemd-logind[1475]: New session 46 of user core.
Mar 7 01:11:32.113100 systemd[1]: Started session-46.scope - Session 46 of User core.
Mar 7 01:11:32.784093 sshd[6583]: pam_unix(sshd:session): session closed for user core
Mar 7 01:11:32.803659 systemd[1]: sshd@45-10.0.0.13:22-10.0.0.1:52520.service: Deactivated successfully.
Mar 7 01:11:32.826194 systemd[1]: session-46.scope: Deactivated successfully.
Mar 7 01:11:32.831484 systemd-logind[1475]: Session 46 logged out. Waiting for processes to exit.
Mar 7 01:11:32.838299 systemd-logind[1475]: Removed session 46.
Mar 7 01:11:36.719480 systemd[1]: run-containerd-runc-k8s.io-23a5bccb39efd246970799aee1a0248df1bdbf62c7f1d22bb61e46bff8fadc5c-runc.ypRYmP.mount: Deactivated successfully.
Mar 7 01:11:37.912816 systemd[1]: Started sshd@46-10.0.0.13:22-10.0.0.1:52534.service - OpenSSH per-connection server daemon (10.0.0.1:52534).
Mar 7 01:11:38.541524 sshd[6654]: Accepted publickey for core from 10.0.0.1 port 52534 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:11:38.560638 sshd[6654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:11:38.645257 systemd-logind[1475]: New session 47 of user core.
Mar 7 01:11:38.684510 systemd[1]: Started session-47.scope - Session 47 of User core.
Mar 7 01:11:40.086161 sshd[6654]: pam_unix(sshd:session): session closed for user core
Mar 7 01:11:40.124101 systemd[1]: sshd@46-10.0.0.13:22-10.0.0.1:52534.service: Deactivated successfully.
Mar 7 01:11:40.143496 systemd[1]: session-47.scope: Deactivated successfully.
Mar 7 01:11:40.166594 systemd-logind[1475]: Session 47 logged out. Waiting for processes to exit.
Mar 7 01:11:40.179573 systemd-logind[1475]: Removed session 47.
Mar 7 01:11:41.757513 kubelet[2648]: E0307 01:11:41.757147 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:11:45.253819 systemd[1]: Started sshd@47-10.0.0.13:22-10.0.0.1:50130.service - OpenSSH per-connection server daemon (10.0.0.1:50130).
Mar 7 01:11:45.525174 sshd[6750]: Accepted publickey for core from 10.0.0.1 port 50130 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:11:45.534772 sshd[6750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:11:45.556110 systemd-logind[1475]: New session 48 of user core.
Mar 7 01:11:45.588434 systemd[1]: Started session-48.scope - Session 48 of User core.
Mar 7 01:11:46.657647 sshd[6750]: pam_unix(sshd:session): session closed for user core
Mar 7 01:11:46.685588 systemd[1]: sshd@47-10.0.0.13:22-10.0.0.1:50130.service: Deactivated successfully.
Mar 7 01:11:46.699802 systemd[1]: session-48.scope: Deactivated successfully.
Mar 7 01:11:46.724897 systemd-logind[1475]: Session 48 logged out. Waiting for processes to exit.
Mar 7 01:11:46.884053 systemd-logind[1475]: Removed session 48.
Mar 7 01:11:51.742855 systemd[1]: Started sshd@48-10.0.0.13:22-10.0.0.1:58704.service - OpenSSH per-connection server daemon (10.0.0.1:58704).
Mar 7 01:11:52.290287 sshd[6764]: Accepted publickey for core from 10.0.0.1 port 58704 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:11:52.300557 sshd[6764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:11:52.333146 systemd-logind[1475]: New session 49 of user core.
Mar 7 01:11:52.363889 systemd[1]: Started session-49.scope - Session 49 of User core.
Mar 7 01:11:53.355306 sshd[6764]: pam_unix(sshd:session): session closed for user core
Mar 7 01:11:53.382310 systemd[1]: sshd@48-10.0.0.13:22-10.0.0.1:58704.service: Deactivated successfully.
Mar 7 01:11:53.511491 systemd[1]: session-49.scope: Deactivated successfully.
Mar 7 01:11:53.525491 systemd-logind[1475]: Session 49 logged out. Waiting for processes to exit.
Mar 7 01:11:53.536926 systemd-logind[1475]: Removed session 49.
Mar 7 01:11:53.761890 kubelet[2648]: E0307 01:11:53.758435 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:11:55.758105 kubelet[2648]: E0307 01:11:55.758021 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:11:55.768359 kubelet[2648]: E0307 01:11:55.758681 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:11:58.420302 systemd[1]: Started sshd@49-10.0.0.13:22-10.0.0.1:58706.service - OpenSSH per-connection server daemon (10.0.0.1:58706).
Mar 7 01:11:58.618853 sshd[6800]: Accepted publickey for core from 10.0.0.1 port 58706 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:11:58.628100 sshd[6800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:11:58.670235 systemd-logind[1475]: New session 50 of user core.
Mar 7 01:11:58.711178 systemd[1]: Started session-50.scope - Session 50 of User core.
Mar 7 01:11:59.498180 sshd[6800]: pam_unix(sshd:session): session closed for user core
Mar 7 01:11:59.513393 systemd[1]: sshd@49-10.0.0.13:22-10.0.0.1:58706.service: Deactivated successfully.
Mar 7 01:11:59.525214 systemd[1]: session-50.scope: Deactivated successfully.
Mar 7 01:11:59.541929 systemd-logind[1475]: Session 50 logged out. Waiting for processes to exit.
Mar 7 01:11:59.546574 systemd-logind[1475]: Removed session 50.
Mar 7 01:12:04.663277 systemd[1]: Started sshd@50-10.0.0.13:22-10.0.0.1:60950.service - OpenSSH per-connection server daemon (10.0.0.1:60950).
Mar 7 01:12:04.905898 sshd[6814]: Accepted publickey for core from 10.0.0.1 port 60950 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:12:04.936408 sshd[6814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:12:05.028845 systemd-logind[1475]: New session 51 of user core.
Mar 7 01:12:05.057132 systemd[1]: Started session-51.scope - Session 51 of User core.
Mar 7 01:12:08.677528 kubelet[2648]: E0307 01:12:08.671437 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:12:09.317515 sshd[6814]: pam_unix(sshd:session): session closed for user core
Mar 7 01:12:09.391186 systemd-logind[1475]: Session 51 logged out. Waiting for processes to exit.
Mar 7 01:12:09.458642 systemd[1]: sshd@50-10.0.0.13:22-10.0.0.1:60950.service: Deactivated successfully.
Mar 7 01:12:09.493382 systemd[1]: session-51.scope: Deactivated successfully.
Mar 7 01:12:09.516503 systemd-logind[1475]: Removed session 51.
Mar 7 01:12:14.373093 systemd[1]: Started sshd@51-10.0.0.13:22-10.0.0.1:58366.service - OpenSSH per-connection server daemon (10.0.0.1:58366).
Mar 7 01:12:14.538769 sshd[6854]: Accepted publickey for core from 10.0.0.1 port 58366 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:12:14.550215 sshd[6854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:12:14.598971 systemd-logind[1475]: New session 52 of user core.
Mar 7 01:12:14.652144 systemd[1]: Started session-52.scope - Session 52 of User core.
Mar 7 01:12:15.454578 sshd[6854]: pam_unix(sshd:session): session closed for user core
Mar 7 01:12:15.469583 systemd-logind[1475]: Session 52 logged out. Waiting for processes to exit.
Mar 7 01:12:15.479069 systemd[1]: sshd@51-10.0.0.13:22-10.0.0.1:58366.service: Deactivated successfully.
Mar 7 01:12:15.491049 systemd[1]: session-52.scope: Deactivated successfully.
Mar 7 01:12:15.493558 systemd-logind[1475]: Removed session 52.
Mar 7 01:12:20.557214 systemd[1]: Started sshd@52-10.0.0.13:22-10.0.0.1:57974.service - OpenSSH per-connection server daemon (10.0.0.1:57974).
Mar 7 01:12:20.662789 sshd[6906]: Accepted publickey for core from 10.0.0.1 port 57974 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:12:20.669161 sshd[6906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:12:20.707896 systemd-logind[1475]: New session 53 of user core.
Mar 7 01:12:20.722839 systemd[1]: Started session-53.scope - Session 53 of User core.
Mar 7 01:12:21.624299 sshd[6906]: pam_unix(sshd:session): session closed for user core
Mar 7 01:12:21.665809 systemd-logind[1475]: Session 53 logged out. Waiting for processes to exit.
Mar 7 01:12:21.677245 systemd[1]: sshd@52-10.0.0.13:22-10.0.0.1:57974.service: Deactivated successfully.
Mar 7 01:12:22.009260 systemd[1]: session-53.scope: Deactivated successfully.
Mar 7 01:12:22.030976 systemd-logind[1475]: Removed session 53.
Mar 7 01:12:27.419685 kubelet[2648]: E0307 01:12:27.414264 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:12:27.513907 systemd[1]: Started sshd@53-10.0.0.13:22-10.0.0.1:57984.service - OpenSSH per-connection server daemon (10.0.0.1:57984).
Mar 7 01:12:27.669600 systemd[1]: run-containerd-runc-k8s.io-07f95685efd7944cd92b41bf15331c8481428eb4cc5b6b2b0b3f4c2e10e7ce8b-runc.GnoVTn.mount: Deactivated successfully.
Mar 7 01:12:27.744766 sshd[6926]: Accepted publickey for core from 10.0.0.1 port 57984 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:12:27.750350 sshd[6926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:12:27.778945 systemd-logind[1475]: New session 54 of user core.
Mar 7 01:12:27.825510 systemd[1]: Started session-54.scope - Session 54 of User core.
Mar 7 01:12:28.369123 sshd[6926]: pam_unix(sshd:session): session closed for user core
Mar 7 01:12:28.384212 systemd-logind[1475]: Session 54 logged out. Waiting for processes to exit.
Mar 7 01:12:28.391074 systemd[1]: sshd@53-10.0.0.13:22-10.0.0.1:57984.service: Deactivated successfully.
Mar 7 01:12:28.398841 systemd[1]: session-54.scope: Deactivated successfully.
Mar 7 01:12:28.422321 systemd-logind[1475]: Removed session 54.
Mar 7 01:12:28.781101 kubelet[2648]: E0307 01:12:28.778010 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:12:33.425179 systemd[1]: Started sshd@54-10.0.0.13:22-10.0.0.1:34460.service - OpenSSH per-connection server daemon (10.0.0.1:34460).
Mar 7 01:12:33.636358 sshd[6955]: Accepted publickey for core from 10.0.0.1 port 34460 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:12:33.644110 sshd[6955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:12:33.681269 systemd-logind[1475]: New session 55 of user core.
Mar 7 01:12:33.733301 systemd[1]: Started session-55.scope - Session 55 of User core.
Mar 7 01:12:34.497008 sshd[6955]: pam_unix(sshd:session): session closed for user core
Mar 7 01:12:34.509975 systemd-logind[1475]: Session 55 logged out. Waiting for processes to exit.
Mar 7 01:12:34.525408 systemd[1]: sshd@54-10.0.0.13:22-10.0.0.1:34460.service: Deactivated successfully.
Mar 7 01:12:34.537085 systemd[1]: session-55.scope: Deactivated successfully.
Mar 7 01:12:34.603445 systemd-logind[1475]: Removed session 55.
Mar 7 01:12:39.633626 systemd[1]: Started sshd@55-10.0.0.13:22-10.0.0.1:34476.service - OpenSSH per-connection server daemon (10.0.0.1:34476).
Mar 7 01:12:39.871120 sshd[6992]: Accepted publickey for core from 10.0.0.1 port 34476 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:12:39.875340 sshd[6992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:12:39.908232 systemd-logind[1475]: New session 56 of user core.
Mar 7 01:12:39.978916 systemd[1]: Started session-56.scope - Session 56 of User core.
Mar 7 01:12:40.744205 sshd[6992]: pam_unix(sshd:session): session closed for user core
Mar 7 01:12:40.805354 systemd[1]: sshd@55-10.0.0.13:22-10.0.0.1:34476.service: Deactivated successfully.
Mar 7 01:12:40.811251 systemd[1]: session-56.scope: Deactivated successfully.
Mar 7 01:12:40.815965 systemd-logind[1475]: Session 56 logged out. Waiting for processes to exit.
Mar 7 01:12:40.857291 systemd[1]: Started sshd@56-10.0.0.13:22-10.0.0.1:41278.service - OpenSSH per-connection server daemon (10.0.0.1:41278).
Mar 7 01:12:40.889316 systemd-logind[1475]: Removed session 56.
Mar 7 01:12:41.107607 sshd[7006]: Accepted publickey for core from 10.0.0.1 port 41278 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:12:41.138857 sshd[7006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:12:41.175411 systemd-logind[1475]: New session 57 of user core.
Mar 7 01:12:41.227633 systemd[1]: Started session-57.scope - Session 57 of User core.
Mar 7 01:12:43.456435 sshd[7006]: pam_unix(sshd:session): session closed for user core
Mar 7 01:12:43.497131 systemd[1]: sshd@56-10.0.0.13:22-10.0.0.1:41278.service: Deactivated successfully.
Mar 7 01:12:43.513127 systemd[1]: session-57.scope: Deactivated successfully.
Mar 7 01:12:43.525963 systemd-logind[1475]: Session 57 logged out. Waiting for processes to exit.
Mar 7 01:12:43.559350 systemd[1]: Started sshd@57-10.0.0.13:22-10.0.0.1:41288.service - OpenSSH per-connection server daemon (10.0.0.1:41288).
Mar 7 01:12:43.566250 systemd-logind[1475]: Removed session 57.
Mar 7 01:12:43.793033 sshd[7062]: Accepted publickey for core from 10.0.0.1 port 41288 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:12:43.798530 sshd[7062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:12:43.835197 systemd-logind[1475]: New session 58 of user core.
Mar 7 01:12:43.865223 systemd[1]: Started session-58.scope - Session 58 of User core.
Mar 7 01:12:45.125486 systemd[1]: run-containerd-runc-k8s.io-22759cf2d7b1cf35c608c10aca1716b75c0bd7683a6f8a0b8c6650a44614e56a-runc.TmZccy.mount: Deactivated successfully.
Mar 7 01:12:47.131423 sshd[7062]: pam_unix(sshd:session): session closed for user core
Mar 7 01:12:47.180244 systemd[1]: Started sshd@58-10.0.0.13:22-10.0.0.1:41298.service - OpenSSH per-connection server daemon (10.0.0.1:41298).
Mar 7 01:12:47.183483 systemd[1]: sshd@57-10.0.0.13:22-10.0.0.1:41288.service: Deactivated successfully.
Mar 7 01:12:47.192959 systemd[1]: session-58.scope: Deactivated successfully.
Mar 7 01:12:47.197299 systemd[1]: session-58.scope: Consumed 1.293s CPU time.
Mar 7 01:12:47.218419 systemd-logind[1475]: Session 58 logged out. Waiting for processes to exit.
Mar 7 01:12:47.248575 systemd-logind[1475]: Removed session 58.
Mar 7 01:12:47.447249 sshd[7120]: Accepted publickey for core from 10.0.0.1 port 41298 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:12:47.460005 sshd[7120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:12:47.533814 systemd-logind[1475]: New session 59 of user core.
Mar 7 01:12:47.568413 systemd[1]: Started session-59.scope - Session 59 of User core.
Mar 7 01:12:50.671633 sshd[7120]: pam_unix(sshd:session): session closed for user core
Mar 7 01:12:50.716660 systemd[1]: sshd@58-10.0.0.13:22-10.0.0.1:41298.service: Deactivated successfully.
Mar 7 01:12:50.725212 systemd[1]: session-59.scope: Deactivated successfully.
Mar 7 01:12:50.725516 systemd[1]: session-59.scope: Consumed 1.042s CPU time.
Mar 7 01:12:50.742606 systemd-logind[1475]: Session 59 logged out. Waiting for processes to exit.
Mar 7 01:12:50.830875 systemd[1]: Started sshd@59-10.0.0.13:22-10.0.0.1:57132.service - OpenSSH per-connection server daemon (10.0.0.1:57132).
Mar 7 01:12:50.847229 systemd-logind[1475]: Removed session 59.
Mar 7 01:12:51.034912 sshd[7136]: Accepted publickey for core from 10.0.0.1 port 57132 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:12:51.045857 sshd[7136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:12:51.127940 systemd-logind[1475]: New session 60 of user core.
Mar 7 01:12:51.156563 systemd[1]: Started session-60.scope - Session 60 of User core.
Mar 7 01:12:52.245548 sshd[7136]: pam_unix(sshd:session): session closed for user core
Mar 7 01:12:52.276438 systemd[1]: sshd@59-10.0.0.13:22-10.0.0.1:57132.service: Deactivated successfully.
Mar 7 01:12:52.294529 systemd[1]: session-60.scope: Deactivated successfully.
Mar 7 01:12:52.430997 systemd-logind[1475]: Session 60 logged out. Waiting for processes to exit.
Mar 7 01:12:52.438096 systemd-logind[1475]: Removed session 60.
Mar 7 01:12:57.343532 systemd[1]: Started sshd@60-10.0.0.13:22-10.0.0.1:57146.service - OpenSSH per-connection server daemon (10.0.0.1:57146).
Mar 7 01:12:57.567762 sshd[7158]: Accepted publickey for core from 10.0.0.1 port 57146 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:12:57.585442 sshd[7158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:12:57.615705 systemd-logind[1475]: New session 61 of user core.
Mar 7 01:12:57.641214 systemd[1]: Started session-61.scope - Session 61 of User core.
Mar 7 01:12:58.110012 sshd[7158]: pam_unix(sshd:session): session closed for user core
Mar 7 01:12:58.129310 systemd[1]: sshd@60-10.0.0.13:22-10.0.0.1:57146.service: Deactivated successfully.
Mar 7 01:12:58.137583 systemd[1]: session-61.scope: Deactivated successfully.
Mar 7 01:12:58.150433 systemd-logind[1475]: Session 61 logged out. Waiting for processes to exit.
Mar 7 01:12:58.187638 systemd-logind[1475]: Removed session 61.
Mar 7 01:13:00.781239 kubelet[2648]: E0307 01:13:00.773172 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:13:01.766782 kubelet[2648]: E0307 01:13:01.762986 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:13:03.221510 systemd[1]: Started sshd@61-10.0.0.13:22-10.0.0.1:35928.service - OpenSSH per-connection server daemon (10.0.0.1:35928).
Mar 7 01:13:03.491818 sshd[7185]: Accepted publickey for core from 10.0.0.1 port 35928 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:13:03.556305 sshd[7185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:13:03.628107 systemd-logind[1475]: New session 62 of user core.
Mar 7 01:13:03.667140 systemd[1]: Started session-62.scope - Session 62 of User core.
Mar 7 01:13:04.380506 sshd[7185]: pam_unix(sshd:session): session closed for user core
Mar 7 01:13:04.400669 systemd[1]: sshd@61-10.0.0.13:22-10.0.0.1:35928.service: Deactivated successfully.
Mar 7 01:13:04.407964 systemd[1]: session-62.scope: Deactivated successfully.
Mar 7 01:13:04.418862 systemd-logind[1475]: Session 62 logged out. Waiting for processes to exit.
Mar 7 01:13:04.421056 systemd-logind[1475]: Removed session 62.
Mar 7 01:13:09.505666 systemd[1]: Started sshd@62-10.0.0.13:22-10.0.0.1:35940.service - OpenSSH per-connection server daemon (10.0.0.1:35940).
Mar 7 01:13:09.630157 sshd[7256]: Accepted publickey for core from 10.0.0.1 port 35940 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:13:09.632152 sshd[7256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:13:09.659889 systemd-logind[1475]: New session 63 of user core.
Mar 7 01:13:09.694906 systemd[1]: Started session-63.scope - Session 63 of User core.
Mar 7 01:13:10.431147 sshd[7256]: pam_unix(sshd:session): session closed for user core
Mar 7 01:13:10.455365 systemd[1]: sshd@62-10.0.0.13:22-10.0.0.1:35940.service: Deactivated successfully.
Mar 7 01:13:10.466228 systemd[1]: session-63.scope: Deactivated successfully.
Mar 7 01:13:10.493230 systemd-logind[1475]: Session 63 logged out. Waiting for processes to exit.
Mar 7 01:13:10.516562 systemd-logind[1475]: Removed session 63.
Mar 7 01:13:10.770312 kubelet[2648]: E0307 01:13:10.769509 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:13:11.761796 kubelet[2648]: E0307 01:13:11.758261 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:13:15.504869 systemd[1]: Started sshd@63-10.0.0.13:22-10.0.0.1:45046.service - OpenSSH per-connection server daemon (10.0.0.1:45046).
Mar 7 01:13:15.820975 sshd[7292]: Accepted publickey for core from 10.0.0.1 port 45046 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:13:15.836799 sshd[7292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:13:15.876866 systemd-logind[1475]: New session 64 of user core.
Mar 7 01:13:15.897844 systemd[1]: Started session-64.scope - Session 64 of User core.
Mar 7 01:13:16.444995 sshd[7292]: pam_unix(sshd:session): session closed for user core
Mar 7 01:13:16.467294 systemd[1]: sshd@63-10.0.0.13:22-10.0.0.1:45046.service: Deactivated successfully.
Mar 7 01:13:16.478136 systemd[1]: session-64.scope: Deactivated successfully.
Mar 7 01:13:16.492654 systemd-logind[1475]: Session 64 logged out. Waiting for processes to exit.
Mar 7 01:13:16.499060 systemd-logind[1475]: Removed session 64.
Mar 7 01:13:19.757387 kubelet[2648]: E0307 01:13:19.756897 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:13:21.477041 systemd[1]: Started sshd@64-10.0.0.13:22-10.0.0.1:51192.service - OpenSSH per-connection server daemon (10.0.0.1:51192).
Mar 7 01:13:21.601631 sshd[7310]: Accepted publickey for core from 10.0.0.1 port 51192 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:13:21.599985 sshd[7310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:13:21.623304 systemd-logind[1475]: New session 65 of user core.
Mar 7 01:13:21.636553 systemd[1]: Started session-65.scope - Session 65 of User core.
Mar 7 01:13:22.139953 sshd[7310]: pam_unix(sshd:session): session closed for user core
Mar 7 01:13:22.154891 systemd[1]: sshd@64-10.0.0.13:22-10.0.0.1:51192.service: Deactivated successfully.
Mar 7 01:13:22.165073 systemd[1]: session-65.scope: Deactivated successfully.
Mar 7 01:13:22.170813 systemd-logind[1475]: Session 65 logged out. Waiting for processes to exit.
Mar 7 01:13:22.177987 systemd-logind[1475]: Removed session 65.
Mar 7 01:13:27.255281 systemd[1]: Started sshd@65-10.0.0.13:22-10.0.0.1:51206.service - OpenSSH per-connection server daemon (10.0.0.1:51206).
Mar 7 01:13:27.460116 systemd[1]: run-containerd-runc-k8s.io-07f95685efd7944cd92b41bf15331c8481428eb4cc5b6b2b0b3f4c2e10e7ce8b-runc.8KkX7D.mount: Deactivated successfully.
Mar 7 01:13:27.524080 sshd[7326]: Accepted publickey for core from 10.0.0.1 port 51206 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:13:27.534273 sshd[7326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:13:27.586228 systemd-logind[1475]: New session 66 of user core.
Mar 7 01:13:27.622685 systemd[1]: Started session-66.scope - Session 66 of User core.
Mar 7 01:13:28.100109 sshd[7326]: pam_unix(sshd:session): session closed for user core
Mar 7 01:13:28.118694 systemd[1]: sshd@65-10.0.0.13:22-10.0.0.1:51206.service: Deactivated successfully.
Mar 7 01:13:28.124844 systemd[1]: session-66.scope: Deactivated successfully.
Mar 7 01:13:28.126287 systemd-logind[1475]: Session 66 logged out. Waiting for processes to exit.
Mar 7 01:13:28.143897 systemd-logind[1475]: Removed session 66.
Mar 7 01:13:33.184993 systemd[1]: Started sshd@66-10.0.0.13:22-10.0.0.1:42948.service - OpenSSH per-connection server daemon (10.0.0.1:42948).
Mar 7 01:13:33.308618 sshd[7360]: Accepted publickey for core from 10.0.0.1 port 42948 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:13:33.316530 sshd[7360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:13:33.352908 systemd-logind[1475]: New session 67 of user core.
Mar 7 01:13:33.360817 systemd[1]: Started session-67.scope - Session 67 of User core.
Mar 7 01:13:33.997381 sshd[7360]: pam_unix(sshd:session): session closed for user core
Mar 7 01:13:34.017878 systemd-logind[1475]: Session 67 logged out. Waiting for processes to exit.
Mar 7 01:13:34.028162 systemd[1]: sshd@66-10.0.0.13:22-10.0.0.1:42948.service: Deactivated successfully.
Mar 7 01:13:34.040461 systemd[1]: session-67.scope: Deactivated successfully.
Mar 7 01:13:34.049401 systemd-logind[1475]: Removed session 67.
Mar 7 01:13:39.053530 systemd[1]: Started sshd@67-10.0.0.13:22-10.0.0.1:42958.service - OpenSSH per-connection server daemon (10.0.0.1:42958).
Mar 7 01:13:39.170488 sshd[7395]: Accepted publickey for core from 10.0.0.1 port 42958 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:13:39.169262 sshd[7395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:13:39.204426 systemd-logind[1475]: New session 68 of user core.
Mar 7 01:13:39.237295 systemd[1]: Started session-68.scope - Session 68 of User core.
Mar 7 01:13:39.856688 sshd[7395]: pam_unix(sshd:session): session closed for user core
Mar 7 01:13:39.876954 systemd[1]: sshd@67-10.0.0.13:22-10.0.0.1:42958.service: Deactivated successfully.
Mar 7 01:13:39.908851 systemd[1]: session-68.scope: Deactivated successfully.
Mar 7 01:13:39.940602 systemd-logind[1475]: Session 68 logged out. Waiting for processes to exit.
Mar 7 01:13:39.952879 systemd-logind[1475]: Removed session 68.
Mar 7 01:13:44.970581 systemd[1]: Started sshd@68-10.0.0.13:22-10.0.0.1:55138.service - OpenSSH per-connection server daemon (10.0.0.1:55138).
Mar 7 01:13:45.173690 sshd[7458]: Accepted publickey for core from 10.0.0.1 port 55138 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:13:45.187026 sshd[7458]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:13:45.229113 systemd-logind[1475]: New session 69 of user core.
Mar 7 01:13:45.257369 systemd[1]: Started session-69.scope - Session 69 of User core.
Mar 7 01:13:45.972542 sshd[7458]: pam_unix(sshd:session): session closed for user core
Mar 7 01:13:45.990651 systemd-logind[1475]: Session 69 logged out. Waiting for processes to exit.
Mar 7 01:13:45.997075 systemd[1]: sshd@68-10.0.0.13:22-10.0.0.1:55138.service: Deactivated successfully.
Mar 7 01:13:46.013577 systemd[1]: session-69.scope: Deactivated successfully.
Mar 7 01:13:46.027983 systemd-logind[1475]: Removed session 69.
Mar 7 01:13:46.766587 kubelet[2648]: E0307 01:13:46.765677 2648 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"