Mar 13 00:40:59.421090 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Mar 12 22:08:29 -00 2026
Mar 13 00:40:59.421113 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d
Mar 13 00:40:59.421125 kernel: BIOS-provided physical RAM map:
Mar 13 00:40:59.421132 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 13 00:40:59.421137 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 13 00:40:59.421143 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 13 00:40:59.421149 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 13 00:40:59.421155 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 13 00:40:59.421195 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 13 00:40:59.421203 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 13 00:40:59.421209 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 13 00:40:59.421218 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 13 00:40:59.421224 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 13 00:40:59.421229 kernel: NX (Execute Disable) protection: active
Mar 13 00:40:59.421241 kernel: APIC: Static calls initialized
Mar 13 00:40:59.421252 kernel: SMBIOS 2.8 present.
Mar 13 00:40:59.421310 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 13 00:40:59.421325 kernel: DMI: Memory slots populated: 1/1
Mar 13 00:40:59.421334 kernel: Hypervisor detected: KVM
Mar 13 00:40:59.421344 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 13 00:40:59.421355 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 13 00:40:59.421364 kernel: kvm-clock: using sched offset of 5897220779 cycles
Mar 13 00:40:59.421372 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 13 00:40:59.421378 kernel: tsc: Detected 2445.426 MHz processor
Mar 13 00:40:59.421385 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 13 00:40:59.421392 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 13 00:40:59.421402 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 13 00:40:59.421409 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 13 00:40:59.421415 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 13 00:40:59.421423 kernel: Using GB pages for direct mapping
Mar 13 00:40:59.421435 kernel: ACPI: Early table checksum verification disabled
Mar 13 00:40:59.421446 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 13 00:40:59.421529 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:40:59.421548 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:40:59.421561 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:40:59.421575 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 13 00:40:59.421584 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:40:59.421593 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:40:59.421602 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:40:59.421613 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:40:59.421628 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 13 00:40:59.421644 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 13 00:40:59.421653 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 13 00:40:59.421663 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 13 00:40:59.421673 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 13 00:40:59.421685 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 13 00:40:59.421695 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 13 00:40:59.421703 kernel: No NUMA configuration found
Mar 13 00:40:59.421713 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 13 00:40:59.421727 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Mar 13 00:40:59.421736 kernel: Zone ranges:
Mar 13 00:40:59.421746 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 13 00:40:59.421755 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 13 00:40:59.421766 kernel: Normal empty
Mar 13 00:40:59.421777 kernel: Device empty
Mar 13 00:40:59.421843 kernel: Movable zone start for each node
Mar 13 00:40:59.421853 kernel: Early memory node ranges
Mar 13 00:40:59.421862 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 13 00:40:59.421878 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 13 00:40:59.421889 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 13 00:40:59.421900 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 13 00:40:59.421909 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 13 00:40:59.421959 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 13 00:40:59.421972 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 13 00:40:59.421983 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 13 00:40:59.421992 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 13 00:40:59.422001 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 13 00:40:59.422052 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 13 00:40:59.422064 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 13 00:40:59.422074 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 13 00:40:59.422085 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 13 00:40:59.422094 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 13 00:40:59.422104 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 13 00:40:59.422114 kernel: TSC deadline timer available
Mar 13 00:40:59.422125 kernel: CPU topo: Max. logical packages: 1
Mar 13 00:40:59.422138 kernel: CPU topo: Max. logical dies: 1
Mar 13 00:40:59.422148 kernel: CPU topo: Max. dies per package: 1
Mar 13 00:40:59.422162 kernel: CPU topo: Max. threads per core: 1
Mar 13 00:40:59.422171 kernel: CPU topo: Num. cores per package: 4
Mar 13 00:40:59.422183 kernel: CPU topo: Num. threads per package: 4
Mar 13 00:40:59.422193 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Mar 13 00:40:59.422202 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 13 00:40:59.422213 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 13 00:40:59.422225 kernel: kvm-guest: setup PV sched yield
Mar 13 00:40:59.422236 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 13 00:40:59.422245 kernel: Booting paravirtualized kernel on KVM
Mar 13 00:40:59.422259 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 13 00:40:59.422270 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 13 00:40:59.422280 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Mar 13 00:40:59.422289 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Mar 13 00:40:59.422300 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 13 00:40:59.422310 kernel: kvm-guest: PV spinlocks enabled
Mar 13 00:40:59.422320 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 13 00:40:59.422331 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d
Mar 13 00:40:59.422345 kernel: random: crng init done
Mar 13 00:40:59.422355 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 13 00:40:59.422365 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 13 00:40:59.422375 kernel: Fallback order for Node 0: 0
Mar 13 00:40:59.422385 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Mar 13 00:40:59.422396 kernel: Policy zone: DMA32
Mar 13 00:40:59.422406 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 13 00:40:59.422416 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 13 00:40:59.422426 kernel: ftrace: allocating 40099 entries in 157 pages
Mar 13 00:40:59.422439 kernel: ftrace: allocated 157 pages with 5 groups
Mar 13 00:40:59.422450 kernel: Dynamic Preempt: voluntary
Mar 13 00:40:59.422529 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 13 00:40:59.422542 kernel: rcu: RCU event tracing is enabled.
Mar 13 00:40:59.422553 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 13 00:40:59.422563 kernel: Trampoline variant of Tasks RCU enabled.
Mar 13 00:40:59.422600 kernel: Rude variant of Tasks RCU enabled.
Mar 13 00:40:59.422610 kernel: Tracing variant of Tasks RCU enabled.
Mar 13 00:40:59.422620 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 13 00:40:59.422630 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 13 00:40:59.422645 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 13 00:40:59.422656 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 13 00:40:59.422666 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 13 00:40:59.422676 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 13 00:40:59.422686 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 13 00:40:59.422705 kernel: Console: colour VGA+ 80x25
Mar 13 00:40:59.422718 kernel: printk: legacy console [ttyS0] enabled
Mar 13 00:40:59.422729 kernel: ACPI: Core revision 20240827
Mar 13 00:40:59.422740 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 13 00:40:59.422750 kernel: APIC: Switch to symmetric I/O mode setup
Mar 13 00:40:59.422761 kernel: x2apic enabled
Mar 13 00:40:59.422774 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 13 00:40:59.422860 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 13 00:40:59.422872 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 13 00:40:59.422883 kernel: kvm-guest: setup PV IPIs
Mar 13 00:40:59.422893 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 13 00:40:59.422908 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Mar 13 00:40:59.422919 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 13 00:40:59.422931 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 13 00:40:59.422942 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 13 00:40:59.422953 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 13 00:40:59.422966 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 13 00:40:59.422976 kernel: Spectre V2 : Mitigation: Retpolines
Mar 13 00:40:59.422985 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 13 00:40:59.422996 kernel: Speculative Store Bypass: Vulnerable
Mar 13 00:40:59.423011 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 13 00:40:59.423024 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 13 00:40:59.423035 kernel: active return thunk: srso_alias_return_thunk
Mar 13 00:40:59.423046 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 13 00:40:59.423057 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 13 00:40:59.423068 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 13 00:40:59.423078 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 13 00:40:59.423089 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 13 00:40:59.423103 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 13 00:40:59.423115 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 13 00:40:59.423128 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 13 00:40:59.423142 kernel: Freeing SMP alternatives memory: 32K
Mar 13 00:40:59.423152 kernel: pid_max: default: 32768 minimum: 301
Mar 13 00:40:59.423161 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 13 00:40:59.423173 kernel: landlock: Up and running.
Mar 13 00:40:59.423186 kernel: SELinux: Initializing.
Mar 13 00:40:59.423196 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 13 00:40:59.423212 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 13 00:40:59.423263 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 13 00:40:59.423276 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 13 00:40:59.423287 kernel: signal: max sigframe size: 1776
Mar 13 00:40:59.423298 kernel: rcu: Hierarchical SRCU implementation.
Mar 13 00:40:59.423310 kernel: rcu: Max phase no-delay instances is 400.
Mar 13 00:40:59.423321 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 13 00:40:59.423332 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 13 00:40:59.423343 kernel: smp: Bringing up secondary CPUs ...
Mar 13 00:40:59.423357 kernel: smpboot: x86: Booting SMP configuration:
Mar 13 00:40:59.423368 kernel: .... node #0, CPUs: #1 #2 #3
Mar 13 00:40:59.423379 kernel: smp: Brought up 1 node, 4 CPUs
Mar 13 00:40:59.423390 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 13 00:40:59.423402 kernel: Memory: 2420720K/2571752K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 145096K reserved, 0K cma-reserved)
Mar 13 00:40:59.423413 kernel: devtmpfs: initialized
Mar 13 00:40:59.423424 kernel: x86/mm: Memory block size: 128MB
Mar 13 00:40:59.423435 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 13 00:40:59.423446 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 13 00:40:59.423516 kernel: pinctrl core: initialized pinctrl subsystem
Mar 13 00:40:59.423528 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 13 00:40:59.423539 kernel: audit: initializing netlink subsys (disabled)
Mar 13 00:40:59.423550 kernel: audit: type=2000 audit(1773362454.674:1): state=initialized audit_enabled=0 res=1
Mar 13 00:40:59.423561 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 13 00:40:59.423572 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 13 00:40:59.423582 kernel: cpuidle: using governor menu
Mar 13 00:40:59.423592 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 13 00:40:59.423602 kernel: dca service started, version 1.12.1
Mar 13 00:40:59.423617 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Mar 13 00:40:59.423627 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 13 00:40:59.423637 kernel: PCI: Using configuration type 1 for base access
Mar 13 00:40:59.423648 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 13 00:40:59.423658 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 13 00:40:59.423668 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 13 00:40:59.423678 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 13 00:40:59.423689 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 13 00:40:59.423699 kernel: ACPI: Added _OSI(Module Device)
Mar 13 00:40:59.423712 kernel: ACPI: Added _OSI(Processor Device)
Mar 13 00:40:59.423723 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 13 00:40:59.423734 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 13 00:40:59.423745 kernel: ACPI: Interpreter enabled
Mar 13 00:40:59.423756 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 13 00:40:59.423767 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 13 00:40:59.423778 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 13 00:40:59.423834 kernel: PCI: Using E820 reservations for host bridge windows
Mar 13 00:40:59.423845 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 13 00:40:59.423860 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 13 00:40:59.424625 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 13 00:40:59.425547 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 13 00:40:59.426914 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 13 00:40:59.426947 kernel: PCI host bridge to bus 0000:00
Mar 13 00:40:59.427334 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 13 00:40:59.427567 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 13 00:40:59.427747 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 13 00:40:59.427972 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 13 00:40:59.428110 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 13 00:40:59.428271 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 13 00:40:59.428437 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 13 00:40:59.428692 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Mar 13 00:40:59.428934 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Mar 13 00:40:59.429096 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Mar 13 00:40:59.429268 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Mar 13 00:40:59.429432 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Mar 13 00:40:59.430090 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 13 00:40:59.430301 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Mar 13 00:40:59.430562 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Mar 13 00:40:59.430740 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Mar 13 00:40:59.430963 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 13 00:40:59.431156 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Mar 13 00:40:59.431343 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Mar 13 00:40:59.431602 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Mar 13 00:40:59.431776 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 13 00:40:59.432020 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Mar 13 00:40:59.432205 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Mar 13 00:40:59.432382 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Mar 13 00:40:59.432958 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 13 00:40:59.433085 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Mar 13 00:40:59.433266 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Mar 13 00:40:59.433419 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 13 00:40:59.433637 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Mar 13 00:40:59.433758 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Mar 13 00:40:59.434333 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Mar 13 00:40:59.435122 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Mar 13 00:40:59.435295 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Mar 13 00:40:59.435310 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 13 00:40:59.435321 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 13 00:40:59.435346 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 13 00:40:59.435358 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 13 00:40:59.435369 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 13 00:40:59.435379 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 13 00:40:59.435390 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 13 00:40:59.435400 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 13 00:40:59.435411 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 13 00:40:59.435421 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 13 00:40:59.435431 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 13 00:40:59.435445 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 13 00:40:59.435524 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 13 00:40:59.435538 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 13 00:40:59.435549 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 13 00:40:59.435560 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 13 00:40:59.435570 kernel: iommu: Default domain type: Translated
Mar 13 00:40:59.435581 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 13 00:40:59.435591 kernel: PCI: Using ACPI for IRQ routing
Mar 13 00:40:59.435602 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 13 00:40:59.435616 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 13 00:40:59.435627 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 13 00:40:59.436435 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 13 00:40:59.436674 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 13 00:40:59.436880 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 13 00:40:59.436897 kernel: vgaarb: loaded
Mar 13 00:40:59.436908 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 13 00:40:59.437354 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 13 00:40:59.437368 kernel: clocksource: Switched to clocksource kvm-clock
Mar 13 00:40:59.437388 kernel: VFS: Disk quotas dquot_6.6.0
Mar 13 00:40:59.437400 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 13 00:40:59.437411 kernel: pnp: PnP ACPI init
Mar 13 00:40:59.438147 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 13 00:40:59.438168 kernel: pnp: PnP ACPI: found 6 devices
Mar 13 00:40:59.438182 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 13 00:40:59.438194 kernel: NET: Registered PF_INET protocol family
Mar 13 00:40:59.438204 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 13 00:40:59.438229 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 13 00:40:59.438239 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 13 00:40:59.438250 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 13 00:40:59.438261 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 13 00:40:59.438271 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 13 00:40:59.438282 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 13 00:40:59.438293 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 13 00:40:59.438303 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 13 00:40:59.438314 kernel: NET: Registered PF_XDP protocol family
Mar 13 00:40:59.438550 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 13 00:40:59.438725 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 13 00:40:59.438938 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 13 00:40:59.439090 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 13 00:40:59.439253 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 13 00:40:59.439400 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 13 00:40:59.439415 kernel: PCI: CLS 0 bytes, default 64
Mar 13 00:40:59.439426 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Mar 13 00:40:59.439442 kernel: Initialise system trusted keyrings
Mar 13 00:40:59.439452 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 13 00:40:59.439541 kernel: Key type asymmetric registered
Mar 13 00:40:59.439552 kernel: Asymmetric key parser 'x509' registered
Mar 13 00:40:59.439563 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 13 00:40:59.439574 kernel: io scheduler mq-deadline registered
Mar 13 00:40:59.439584 kernel: io scheduler kyber registered
Mar 13 00:40:59.439594 kernel: io scheduler bfq registered
Mar 13 00:40:59.439604 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 13 00:40:59.439620 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 13 00:40:59.439631 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 13 00:40:59.439641 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 13 00:40:59.439651 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 13 00:40:59.439661 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 13 00:40:59.439672 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 13 00:40:59.439682 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 13 00:40:59.439692 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 13 00:40:59.439906 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 13 00:40:59.439928 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 13 00:40:59.440090 kernel: rtc_cmos 00:04: registered as rtc0
Mar 13 00:40:59.440259 kernel: rtc_cmos 00:04: setting system clock to 2026-03-13T00:40:58 UTC (1773362458)
Mar 13 00:40:59.440416 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 13 00:40:59.440428 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 13 00:40:59.440436 kernel: NET: Registered PF_INET6 protocol family
Mar 13 00:40:59.440443 kernel: Segment Routing with IPv6
Mar 13 00:40:59.440450 kernel: In-situ OAM (IOAM) with IPv6
Mar 13 00:40:59.440544 kernel: NET: Registered PF_PACKET protocol family
Mar 13 00:40:59.440552 kernel: Key type dns_resolver registered
Mar 13 00:40:59.440559 kernel: IPI shorthand broadcast: enabled
Mar 13 00:40:59.440566 kernel: sched_clock: Marking stable (3960031801, 484449487)->(4591331969, -146850681)
Mar 13 00:40:59.440574 kernel: registered taskstats version 1
Mar 13 00:40:59.440581 kernel: Loading compiled-in X.509 certificates
Mar 13 00:40:59.440588 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: 5aff49df330f42445474818d085d5033fee752d8'
Mar 13 00:40:59.440595 kernel: Demotion targets for Node 0: null
Mar 13 00:40:59.440602 kernel: Key type .fscrypt registered
Mar 13 00:40:59.440612 kernel: Key type fscrypt-provisioning registered
Mar 13 00:40:59.440619 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 13 00:40:59.440625 kernel: ima: Allocated hash algorithm: sha1
Mar 13 00:40:59.440632 kernel: ima: No architecture policies found
Mar 13 00:40:59.440639 kernel: clk: Disabling unused clocks
Mar 13 00:40:59.440646 kernel: Warning: unable to open an initial console.
Mar 13 00:40:59.440653 kernel: Freeing unused kernel image (initmem) memory: 46200K
Mar 13 00:40:59.440660 kernel: Write protecting the kernel read-only data: 40960k
Mar 13 00:40:59.440667 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Mar 13 00:40:59.440676 kernel: Run /init as init process
Mar 13 00:40:59.440683 kernel: with arguments:
Mar 13 00:40:59.440690 kernel: /init
Mar 13 00:40:59.440697 kernel: with environment:
Mar 13 00:40:59.440704 kernel: HOME=/
Mar 13 00:40:59.440711 kernel: TERM=linux
Mar 13 00:40:59.440719 systemd[1]: Successfully made /usr/ read-only.
Mar 13 00:40:59.440729 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 13 00:40:59.440740 systemd[1]: Detected virtualization kvm.
Mar 13 00:40:59.440747 systemd[1]: Detected architecture x86-64.
Mar 13 00:40:59.440754 systemd[1]: Running in initrd.
Mar 13 00:40:59.440761 systemd[1]: No hostname configured, using default hostname.
Mar 13 00:40:59.440768 systemd[1]: Hostname set to .
Mar 13 00:40:59.440775 systemd[1]: Initializing machine ID from VM UUID.
Mar 13 00:40:59.441020 systemd[1]: Queued start job for default target initrd.target.
Mar 13 00:40:59.441030 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 13 00:40:59.441054 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 13 00:40:59.441065 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 13 00:40:59.441073 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 13 00:40:59.441082 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 13 00:40:59.441091 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 13 00:40:59.441101 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 13 00:40:59.441109 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 13 00:40:59.441117 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 13 00:40:59.441130 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 13 00:40:59.441144 systemd[1]: Reached target paths.target - Path Units.
Mar 13 00:40:59.441157 systemd[1]: Reached target slices.target - Slice Units.
Mar 13 00:40:59.441168 systemd[1]: Reached target swap.target - Swaps.
Mar 13 00:40:59.441178 systemd[1]: Reached target timers.target - Timer Units.
Mar 13 00:40:59.441196 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 13 00:40:59.441210 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 13 00:40:59.441221 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 13 00:40:59.441232 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 13 00:40:59.441245 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 00:40:59.441259 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 13 00:40:59.441269 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 00:40:59.441277 systemd[1]: Reached target sockets.target - Socket Units.
Mar 13 00:40:59.441288 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 13 00:40:59.441295 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 13 00:40:59.441303 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 13 00:40:59.441316 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Mar 13 00:40:59.441325 systemd[1]: Starting systemd-fsck-usr.service...
Mar 13 00:40:59.441332 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 13 00:40:59.441340 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 13 00:40:59.441348 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:40:59.441357 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 13 00:40:59.441368 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 00:40:59.441378 systemd[1]: Finished systemd-fsck-usr.service.
Mar 13 00:40:59.441420 systemd-journald[201]: Collecting audit messages is disabled.
Mar 13 00:40:59.441442 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 13 00:40:59.441452 systemd-journald[201]: Journal started
Mar 13 00:40:59.441540 systemd-journald[201]: Runtime Journal (/run/log/journal/ed34f41a1d59421389166f7d997d49e5) is 6M, max 48.3M, 42.2M free.
Mar 13 00:40:59.447581 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 13 00:40:59.457594 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 13 00:40:59.572706 systemd-modules-load[203]: Inserted module 'overlay'
Mar 13 00:40:59.574427 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 13 00:40:59.598739 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 13 00:40:59.611915 systemd-tmpfiles[213]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Mar 13 00:40:59.624442 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 13 00:40:59.638405 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 00:40:59.672569 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 13 00:40:59.675547 kernel: Bridge firewalling registered
Mar 13 00:40:59.675596 systemd-modules-load[203]: Inserted module 'br_netfilter'
Mar 13 00:40:59.677951 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 13 00:40:59.948074 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 13 00:40:59.972862 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:40:59.981119 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 13 00:41:00.008127 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 13 00:41:00.011859 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 13 00:41:00.031203 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 13 00:41:00.041002 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 13 00:41:00.109231 systemd-resolved[236]: Positive Trust Anchors:
Mar 13 00:41:00.109284 systemd-resolved[236]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 13 00:41:00.109332 systemd-resolved[236]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 13 00:41:00.115579 systemd-resolved[236]: Defaulting to hostname 'linux'.
Mar 13 00:41:00.117995 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 13 00:41:00.119938 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 13 00:41:00.177773 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d
Mar 13 00:41:00.400662 kernel: SCSI subsystem initialized
Mar 13 00:41:00.414571 kernel: Loading iSCSI transport class v2.0-870.
Mar 13 00:41:00.430578 kernel: iscsi: registered transport (tcp)
Mar 13 00:41:00.486570 kernel: iscsi: registered transport (qla4xxx)
Mar 13 00:41:00.486654 kernel: QLogic iSCSI HBA Driver
Mar 13 00:41:00.532206 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 13 00:41:00.566716 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 13 00:41:00.577137 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 13 00:41:00.874931 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 13 00:41:00.877685 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 13 00:41:00.973594 kernel: raid6: avx2x4 gen() 31022 MB/s
Mar 13 00:41:01.006708 kernel: raid6: avx2x2 gen() 18836 MB/s
Mar 13 00:41:01.024625 kernel: raid6: avx2x1 gen() 20437 MB/s
Mar 13 00:41:01.024751 kernel: raid6: using algorithm avx2x4 gen() 31022 MB/s
Mar 13 00:41:01.047744 kernel: raid6: .... xor() 3091 MB/s, rmw enabled
Mar 13 00:41:01.047906 kernel: raid6: using avx2x2 recovery algorithm
Mar 13 00:41:01.106568 kernel: xor: automatically using best checksumming function avx
Mar 13 00:41:01.343633 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 13 00:41:01.356301 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 13 00:41:01.367056 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 00:41:01.428265 systemd-udevd[453]: Using default interface naming scheme 'v255'.
Mar 13 00:41:01.447856 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 00:41:01.466950 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 13 00:41:01.578605 dracut-pre-trigger[462]: rd.md=0: removing MD RAID activation
Mar 13 00:41:01.648360 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 13 00:41:01.655133 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 13 00:41:01.812448 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 13 00:41:01.839846 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 13 00:41:01.909563 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 13 00:41:01.923597 kernel: cryptd: max_cpu_qlen set to 1000
Mar 13 00:41:01.995380 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 13 00:41:01.995913 kernel: AES CTR mode by8 optimization enabled
Mar 13 00:41:01.998552 kernel: libata version 3.00 loaded.
Mar 13 00:41:01.998628 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 00:41:01.999368 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:41:02.025098 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 13 00:41:02.025121 kernel: GPT:9289727 != 19775487
Mar 13 00:41:02.025132 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 13 00:41:02.025141 kernel: GPT:9289727 != 19775487
Mar 13 00:41:02.025157 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 13 00:41:02.025196 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 13 00:41:02.032093 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:41:02.044222 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:41:02.055975 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 13 00:41:02.154103 kernel: ahci 0000:00:1f.2: version 3.0
Mar 13 00:41:02.154679 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 13 00:41:02.167657 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 13 00:41:02.178102 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Mar 13 00:41:02.203229 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Mar 13 00:41:02.203416 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 13 00:41:02.205520 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Mar 13 00:41:02.210525 kernel: scsi host0: ahci
Mar 13 00:41:02.214525 kernel: scsi host1: ahci
Mar 13 00:41:02.217534 kernel: scsi host2: ahci
Mar 13 00:41:02.220582 kernel: scsi host3: ahci
Mar 13 00:41:02.220912 kernel: scsi host4: ahci
Mar 13 00:41:02.222543 kernel: scsi host5: ahci
Mar 13 00:41:02.222857 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1
Mar 13 00:41:02.222878 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1
Mar 13 00:41:02.222889 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1
Mar 13 00:41:02.222914 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1
Mar 13 00:41:02.222925 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1
Mar 13 00:41:02.222935 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1
Mar 13 00:41:02.229076 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 13 00:41:02.461027 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:41:02.483645 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 13 00:41:02.490710 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 13 00:41:02.541695 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 13 00:41:02.546868 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 13 00:41:02.550542 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 13 00:41:02.635252 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 13 00:41:02.636974 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 13 00:41:02.638116 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 13 00:41:02.639199 kernel: ata3.00: LPM support broken, forcing max_power
Mar 13 00:41:02.639222 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 13 00:41:02.639237 kernel: ata3.00: applying bridge limits
Mar 13 00:41:02.639252 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 13 00:41:02.639266 kernel: ata3.00: LPM support broken, forcing max_power
Mar 13 00:41:02.639280 kernel: ata3.00: configured for UDMA/100
Mar 13 00:41:02.554324 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 13 00:41:02.668297 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 13 00:41:02.710629 disk-uuid[616]: Primary Header is updated.
Mar 13 00:41:02.710629 disk-uuid[616]: Secondary Entries is updated.
Mar 13 00:41:02.710629 disk-uuid[616]: Secondary Header is updated.
Mar 13 00:41:02.723541 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 13 00:41:02.733854 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 13 00:41:02.791184 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 13 00:41:02.792171 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 13 00:41:02.806568 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 13 00:41:03.339156 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 13 00:41:03.350012 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 13 00:41:03.360181 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 13 00:41:03.368965 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 13 00:41:03.377391 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 13 00:41:03.599929 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 13 00:41:03.742559 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 13 00:41:03.743021 disk-uuid[617]: The operation has completed successfully.
Mar 13 00:41:03.815641 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 13 00:41:03.815850 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 13 00:41:03.863069 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 13 00:41:03.894451 sh[645]: Success
Mar 13 00:41:03.924089 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 13 00:41:03.924178 kernel: device-mapper: uevent: version 1.0.3
Mar 13 00:41:03.927619 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Mar 13 00:41:03.943553 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Mar 13 00:41:04.016782 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 13 00:41:04.020319 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 13 00:41:04.040320 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 13 00:41:04.061348 kernel: BTRFS: device fsid 503642f8-c59c-4168-97a8-9c3603183fa3 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (657)
Mar 13 00:41:04.061371 kernel: BTRFS info (device dm-0): first mount of filesystem 503642f8-c59c-4168-97a8-9c3603183fa3
Mar 13 00:41:04.061382 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:41:04.077156 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Mar 13 00:41:04.077210 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Mar 13 00:41:04.080434 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 13 00:41:04.095929 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Mar 13 00:41:04.102267 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 13 00:41:04.103934 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 13 00:41:04.115317 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 13 00:41:04.170629 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (690)
Mar 13 00:41:04.178224 kernel: BTRFS info (device vda6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:41:04.178333 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:41:04.193078 kernel: BTRFS info (device vda6): turning on async discard
Mar 13 00:41:04.193162 kernel: BTRFS info (device vda6): enabling free space tree
Mar 13 00:41:04.203538 kernel: BTRFS info (device vda6): last unmount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:41:04.207121 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 13 00:41:04.213381 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 13 00:41:04.396238 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 13 00:41:04.402598 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 13 00:41:04.633222 systemd-networkd[826]: lo: Link UP
Mar 13 00:41:04.633259 systemd-networkd[826]: lo: Gained carrier
Mar 13 00:41:04.636573 systemd-networkd[826]: Enumeration completed
Mar 13 00:41:04.637636 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 13 00:41:04.641677 systemd-networkd[826]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:41:04.651288 ignition[743]: Ignition 2.22.0
Mar 13 00:41:04.641685 systemd-networkd[826]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 13 00:41:04.651299 ignition[743]: Stage: fetch-offline
Mar 13 00:41:04.645264 systemd-networkd[826]: eth0: Link UP
Mar 13 00:41:04.651394 ignition[743]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:41:04.646042 systemd[1]: Reached target network.target - Network.
Mar 13 00:41:04.651406 ignition[743]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 13 00:41:04.648281 systemd-networkd[826]: eth0: Gained carrier
Mar 13 00:41:04.651632 ignition[743]: parsed url from cmdline: ""
Mar 13 00:41:04.648297 systemd-networkd[826]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:41:04.651639 ignition[743]: no config URL provided
Mar 13 00:41:04.672568 systemd-networkd[826]: eth0: DHCPv4 address 10.0.0.68/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 13 00:41:04.651645 ignition[743]: reading system config file "/usr/lib/ignition/user.ign"
Mar 13 00:41:04.651654 ignition[743]: no config at "/usr/lib/ignition/user.ign"
Mar 13 00:41:04.651725 ignition[743]: op(1): [started] loading QEMU firmware config module
Mar 13 00:41:04.651740 ignition[743]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 13 00:41:04.670985 ignition[743]: op(1): [finished] loading QEMU firmware config module
Mar 13 00:41:04.831883 kernel: hrtimer: interrupt took 2625690 ns
Mar 13 00:41:04.860696 systemd-resolved[236]: Detected conflict on linux IN A 10.0.0.68
Mar 13 00:41:04.860738 systemd-resolved[236]: Hostname conflict, changing published hostname from 'linux' to 'linux6'.
Mar 13 00:41:05.050281 ignition[743]: parsing config with SHA512: b999fd8f32874948288346737fbbbfe72b7d0cbfbca235e502ba209dc5364bb6f8e1eadb8d5cf821fbb5de2312e8f68354674dc0f422f2bc652248e4ec9d3f86
Mar 13 00:41:05.061618 unknown[743]: fetched base config from "system"
Mar 13 00:41:05.061675 unknown[743]: fetched user config from "qemu"
Mar 13 00:41:05.062131 ignition[743]: fetch-offline: fetch-offline passed
Mar 13 00:41:05.065986 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 13 00:41:05.062219 ignition[743]: Ignition finished successfully
Mar 13 00:41:05.072617 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 13 00:41:05.073889 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 13 00:41:05.165065 ignition[839]: Ignition 2.22.0
Mar 13 00:41:05.165101 ignition[839]: Stage: kargs
Mar 13 00:41:05.165369 ignition[839]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:41:05.165382 ignition[839]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 13 00:41:05.172423 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 13 00:41:05.167354 ignition[839]: kargs: kargs passed
Mar 13 00:41:05.190111 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 13 00:41:05.167409 ignition[839]: Ignition finished successfully
Mar 13 00:41:05.402231 ignition[847]: Ignition 2.22.0
Mar 13 00:41:05.402265 ignition[847]: Stage: disks
Mar 13 00:41:05.402701 ignition[847]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:41:05.402713 ignition[847]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 13 00:41:05.409446 ignition[847]: disks: disks passed
Mar 13 00:41:05.412951 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 13 00:41:05.409592 ignition[847]: Ignition finished successfully
Mar 13 00:41:05.416255 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 13 00:41:05.421673 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 13 00:41:05.425738 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 13 00:41:05.429054 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 13 00:41:05.432114 systemd[1]: Reached target basic.target - Basic System.
Mar 13 00:41:05.436582 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 13 00:41:05.498706 systemd-fsck[857]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Mar 13 00:41:05.506409 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 13 00:41:05.516018 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 13 00:41:05.949605 kernel: EXT4-fs (vda9): mounted filesystem 26348f72-0225-4c06-aedc-823e61beebc6 r/w with ordered data mode. Quota mode: none.
Mar 13 00:41:05.950612 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 13 00:41:05.954014 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 13 00:41:05.959187 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 13 00:41:05.967044 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 13 00:41:05.970882 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 13 00:41:05.970928 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 13 00:41:06.028218 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (865)
Mar 13 00:41:06.028254 kernel: BTRFS info (device vda6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:41:06.028274 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:41:05.970953 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 13 00:41:06.039067 kernel: BTRFS info (device vda6): turning on async discard
Mar 13 00:41:06.039096 kernel: BTRFS info (device vda6): enabling free space tree
Mar 13 00:41:05.994001 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 13 00:41:06.002976 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 13 00:41:06.041061 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 13 00:41:06.070806 initrd-setup-root[890]: cut: /sysroot/etc/passwd: No such file or directory
Mar 13 00:41:06.092587 initrd-setup-root[897]: cut: /sysroot/etc/group: No such file or directory
Mar 13 00:41:06.102575 initrd-setup-root[904]: cut: /sysroot/etc/shadow: No such file or directory
Mar 13 00:41:06.111019 initrd-setup-root[911]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 13 00:41:06.397235 systemd-networkd[826]: eth0: Gained IPv6LL
Mar 13 00:41:06.521802 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 13 00:41:06.530832 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 13 00:41:06.537767 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 13 00:41:06.725522 kernel: BTRFS info (device vda6): last unmount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:41:06.725601 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 13 00:41:06.728954 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 13 00:41:06.814255 ignition[983]: INFO : Ignition 2.22.0
Mar 13 00:41:06.814255 ignition[983]: INFO : Stage: mount
Mar 13 00:41:06.820416 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 13 00:41:06.820416 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 13 00:41:06.829655 ignition[983]: INFO : mount: mount passed
Mar 13 00:41:06.829655 ignition[983]: INFO : Ignition finished successfully
Mar 13 00:41:06.841590 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 13 00:41:06.853683 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 13 00:41:06.954054 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 13 00:41:06.989551 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (992)
Mar 13 00:41:06.995533 kernel: BTRFS info (device vda6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:41:06.995583 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:41:07.003710 kernel: BTRFS info (device vda6): turning on async discard
Mar 13 00:41:07.003739 kernel: BTRFS info (device vda6): enabling free space tree
Mar 13 00:41:07.006040 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 13 00:41:07.077632 ignition[1009]: INFO : Ignition 2.22.0
Mar 13 00:41:07.077632 ignition[1009]: INFO : Stage: files
Mar 13 00:41:07.084122 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 13 00:41:07.084122 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 13 00:41:07.084122 ignition[1009]: DEBUG : files: compiled without relabeling support, skipping
Mar 13 00:41:07.095241 ignition[1009]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 13 00:41:07.095241 ignition[1009]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 13 00:41:07.104431 ignition[1009]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 13 00:41:07.109687 ignition[1009]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 13 00:41:07.113896 ignition[1009]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 13 00:41:07.113035 unknown[1009]: wrote ssh authorized keys file for user: core
Mar 13 00:41:07.120703 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 13 00:41:07.120703 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 13 00:41:07.183005 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 13 00:41:07.368230 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 13 00:41:07.375344 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 13 00:41:07.375344 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 13 00:41:07.375344 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 13 00:41:07.375344 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 13 00:41:07.375344 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 13 00:41:07.375344 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 13 00:41:07.375344 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 13 00:41:07.375344 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 13 00:41:07.375344 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 13 00:41:07.375344 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 13 00:41:07.375344 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 13 00:41:07.437890 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 13 00:41:07.437890 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 13 00:41:07.437890 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Mar 13 00:41:07.794729 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 13 00:41:09.413302 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 13 00:41:09.413302 ignition[1009]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 13 00:41:09.430018 ignition[1009]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 13 00:41:09.504351 ignition[1009]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 13 00:41:09.504351 ignition[1009]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 13 00:41:09.519652 ignition[1009]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 13 00:41:09.519652 ignition[1009]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 13 00:41:09.519652 ignition[1009]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 13 00:41:09.519652 ignition[1009]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 13 00:41:09.519652 ignition[1009]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Mar 13 00:41:09.558435 ignition[1009]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 13 00:41:09.558435 ignition[1009]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 13 00:41:09.558435 ignition[1009]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 13 00:41:09.558435 ignition[1009]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Mar 13 00:41:09.558435 ignition[1009]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Mar 13 00:41:09.558435 ignition[1009]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 13 00:41:09.558435 ignition[1009]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 13 00:41:09.558435 ignition[1009]: INFO : files: files passed
Mar 13 00:41:09.558435 ignition[1009]: INFO : Ignition finished successfully
Mar 13 00:41:09.560441 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 13 00:41:09.573617 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 13 00:41:09.593811 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 13 00:41:09.692321 initrd-setup-root-after-ignition[1036]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 13 00:41:09.618242 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 13 00:41:09.708926 initrd-setup-root-after-ignition[1039]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 13 00:41:09.708926 initrd-setup-root-after-ignition[1039]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 13 00:41:09.618408 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 13 00:41:09.730531 initrd-setup-root-after-ignition[1043]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 13 00:41:09.633069 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 13 00:41:09.639253 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 13 00:41:09.652072 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 13 00:41:09.816308 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 13 00:41:09.816577 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 13 00:41:09.821274 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 13 00:41:09.830344 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 13 00:41:09.838042 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 13 00:41:09.851756 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 13 00:41:09.924591 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 13 00:41:09.945372 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 13 00:41:10.007905 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 13 00:41:10.012674 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 13 00:41:10.021965 systemd[1]: Stopped target timers.target - Timer Units.
Mar 13 00:41:10.026302 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 13 00:41:10.026540 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 13 00:41:10.036797 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 13 00:41:10.045560 systemd[1]: Stopped target basic.target - Basic System.
Mar 13 00:41:10.045710 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 13 00:41:10.058743 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 13 00:41:10.066154 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 13 00:41:10.073538 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Mar 13 00:41:10.085789 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 13 00:41:10.092010 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 13 00:41:10.092191 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 13 00:41:10.099548 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 13 00:41:10.106145 systemd[1]: Stopped target swap.target - Swaps.
Mar 13 00:41:10.112645 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 13 00:41:10.112950 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 13 00:41:10.124854 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 13 00:41:10.131157 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 13 00:41:10.151117 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 13 00:41:10.151608 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 13 00:41:10.158095 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 13 00:41:10.158236 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 13 00:41:10.172799 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 13 00:41:10.173114 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 13 00:41:10.184698 systemd[1]: Stopped target paths.target - Path Units.
Mar 13 00:41:10.187999 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 13 00:41:10.191639 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 13 00:41:10.194371 systemd[1]: Stopped target slices.target - Slice Units.
Mar 13 00:41:10.207202 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 13 00:41:10.217573 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 13 00:41:10.217763 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 13 00:41:10.223121 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 13 00:41:10.223298 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 13 00:41:10.231842 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 13 00:41:10.232059 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 13 00:41:10.241189 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 13 00:41:10.241355 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 13 00:41:10.262710 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 13 00:41:10.265563 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 13 00:41:10.265717 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 00:41:10.287156 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 13 00:41:10.287830 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 13 00:41:10.288165 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 13 00:41:10.294089 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 13 00:41:10.294245 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 13 00:41:10.315106 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 13 00:41:10.315254 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 13 00:41:10.332052 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 13 00:41:10.345551 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 13 00:41:10.345789 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 13 00:41:10.393275 ignition[1064]: INFO : Ignition 2.22.0
Mar 13 00:41:10.393275 ignition[1064]: INFO : Stage: umount
Mar 13 00:41:10.398651 ignition[1064]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 13 00:41:10.398651 ignition[1064]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 13 00:41:10.398651 ignition[1064]: INFO : umount: umount passed
Mar 13 00:41:10.398651 ignition[1064]: INFO : Ignition finished successfully
Mar 13 00:41:10.403216 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 13 00:41:10.403424 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 13 00:41:10.411021 systemd[1]: Stopped target network.target - Network.
Mar 13 00:41:10.413083 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 13 00:41:10.413191 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 13 00:41:10.422701 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 13 00:41:10.422785 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 13 00:41:10.429528 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 13 00:41:10.429615 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 13 00:41:10.432986 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 13 00:41:10.433048 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 13 00:41:10.441966 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 13 00:41:10.442031 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 13 00:41:10.445065 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 13 00:41:10.451827 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 13 00:41:10.459089 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 13 00:41:10.459289 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 13 00:41:10.476721 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 13 00:41:10.478734 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Mar 13 00:41:10.487403 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 13 00:41:10.487567 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 00:41:10.493389 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 13 00:41:10.499078 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 13 00:41:10.499135 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 13 00:41:10.505230 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 00:41:10.515767 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 13 00:41:10.520660 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 13 00:41:10.536968 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 13 00:41:10.539276 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 13 00:41:10.539395 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 13 00:41:10.552277 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 13 00:41:10.552371 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 13 00:41:10.562226 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 13 00:41:10.562322 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 13 00:41:10.578174 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 13 00:41:10.578288 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 13 00:41:10.579179 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 13 00:41:10.579768 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 00:41:10.582208 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 13 00:41:10.582336 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 13 00:41:10.592302 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 13 00:41:10.592385 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 00:41:10.606732 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 13 00:41:10.606824 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 13 00:41:10.620928 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 13 00:41:10.621021 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 13 00:41:10.635411 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 13 00:41:10.635585 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 13 00:41:10.651624 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 13 00:41:10.659739 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Mar 13 00:41:10.659809 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Mar 13 00:41:10.670608 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 13 00:41:10.670711 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 00:41:10.689937 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 00:41:10.690146 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:41:10.717677 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Mar 13 00:41:10.717780 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 13 00:41:10.717864 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 13 00:41:10.718778 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 13 00:41:10.719025 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 13 00:41:10.735359 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 13 00:41:10.735652 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 13 00:41:10.751035 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 13 00:41:10.768589 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 13 00:41:10.846111 systemd[1]: Switching root.
Mar 13 00:41:10.890431 systemd-journald[201]: Journal stopped
Mar 13 00:41:13.098836 systemd-journald[201]: Received SIGTERM from PID 1 (systemd).
Mar 13 00:41:13.098924 kernel: SELinux: policy capability network_peer_controls=1
Mar 13 00:41:13.098944 kernel: SELinux: policy capability open_perms=1
Mar 13 00:41:13.098955 kernel: SELinux: policy capability extended_socket_class=1
Mar 13 00:41:13.098966 kernel: SELinux: policy capability always_check_network=0
Mar 13 00:41:13.098977 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 13 00:41:13.098991 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 13 00:41:13.099005 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 13 00:41:13.099016 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 13 00:41:13.099027 kernel: SELinux: policy capability userspace_initial_context=0
Mar 13 00:41:13.099037 kernel: audit: type=1403 audit(1773362471.206:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 13 00:41:13.099049 systemd[1]: Successfully loaded SELinux policy in 132.726ms.
Mar 13 00:41:13.099069 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 38.763ms.
Mar 13 00:41:13.099086 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 13 00:41:13.099098 systemd[1]: Detected virtualization kvm.
Mar 13 00:41:13.099110 systemd[1]: Detected architecture x86-64.
Mar 13 00:41:13.099123 systemd[1]: Detected first boot.
Mar 13 00:41:13.099134 systemd[1]: Initializing machine ID from VM UUID.
Mar 13 00:41:13.099145 zram_generator::config[1110]: No configuration found.
Mar 13 00:41:13.099157 kernel: Guest personality initialized and is inactive
Mar 13 00:41:13.099167 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Mar 13 00:41:13.099225 kernel: Initialized host personality
Mar 13 00:41:13.099235 kernel: NET: Registered PF_VSOCK protocol family
Mar 13 00:41:13.099246 systemd[1]: Populated /etc with preset unit settings.
Mar 13 00:41:13.099261 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 13 00:41:13.099272 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 13 00:41:13.099283 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 13 00:41:13.099294 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 13 00:41:13.099306 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 13 00:41:13.099317 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 13 00:41:13.099328 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 13 00:41:13.099338 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 13 00:41:13.099350 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 13 00:41:13.099363 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 13 00:41:13.099374 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 13 00:41:13.099385 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 13 00:41:13.099396 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 13 00:41:13.099407 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 13 00:41:13.099418 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 13 00:41:13.099429 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 13 00:41:13.099441 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 13 00:41:13.099454 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 13 00:41:13.099539 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 13 00:41:13.099551 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 13 00:41:13.099562 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 13 00:41:13.099574 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 13 00:41:13.099584 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 13 00:41:13.099596 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 13 00:41:13.099607 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 13 00:41:13.099621 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 13 00:41:13.099632 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 13 00:41:13.099643 systemd[1]: Reached target slices.target - Slice Units.
Mar 13 00:41:13.099654 systemd[1]: Reached target swap.target - Swaps.
Mar 13 00:41:13.099665 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 13 00:41:13.099676 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 13 00:41:13.099687 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 13 00:41:13.099699 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 00:41:13.099710 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 13 00:41:13.099720 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 00:41:13.099734 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 13 00:41:13.099745 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 13 00:41:13.099755 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 13 00:41:13.099766 systemd[1]: Mounting media.mount - External Media Directory...
Mar 13 00:41:13.099803 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:41:13.099814 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 13 00:41:13.099826 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 13 00:41:13.099837 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 13 00:41:13.099851 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 13 00:41:13.099862 systemd[1]: Reached target machines.target - Containers.
Mar 13 00:41:13.099873 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 13 00:41:13.099884 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 00:41:13.099924 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 13 00:41:13.099937 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 13 00:41:13.099948 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 13 00:41:13.099959 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 13 00:41:13.099970 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 13 00:41:13.099984 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 13 00:41:13.099995 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 13 00:41:13.100006 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 13 00:41:13.100017 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 13 00:41:13.100028 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 13 00:41:13.100039 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 13 00:41:13.100050 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 13 00:41:13.100062 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 00:41:13.100102 kernel: fuse: init (API version 7.41)
Mar 13 00:41:13.100113 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 13 00:41:13.100124 kernel: ACPI: bus type drm_connector registered
Mar 13 00:41:13.100135 kernel: loop: module loaded
Mar 13 00:41:13.100145 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 13 00:41:13.100156 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 13 00:41:13.100167 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 13 00:41:13.100178 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 13 00:41:13.100212 systemd-journald[1195]: Collecting audit messages is disabled.
Mar 13 00:41:13.100239 systemd-journald[1195]: Journal started
Mar 13 00:41:13.100258 systemd-journald[1195]: Runtime Journal (/run/log/journal/ed34f41a1d59421389166f7d997d49e5) is 6M, max 48.3M, 42.2M free.
Mar 13 00:41:12.286873 systemd[1]: Queued start job for default target multi-user.target.
Mar 13 00:41:12.306538 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 13 00:41:12.307360 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 13 00:41:12.308011 systemd[1]: systemd-journald.service: Consumed 1.073s CPU time.
Mar 13 00:41:13.110557 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 13 00:41:13.114594 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 13 00:41:13.114641 systemd[1]: Stopped verity-setup.service.
Mar 13 00:41:13.128596 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:41:13.138801 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 13 00:41:13.140095 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 13 00:41:13.143793 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 13 00:41:13.147638 systemd[1]: Mounted media.mount - External Media Directory.
Mar 13 00:41:13.151124 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 13 00:41:13.154998 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 13 00:41:13.158405 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 13 00:41:13.161813 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 13 00:41:13.165779 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 00:41:13.170066 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 13 00:41:13.170423 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 13 00:41:13.174413 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 13 00:41:13.174774 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 13 00:41:13.178755 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 13 00:41:13.179167 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 13 00:41:13.185673 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 13 00:41:13.186021 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 13 00:41:13.193258 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 13 00:41:13.193606 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 13 00:41:13.197643 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 13 00:41:13.197935 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 13 00:41:13.203048 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 13 00:41:13.208393 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 13 00:41:13.213103 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 13 00:41:13.217419 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 13 00:41:13.234203 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 13 00:41:13.239537 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 13 00:41:13.262405 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 13 00:41:13.268876 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 13 00:41:13.268972 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 13 00:41:13.270231 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 13 00:41:13.291870 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 13 00:41:13.296636 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 00:41:13.298721 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 13 00:41:13.303988 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 13 00:41:13.307891 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 13 00:41:13.311318 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 13 00:41:13.315376 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 13 00:41:13.320629 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 13 00:41:13.327654 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 13 00:41:13.335678 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 13 00:41:13.336057 systemd-journald[1195]: Time spent on flushing to /var/log/journal/ed34f41a1d59421389166f7d997d49e5 is 39.679ms for 978 entries.
Mar 13 00:41:13.336057 systemd-journald[1195]: System Journal (/var/log/journal/ed34f41a1d59421389166f7d997d49e5) is 8M, max 195.6M, 187.6M free.
Mar 13 00:41:13.414875 systemd-journald[1195]: Received client request to flush runtime journal.
Mar 13 00:41:13.346089 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 13 00:41:13.352870 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 13 00:41:13.362091 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 13 00:41:13.369422 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 13 00:41:13.381326 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 13 00:41:13.405661 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 13 00:41:13.417133 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 13 00:41:13.424144 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 13 00:41:13.463893 kernel: loop0: detected capacity change from 0 to 110984
Mar 13 00:41:13.617632 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 13 00:41:13.619431 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 13 00:41:13.633542 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 13 00:41:13.652531 kernel: loop1: detected capacity change from 0 to 219192
Mar 13 00:41:13.656808 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 13 00:41:13.666761 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 13 00:41:13.719574 kernel: loop2: detected capacity change from 0 to 128560
Mar 13 00:41:13.756395 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
Mar 13 00:41:13.763146 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
Mar 13 00:41:13.946617 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 00:41:13.965571 kernel: loop3: detected capacity change from 0 to 110984
Mar 13 00:41:14.000576 kernel: loop4: detected capacity change from 0 to 219192
Mar 13 00:41:14.029540 kernel: loop5: detected capacity change from 0 to 128560
Mar 13 00:41:14.067803 (sd-merge)[1255]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 13 00:41:14.070678 (sd-merge)[1255]: Merged extensions into '/usr'.
Mar 13 00:41:14.079220 systemd[1]: Reload requested from client PID 1229 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 13 00:41:14.079236 systemd[1]: Reloading...
Mar 13 00:41:14.402538 zram_generator::config[1281]: No configuration found.
Mar 13 00:41:14.901191 ldconfig[1224]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 13 00:41:14.960352 systemd[1]: Reloading finished in 880 ms.
Mar 13 00:41:15.002091 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 13 00:41:15.007611 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 13 00:41:15.028208 systemd[1]: Starting ensure-sysext.service...
Mar 13 00:41:15.031701 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 13 00:41:15.060856 systemd[1]: Reload requested from client PID 1318 ('systemctl') (unit ensure-sysext.service)...
Mar 13 00:41:15.060899 systemd[1]: Reloading...
Mar 13 00:41:15.082839 systemd-tmpfiles[1319]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Mar 13 00:41:15.082994 systemd-tmpfiles[1319]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Mar 13 00:41:15.083623 systemd-tmpfiles[1319]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 13 00:41:15.084123 systemd-tmpfiles[1319]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 13 00:41:15.085242 systemd-tmpfiles[1319]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 13 00:41:15.085731 systemd-tmpfiles[1319]: ACLs are not supported, ignoring.
Mar 13 00:41:15.085880 systemd-tmpfiles[1319]: ACLs are not supported, ignoring.
Mar 13 00:41:15.093752 systemd-tmpfiles[1319]: Detected autofs mount point /boot during canonicalization of boot.
Mar 13 00:41:15.093783 systemd-tmpfiles[1319]: Skipping /boot
Mar 13 00:41:15.217246 systemd-tmpfiles[1319]: Detected autofs mount point /boot during canonicalization of boot.
Mar 13 00:41:15.217262 systemd-tmpfiles[1319]: Skipping /boot
Mar 13 00:41:15.254544 zram_generator::config[1349]: No configuration found.
Mar 13 00:41:15.477192 systemd[1]: Reloading finished in 415 ms.
Mar 13 00:41:15.503279 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 13 00:41:15.533691 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 13 00:41:15.546150 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 13 00:41:15.551065 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 13 00:41:15.572372 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 13 00:41:15.579090 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 13 00:41:15.590885 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 00:41:15.597782 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 13 00:41:15.607453 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:41:15.607779 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 00:41:15.610986 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 13 00:41:15.617973 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 13 00:41:15.628854 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 13 00:41:15.632204 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 00:41:15.632320 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 00:41:15.636155 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 13 00:41:15.639522 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:41:15.646786 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 13 00:41:15.652087 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 13 00:41:15.652412 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 13 00:41:15.657306 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 13 00:41:15.657704 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 13 00:41:15.661152 systemd-udevd[1395]: Using default interface naming scheme 'v255'.
Mar 13 00:41:15.662094 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 13 00:41:15.662337 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 13 00:41:15.666053 augenrules[1414]: No rules
Mar 13 00:41:15.669531 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 13 00:41:15.669896 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 13 00:41:15.680401 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:41:15.681793 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 00:41:15.684044 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 13 00:41:15.695944 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 13 00:41:15.701896 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 13 00:41:15.704810 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 00:41:15.704976 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 00:41:15.706453 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 13 00:41:15.706619 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:41:15.707687 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 00:41:15.709859 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 13 00:41:15.711854 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 13 00:41:15.712666 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 13 00:41:15.712863 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 13 00:41:15.730134 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:41:15.735082 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 13 00:41:15.738198 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 00:41:15.741555 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 13 00:41:15.759825 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 13 00:41:15.763539 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 00:41:15.763670 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 00:41:15.768307 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 13 00:41:15.772131 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 13 00:41:15.772278 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:41:15.773741 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 13 00:41:15.779114 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 13 00:41:15.785232 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 13 00:41:15.810039 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 13 00:41:15.810304 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 13 00:41:15.813820 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 13 00:41:15.817260 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 13 00:41:15.817670 augenrules[1456]: /sbin/augenrules: No change
Mar 13 00:41:15.818061 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 13 00:41:15.821826 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 13 00:41:15.822577 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 13 00:41:15.833539 augenrules[1484]: No rules
Mar 13 00:41:15.835006 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 13 00:41:15.835277 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 13 00:41:15.846355 systemd[1]: Finished ensure-sysext.service.
Mar 13 00:41:15.860029 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 13 00:41:15.862876 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 13 00:41:15.863006 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 13 00:41:15.866032 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 13 00:41:15.903559 kernel: mousedev: PS/2 mouse device common for all mice
Mar 13 00:41:15.952543 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Mar 13 00:41:15.963775 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 13 00:41:15.966910 systemd-resolved[1389]: Positive Trust Anchors:
Mar 13 00:41:15.966979 systemd-resolved[1389]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 13 00:41:15.967027 systemd-resolved[1389]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 13 00:41:15.971827 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 13 00:41:15.977035 kernel: ACPI: button: Power Button [PWRF]
Mar 13 00:41:15.982154 systemd-resolved[1389]: Defaulting to hostname 'linux'.
Mar 13 00:41:15.990226 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 13 00:41:15.993801 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 13 00:41:16.042006 systemd-networkd[1462]: lo: Link UP
Mar 13 00:41:16.042032 systemd-networkd[1462]: lo: Gained carrier
Mar 13 00:41:16.043158 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 13 00:41:16.045259 systemd-networkd[1462]: Enumeration completed
Mar 13 00:41:16.045996 systemd-networkd[1462]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:41:16.046029 systemd-networkd[1462]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 13 00:41:16.047324 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 13 00:41:16.048871 systemd-networkd[1462]: eth0: Link UP
Mar 13 00:41:16.050965 systemd-networkd[1462]: eth0: Gained carrier
Mar 13 00:41:16.051006 systemd-networkd[1462]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:41:16.056046 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 13 00:41:16.062413 systemd[1]: Reached target network.target - Network.
Mar 13 00:41:16.065619 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 13 00:41:16.069563 systemd-networkd[1462]: eth0: DHCPv4 address 10.0.0.68/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 13 00:41:16.069568 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 13 00:41:16.070520 systemd-timesyncd[1503]: Network configuration changed, trying to establish connection.
Mar 13 00:41:16.626068 systemd-timesyncd[1503]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 13 00:41:16.626164 systemd-timesyncd[1503]: Initial clock synchronization to Fri 2026-03-13 00:41:16.625682 UTC.
Mar 13 00:41:16.626771 systemd-resolved[1389]: Clock change detected. Flushing caches.
Mar 13 00:41:16.641086 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 13 00:41:16.641600 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 13 00:41:16.643787 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 13 00:41:16.648516 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Mar 13 00:41:16.652851 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 13 00:41:16.657601 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 13 00:41:16.657677 systemd[1]: Reached target paths.target - Path Units.
Mar 13 00:41:16.661653 systemd[1]: Reached target time-set.target - System Time Set.
Mar 13 00:41:16.665809 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 13 00:41:16.674127 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 13 00:41:16.681979 systemd[1]: Reached target timers.target - Timer Units.
Mar 13 00:41:16.685863 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 13 00:41:16.692293 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 13 00:41:16.697089 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 13 00:41:16.701176 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 13 00:41:16.705241 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 13 00:41:16.712411 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 13 00:41:16.716653 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 13 00:41:16.723791 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 13 00:41:16.729736 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 13 00:41:16.735164 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 13 00:41:16.763189 systemd[1]: Reached target sockets.target - Socket Units.
Mar 13 00:41:16.766483 systemd[1]: Reached target basic.target - Basic System.
Mar 13 00:41:16.770917 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 13 00:41:16.771191 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 13 00:41:16.780681 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 13 00:41:16.787850 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 13 00:41:16.794972 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 13 00:41:16.801668 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 13 00:41:16.806460 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 13 00:41:16.809763 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 13 00:41:16.811047 jq[1533]: false
Mar 13 00:41:16.812184 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Mar 13 00:41:16.820085 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 13 00:41:16.828873 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 13 00:41:16.836381 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 13 00:41:16.838878 oslogin_cache_refresh[1535]: Refreshing passwd entry cache
Mar 13 00:41:16.840941 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Refreshing passwd entry cache
Mar 13 00:41:16.841138 extend-filesystems[1534]: Found /dev/vda6
Mar 13 00:41:16.844288 extend-filesystems[1534]: Found /dev/vda9
Mar 13 00:41:16.846981 extend-filesystems[1534]: Checking size of /dev/vda9
Mar 13 00:41:16.851902 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 13 00:41:16.866100 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 13 00:41:16.868797 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Failure getting users, quitting
Mar 13 00:41:16.868797 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Mar 13 00:41:16.868797 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Refreshing group entry cache
Mar 13 00:41:16.867233 oslogin_cache_refresh[1535]: Failure getting users, quitting
Mar 13 00:41:16.867256 oslogin_cache_refresh[1535]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Mar 13 00:41:16.867308 oslogin_cache_refresh[1535]: Refreshing group entry cache
Mar 13 00:41:16.870483 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 13 00:41:16.871202 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 13 00:41:16.873188 systemd[1]: Starting update-engine.service - Update Engine...
Mar 13 00:41:16.873938 extend-filesystems[1534]: Resized partition /dev/vda9
Mar 13 00:41:16.880185 extend-filesystems[1552]: resize2fs 1.47.3 (8-Jul-2025)
Mar 13 00:41:16.884359 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 13 00:41:16.890026 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 13 00:41:16.892880 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Failure getting groups, quitting
Mar 13 00:41:16.892880 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Mar 13 00:41:16.892854 oslogin_cache_refresh[1535]: Failure getting groups, quitting
Mar 13 00:41:16.892870 oslogin_cache_refresh[1535]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Mar 13 00:41:16.901458 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 13 00:41:16.912658 kernel: kvm_amd: TSC scaling supported
Mar 13 00:41:16.912704 kernel: kvm_amd: Nested Virtualization enabled
Mar 13 00:41:16.912718 kernel: kvm_amd: Nested Paging enabled
Mar 13 00:41:16.918077 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 13 00:41:16.923820 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 13 00:41:16.924124 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 13 00:41:16.924665 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Mar 13 00:41:16.924938 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Mar 13 00:41:16.930041 systemd[1]: motdgen.service: Deactivated successfully.
Mar 13 00:41:16.930298 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 13 00:41:16.935484 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 13 00:41:16.935851 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 13 00:41:16.943573 update_engine[1550]: I20260313 00:41:16.941730 1550 main.cc:92] Flatcar Update Engine starting
Mar 13 00:41:16.944632 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 13 00:41:16.944672 kernel: kvm_amd: PMU virtualization is disabled
Mar 13 00:41:16.948566 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 13 00:41:16.964488 (ntainerd)[1565]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 13 00:41:16.973207 jq[1553]: true
Mar 13 00:41:16.977212 extend-filesystems[1552]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 13 00:41:16.977212 extend-filesystems[1552]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 13 00:41:16.977212 extend-filesystems[1552]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 13 00:41:16.987215 extend-filesystems[1534]: Resized filesystem in /dev/vda9
Mar 13 00:41:16.997805 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 13 00:41:16.998155 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 13 00:41:17.004978 systemd-logind[1547]: Watching system buttons on /dev/input/event2 (Power Button)
Mar 13 00:41:17.005078 systemd-logind[1547]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 13 00:41:17.006394 systemd-logind[1547]: New seat seat0.
Mar 13 00:41:17.046911 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 13 00:41:17.048261 jq[1576]: true
Mar 13 00:41:17.060708 dbus-daemon[1531]: [system] SELinux support is enabled
Mar 13 00:41:17.063873 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 13 00:41:17.076581 update_engine[1550]: I20260313 00:41:17.073825 1550 update_check_scheduler.cc:74] Next update check in 11m54s
Mar 13 00:41:17.092375 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 13 00:41:17.095793 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 13 00:41:17.095822 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 13 00:41:17.097767 dbus-daemon[1531]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 13 00:41:17.102107 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:41:17.105355 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 13 00:41:17.105383 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 13 00:41:17.108958 systemd[1]: Started update-engine.service - Update Engine.
Mar 13 00:41:17.111750 tar[1563]: linux-amd64/LICENSE
Mar 13 00:41:17.112747 tar[1563]: linux-amd64/helm
Mar 13 00:41:17.113508 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 13 00:41:17.129778 kernel: EDAC MC: Ver: 3.0.0
Mar 13 00:41:17.150471 sshd_keygen[1557]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 13 00:41:17.157626 bash[1605]: Updated "/home/core/.ssh/authorized_keys"
Mar 13 00:41:17.158620 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 13 00:41:17.160384 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 13 00:41:17.186064 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 13 00:41:17.191847 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 13 00:41:17.196075 systemd[1]: Started sshd@0-10.0.0.68:22-10.0.0.1:38496.service - OpenSSH per-connection server daemon (10.0.0.1:38496).
Mar 13 00:41:17.210398 locksmithd[1594]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 13 00:41:17.212367 containerd[1565]: time="2026-03-13T00:41:17Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Mar 13 00:41:17.215354 containerd[1565]: time="2026-03-13T00:41:17.215293735Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Mar 13 00:41:17.218189 systemd[1]: issuegen.service: Deactivated successfully.
Mar 13 00:41:17.218505 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 13 00:41:17.225153 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 13 00:41:17.228363 containerd[1565]: time="2026-03-13T00:41:17.228318960Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.662µs"
Mar 13 00:41:17.228480 containerd[1565]: time="2026-03-13T00:41:17.228453111Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Mar 13 00:41:17.228633 containerd[1565]: time="2026-03-13T00:41:17.228615754Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Mar 13 00:41:17.228859 containerd[1565]: time="2026-03-13T00:41:17.228842518Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Mar 13 00:41:17.228914 containerd[1565]: time="2026-03-13T00:41:17.228902259Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Mar 13 00:41:17.229035 containerd[1565]: time="2026-03-13T00:41:17.228979724Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 13 00:41:17.229159 containerd[1565]: time="2026-03-13T00:41:17.229138670Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 13 00:41:17.229244 containerd[1565]: time="2026-03-13T00:41:17.229225362Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 13 00:41:17.229668 containerd[1565]: time="2026-03-13T00:41:17.229647951Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 13 00:41:17.229750 containerd[1565]: time="2026-03-13T00:41:17.229730835Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 13 00:41:17.229823 containerd[1565]: time="2026-03-13T00:41:17.229807750Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 13 00:41:17.229889 containerd[1565]: time="2026-03-13T00:41:17.229876808Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Mar 13 00:41:17.230092 containerd[1565]: time="2026-03-13T00:41:17.230075358Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Mar 13 00:41:17.230383 containerd[1565]: time="2026-03-13T00:41:17.230361232Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 13 00:41:17.230492 containerd[1565]: time="2026-03-13T00:41:17.230473622Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 13 00:41:17.230601 containerd[1565]: time="2026-03-13T00:41:17.230586823Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Mar 13 00:41:17.230697 containerd[1565]: time="2026-03-13T00:41:17.230683344Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Mar 13 00:41:17.231107 containerd[1565]: time="2026-03-13T00:41:17.231086787Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Mar 13 00:41:17.231269 containerd[1565]: time="2026-03-13T00:41:17.231242928Z" level=info msg="metadata content store policy set" policy=shared
Mar 13 00:41:17.248584 containerd[1565]: time="2026-03-13T00:41:17.247167012Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Mar 13 00:41:17.248584 containerd[1565]: time="2026-03-13T00:41:17.247262902Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Mar 13 00:41:17.248584 containerd[1565]: time="2026-03-13T00:41:17.247278460Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Mar 13 00:41:17.248584 containerd[1565]: time="2026-03-13T00:41:17.247290303Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Mar 13 00:41:17.248584 containerd[1565]: time="2026-03-13T00:41:17.247301754Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Mar 13 00:41:17.248584 containerd[1565]: time="2026-03-13T00:41:17.247310681Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Mar 13 00:41:17.248584 containerd[1565]: time="2026-03-13T00:41:17.247321210Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Mar 13 00:41:17.248584 containerd[1565]: time="2026-03-13T00:41:17.247331490Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Mar 13 00:41:17.248584 containerd[1565]: time="2026-03-13T00:41:17.247341658Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Mar 13 00:41:17.248584 containerd[1565]: time="2026-03-13T00:41:17.247350686Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Mar 13 00:41:17.248584 containerd[1565]: time="2026-03-13T00:41:17.247359362Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Mar 13 00:41:17.248584 containerd[1565]: time="2026-03-13T00:41:17.247370342Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Mar 13 00:41:17.248584 containerd[1565]: time="2026-03-13T00:41:17.247597076Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Mar 13 00:41:17.248584 containerd[1565]: time="2026-03-13T00:41:17.247618155Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Mar 13 00:41:17.249137 containerd[1565]: time="2026-03-13T00:41:17.247630698Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Mar 13 00:41:17.249137 containerd[1565]: time="2026-03-13T00:41:17.247639996Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Mar 13 00:41:17.249137 containerd[1565]: time="2026-03-13T00:41:17.247648942Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Mar 13 00:41:17.249137 containerd[1565]: time="2026-03-13T00:41:17.247658540Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Mar 13 00:41:17.249137 containerd[1565]: time="2026-03-13T00:41:17.247667867Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Mar 13 00:41:17.249137 containerd[1565]: time="2026-03-13T00:41:17.247677315Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Mar 13 00:41:17.249137 containerd[1565]: time="2026-03-13T00:41:17.247727198Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Mar 13 00:41:17.249137 containerd[1565]: time="2026-03-13T00:41:17.247797459Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Mar 13 00:41:17.249137 containerd[1565]: time="2026-03-13T00:41:17.247808180Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Mar 13 00:41:17.249137 containerd[1565]: time="2026-03-13T00:41:17.247854215Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Mar 13 00:41:17.249137 containerd[1565]: time="2026-03-13T00:41:17.247866809Z" level=info msg="Start snapshots syncer"
Mar 13 00:41:17.249137 containerd[1565]: time="2026-03-13T00:41:17.247889000Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Mar 13 00:41:17.253650 containerd[1565]: time="2026-03-13T00:41:17.253138799Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\
":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 13 00:41:17.253650 containerd[1565]: time="2026-03-13T00:41:17.253274442Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 13 00:41:17.267858 containerd[1565]: time="2026-03-13T00:41:17.255605353Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 13 00:41:17.269117 containerd[1565]: time="2026-03-13T00:41:17.268457760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 13 00:41:17.269117 containerd[1565]: time="2026-03-13T00:41:17.268686107Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 13 00:41:17.269117 containerd[1565]: time="2026-03-13T00:41:17.268705503Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 13 00:41:17.269117 containerd[1565]: time="2026-03-13T00:41:17.268721493Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 13 00:41:17.269117 containerd[1565]: time="2026-03-13T00:41:17.268736370Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 13 00:41:17.269117 containerd[1565]: time="2026-03-13T00:41:17.268746629Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 13 00:41:17.269117 containerd[1565]: time="2026-03-13T00:41:17.268789119Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 13 00:41:17.269117 containerd[1565]: time="2026-03-13T00:41:17.268819866Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 13 00:41:17.269117 containerd[1565]: time="2026-03-13T00:41:17.268830045Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 13 00:41:17.269117 containerd[1565]: time="2026-03-13T00:41:17.268839423Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 13 00:41:17.269117 containerd[1565]: time="2026-03-13T00:41:17.268897221Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 13 00:41:17.269117 containerd[1565]: time="2026-03-13T00:41:17.268911818Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 13 00:41:17.269117 containerd[1565]: time="2026-03-13T00:41:17.268919923Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 13 00:41:17.269393 containerd[1565]: time="2026-03-13T00:41:17.268928168Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 13 00:41:17.269393 containerd[1565]: time="2026-03-13T00:41:17.268935181Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 13 00:41:17.269393 containerd[1565]: time="2026-03-13T00:41:17.268943517Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 13 00:41:17.269393 containerd[1565]: time="2026-03-13T00:41:17.269037873Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 13 00:41:17.269393 containerd[1565]: time="2026-03-13T00:41:17.269075714Z" level=info msg="runtime interface created" Mar 13 00:41:17.269393 containerd[1565]: 
time="2026-03-13T00:41:17.269081224Z" level=info msg="created NRI interface" Mar 13 00:41:17.269393 containerd[1565]: time="2026-03-13T00:41:17.269088849Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 13 00:41:17.269393 containerd[1565]: time="2026-03-13T00:41:17.269172555Z" level=info msg="Connect containerd service" Mar 13 00:41:17.269393 containerd[1565]: time="2026-03-13T00:41:17.269220754Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 13 00:41:17.270422 containerd[1565]: time="2026-03-13T00:41:17.270169025Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 13 00:41:17.309400 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 13 00:41:17.882666 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 13 00:41:17.932349 sshd[1621]: Accepted publickey for core from 10.0.0.1 port 38496 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:41:17.956456 sshd-session[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:41:18.171768 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
Mar 13 00:41:18.320421 systemd-networkd[1462]: eth0: Gained IPv6LL Mar 13 00:41:18.358147 containerd[1565]: time="2026-03-13T00:41:18.357397930Z" level=info msg="Start subscribing containerd event" Mar 13 00:41:18.359199 containerd[1565]: time="2026-03-13T00:41:18.358300655Z" level=info msg="Start recovering state" Mar 13 00:41:18.359199 containerd[1565]: time="2026-03-13T00:41:18.358902088Z" level=info msg="Start event monitor" Mar 13 00:41:18.359199 containerd[1565]: time="2026-03-13T00:41:18.358935570Z" level=info msg="Start cni network conf syncer for default" Mar 13 00:41:18.359199 containerd[1565]: time="2026-03-13T00:41:18.358942894Z" level=info msg="Start streaming server" Mar 13 00:41:18.359199 containerd[1565]: time="2026-03-13T00:41:18.358972490Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 13 00:41:18.359199 containerd[1565]: time="2026-03-13T00:41:18.358983299Z" level=info msg="runtime interface starting up..." Mar 13 00:41:18.359199 containerd[1565]: time="2026-03-13T00:41:18.359022794Z" level=info msg="starting plugins..." Mar 13 00:41:18.359199 containerd[1565]: time="2026-03-13T00:41:18.359067737Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 13 00:41:18.360246 containerd[1565]: time="2026-03-13T00:41:18.359823788Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 13 00:41:18.360246 containerd[1565]: time="2026-03-13T00:41:18.359877439Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 13 00:41:18.360246 containerd[1565]: time="2026-03-13T00:41:18.359943873Z" level=info msg="containerd successfully booted in 1.148092s" Mar 13 00:41:18.399259 systemd[1]: Reached target getty.target - Login Prompts. Mar 13 00:41:18.400778 systemd[1]: Started containerd.service - containerd container runtime. Mar 13 00:41:18.403736 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Mar 13 00:41:18.408211 systemd[1]: Reached target network-online.target - Network is Online. Mar 13 00:41:18.410567 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 13 00:41:18.441081 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:41:18.575827 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 13 00:41:18.758125 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 13 00:41:18.944312 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 13 00:41:18.949501 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 13 00:41:18.959437 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 13 00:41:18.964394 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 13 00:41:18.965340 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 13 00:41:18.974980 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 13 00:41:18.976906 systemd-logind[1547]: New session 1 of user core. Mar 13 00:41:18.982093 tar[1563]: linux-amd64/README.md Mar 13 00:41:18.992399 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 13 00:41:19.000434 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 13 00:41:19.037252 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 13 00:41:19.055328 (systemd)[1674]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 13 00:41:19.061303 systemd-logind[1547]: New session c1 of user core. Mar 13 00:41:19.550059 systemd[1674]: Queued start job for default target default.target. Mar 13 00:41:19.563485 systemd[1674]: Created slice app.slice - User Application Slice. Mar 13 00:41:19.563578 systemd[1674]: Reached target paths.target - Paths. 
Mar 13 00:41:19.563641 systemd[1674]: Reached target timers.target - Timers. Mar 13 00:41:19.567152 systemd[1674]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 13 00:41:19.614091 systemd[1674]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 13 00:41:19.614264 systemd[1674]: Reached target sockets.target - Sockets. Mar 13 00:41:19.614317 systemd[1674]: Reached target basic.target - Basic System. Mar 13 00:41:19.614362 systemd[1674]: Reached target default.target - Main User Target. Mar 13 00:41:19.614399 systemd[1674]: Startup finished in 535ms. Mar 13 00:41:19.615425 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 13 00:41:19.640860 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 13 00:41:19.827810 systemd[1]: Started sshd@1-10.0.0.68:22-10.0.0.1:59792.service - OpenSSH per-connection server daemon (10.0.0.1:59792). Mar 13 00:41:20.087663 sshd[1686]: Accepted publickey for core from 10.0.0.1 port 59792 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:41:20.092468 sshd-session[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:41:20.099374 systemd-logind[1547]: New session 2 of user core. Mar 13 00:41:20.107800 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 13 00:41:20.146894 sshd[1689]: Connection closed by 10.0.0.1 port 59792 Mar 13 00:41:20.147382 sshd-session[1686]: pam_unix(sshd:session): session closed for user core Mar 13 00:41:20.159650 systemd[1]: sshd@1-10.0.0.68:22-10.0.0.1:59792.service: Deactivated successfully. Mar 13 00:41:20.162382 systemd[1]: session-2.scope: Deactivated successfully. Mar 13 00:41:20.163999 systemd-logind[1547]: Session 2 logged out. Waiting for processes to exit. Mar 13 00:41:20.168294 systemd[1]: Started sshd@2-10.0.0.68:22-10.0.0.1:59808.service - OpenSSH per-connection server daemon (10.0.0.1:59808). Mar 13 00:41:20.173217 systemd-logind[1547]: Removed session 2. 
Mar 13 00:41:20.441386 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 59808 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:41:20.443971 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:41:20.450758 systemd-logind[1547]: New session 3 of user core. Mar 13 00:41:20.463765 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 13 00:41:20.492820 sshd[1699]: Connection closed by 10.0.0.1 port 59808 Mar 13 00:41:20.493190 sshd-session[1695]: pam_unix(sshd:session): session closed for user core Mar 13 00:41:20.499783 systemd[1]: sshd@2-10.0.0.68:22-10.0.0.1:59808.service: Deactivated successfully. Mar 13 00:41:20.502603 systemd[1]: session-3.scope: Deactivated successfully. Mar 13 00:41:20.505647 systemd-logind[1547]: Session 3 logged out. Waiting for processes to exit. Mar 13 00:41:20.507232 systemd-logind[1547]: Removed session 3. Mar 13 00:41:22.476398 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:41:22.480942 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 13 00:41:22.484487 systemd[1]: Startup finished in 4.112s (kernel) + 12.378s (initrd) + 10.853s (userspace) = 27.343s. 
Mar 13 00:41:22.496280 (kubelet)[1709]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 00:41:23.256002 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1030001209 wd_nsec: 1030000841 Mar 13 00:41:24.219498 kubelet[1709]: E0313 00:41:24.219215 1709 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 00:41:24.223246 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 00:41:24.223483 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 00:41:24.224168 systemd[1]: kubelet.service: Consumed 4.694s CPU time, 258.4M memory peak. Mar 13 00:41:30.516848 systemd[1]: Started sshd@3-10.0.0.68:22-10.0.0.1:38070.service - OpenSSH per-connection server daemon (10.0.0.1:38070). Mar 13 00:41:30.590895 sshd[1722]: Accepted publickey for core from 10.0.0.1 port 38070 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:41:30.592410 sshd-session[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:41:30.598205 systemd-logind[1547]: New session 4 of user core. Mar 13 00:41:30.608723 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 13 00:41:30.627967 sshd[1725]: Connection closed by 10.0.0.1 port 38070 Mar 13 00:41:30.628604 sshd-session[1722]: pam_unix(sshd:session): session closed for user core Mar 13 00:41:30.644900 systemd[1]: sshd@3-10.0.0.68:22-10.0.0.1:38070.service: Deactivated successfully. Mar 13 00:41:30.647338 systemd[1]: session-4.scope: Deactivated successfully. Mar 13 00:41:30.648479 systemd-logind[1547]: Session 4 logged out. Waiting for processes to exit. 
Mar 13 00:41:30.652074 systemd[1]: Started sshd@4-10.0.0.68:22-10.0.0.1:38078.service - OpenSSH per-connection server daemon (10.0.0.1:38078). Mar 13 00:41:30.652872 systemd-logind[1547]: Removed session 4. Mar 13 00:41:30.719486 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 38078 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:41:30.721175 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:41:30.727162 systemd-logind[1547]: New session 5 of user core. Mar 13 00:41:30.740721 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 13 00:41:30.750467 sshd[1734]: Connection closed by 10.0.0.1 port 38078 Mar 13 00:41:30.750937 sshd-session[1731]: pam_unix(sshd:session): session closed for user core Mar 13 00:41:30.768250 systemd[1]: sshd@4-10.0.0.68:22-10.0.0.1:38078.service: Deactivated successfully. Mar 13 00:41:30.770633 systemd[1]: session-5.scope: Deactivated successfully. Mar 13 00:41:30.777435 systemd-logind[1547]: Session 5 logged out. Waiting for processes to exit. Mar 13 00:41:30.780710 systemd[1]: Started sshd@5-10.0.0.68:22-10.0.0.1:38080.service - OpenSSH per-connection server daemon (10.0.0.1:38080). Mar 13 00:41:30.782061 systemd-logind[1547]: Removed session 5. Mar 13 00:41:30.851590 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 38080 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:41:30.853520 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:41:30.869325 systemd-logind[1547]: New session 6 of user core. Mar 13 00:41:30.888260 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 13 00:41:30.920733 sshd[1743]: Connection closed by 10.0.0.1 port 38080 Mar 13 00:41:30.921174 sshd-session[1740]: pam_unix(sshd:session): session closed for user core Mar 13 00:41:30.941322 systemd[1]: sshd@5-10.0.0.68:22-10.0.0.1:38080.service: Deactivated successfully. 
Mar 13 00:41:30.943333 systemd[1]: session-6.scope: Deactivated successfully. Mar 13 00:41:30.944466 systemd-logind[1547]: Session 6 logged out. Waiting for processes to exit. Mar 13 00:41:30.947143 systemd[1]: Started sshd@6-10.0.0.68:22-10.0.0.1:38086.service - OpenSSH per-connection server daemon (10.0.0.1:38086). Mar 13 00:41:30.948255 systemd-logind[1547]: Removed session 6. Mar 13 00:41:31.056814 sshd[1749]: Accepted publickey for core from 10.0.0.1 port 38086 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:41:31.058706 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:41:31.065017 systemd-logind[1547]: New session 7 of user core. Mar 13 00:41:31.074711 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 13 00:41:31.097603 sudo[1753]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 13 00:41:31.097926 sudo[1753]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:41:31.119316 sudo[1753]: pam_unix(sudo:session): session closed for user root Mar 13 00:41:31.121314 sshd[1752]: Connection closed by 10.0.0.1 port 38086 Mar 13 00:41:31.122147 sshd-session[1749]: pam_unix(sshd:session): session closed for user core Mar 13 00:41:31.131279 systemd[1]: sshd@6-10.0.0.68:22-10.0.0.1:38086.service: Deactivated successfully. Mar 13 00:41:31.134915 systemd[1]: session-7.scope: Deactivated successfully. Mar 13 00:41:31.136746 systemd-logind[1547]: Session 7 logged out. Waiting for processes to exit. Mar 13 00:41:31.139287 systemd[1]: Started sshd@7-10.0.0.68:22-10.0.0.1:38100.service - OpenSSH per-connection server daemon (10.0.0.1:38100). Mar 13 00:41:31.140670 systemd-logind[1547]: Removed session 7. 
Mar 13 00:41:31.196317 sshd[1759]: Accepted publickey for core from 10.0.0.1 port 38100 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:41:31.197888 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:41:31.204251 systemd-logind[1547]: New session 8 of user core. Mar 13 00:41:31.213732 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 13 00:41:31.231478 sudo[1764]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 13 00:41:31.231851 sudo[1764]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:41:31.248049 sudo[1764]: pam_unix(sudo:session): session closed for user root Mar 13 00:41:31.255291 sudo[1763]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 13 00:41:31.255693 sudo[1763]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:41:31.267152 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 13 00:41:31.323409 augenrules[1786]: No rules Mar 13 00:41:31.324949 systemd[1]: audit-rules.service: Deactivated successfully. Mar 13 00:41:31.325287 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 13 00:41:31.326419 sudo[1763]: pam_unix(sudo:session): session closed for user root Mar 13 00:41:31.328038 sshd[1762]: Connection closed by 10.0.0.1 port 38100 Mar 13 00:41:31.328422 sshd-session[1759]: pam_unix(sshd:session): session closed for user core Mar 13 00:41:31.351958 systemd[1]: sshd@7-10.0.0.68:22-10.0.0.1:38100.service: Deactivated successfully. Mar 13 00:41:31.353833 systemd[1]: session-8.scope: Deactivated successfully. Mar 13 00:41:31.354879 systemd-logind[1547]: Session 8 logged out. Waiting for processes to exit. 
Mar 13 00:41:31.357430 systemd[1]: Started sshd@8-10.0.0.68:22-10.0.0.1:38112.service - OpenSSH per-connection server daemon (10.0.0.1:38112). Mar 13 00:41:31.358578 systemd-logind[1547]: Removed session 8. Mar 13 00:41:31.406991 sshd[1795]: Accepted publickey for core from 10.0.0.1 port 38112 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:41:31.408297 sshd-session[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:41:31.413334 systemd-logind[1547]: New session 9 of user core. Mar 13 00:41:31.430692 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 13 00:41:31.446007 sudo[1799]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 13 00:41:31.446407 sudo[1799]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:41:32.585711 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 13 00:41:32.606323 (dockerd)[1821]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 13 00:41:34.586008 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 13 00:41:34.690826 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:41:36.668822 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 3904210350 wd_nsec: 3904208591 Mar 13 00:41:38.715879 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 13 00:41:38.756257 (kubelet)[1836]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 00:41:38.808454 dockerd[1821]: time="2026-03-13T00:41:38.808335539Z" level=info msg="Starting up" Mar 13 00:41:38.810683 dockerd[1821]: time="2026-03-13T00:41:38.810604434Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 13 00:41:39.091214 dockerd[1821]: time="2026-03-13T00:41:39.090738060Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Mar 13 00:41:39.155233 kubelet[1836]: E0313 00:41:39.154984 1836 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 00:41:39.161300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 00:41:39.161637 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 00:41:39.162654 systemd[1]: kubelet.service: Consumed 3.658s CPU time, 110.6M memory peak. Mar 13 00:41:39.382648 dockerd[1821]: time="2026-03-13T00:41:39.381315531Z" level=info msg="Loading containers: start." Mar 13 00:41:39.400684 kernel: Initializing XFRM netlink socket Mar 13 00:41:39.866627 systemd-networkd[1462]: docker0: Link UP Mar 13 00:41:39.873214 dockerd[1821]: time="2026-03-13T00:41:39.873127240Z" level=info msg="Loading containers: done." Mar 13 00:41:39.900234 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1747013861-merged.mount: Deactivated successfully. 
Mar 13 00:41:39.901916 dockerd[1821]: time="2026-03-13T00:41:39.901826947Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 13 00:41:39.902035 dockerd[1821]: time="2026-03-13T00:41:39.901994280Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Mar 13 00:41:39.902200 dockerd[1821]: time="2026-03-13T00:41:39.902142536Z" level=info msg="Initializing buildkit" Mar 13 00:41:39.947892 dockerd[1821]: time="2026-03-13T00:41:39.947826166Z" level=info msg="Completed buildkit initialization" Mar 13 00:41:39.958690 dockerd[1821]: time="2026-03-13T00:41:39.958632923Z" level=info msg="Daemon has completed initialization" Mar 13 00:41:39.958922 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 13 00:41:39.959128 dockerd[1821]: time="2026-03-13T00:41:39.958856843Z" level=info msg="API listen on /run/docker.sock" Mar 13 00:41:40.528284 containerd[1565]: time="2026-03-13T00:41:40.528130336Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\"" Mar 13 00:41:41.085686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount488347397.mount: Deactivated successfully. 
Mar 13 00:41:42.309101 containerd[1565]: time="2026-03-13T00:41:42.308983395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:41:42.310182 containerd[1565]: time="2026-03-13T00:41:42.310105342Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074497" Mar 13 00:41:42.311937 containerd[1565]: time="2026-03-13T00:41:42.311860916Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:41:42.316586 containerd[1565]: time="2026-03-13T00:41:42.316481043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:41:42.317391 containerd[1565]: time="2026-03-13T00:41:42.317298633Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 1.788984825s" Mar 13 00:41:42.317391 containerd[1565]: time="2026-03-13T00:41:42.317378122Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\"" Mar 13 00:41:42.318684 containerd[1565]: time="2026-03-13T00:41:42.318094932Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\"" Mar 13 00:41:44.014414 containerd[1565]: time="2026-03-13T00:41:44.013917744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:41:44.015980 containerd[1565]: time="2026-03-13T00:41:44.015030171Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165823" Mar 13 00:41:44.019588 containerd[1565]: time="2026-03-13T00:41:44.017428107Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:41:44.022113 containerd[1565]: time="2026-03-13T00:41:44.021963457Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:41:44.022758 containerd[1565]: time="2026-03-13T00:41:44.022688327Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 1.704557228s" Mar 13 00:41:44.022812 containerd[1565]: time="2026-03-13T00:41:44.022763368Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\"" Mar 13 00:41:44.026178 containerd[1565]: time="2026-03-13T00:41:44.026133706Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\"" Mar 13 00:41:45.140100 containerd[1565]: time="2026-03-13T00:41:45.139910931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:41:45.141293 containerd[1565]: time="2026-03-13T00:41:45.141209995Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729824" Mar 13 00:41:45.142907 containerd[1565]: time="2026-03-13T00:41:45.142846251Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:41:45.145839 containerd[1565]: time="2026-03-13T00:41:45.145772784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:41:45.146788 containerd[1565]: time="2026-03-13T00:41:45.146723516Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 1.120299668s" Mar 13 00:41:45.146788 containerd[1565]: time="2026-03-13T00:41:45.146772287Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\"" Mar 13 00:41:45.147811 containerd[1565]: time="2026-03-13T00:41:45.147666192Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\"" Mar 13 00:41:47.438679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1532914087.mount: Deactivated successfully. 
Mar 13 00:41:48.862712 containerd[1565]: time="2026-03-13T00:41:48.862082021Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:41:48.865002 containerd[1565]: time="2026-03-13T00:41:48.863003112Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861770"
Mar 13 00:41:48.865002 containerd[1565]: time="2026-03-13T00:41:48.864698467Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:41:48.867142 containerd[1565]: time="2026-03-13T00:41:48.866950548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:41:48.867623 containerd[1565]: time="2026-03-13T00:41:48.867497211Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 3.71980503s"
Mar 13 00:41:48.867701 containerd[1565]: time="2026-03-13T00:41:48.867662419Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\""
Mar 13 00:41:48.869488 containerd[1565]: time="2026-03-13T00:41:48.869376716Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Mar 13 00:41:49.408608 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 13 00:41:49.455002 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 00:41:49.512894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount517498451.mount: Deactivated successfully.
Mar 13 00:41:50.112618 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 00:41:50.198157 (kubelet)[2152]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 13 00:41:50.620580 kubelet[2152]: E0313 00:41:50.619399 2152 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 13 00:41:50.625214 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 13 00:41:50.625739 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 13 00:41:50.626674 systemd[1]: kubelet.service: Consumed 952ms CPU time, 110M memory peak.
Mar 13 00:41:53.325891 containerd[1565]: time="2026-03-13T00:41:53.325415835Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:41:53.327771 containerd[1565]: time="2026-03-13T00:41:53.326359016Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007"
Mar 13 00:41:53.328024 containerd[1565]: time="2026-03-13T00:41:53.327948342Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:41:53.331838 containerd[1565]: time="2026-03-13T00:41:53.331757599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:41:53.335493 containerd[1565]: time="2026-03-13T00:41:53.335272853Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 4.465795318s"
Mar 13 00:41:53.335493 containerd[1565]: time="2026-03-13T00:41:53.335416067Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Mar 13 00:41:53.340491 containerd[1565]: time="2026-03-13T00:41:53.340424896Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Mar 13 00:41:53.966659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3548004608.mount: Deactivated successfully.
Mar 13 00:41:53.973035 containerd[1565]: time="2026-03-13T00:41:53.972942069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:41:53.974209 containerd[1565]: time="2026-03-13T00:41:53.974103467Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218"
Mar 13 00:41:53.978514 containerd[1565]: time="2026-03-13T00:41:53.978404082Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:41:53.983206 containerd[1565]: time="2026-03-13T00:41:53.983055643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:41:53.984476 containerd[1565]: time="2026-03-13T00:41:53.984368473Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 643.906548ms"
Mar 13 00:41:53.984476 containerd[1565]: time="2026-03-13T00:41:53.984432261Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Mar 13 00:41:53.986461 containerd[1565]: time="2026-03-13T00:41:53.986369820Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Mar 13 00:41:55.022142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2685357387.mount: Deactivated successfully.
Mar 13 00:41:57.324403 containerd[1565]: time="2026-03-13T00:41:57.324010569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:41:57.325856 containerd[1565]: time="2026-03-13T00:41:57.325057319Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860674"
Mar 13 00:41:57.327021 containerd[1565]: time="2026-03-13T00:41:57.326948669Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:41:57.329716 containerd[1565]: time="2026-03-13T00:41:57.329662136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:41:57.330592 containerd[1565]: time="2026-03-13T00:41:57.330441741Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 3.344024393s"
Mar 13 00:41:57.330592 containerd[1565]: time="2026-03-13T00:41:57.330584304Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\""
Mar 13 00:42:00.654060 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 13 00:42:00.655927 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 00:42:00.688889 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 13 00:42:00.689046 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 13 00:42:00.689586 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 00:42:00.693986 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 00:42:00.726055 systemd[1]: Reload requested from client PID 2301 ('systemctl') (unit session-9.scope)...
Mar 13 00:42:00.726101 systemd[1]: Reloading...
Mar 13 00:42:00.836592 zram_generator::config[2344]: No configuration found.
Mar 13 00:42:01.065031 systemd[1]: Reloading finished in 338 ms.
Mar 13 00:42:01.152216 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 13 00:42:01.152345 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 13 00:42:01.152739 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 00:42:01.152791 systemd[1]: kubelet.service: Consumed 174ms CPU time, 98.3M memory peak.
Mar 13 00:42:01.154775 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 00:42:01.369509 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 00:42:01.386296 (kubelet)[2392]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 13 00:42:01.444974 kubelet[2392]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 13 00:42:01.444974 kubelet[2392]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 00:42:01.445378 kubelet[2392]: I0313 00:42:01.445097 2392 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 13 00:42:01.726467 kubelet[2392]: I0313 00:42:01.726369 2392 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Mar 13 00:42:01.726467 kubelet[2392]: I0313 00:42:01.726412 2392 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 13 00:42:01.726714 kubelet[2392]: I0313 00:42:01.726514 2392 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 13 00:42:01.726714 kubelet[2392]: I0313 00:42:01.726560 2392 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 13 00:42:01.727917 kubelet[2392]: I0313 00:42:01.727857 2392 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 13 00:42:01.893834 kubelet[2392]: E0313 00:42:01.893757 2392 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.68:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.68:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 13 00:42:01.940627 kubelet[2392]: I0313 00:42:01.933672 2392 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 13 00:42:01.975037 kubelet[2392]: I0313 00:42:01.974948 2392 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 13 00:42:01.981840 kubelet[2392]: I0313 00:42:01.981706 2392 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 13 00:42:01.982828 kubelet[2392]: I0313 00:42:01.982737 2392 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 13 00:42:01.983177 kubelet[2392]: I0313 00:42:01.982799 2392 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 13 00:42:01.983400 kubelet[2392]: I0313 00:42:01.983235 2392 topology_manager.go:138] "Creating topology manager with none policy"
Mar 13 00:42:01.983400 kubelet[2392]: I0313 00:42:01.983247 2392 container_manager_linux.go:306] "Creating device plugin manager"
Mar 13 00:42:01.983474 kubelet[2392]: I0313 00:42:01.983435 2392 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 13 00:42:02.081294 kubelet[2392]: I0313 00:42:02.081177 2392 state_mem.go:36] "Initialized new in-memory state store"
Mar 13 00:42:02.083224 kubelet[2392]: I0313 00:42:02.083197 2392 kubelet.go:475] "Attempting to sync node with API server"
Mar 13 00:42:02.086448 kubelet[2392]: I0313 00:42:02.086420 2392 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 13 00:42:02.086744 kubelet[2392]: I0313 00:42:02.086712 2392 kubelet.go:387] "Adding apiserver pod source"
Mar 13 00:42:02.087846 kubelet[2392]: I0313 00:42:02.087147 2392 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 13 00:42:02.091333 kubelet[2392]: E0313 00:42:02.091289 2392 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.68:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.68:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 13 00:42:02.091435 kubelet[2392]: E0313 00:42:02.091397 2392 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.68:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.68:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 13 00:42:02.093923 kubelet[2392]: I0313 00:42:02.093892 2392 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 13 00:42:02.095763 kubelet[2392]: I0313 00:42:02.095688 2392 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 13 00:42:02.095763 kubelet[2392]: I0313 00:42:02.095759 2392 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 13 00:42:02.096023 kubelet[2392]: W0313 00:42:02.095995 2392 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 13 00:42:02.123919 kubelet[2392]: I0313 00:42:02.123865 2392 server.go:1262] "Started kubelet"
Mar 13 00:42:02.124207 kubelet[2392]: I0313 00:42:02.124100 2392 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 13 00:42:02.125831 kubelet[2392]: I0313 00:42:02.125759 2392 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 13 00:42:02.128340 kubelet[2392]: I0313 00:42:02.126163 2392 server.go:310] "Adding debug handlers to kubelet server"
Mar 13 00:42:02.130914 kubelet[2392]: I0313 00:42:02.128657 2392 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 13 00:42:02.130914 kubelet[2392]: I0313 00:42:02.129091 2392 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 13 00:42:02.130914 kubelet[2392]: E0313 00:42:02.127822 2392 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.68:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.68:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189c3fd1752df289 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-13 00:42:02.123793033 +0000 UTC m=+0.731921772,LastTimestamp:2026-03-13 00:42:02.123793033 +0000 UTC m=+0.731921772,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 13 00:42:02.130914 kubelet[2392]: I0313 00:42:02.129805 2392 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 13 00:42:02.136486 kubelet[2392]: I0313 00:42:02.131804 2392 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 13 00:42:02.136486 kubelet[2392]: E0313 00:42:02.132415 2392 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 13 00:42:02.136486 kubelet[2392]: I0313 00:42:02.132446 2392 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 13 00:42:02.136486 kubelet[2392]: E0313 00:42:02.132594 2392 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.68:6443: connect: connection refused" interval="200ms"
Mar 13 00:42:02.136486 kubelet[2392]: I0313 00:42:02.133329 2392 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 13 00:42:02.136486 kubelet[2392]: E0313 00:42:02.133830 2392 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 13 00:42:02.136486 kubelet[2392]: E0313 00:42:02.134231 2392 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.68:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 13 00:42:02.136486 kubelet[2392]: I0313 00:42:02.134685 2392 reconciler.go:29] "Reconciler: start to sync state"
Mar 13 00:42:02.136486 kubelet[2392]: I0313 00:42:02.134981 2392 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 13 00:42:02.143074 kubelet[2392]: I0313 00:42:02.137700 2392 factory.go:223] Registration of the containerd container factory successfully
Mar 13 00:42:02.143074 kubelet[2392]: I0313 00:42:02.137713 2392 factory.go:223] Registration of the systemd container factory successfully
Mar 13 00:42:02.162975 update_engine[1550]: I20260313 00:42:02.162806 1550 update_attempter.cc:509] Updating boot flags...
Mar 13 00:42:02.166418 kubelet[2392]: I0313 00:42:02.165462 2392 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 13 00:42:02.166418 kubelet[2392]: I0313 00:42:02.165475 2392 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 13 00:42:02.166418 kubelet[2392]: I0313 00:42:02.165489 2392 state_mem.go:36] "Initialized new in-memory state store"
Mar 13 00:42:02.171593 kubelet[2392]: I0313 00:42:02.171363 2392 policy_none.go:49] "None policy: Start"
Mar 13 00:42:02.171593 kubelet[2392]: I0313 00:42:02.171519 2392 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 13 00:42:02.171706 kubelet[2392]: I0313 00:42:02.171627 2392 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 13 00:42:02.173246 kubelet[2392]: I0313 00:42:02.173197 2392 policy_none.go:47] "Start"
Mar 13 00:42:02.199160 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 13 00:42:02.234305 kubelet[2392]: E0313 00:42:02.233702 2392 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 13 00:42:02.274792 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 13 00:42:02.328879 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 13 00:42:02.335468 kubelet[2392]: E0313 00:42:02.335442 2392 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 13 00:42:02.335889 kubelet[2392]: E0313 00:42:02.335641 2392 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.68:6443: connect: connection refused" interval="400ms"
Mar 13 00:42:02.351997 kubelet[2392]: I0313 00:42:02.351899 2392 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 13 00:42:02.354184 kubelet[2392]: I0313 00:42:02.354097 2392 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 13 00:42:02.354368 kubelet[2392]: I0313 00:42:02.354355 2392 status_manager.go:244] "Starting to sync pod status with apiserver"
Mar 13 00:42:02.355362 kubelet[2392]: E0313 00:42:02.355322 2392 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.68:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.68:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 13 00:42:02.355738 kubelet[2392]: I0313 00:42:02.355653 2392 kubelet.go:2428] "Starting kubelet main sync loop"
Mar 13 00:42:02.357336 kubelet[2392]: E0313 00:42:02.357266 2392 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 13 00:42:02.369003 kubelet[2392]: E0313 00:42:02.368981 2392 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 13 00:42:02.369467 kubelet[2392]: I0313 00:42:02.369388 2392 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 13 00:42:02.369467 kubelet[2392]: I0313 00:42:02.369428 2392 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 13 00:42:02.370612 kubelet[2392]: E0313 00:42:02.370589 2392 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 13 00:42:02.370804 kubelet[2392]: I0313 00:42:02.370681 2392 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 13 00:42:02.370804 kubelet[2392]: E0313 00:42:02.370788 2392 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 13 00:42:02.471508 kubelet[2392]: I0313 00:42:02.471008 2392 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 13 00:42:02.471939 kubelet[2392]: E0313 00:42:02.471822 2392 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.68:6443/api/v1/nodes\": dial tcp 10.0.0.68:6443: connect: connection refused" node="localhost"
Mar 13 00:42:02.474905 systemd[1]: Created slice kubepods-burstable-podfbe3aafb2d21255c907dc1ca27d8c0eb.slice - libcontainer container kubepods-burstable-podfbe3aafb2d21255c907dc1ca27d8c0eb.slice.
Mar 13 00:42:02.488094 kubelet[2392]: E0313 00:42:02.487957 2392 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 13 00:42:02.490802 systemd[1]: Created slice kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice - libcontainer container kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice.
Mar 13 00:42:02.493810 kubelet[2392]: E0313 00:42:02.493728 2392 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 13 00:42:02.495992 systemd[1]: Created slice kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice - libcontainer container kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice.
Mar 13 00:42:02.498738 kubelet[2392]: E0313 00:42:02.498692 2392 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 13 00:42:02.537019 kubelet[2392]: I0313 00:42:02.536839 2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fbe3aafb2d21255c907dc1ca27d8c0eb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fbe3aafb2d21255c907dc1ca27d8c0eb\") " pod="kube-system/kube-apiserver-localhost"
Mar 13 00:42:02.537019 kubelet[2392]: I0313 00:42:02.536906 2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fbe3aafb2d21255c907dc1ca27d8c0eb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fbe3aafb2d21255c907dc1ca27d8c0eb\") " pod="kube-system/kube-apiserver-localhost"
Mar 13 00:42:02.537019 kubelet[2392]: I0313 00:42:02.536935 2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 13 00:42:02.537019 kubelet[2392]: I0313 00:42:02.536957 2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 13 00:42:02.537019 kubelet[2392]: I0313 00:42:02.536978 2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 13 00:42:02.537453 kubelet[2392]: I0313 00:42:02.537000 2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 13 00:42:02.537453 kubelet[2392]: I0313 00:42:02.537023 2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fbe3aafb2d21255c907dc1ca27d8c0eb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fbe3aafb2d21255c907dc1ca27d8c0eb\") " pod="kube-system/kube-apiserver-localhost"
Mar 13 00:42:02.537453 kubelet[2392]: I0313 00:42:02.537042 2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 13 00:42:02.537453 kubelet[2392]: I0313 00:42:02.537063 2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost"
Mar 13 00:42:02.674491 kubelet[2392]: I0313 00:42:02.674403 2392 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 13 00:42:02.675082 kubelet[2392]: E0313 00:42:02.674988 2392 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.68:6443/api/v1/nodes\": dial tcp 10.0.0.68:6443: connect: connection refused" node="localhost"
Mar 13 00:42:02.737221 kubelet[2392]: E0313 00:42:02.737088 2392 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.68:6443: connect: connection refused" interval="800ms"
Mar 13 00:42:02.792690 kubelet[2392]: E0313 00:42:02.792460 2392 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:42:02.794454 containerd[1565]: time="2026-03-13T00:42:02.794310261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fbe3aafb2d21255c907dc1ca27d8c0eb,Namespace:kube-system,Attempt:0,}"
Mar 13 00:42:02.797187 kubelet[2392]: E0313 00:42:02.796998 2392 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:42:02.798095 containerd[1565]: time="2026-03-13T00:42:02.797940517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,}"
Mar 13 00:42:02.804071 kubelet[2392]: E0313 00:42:02.803995 2392 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:42:02.804710 containerd[1565]: time="2026-03-13T00:42:02.804653703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,}"
Mar 13 00:42:03.077275 kubelet[2392]: I0313 00:42:03.077076 2392 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 13 00:42:03.077696 kubelet[2392]: E0313 00:42:03.077642 2392 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.68:6443/api/v1/nodes\": dial tcp 10.0.0.68:6443: connect: connection refused" node="localhost"
Mar 13 00:42:03.162173 kubelet[2392]: E0313 00:42:03.162019 2392 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.68:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 13 00:42:03.207862 kubelet[2392]: E0313 00:42:03.207780 2392 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.68:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.68:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 13 00:42:03.243310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4198270033.mount: Deactivated successfully.
Mar 13 00:42:03.245782 kubelet[2392]: E0313 00:42:03.245712 2392 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.68:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.68:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 13 00:42:03.250315 containerd[1565]: time="2026-03-13T00:42:03.250218696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:42:03.253677 containerd[1565]: time="2026-03-13T00:42:03.253609062Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 13 00:42:03.256185 containerd[1565]: time="2026-03-13T00:42:03.256006111Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:42:03.258735 containerd[1565]: time="2026-03-13T00:42:03.258596893Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:42:03.260159 containerd[1565]: time="2026-03-13T00:42:03.260045689Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:42:03.261620 containerd[1565]: time="2026-03-13T00:42:03.261562564Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 13 00:42:03.262681 containerd[1565]: time="2026-03-13T00:42:03.262638396Z" level=info msg="stop pulling image 
registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 13 00:42:03.265627 containerd[1565]: time="2026-03-13T00:42:03.263920691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:42:03.267045 containerd[1565]: time="2026-03-13T00:42:03.266997397Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 465.576741ms" Mar 13 00:42:03.268701 containerd[1565]: time="2026-03-13T00:42:03.268652114Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 471.810957ms" Mar 13 00:42:03.270702 containerd[1565]: time="2026-03-13T00:42:03.270628873Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 463.939873ms" Mar 13 00:42:03.322442 containerd[1565]: time="2026-03-13T00:42:03.322281026Z" level=info msg="connecting to shim 0a9282608b83fafcc8421d63ee88a30a65a5abcf8d6e67efad0834f33ed9700e" address="unix:///run/containerd/s/285b5e60fb733c80ff8391aa5e0719ad9326dd459ab66ef2d7d368f8fa988cd4" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:42:03.328660 
containerd[1565]: time="2026-03-13T00:42:03.327749236Z" level=info msg="connecting to shim 1210a5ec4e83fb27b7943040e404175771d2fe5627488d9b1126dc97ab1dc1d1" address="unix:///run/containerd/s/b70a0b4e367a0c9a189a3e17fc938c6b206baceb3c2bf1364ac9e20b054458d1" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:42:03.328660 containerd[1565]: time="2026-03-13T00:42:03.328464524Z" level=info msg="connecting to shim 096b433f96cb6e28b459240ca73c85d68ff3dec36f0ea007efb7b4f761b13e73" address="unix:///run/containerd/s/5d1f62fe29644523e27c8bcb862596c71afe88d7fd1b26ef7828279896880c25" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:42:03.377804 systemd[1]: Started cri-containerd-1210a5ec4e83fb27b7943040e404175771d2fe5627488d9b1126dc97ab1dc1d1.scope - libcontainer container 1210a5ec4e83fb27b7943040e404175771d2fe5627488d9b1126dc97ab1dc1d1. Mar 13 00:42:03.385362 systemd[1]: Started cri-containerd-096b433f96cb6e28b459240ca73c85d68ff3dec36f0ea007efb7b4f761b13e73.scope - libcontainer container 096b433f96cb6e28b459240ca73c85d68ff3dec36f0ea007efb7b4f761b13e73. Mar 13 00:42:03.388853 systemd[1]: Started cri-containerd-0a9282608b83fafcc8421d63ee88a30a65a5abcf8d6e67efad0834f33ed9700e.scope - libcontainer container 0a9282608b83fafcc8421d63ee88a30a65a5abcf8d6e67efad0834f33ed9700e. 
Mar 13 00:42:03.431938 kubelet[2392]: E0313 00:42:03.431779 2392 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.68:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.68:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189c3fd1752df289 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-13 00:42:02.123793033 +0000 UTC m=+0.731921772,LastTimestamp:2026-03-13 00:42:02.123793033 +0000 UTC m=+0.731921772,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 13 00:42:03.470070 containerd[1565]: time="2026-03-13T00:42:03.469650344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,} returns sandbox id \"1210a5ec4e83fb27b7943040e404175771d2fe5627488d9b1126dc97ab1dc1d1\"" Mar 13 00:42:03.474362 kubelet[2392]: E0313 00:42:03.473928 2392 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:42:03.482689 containerd[1565]: time="2026-03-13T00:42:03.482620009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"096b433f96cb6e28b459240ca73c85d68ff3dec36f0ea007efb7b4f761b13e73\"" Mar 13 00:42:03.484040 kubelet[2392]: E0313 00:42:03.483984 2392 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Mar 13 00:42:03.487460 containerd[1565]: time="2026-03-13T00:42:03.487367221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fbe3aafb2d21255c907dc1ca27d8c0eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a9282608b83fafcc8421d63ee88a30a65a5abcf8d6e67efad0834f33ed9700e\"" Mar 13 00:42:03.487621 containerd[1565]: time="2026-03-13T00:42:03.487466360Z" level=info msg="CreateContainer within sandbox \"1210a5ec4e83fb27b7943040e404175771d2fe5627488d9b1126dc97ab1dc1d1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 13 00:42:03.489807 kubelet[2392]: E0313 00:42:03.489632 2392 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:42:03.490780 containerd[1565]: time="2026-03-13T00:42:03.490638702Z" level=info msg="CreateContainer within sandbox \"096b433f96cb6e28b459240ca73c85d68ff3dec36f0ea007efb7b4f761b13e73\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 13 00:42:03.497415 containerd[1565]: time="2026-03-13T00:42:03.497034479Z" level=info msg="CreateContainer within sandbox \"0a9282608b83fafcc8421d63ee88a30a65a5abcf8d6e67efad0834f33ed9700e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 13 00:42:03.510399 containerd[1565]: time="2026-03-13T00:42:03.510281690Z" level=info msg="Container 6146ed936621681b5054cd7b5e4a381647158127fda1eac44ecf6ad82bc5a967: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:42:03.516307 containerd[1565]: time="2026-03-13T00:42:03.516191255Z" level=info msg="Container 68754737adaf52c6afe7b8d69c95582fbba9039f860d0465e9be9dd07d8141fe: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:42:03.519360 containerd[1565]: time="2026-03-13T00:42:03.519308184Z" level=info msg="Container 3999616a8a91c5257629ea243b6d583a8aac5b259bce18c441692455d8be2bac: CDI devices from CRI 
Config.CDIDevices: []" Mar 13 00:42:03.526600 containerd[1565]: time="2026-03-13T00:42:03.526493396Z" level=info msg="CreateContainer within sandbox \"1210a5ec4e83fb27b7943040e404175771d2fe5627488d9b1126dc97ab1dc1d1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6146ed936621681b5054cd7b5e4a381647158127fda1eac44ecf6ad82bc5a967\"" Mar 13 00:42:03.527584 containerd[1565]: time="2026-03-13T00:42:03.527497493Z" level=info msg="StartContainer for \"6146ed936621681b5054cd7b5e4a381647158127fda1eac44ecf6ad82bc5a967\"" Mar 13 00:42:03.529467 containerd[1565]: time="2026-03-13T00:42:03.529391067Z" level=info msg="connecting to shim 6146ed936621681b5054cd7b5e4a381647158127fda1eac44ecf6ad82bc5a967" address="unix:///run/containerd/s/b70a0b4e367a0c9a189a3e17fc938c6b206baceb3c2bf1364ac9e20b054458d1" protocol=ttrpc version=3 Mar 13 00:42:03.537331 containerd[1565]: time="2026-03-13T00:42:03.537288187Z" level=info msg="CreateContainer within sandbox \"096b433f96cb6e28b459240ca73c85d68ff3dec36f0ea007efb7b4f761b13e73\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"68754737adaf52c6afe7b8d69c95582fbba9039f860d0465e9be9dd07d8141fe\"" Mar 13 00:42:03.539184 kubelet[2392]: E0313 00:42:03.539081 2392 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.68:6443: connect: connection refused" interval="1.6s" Mar 13 00:42:03.539299 containerd[1565]: time="2026-03-13T00:42:03.539044733Z" level=info msg="StartContainer for \"68754737adaf52c6afe7b8d69c95582fbba9039f860d0465e9be9dd07d8141fe\"" Mar 13 00:42:03.540878 containerd[1565]: time="2026-03-13T00:42:03.540829066Z" level=info msg="connecting to shim 68754737adaf52c6afe7b8d69c95582fbba9039f860d0465e9be9dd07d8141fe" address="unix:///run/containerd/s/5d1f62fe29644523e27c8bcb862596c71afe88d7fd1b26ef7828279896880c25" 
protocol=ttrpc version=3 Mar 13 00:42:03.543733 containerd[1565]: time="2026-03-13T00:42:03.543512897Z" level=info msg="CreateContainer within sandbox \"0a9282608b83fafcc8421d63ee88a30a65a5abcf8d6e67efad0834f33ed9700e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3999616a8a91c5257629ea243b6d583a8aac5b259bce18c441692455d8be2bac\"" Mar 13 00:42:03.544161 containerd[1565]: time="2026-03-13T00:42:03.544019538Z" level=info msg="StartContainer for \"3999616a8a91c5257629ea243b6d583a8aac5b259bce18c441692455d8be2bac\"" Mar 13 00:42:03.546632 containerd[1565]: time="2026-03-13T00:42:03.546461832Z" level=info msg="connecting to shim 3999616a8a91c5257629ea243b6d583a8aac5b259bce18c441692455d8be2bac" address="unix:///run/containerd/s/285b5e60fb733c80ff8391aa5e0719ad9326dd459ab66ef2d7d368f8fa988cd4" protocol=ttrpc version=3 Mar 13 00:42:03.563897 systemd[1]: Started cri-containerd-6146ed936621681b5054cd7b5e4a381647158127fda1eac44ecf6ad82bc5a967.scope - libcontainer container 6146ed936621681b5054cd7b5e4a381647158127fda1eac44ecf6ad82bc5a967. Mar 13 00:42:03.581773 systemd[1]: Started cri-containerd-68754737adaf52c6afe7b8d69c95582fbba9039f860d0465e9be9dd07d8141fe.scope - libcontainer container 68754737adaf52c6afe7b8d69c95582fbba9039f860d0465e9be9dd07d8141fe. Mar 13 00:42:03.589254 systemd[1]: Started cri-containerd-3999616a8a91c5257629ea243b6d583a8aac5b259bce18c441692455d8be2bac.scope - libcontainer container 3999616a8a91c5257629ea243b6d583a8aac5b259bce18c441692455d8be2bac. 
Mar 13 00:42:03.630588 kubelet[2392]: E0313 00:42:03.629508 2392 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.68:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.68:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 13 00:42:03.685796 containerd[1565]: time="2026-03-13T00:42:03.685718483Z" level=info msg="StartContainer for \"3999616a8a91c5257629ea243b6d583a8aac5b259bce18c441692455d8be2bac\" returns successfully" Mar 13 00:42:03.688501 containerd[1565]: time="2026-03-13T00:42:03.688234114Z" level=info msg="StartContainer for \"6146ed936621681b5054cd7b5e4a381647158127fda1eac44ecf6ad82bc5a967\" returns successfully" Mar 13 00:42:03.704036 containerd[1565]: time="2026-03-13T00:42:03.703652609Z" level=info msg="StartContainer for \"68754737adaf52c6afe7b8d69c95582fbba9039f860d0465e9be9dd07d8141fe\" returns successfully" Mar 13 00:42:03.880671 kubelet[2392]: I0313 00:42:03.880443 2392 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 13 00:42:04.377005 kubelet[2392]: E0313 00:42:04.376959 2392 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 13 00:42:04.379759 kubelet[2392]: E0313 00:42:04.379658 2392 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:42:04.384406 kubelet[2392]: E0313 00:42:04.384095 2392 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 13 00:42:04.384406 kubelet[2392]: E0313 00:42:04.384331 2392 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:42:04.388229 kubelet[2392]: E0313 00:42:04.388087 2392 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 13 00:42:04.388328 kubelet[2392]: E0313 00:42:04.388316 2392 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:42:05.159754 kubelet[2392]: E0313 00:42:05.159379 2392 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 13 00:42:05.208813 kubelet[2392]: I0313 00:42:05.208300 2392 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 13 00:42:05.208813 kubelet[2392]: E0313 00:42:05.208715 2392 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 13 00:42:05.225775 kubelet[2392]: E0313 00:42:05.225684 2392 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 13 00:42:05.326747 kubelet[2392]: E0313 00:42:05.326210 2392 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 13 00:42:05.406788 kubelet[2392]: E0313 00:42:05.406646 2392 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 13 00:42:05.407715 kubelet[2392]: E0313 00:42:05.407207 2392 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:42:05.407766 kubelet[2392]: E0313 00:42:05.407746 2392 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from 
the cluster" err="node \"localhost\" not found" node="localhost" Mar 13 00:42:05.407927 kubelet[2392]: E0313 00:42:05.407888 2392 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:42:05.427690 kubelet[2392]: E0313 00:42:05.426976 2392 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 13 00:42:05.528392 kubelet[2392]: E0313 00:42:05.527830 2392 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 13 00:42:05.628758 kubelet[2392]: E0313 00:42:05.628674 2392 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 13 00:42:05.733520 kubelet[2392]: E0313 00:42:05.730610 2392 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 13 00:42:05.833314 kubelet[2392]: E0313 00:42:05.832516 2392 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 13 00:42:05.932948 kubelet[2392]: E0313 00:42:05.932839 2392 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 13 00:42:06.041163 kubelet[2392]: E0313 00:42:06.038836 2392 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 13 00:42:06.140483 kubelet[2392]: E0313 00:42:06.139945 2392 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 13 00:42:06.240482 kubelet[2392]: E0313 00:42:06.240368 2392 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 13 00:42:06.337638 kubelet[2392]: I0313 00:42:06.335395 2392 kubelet.go:3220] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-localhost" Mar 13 00:42:06.348497 kubelet[2392]: I0313 00:42:06.348443 2392 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 13 00:42:06.356057 kubelet[2392]: I0313 00:42:06.355954 2392 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 13 00:42:06.763837 kubelet[2392]: I0313 00:42:06.763436 2392 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 13 00:42:06.772154 kubelet[2392]: E0313 00:42:06.772011 2392 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 13 00:42:06.773598 kubelet[2392]: E0313 00:42:06.773422 2392 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:42:07.094228 kubelet[2392]: I0313 00:42:07.092194 2392 apiserver.go:52] "Watching apiserver" Mar 13 00:42:07.098719 kubelet[2392]: E0313 00:42:07.098622 2392 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:42:07.099020 kubelet[2392]: E0313 00:42:07.098660 2392 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:42:07.135770 kubelet[2392]: I0313 00:42:07.135427 2392 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 13 00:42:07.410428 kubelet[2392]: E0313 00:42:07.409740 2392 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 
00:42:07.844211 systemd[1]: Reload requested from client PID 2697 ('systemctl') (unit session-9.scope)... Mar 13 00:42:07.844254 systemd[1]: Reloading... Mar 13 00:42:07.968791 zram_generator::config[2743]: No configuration found. Mar 13 00:42:08.207950 systemd[1]: Reloading finished in 363 ms. Mar 13 00:42:08.247031 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:42:08.262003 systemd[1]: kubelet.service: Deactivated successfully. Mar 13 00:42:08.262458 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:42:08.262591 systemd[1]: kubelet.service: Consumed 1.621s CPU time, 127.1M memory peak. Mar 13 00:42:08.264874 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:42:08.521924 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:42:08.532075 (kubelet)[2785]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 13 00:42:08.595349 kubelet[2785]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 13 00:42:08.595349 kubelet[2785]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 13 00:42:08.595828 kubelet[2785]: I0313 00:42:08.595398 2785 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 13 00:42:08.606038 kubelet[2785]: I0313 00:42:08.605972 2785 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 13 00:42:08.606038 kubelet[2785]: I0313 00:42:08.606012 2785 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 13 00:42:08.606038 kubelet[2785]: I0313 00:42:08.606040 2785 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 13 00:42:08.606038 kubelet[2785]: I0313 00:42:08.606051 2785 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 13 00:42:08.606305 kubelet[2785]: I0313 00:42:08.606263 2785 server.go:956] "Client rotation is on, will bootstrap in background" Mar 13 00:42:08.607573 kubelet[2785]: I0313 00:42:08.607464 2785 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 13 00:42:08.609762 kubelet[2785]: I0313 00:42:08.609681 2785 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 13 00:42:08.614828 kubelet[2785]: I0313 00:42:08.614807 2785 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 13 00:42:08.621046 kubelet[2785]: I0313 00:42:08.620983 2785 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 13 00:42:08.621437 kubelet[2785]: I0313 00:42:08.621352 2785 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 13 00:42:08.621645 kubelet[2785]: I0313 00:42:08.621419 2785 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 13 00:42:08.621645 kubelet[2785]: I0313 00:42:08.621637 2785 topology_manager.go:138] "Creating topology manager with none policy" Mar 13 00:42:08.621802 
kubelet[2785]: I0313 00:42:08.621648 2785 container_manager_linux.go:306] "Creating device plugin manager" Mar 13 00:42:08.621802 kubelet[2785]: I0313 00:42:08.621683 2785 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 13 00:42:08.621923 kubelet[2785]: I0313 00:42:08.621894 2785 state_mem.go:36] "Initialized new in-memory state store" Mar 13 00:42:08.622190 kubelet[2785]: I0313 00:42:08.622162 2785 kubelet.go:475] "Attempting to sync node with API server" Mar 13 00:42:08.622223 kubelet[2785]: I0313 00:42:08.622192 2785 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 13 00:42:08.622223 kubelet[2785]: I0313 00:42:08.622215 2785 kubelet.go:387] "Adding apiserver pod source" Mar 13 00:42:08.622271 kubelet[2785]: I0313 00:42:08.622228 2785 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 13 00:42:08.624710 kubelet[2785]: I0313 00:42:08.624676 2785 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 13 00:42:08.626142 kubelet[2785]: I0313 00:42:08.626039 2785 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 13 00:42:08.626620 kubelet[2785]: I0313 00:42:08.626274 2785 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 13 00:42:08.633167 kubelet[2785]: I0313 00:42:08.633098 2785 server.go:1262] "Started kubelet" Mar 13 00:42:08.633363 kubelet[2785]: I0313 00:42:08.633244 2785 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 13 00:42:08.634733 kubelet[2785]: I0313 00:42:08.634666 2785 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 13 00:42:08.634820 kubelet[2785]: I0313 00:42:08.634737 2785 server_v1.go:49] 
"podresources" method="list" useActivePods=true Mar 13 00:42:08.635091 kubelet[2785]: I0313 00:42:08.635038 2785 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 13 00:42:08.640995 kubelet[2785]: I0313 00:42:08.640942 2785 server.go:310] "Adding debug handlers to kubelet server" Mar 13 00:42:08.641970 kubelet[2785]: I0313 00:42:08.641925 2785 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 13 00:42:08.642606 kubelet[2785]: I0313 00:42:08.642498 2785 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 13 00:42:08.643667 kubelet[2785]: I0313 00:42:08.642737 2785 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 13 00:42:08.643667 kubelet[2785]: I0313 00:42:08.642508 2785 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 13 00:42:08.643667 kubelet[2785]: I0313 00:42:08.642992 2785 reconciler.go:29] "Reconciler: start to sync state" Mar 13 00:42:08.644657 kubelet[2785]: I0313 00:42:08.644635 2785 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 13 00:42:08.645438 kubelet[2785]: E0313 00:42:08.644987 2785 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 13 00:42:08.647247 kubelet[2785]: I0313 00:42:08.647149 2785 factory.go:223] Registration of the containerd container factory successfully Mar 13 00:42:08.647247 kubelet[2785]: I0313 00:42:08.647183 2785 factory.go:223] Registration of the systemd container factory successfully Mar 13 00:42:08.668627 kubelet[2785]: I0313 00:42:08.668089 2785 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Mar 13 00:42:08.671152 kubelet[2785]: I0313 00:42:08.670977 2785 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 13 00:42:08.671152 kubelet[2785]: I0313 00:42:08.671088 2785 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 13 00:42:08.671297 kubelet[2785]: I0313 00:42:08.671215 2785 kubelet.go:2428] "Starting kubelet main sync loop" Mar 13 00:42:08.671460 kubelet[2785]: E0313 00:42:08.671325 2785 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 13 00:42:08.697465 kubelet[2785]: I0313 00:42:08.697398 2785 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 13 00:42:08.697465 kubelet[2785]: I0313 00:42:08.697438 2785 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 13 00:42:08.697465 kubelet[2785]: I0313 00:42:08.697463 2785 state_mem.go:36] "Initialized new in-memory state store" Mar 13 00:42:08.697881 kubelet[2785]: I0313 00:42:08.697743 2785 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 13 00:42:08.697881 kubelet[2785]: I0313 00:42:08.697761 2785 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 13 00:42:08.697881 kubelet[2785]: I0313 00:42:08.697785 2785 policy_none.go:49] "None policy: Start" Mar 13 00:42:08.697881 kubelet[2785]: I0313 00:42:08.697799 2785 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 13 00:42:08.697881 kubelet[2785]: I0313 00:42:08.697814 2785 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 13 00:42:08.699588 kubelet[2785]: I0313 00:42:08.699478 2785 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 13 00:42:08.699651 kubelet[2785]: I0313 00:42:08.699612 2785 policy_none.go:47] "Start" Mar 13 00:42:08.706051 kubelet[2785]: E0313 00:42:08.705995 2785 manager.go:513] "Failed to read data from 
checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 13 00:42:08.706332 kubelet[2785]: I0313 00:42:08.706266 2785 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 13 00:42:08.706332 kubelet[2785]: I0313 00:42:08.706311 2785 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 13 00:42:08.706580 kubelet[2785]: I0313 00:42:08.706514 2785 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 13 00:42:08.708014 kubelet[2785]: E0313 00:42:08.707973 2785 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 13 00:42:08.777038 kubelet[2785]: I0313 00:42:08.776006 2785 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 13 00:42:08.777038 kubelet[2785]: I0313 00:42:08.776040 2785 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 13 00:42:08.777038 kubelet[2785]: I0313 00:42:08.775990 2785 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 13 00:42:08.786607 kubelet[2785]: E0313 00:42:08.786481 2785 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 13 00:42:08.788977 kubelet[2785]: E0313 00:42:08.788847 2785 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 13 00:42:08.789894 kubelet[2785]: E0313 00:42:08.789818 2785 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 13 00:42:08.819887 kubelet[2785]: I0313 00:42:08.819796 2785 kubelet_node_status.go:75] "Attempting to register 
node" node="localhost" Mar 13 00:42:08.835462 kubelet[2785]: I0313 00:42:08.835278 2785 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 13 00:42:08.835462 kubelet[2785]: I0313 00:42:08.835372 2785 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 13 00:42:08.846400 kubelet[2785]: I0313 00:42:08.845796 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 13 00:42:08.846400 kubelet[2785]: I0313 00:42:08.845833 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 13 00:42:08.846400 kubelet[2785]: I0313 00:42:08.845851 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 13 00:42:08.846400 kubelet[2785]: I0313 00:42:08.845865 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 13 00:42:08.846400 kubelet[2785]: I0313 00:42:08.845883 2785 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fbe3aafb2d21255c907dc1ca27d8c0eb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fbe3aafb2d21255c907dc1ca27d8c0eb\") " pod="kube-system/kube-apiserver-localhost" Mar 13 00:42:08.846766 kubelet[2785]: I0313 00:42:08.845921 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fbe3aafb2d21255c907dc1ca27d8c0eb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fbe3aafb2d21255c907dc1ca27d8c0eb\") " pod="kube-system/kube-apiserver-localhost" Mar 13 00:42:08.846766 kubelet[2785]: I0313 00:42:08.846207 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 13 00:42:08.846766 kubelet[2785]: I0313 00:42:08.846434 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost" Mar 13 00:42:08.846766 kubelet[2785]: I0313 00:42:08.846706 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fbe3aafb2d21255c907dc1ca27d8c0eb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fbe3aafb2d21255c907dc1ca27d8c0eb\") " pod="kube-system/kube-apiserver-localhost" Mar 13 00:42:09.089211 kubelet[2785]: E0313 00:42:09.088484 2785 dns.go:154] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:42:09.089211 kubelet[2785]: E0313 00:42:09.089209 2785 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:42:09.090606 kubelet[2785]: E0313 00:42:09.090495 2785 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:42:09.625785 kubelet[2785]: I0313 00:42:09.625331 2785 apiserver.go:52] "Watching apiserver" Mar 13 00:42:09.645956 kubelet[2785]: I0313 00:42:09.645368 2785 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 13 00:42:09.692228 kubelet[2785]: I0313 00:42:09.691942 2785 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 13 00:42:09.693960 kubelet[2785]: E0313 00:42:09.692292 2785 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:42:09.693960 kubelet[2785]: E0313 00:42:09.692304 2785 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:42:09.703308 kubelet[2785]: E0313 00:42:09.702893 2785 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 13 00:42:09.703308 kubelet[2785]: E0313 00:42:09.703111 2785 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:42:09.743347 
kubelet[2785]: I0313 00:42:09.743190 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.7430948109999997 podStartE2EDuration="3.743094811s" podCreationTimestamp="2026-03-13 00:42:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:42:09.723437928 +0000 UTC m=+1.183712918" watchObservedRunningTime="2026-03-13 00:42:09.743094811 +0000 UTC m=+1.203369802" Mar 13 00:42:09.764578 kubelet[2785]: I0313 00:42:09.763106 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.7630883219999998 podStartE2EDuration="3.763088322s" podCreationTimestamp="2026-03-13 00:42:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:42:09.762947258 +0000 UTC m=+1.223222249" watchObservedRunningTime="2026-03-13 00:42:09.763088322 +0000 UTC m=+1.223363313" Mar 13 00:42:09.764578 kubelet[2785]: I0313 00:42:09.763245 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.76323773 podStartE2EDuration="3.76323773s" podCreationTimestamp="2026-03-13 00:42:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:42:09.744824599 +0000 UTC m=+1.205099590" watchObservedRunningTime="2026-03-13 00:42:09.76323773 +0000 UTC m=+1.223512721" Mar 13 00:42:10.693856 kubelet[2785]: E0313 00:42:10.693719 2785 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:42:10.693856 kubelet[2785]: E0313 00:42:10.693799 2785 dns.go:154] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:42:11.700455 kubelet[2785]: E0313 00:42:11.700059 2785 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:42:12.168784 kubelet[2785]: E0313 00:42:12.168646 2785 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:42:13.814635 kubelet[2785]: I0313 00:42:13.814592 2785 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 13 00:42:13.815481 containerd[1565]: time="2026-03-13T00:42:13.815397710Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 13 00:42:13.815851 kubelet[2785]: I0313 00:42:13.815811 2785 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 13 00:42:14.571826 systemd[1]: Created slice kubepods-besteffort-pod79434e7e_f8c4_42af_b35b_4ac251eff3e7.slice - libcontainer container kubepods-besteffort-pod79434e7e_f8c4_42af_b35b_4ac251eff3e7.slice. 
Mar 13 00:42:14.594724 kubelet[2785]: I0313 00:42:14.594683 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/79434e7e-f8c4-42af-b35b-4ac251eff3e7-kube-proxy\") pod \"kube-proxy-765gh\" (UID: \"79434e7e-f8c4-42af-b35b-4ac251eff3e7\") " pod="kube-system/kube-proxy-765gh" Mar 13 00:42:14.595058 kubelet[2785]: I0313 00:42:14.594984 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/79434e7e-f8c4-42af-b35b-4ac251eff3e7-xtables-lock\") pod \"kube-proxy-765gh\" (UID: \"79434e7e-f8c4-42af-b35b-4ac251eff3e7\") " pod="kube-system/kube-proxy-765gh" Mar 13 00:42:14.595359 kubelet[2785]: I0313 00:42:14.595220 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79434e7e-f8c4-42af-b35b-4ac251eff3e7-lib-modules\") pod \"kube-proxy-765gh\" (UID: \"79434e7e-f8c4-42af-b35b-4ac251eff3e7\") " pod="kube-system/kube-proxy-765gh" Mar 13 00:42:14.595359 kubelet[2785]: I0313 00:42:14.595260 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bscw8\" (UniqueName: \"kubernetes.io/projected/79434e7e-f8c4-42af-b35b-4ac251eff3e7-kube-api-access-bscw8\") pod \"kube-proxy-765gh\" (UID: \"79434e7e-f8c4-42af-b35b-4ac251eff3e7\") " pod="kube-system/kube-proxy-765gh" Mar 13 00:42:14.890620 kubelet[2785]: E0313 00:42:14.889984 2785 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:42:14.892086 containerd[1565]: time="2026-03-13T00:42:14.891767346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-765gh,Uid:79434e7e-f8c4-42af-b35b-4ac251eff3e7,Namespace:kube-system,Attempt:0,}" Mar 
13 00:42:14.917889 containerd[1565]: time="2026-03-13T00:42:14.917778683Z" level=info msg="connecting to shim ba5fb9d6d0c543b6e987c625b1c9961747ab73a209fe86457c3857989d00b63d" address="unix:///run/containerd/s/3f3b9e8801c1b79febb0a5b75a72d7f511b2ca713b86cc6f31633c670eccc592" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:42:14.959863 systemd[1]: Started cri-containerd-ba5fb9d6d0c543b6e987c625b1c9961747ab73a209fe86457c3857989d00b63d.scope - libcontainer container ba5fb9d6d0c543b6e987c625b1c9961747ab73a209fe86457c3857989d00b63d. Mar 13 00:42:15.020439 containerd[1565]: time="2026-03-13T00:42:15.019592945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-765gh,Uid:79434e7e-f8c4-42af-b35b-4ac251eff3e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba5fb9d6d0c543b6e987c625b1c9961747ab73a209fe86457c3857989d00b63d\"" Mar 13 00:42:15.025263 kubelet[2785]: E0313 00:42:15.024751 2785 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:42:15.037687 containerd[1565]: time="2026-03-13T00:42:15.037619557Z" level=info msg="CreateContainer within sandbox \"ba5fb9d6d0c543b6e987c625b1c9961747ab73a209fe86457c3857989d00b63d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 13 00:42:15.053728 containerd[1565]: time="2026-03-13T00:42:15.052500035Z" level=info msg="Container 1e4cfac30dd407f6e4463e5c1fe849ec5ba761793530661abd377f7dcb61994e: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:42:15.065427 containerd[1565]: time="2026-03-13T00:42:15.065337588Z" level=info msg="CreateContainer within sandbox \"ba5fb9d6d0c543b6e987c625b1c9961747ab73a209fe86457c3857989d00b63d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1e4cfac30dd407f6e4463e5c1fe849ec5ba761793530661abd377f7dcb61994e\"" Mar 13 00:42:15.067035 containerd[1565]: time="2026-03-13T00:42:15.066945290Z" level=info 
msg="StartContainer for \"1e4cfac30dd407f6e4463e5c1fe849ec5ba761793530661abd377f7dcb61994e\"" Mar 13 00:42:15.070091 containerd[1565]: time="2026-03-13T00:42:15.068735152Z" level=info msg="connecting to shim 1e4cfac30dd407f6e4463e5c1fe849ec5ba761793530661abd377f7dcb61994e" address="unix:///run/containerd/s/3f3b9e8801c1b79febb0a5b75a72d7f511b2ca713b86cc6f31633c670eccc592" protocol=ttrpc version=3 Mar 13 00:42:15.074655 systemd[1]: Created slice kubepods-besteffort-podbd0b4090_96fd_4269_979b_56da5f6d2bf2.slice - libcontainer container kubepods-besteffort-podbd0b4090_96fd_4269_979b_56da5f6d2bf2.slice. Mar 13 00:42:15.098866 kubelet[2785]: I0313 00:42:15.098779 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bd0b4090-96fd-4269-979b-56da5f6d2bf2-var-lib-calico\") pod \"tigera-operator-5588576f44-8hmsg\" (UID: \"bd0b4090-96fd-4269-979b-56da5f6d2bf2\") " pod="tigera-operator/tigera-operator-5588576f44-8hmsg" Mar 13 00:42:15.098866 kubelet[2785]: I0313 00:42:15.098826 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kzvh\" (UniqueName: \"kubernetes.io/projected/bd0b4090-96fd-4269-979b-56da5f6d2bf2-kube-api-access-7kzvh\") pod \"tigera-operator-5588576f44-8hmsg\" (UID: \"bd0b4090-96fd-4269-979b-56da5f6d2bf2\") " pod="tigera-operator/tigera-operator-5588576f44-8hmsg" Mar 13 00:42:15.102761 systemd[1]: Started cri-containerd-1e4cfac30dd407f6e4463e5c1fe849ec5ba761793530661abd377f7dcb61994e.scope - libcontainer container 1e4cfac30dd407f6e4463e5c1fe849ec5ba761793530661abd377f7dcb61994e. 
Mar 13 00:42:15.218181 containerd[1565]: time="2026-03-13T00:42:15.218058645Z" level=info msg="StartContainer for \"1e4cfac30dd407f6e4463e5c1fe849ec5ba761793530661abd377f7dcb61994e\" returns successfully" Mar 13 00:42:15.305806 kubelet[2785]: E0313 00:42:15.305756 2785 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:42:15.382475 containerd[1565]: time="2026-03-13T00:42:15.382396382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-8hmsg,Uid:bd0b4090-96fd-4269-979b-56da5f6d2bf2,Namespace:tigera-operator,Attempt:0,}" Mar 13 00:42:15.402010 containerd[1565]: time="2026-03-13T00:42:15.401507283Z" level=info msg="connecting to shim 8c014b77fcf569197966713341e7066f0243361695067b0289be4b3e4ddcd6f2" address="unix:///run/containerd/s/c61fe229ac5dc7c8390008a8165c8677dd13b5e797b13e8e4feca5bb0d471d34" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:42:15.440762 systemd[1]: Started cri-containerd-8c014b77fcf569197966713341e7066f0243361695067b0289be4b3e4ddcd6f2.scope - libcontainer container 8c014b77fcf569197966713341e7066f0243361695067b0289be4b3e4ddcd6f2. 
Mar 13 00:42:15.493482 containerd[1565]: time="2026-03-13T00:42:15.493128425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-8hmsg,Uid:bd0b4090-96fd-4269-979b-56da5f6d2bf2,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8c014b77fcf569197966713341e7066f0243361695067b0289be4b3e4ddcd6f2\"" Mar 13 00:42:15.497192 containerd[1565]: time="2026-03-13T00:42:15.496963219Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 13 00:42:15.723311 kubelet[2785]: E0313 00:42:15.721899 2785 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:42:15.723311 kubelet[2785]: E0313 00:42:15.722596 2785 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:42:15.750678 kubelet[2785]: I0313 00:42:15.750337 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-765gh" podStartSLOduration=1.750319847 podStartE2EDuration="1.750319847s" podCreationTimestamp="2026-03-13 00:42:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:42:15.737515621 +0000 UTC m=+7.197790612" watchObservedRunningTime="2026-03-13 00:42:15.750319847 +0000 UTC m=+7.210594837" Mar 13 00:42:16.774995 kubelet[2785]: E0313 00:42:16.772837 2785 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:42:16.893315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2211832276.mount: Deactivated successfully. 
Mar 13 00:42:20.297004 kubelet[2785]: E0313 00:42:20.296437 2785 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:42:22.326643 kubelet[2785]: E0313 00:42:22.322687 2785 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:42:22.993069 containerd[1565]: time="2026-03-13T00:42:22.992465543Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:42:22.996397 containerd[1565]: time="2026-03-13T00:42:22.995251916Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 13 00:42:22.996514 containerd[1565]: time="2026-03-13T00:42:22.996482469Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:42:23.017014 containerd[1565]: time="2026-03-13T00:42:23.016205053Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:42:23.019746 containerd[1565]: time="2026-03-13T00:42:23.019232317Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 7.522234203s" Mar 13 00:42:23.019746 containerd[1565]: time="2026-03-13T00:42:23.019431648Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image 
reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 13 00:42:23.096898 containerd[1565]: time="2026-03-13T00:42:23.094800520Z" level=info msg="CreateContainer within sandbox \"8c014b77fcf569197966713341e7066f0243361695067b0289be4b3e4ddcd6f2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 13 00:42:23.173016 containerd[1565]: time="2026-03-13T00:42:23.172760129Z" level=info msg="Container 142207e3089f9e2a6330960e7d8f5199bb123a810620670de781beebb96ddfb0: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:42:23.196497 containerd[1565]: time="2026-03-13T00:42:23.195121324Z" level=info msg="CreateContainer within sandbox \"8c014b77fcf569197966713341e7066f0243361695067b0289be4b3e4ddcd6f2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"142207e3089f9e2a6330960e7d8f5199bb123a810620670de781beebb96ddfb0\"" Mar 13 00:42:23.214095 containerd[1565]: time="2026-03-13T00:42:23.213696583Z" level=info msg="StartContainer for \"142207e3089f9e2a6330960e7d8f5199bb123a810620670de781beebb96ddfb0\"" Mar 13 00:42:23.219761 containerd[1565]: time="2026-03-13T00:42:23.218694637Z" level=info msg="connecting to shim 142207e3089f9e2a6330960e7d8f5199bb123a810620670de781beebb96ddfb0" address="unix:///run/containerd/s/c61fe229ac5dc7c8390008a8165c8677dd13b5e797b13e8e4feca5bb0d471d34" protocol=ttrpc version=3 Mar 13 00:42:23.441263 systemd[1]: Started cri-containerd-142207e3089f9e2a6330960e7d8f5199bb123a810620670de781beebb96ddfb0.scope - libcontainer container 142207e3089f9e2a6330960e7d8f5199bb123a810620670de781beebb96ddfb0. 
Mar 13 00:42:24.170881 containerd[1565]: time="2026-03-13T00:42:24.169383560Z" level=info msg="StartContainer for \"142207e3089f9e2a6330960e7d8f5199bb123a810620670de781beebb96ddfb0\" returns successfully" Mar 13 00:42:25.294703 kubelet[2785]: I0313 00:42:25.292606 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-8hmsg" podStartSLOduration=3.765855668 podStartE2EDuration="11.292385883s" podCreationTimestamp="2026-03-13 00:42:14 +0000 UTC" firstStartedPulling="2026-03-13 00:42:15.495972554 +0000 UTC m=+6.956247546" lastFinishedPulling="2026-03-13 00:42:23.02250277 +0000 UTC m=+14.482777761" observedRunningTime="2026-03-13 00:42:25.289965222 +0000 UTC m=+16.750240224" watchObservedRunningTime="2026-03-13 00:42:25.292385883 +0000 UTC m=+16.752660874" Mar 13 00:42:30.071613 systemd[1]: cri-containerd-142207e3089f9e2a6330960e7d8f5199bb123a810620670de781beebb96ddfb0.scope: Deactivated successfully. Mar 13 00:42:30.075344 systemd[1]: cri-containerd-142207e3089f9e2a6330960e7d8f5199bb123a810620670de781beebb96ddfb0.scope: Consumed 1.865s CPU time, 41M memory peak. Mar 13 00:42:30.079565 containerd[1565]: time="2026-03-13T00:42:30.079348716Z" level=info msg="received container exit event container_id:\"142207e3089f9e2a6330960e7d8f5199bb123a810620670de781beebb96ddfb0\" id:\"142207e3089f9e2a6330960e7d8f5199bb123a810620670de781beebb96ddfb0\" pid:3118 exit_status:1 exited_at:{seconds:1773362550 nanos:78246799}" Mar 13 00:42:30.167341 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-142207e3089f9e2a6330960e7d8f5199bb123a810620670de781beebb96ddfb0-rootfs.mount: Deactivated successfully. 
Mar 13 00:42:31.197205 kubelet[2785]: I0313 00:42:31.197150 2785 scope.go:117] "RemoveContainer" containerID="142207e3089f9e2a6330960e7d8f5199bb123a810620670de781beebb96ddfb0" Mar 13 00:42:31.201624 containerd[1565]: time="2026-03-13T00:42:31.201483473Z" level=info msg="CreateContainer within sandbox \"8c014b77fcf569197966713341e7066f0243361695067b0289be4b3e4ddcd6f2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Mar 13 00:42:31.218264 containerd[1565]: time="2026-03-13T00:42:31.218137900Z" level=info msg="Container a44460ab964b682601f1b99e9f2bc444fe552f8ad83b0238ceb287a3d4bb5373: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:42:31.221984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount404303388.mount: Deactivated successfully. Mar 13 00:42:31.229082 containerd[1565]: time="2026-03-13T00:42:31.228969921Z" level=info msg="CreateContainer within sandbox \"8c014b77fcf569197966713341e7066f0243361695067b0289be4b3e4ddcd6f2\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"a44460ab964b682601f1b99e9f2bc444fe552f8ad83b0238ceb287a3d4bb5373\"" Mar 13 00:42:31.229709 containerd[1565]: time="2026-03-13T00:42:31.229626156Z" level=info msg="StartContainer for \"a44460ab964b682601f1b99e9f2bc444fe552f8ad83b0238ceb287a3d4bb5373\"" Mar 13 00:42:31.231254 containerd[1565]: time="2026-03-13T00:42:31.231189932Z" level=info msg="connecting to shim a44460ab964b682601f1b99e9f2bc444fe552f8ad83b0238ceb287a3d4bb5373" address="unix:///run/containerd/s/c61fe229ac5dc7c8390008a8165c8677dd13b5e797b13e8e4feca5bb0d471d34" protocol=ttrpc version=3 Mar 13 00:42:31.251829 systemd[1]: Started cri-containerd-a44460ab964b682601f1b99e9f2bc444fe552f8ad83b0238ceb287a3d4bb5373.scope - libcontainer container a44460ab964b682601f1b99e9f2bc444fe552f8ad83b0238ceb287a3d4bb5373. 
Mar 13 00:42:31.288819 containerd[1565]: time="2026-03-13T00:42:31.288769938Z" level=info msg="StartContainer for \"a44460ab964b682601f1b99e9f2bc444fe552f8ad83b0238ceb287a3d4bb5373\" returns successfully" Mar 13 00:42:33.327727 sudo[1799]: pam_unix(sudo:session): session closed for user root Mar 13 00:42:33.331800 sshd[1798]: Connection closed by 10.0.0.1 port 38112 Mar 13 00:42:33.335489 sshd-session[1795]: pam_unix(sshd:session): session closed for user core Mar 13 00:42:33.344335 systemd[1]: sshd@8-10.0.0.68:22-10.0.0.1:38112.service: Deactivated successfully. Mar 13 00:42:33.347583 systemd[1]: session-9.scope: Deactivated successfully. Mar 13 00:42:33.347850 systemd[1]: session-9.scope: Consumed 7.260s CPU time, 227.8M memory peak. Mar 13 00:42:33.351051 systemd-logind[1547]: Session 9 logged out. Waiting for processes to exit. Mar 13 00:42:33.353174 systemd-logind[1547]: Removed session 9. Mar 13 00:42:36.774088 systemd[1]: Created slice kubepods-besteffort-pod09b711ff_25e6_48f8_ad25_c3dcd873ee02.slice - libcontainer container kubepods-besteffort-pod09b711ff_25e6_48f8_ad25_c3dcd873ee02.slice. 
Mar 13 00:42:36.832144 kubelet[2785]: I0313 00:42:36.832049 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09b711ff-25e6-48f8-ad25-c3dcd873ee02-tigera-ca-bundle\") pod \"calico-typha-7568948895-49zl6\" (UID: \"09b711ff-25e6-48f8-ad25-c3dcd873ee02\") " pod="calico-system/calico-typha-7568948895-49zl6" Mar 13 00:42:36.832144 kubelet[2785]: I0313 00:42:36.832112 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/09b711ff-25e6-48f8-ad25-c3dcd873ee02-typha-certs\") pod \"calico-typha-7568948895-49zl6\" (UID: \"09b711ff-25e6-48f8-ad25-c3dcd873ee02\") " pod="calico-system/calico-typha-7568948895-49zl6" Mar 13 00:42:36.832144 kubelet[2785]: I0313 00:42:36.832130 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwvh2\" (UniqueName: \"kubernetes.io/projected/09b711ff-25e6-48f8-ad25-c3dcd873ee02-kube-api-access-lwvh2\") pod \"calico-typha-7568948895-49zl6\" (UID: \"09b711ff-25e6-48f8-ad25-c3dcd873ee02\") " pod="calico-system/calico-typha-7568948895-49zl6" Mar 13 00:42:36.862597 systemd[1]: Created slice kubepods-besteffort-pod73db60f1_ba7e_4a8c_8e16_c60b91ba1afe.slice - libcontainer container kubepods-besteffort-pod73db60f1_ba7e_4a8c_8e16_c60b91ba1afe.slice. 
Mar 13 00:42:36.933220 kubelet[2785]: I0313 00:42:36.933160 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/73db60f1-ba7e-4a8c-8e16-c60b91ba1afe-lib-modules\") pod \"calico-node-lpm9v\" (UID: \"73db60f1-ba7e-4a8c-8e16-c60b91ba1afe\") " pod="calico-system/calico-node-lpm9v" Mar 13 00:42:36.933880 kubelet[2785]: I0313 00:42:36.933456 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/73db60f1-ba7e-4a8c-8e16-c60b91ba1afe-nodeproc\") pod \"calico-node-lpm9v\" (UID: \"73db60f1-ba7e-4a8c-8e16-c60b91ba1afe\") " pod="calico-system/calico-node-lpm9v" Mar 13 00:42:36.934159 kubelet[2785]: I0313 00:42:36.934059 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/73db60f1-ba7e-4a8c-8e16-c60b91ba1afe-policysync\") pod \"calico-node-lpm9v\" (UID: \"73db60f1-ba7e-4a8c-8e16-c60b91ba1afe\") " pod="calico-system/calico-node-lpm9v" Mar 13 00:42:36.934717 kubelet[2785]: I0313 00:42:36.934234 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73db60f1-ba7e-4a8c-8e16-c60b91ba1afe-tigera-ca-bundle\") pod \"calico-node-lpm9v\" (UID: \"73db60f1-ba7e-4a8c-8e16-c60b91ba1afe\") " pod="calico-system/calico-node-lpm9v" Mar 13 00:42:36.935080 kubelet[2785]: I0313 00:42:36.934933 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/73db60f1-ba7e-4a8c-8e16-c60b91ba1afe-node-certs\") pod \"calico-node-lpm9v\" (UID: \"73db60f1-ba7e-4a8c-8e16-c60b91ba1afe\") " pod="calico-system/calico-node-lpm9v" Mar 13 00:42:36.935080 kubelet[2785]: I0313 00:42:36.934968 2785 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/73db60f1-ba7e-4a8c-8e16-c60b91ba1afe-var-lib-calico\") pod \"calico-node-lpm9v\" (UID: \"73db60f1-ba7e-4a8c-8e16-c60b91ba1afe\") " pod="calico-system/calico-node-lpm9v" Mar 13 00:42:36.935080 kubelet[2785]: I0313 00:42:36.934990 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/73db60f1-ba7e-4a8c-8e16-c60b91ba1afe-var-run-calico\") pod \"calico-node-lpm9v\" (UID: \"73db60f1-ba7e-4a8c-8e16-c60b91ba1afe\") " pod="calico-system/calico-node-lpm9v" Mar 13 00:42:36.935080 kubelet[2785]: I0313 00:42:36.935013 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/73db60f1-ba7e-4a8c-8e16-c60b91ba1afe-cni-net-dir\") pod \"calico-node-lpm9v\" (UID: \"73db60f1-ba7e-4a8c-8e16-c60b91ba1afe\") " pod="calico-system/calico-node-lpm9v" Mar 13 00:42:36.935080 kubelet[2785]: I0313 00:42:36.935037 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/73db60f1-ba7e-4a8c-8e16-c60b91ba1afe-flexvol-driver-host\") pod \"calico-node-lpm9v\" (UID: \"73db60f1-ba7e-4a8c-8e16-c60b91ba1afe\") " pod="calico-system/calico-node-lpm9v" Mar 13 00:42:36.935196 kubelet[2785]: I0313 00:42:36.935061 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/73db60f1-ba7e-4a8c-8e16-c60b91ba1afe-sys-fs\") pod \"calico-node-lpm9v\" (UID: \"73db60f1-ba7e-4a8c-8e16-c60b91ba1afe\") " pod="calico-system/calico-node-lpm9v" Mar 13 00:42:36.935196 kubelet[2785]: I0313 00:42:36.935096 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"bpffs\" (UniqueName: \"kubernetes.io/host-path/73db60f1-ba7e-4a8c-8e16-c60b91ba1afe-bpffs\") pod \"calico-node-lpm9v\" (UID: \"73db60f1-ba7e-4a8c-8e16-c60b91ba1afe\") " pod="calico-system/calico-node-lpm9v" Mar 13 00:42:36.935196 kubelet[2785]: I0313 00:42:36.935136 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/73db60f1-ba7e-4a8c-8e16-c60b91ba1afe-xtables-lock\") pod \"calico-node-lpm9v\" (UID: \"73db60f1-ba7e-4a8c-8e16-c60b91ba1afe\") " pod="calico-system/calico-node-lpm9v" Mar 13 00:42:36.935196 kubelet[2785]: I0313 00:42:36.935155 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9lr2\" (UniqueName: \"kubernetes.io/projected/73db60f1-ba7e-4a8c-8e16-c60b91ba1afe-kube-api-access-v9lr2\") pod \"calico-node-lpm9v\" (UID: \"73db60f1-ba7e-4a8c-8e16-c60b91ba1afe\") " pod="calico-system/calico-node-lpm9v" Mar 13 00:42:36.936306 kubelet[2785]: I0313 00:42:36.935176 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/73db60f1-ba7e-4a8c-8e16-c60b91ba1afe-cni-bin-dir\") pod \"calico-node-lpm9v\" (UID: \"73db60f1-ba7e-4a8c-8e16-c60b91ba1afe\") " pod="calico-system/calico-node-lpm9v" Mar 13 00:42:36.936306 kubelet[2785]: I0313 00:42:36.935381 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/73db60f1-ba7e-4a8c-8e16-c60b91ba1afe-cni-log-dir\") pod \"calico-node-lpm9v\" (UID: \"73db60f1-ba7e-4a8c-8e16-c60b91ba1afe\") " pod="calico-system/calico-node-lpm9v" Mar 13 00:42:36.980175 kubelet[2785]: E0313 00:42:36.979966 2785 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c48ht" podUID="53e73110-bf2b-4a57-8079-bc3d1303e5a7" Mar 13 00:42:37.042638 kubelet[2785]: E0313 00:42:37.041907 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:42:37.042638 kubelet[2785]: W0313 00:42:37.041928 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:42:37.042638 kubelet[2785]: E0313 00:42:37.041987 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:42:37.048424 kubelet[2785]: E0313 00:42:37.048391 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:42:37.048424 kubelet[2785]: W0313 00:42:37.048418 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:42:37.048515 kubelet[2785]: E0313 00:42:37.048431 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:42:37.071937 kubelet[2785]: E0313 00:42:37.071834 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:42:37.071937 kubelet[2785]: W0313 00:42:37.071870 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:42:37.071937 kubelet[2785]: E0313 00:42:37.071890 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:42:37.072227 kubelet[2785]: E0313 00:42:37.072202 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:42:37.072227 kubelet[2785]: W0313 00:42:37.072212 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:42:37.072227 kubelet[2785]: E0313 00:42:37.072221 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:42:37.072679 kubelet[2785]: E0313 00:42:37.072603 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:42:37.072679 kubelet[2785]: W0313 00:42:37.072651 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:42:37.072679 kubelet[2785]: E0313 00:42:37.072665 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:42:37.073211 kubelet[2785]: E0313 00:42:37.073141 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:42:37.073211 kubelet[2785]: W0313 00:42:37.073186 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:42:37.073211 kubelet[2785]: E0313 00:42:37.073201 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:42:37.073703 kubelet[2785]: E0313 00:42:37.073647 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:42:37.073703 kubelet[2785]: W0313 00:42:37.073691 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:42:37.073768 kubelet[2785]: E0313 00:42:37.073708 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:42:37.074151 kubelet[2785]: E0313 00:42:37.074100 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:42:37.074151 kubelet[2785]: W0313 00:42:37.074137 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:42:37.074151 kubelet[2785]: E0313 00:42:37.074151 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:42:37.074700 kubelet[2785]: E0313 00:42:37.074622 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:42:37.074700 kubelet[2785]: W0313 00:42:37.074665 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:42:37.074700 kubelet[2785]: E0313 00:42:37.074678 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:42:37.075052 kubelet[2785]: E0313 00:42:37.075006 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:42:37.075052 kubelet[2785]: W0313 00:42:37.075036 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:42:37.075052 kubelet[2785]: E0313 00:42:37.075048 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:42:37.075399 kubelet[2785]: E0313 00:42:37.075356 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:42:37.075399 kubelet[2785]: W0313 00:42:37.075381 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:42:37.075399 kubelet[2785]: E0313 00:42:37.075389 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:42:37.075811 kubelet[2785]: E0313 00:42:37.075742 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:42:37.075811 kubelet[2785]: W0313 00:42:37.075772 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:42:37.075811 kubelet[2785]: E0313 00:42:37.075781 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:42:37.076110 kubelet[2785]: E0313 00:42:37.076042 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:42:37.076110 kubelet[2785]: W0313 00:42:37.076073 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:42:37.076110 kubelet[2785]: E0313 00:42:37.076083 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:42:37.076407 kubelet[2785]: E0313 00:42:37.076357 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:42:37.076407 kubelet[2785]: W0313 00:42:37.076381 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:42:37.076407 kubelet[2785]: E0313 00:42:37.076389 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:42:37.076772 kubelet[2785]: E0313 00:42:37.076730 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:42:37.076772 kubelet[2785]: W0313 00:42:37.076754 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:42:37.076772 kubelet[2785]: E0313 00:42:37.076762 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:42:37.077147 kubelet[2785]: E0313 00:42:37.077078 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:42:37.077147 kubelet[2785]: W0313 00:42:37.077108 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:42:37.077147 kubelet[2785]: E0313 00:42:37.077116 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:42:37.077465 kubelet[2785]: E0313 00:42:37.077422 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:42:37.077465 kubelet[2785]: W0313 00:42:37.077445 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:42:37.077465 kubelet[2785]: E0313 00:42:37.077454 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:42:37.077867 kubelet[2785]: E0313 00:42:37.077799 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:42:37.077867 kubelet[2785]: W0313 00:42:37.077833 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:42:37.077867 kubelet[2785]: E0313 00:42:37.077847 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:42:37.078404 kubelet[2785]: E0313 00:42:37.078161 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:42:37.078404 kubelet[2785]: W0313 00:42:37.078171 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:42:37.078404 kubelet[2785]: E0313 00:42:37.078180 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:42:37.078649 kubelet[2785]: E0313 00:42:37.078449 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:42:37.078649 kubelet[2785]: W0313 00:42:37.078458 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:42:37.078649 kubelet[2785]: E0313 00:42:37.078466 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:42:37.079454 kubelet[2785]: E0313 00:42:37.079364 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:42:37.079454 kubelet[2785]: W0313 00:42:37.079395 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:42:37.079454 kubelet[2785]: E0313 00:42:37.079408 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:42:37.079783 kubelet[2785]: E0313 00:42:37.079724 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:42:37.079783 kubelet[2785]: W0313 00:42:37.079736 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:42:37.079783 kubelet[2785]: E0313 00:42:37.079746 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:42:37.083443 kubelet[2785]: E0313 00:42:37.083348 2785 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:42:37.084414 containerd[1565]: time="2026-03-13T00:42:37.084357626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7568948895-49zl6,Uid:09b711ff-25e6-48f8-ad25-c3dcd873ee02,Namespace:calico-system,Attempt:0,}" Mar 13 00:42:37.132357 containerd[1565]: time="2026-03-13T00:42:37.132246227Z" level=info msg="connecting to shim 8492680ca977c2b96f6838d7f99217e887e289a69254b0d88fd8a4b2d7294390" address="unix:///run/containerd/s/1de09cc3fb7c2fb180506afba94682ccbe1fce92a0ec4ecaec0f25ff6aae02d9" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:42:37.137061 kubelet[2785]: E0313 00:42:37.136965 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:42:37.137313 kubelet[2785]: W0313 00:42:37.137181 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:42:37.137355 kubelet[2785]: E0313 00:42:37.137309 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:42:37.137440 kubelet[2785]: I0313 00:42:37.137368 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/53e73110-bf2b-4a57-8079-bc3d1303e5a7-varrun\") pod \"csi-node-driver-c48ht\" (UID: \"53e73110-bf2b-4a57-8079-bc3d1303e5a7\") " pod="calico-system/csi-node-driver-c48ht" Mar 13 00:42:37.137902 kubelet[2785]: E0313 00:42:37.137869 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:42:37.137949 kubelet[2785]: W0313 00:42:37.137903 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:42:37.137949 kubelet[2785]: E0313 00:42:37.137919 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:42:37.137994 kubelet[2785]: I0313 00:42:37.137946 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/53e73110-bf2b-4a57-8079-bc3d1303e5a7-kubelet-dir\") pod \"csi-node-driver-c48ht\" (UID: \"53e73110-bf2b-4a57-8079-bc3d1303e5a7\") " pod="calico-system/csi-node-driver-c48ht" Mar 13 00:42:37.138457 kubelet[2785]: E0313 00:42:37.138318 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:42:37.138457 kubelet[2785]: W0313 00:42:37.138348 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:42:37.138457 kubelet[2785]: E0313 00:42:37.138359 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:42:37.138841 kubelet[2785]: E0313 00:42:37.138733 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:42:37.138841 kubelet[2785]: W0313 00:42:37.138761 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:42:37.138841 kubelet[2785]: E0313 00:42:37.138770 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:42:37.139389 kubelet[2785]: E0313 00:42:37.139246 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:42:37.139389 kubelet[2785]: W0313 00:42:37.139311 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:42:37.139389 kubelet[2785]: E0313 00:42:37.139321 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:42:37.139686 kubelet[2785]: I0313 00:42:37.139619 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssj2m\" (UniqueName: \"kubernetes.io/projected/53e73110-bf2b-4a57-8079-bc3d1303e5a7-kube-api-access-ssj2m\") pod \"csi-node-driver-c48ht\" (UID: \"53e73110-bf2b-4a57-8079-bc3d1303e5a7\") " pod="calico-system/csi-node-driver-c48ht" Mar 13 00:42:37.140462 kubelet[2785]: E0313 00:42:37.140376 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:42:37.140462 kubelet[2785]: W0313 00:42:37.140409 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:42:37.140462 kubelet[2785]: E0313 00:42:37.140420 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:42:37.140740 kubelet[2785]: E0313 00:42:37.140715 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:42:37.140740 kubelet[2785]: W0313 00:42:37.140726 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:42:37.140740 kubelet[2785]: E0313 00:42:37.140734 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:42:37.141276 kubelet[2785]: E0313 00:42:37.141217 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:42:37.141276 kubelet[2785]: W0313 00:42:37.141244 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:42:37.141330 kubelet[2785]: E0313 00:42:37.141284 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:42:37.141330 kubelet[2785]: I0313 00:42:37.141302 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/53e73110-bf2b-4a57-8079-bc3d1303e5a7-socket-dir\") pod \"csi-node-driver-c48ht\" (UID: \"53e73110-bf2b-4a57-8079-bc3d1303e5a7\") " pod="calico-system/csi-node-driver-c48ht" Mar 13 00:42:37.141882 kubelet[2785]: E0313 00:42:37.141667 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:42:37.141882 kubelet[2785]: W0313 00:42:37.141693 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:42:37.141882 kubelet[2785]: E0313 00:42:37.141702 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:42:37.141882 kubelet[2785]: I0313 00:42:37.141793 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/53e73110-bf2b-4a57-8079-bc3d1303e5a7-registration-dir\") pod \"csi-node-driver-c48ht\" (UID: \"53e73110-bf2b-4a57-8079-bc3d1303e5a7\") " pod="calico-system/csi-node-driver-c48ht" Mar 13 00:42:37.142179 kubelet[2785]: E0313 00:42:37.142148 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:42:37.142179 kubelet[2785]: W0313 00:42:37.142173 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:42:37.142238 kubelet[2785]: E0313 00:42:37.142182 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:42:37.142626 kubelet[2785]: E0313 00:42:37.142597 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:42:37.142626 kubelet[2785]: W0313 00:42:37.142622 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:42:37.142687 kubelet[2785]: E0313 00:42:37.142633 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:42:37.143048 kubelet[2785]: E0313 00:42:37.142996 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:42:37.143048 kubelet[2785]: W0313 00:42:37.143029 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:42:37.143048 kubelet[2785]: E0313 00:42:37.143039 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:42:37.143471 kubelet[2785]: E0313 00:42:37.143393 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:42:37.143471 kubelet[2785]: W0313 00:42:37.143417 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:42:37.143471 kubelet[2785]: E0313 00:42:37.143426 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:42:37.143838 kubelet[2785]: E0313 00:42:37.143808 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:42:37.143838 kubelet[2785]: W0313 00:42:37.143835 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:42:37.143922 kubelet[2785]: E0313 00:42:37.143844 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:42:37.145116 kubelet[2785]: E0313 00:42:37.145067 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:42:37.145116 kubelet[2785]: W0313 00:42:37.145100 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:42:37.145116 kubelet[2785]: E0313 00:42:37.145110 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:42:37.173712 systemd[1]: Started cri-containerd-8492680ca977c2b96f6838d7f99217e887e289a69254b0d88fd8a4b2d7294390.scope - libcontainer container 8492680ca977c2b96f6838d7f99217e887e289a69254b0d88fd8a4b2d7294390. 
Mar 13 00:42:37.176102 containerd[1565]: time="2026-03-13T00:42:37.176069021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lpm9v,Uid:73db60f1-ba7e-4a8c-8e16-c60b91ba1afe,Namespace:calico-system,Attempt:0,}"
Mar 13 00:42:37.202206 containerd[1565]: time="2026-03-13T00:42:37.202157851Z" level=info msg="connecting to shim 88a83a63dd4f51406ca367b201817eaa2389a9222f52aed9fae01d069ef6c6ab" address="unix:///run/containerd/s/39eb1442a29d370cbc8954c40d56c8f4beb829f774382fccc7687a4319121d55" namespace=k8s.io protocol=ttrpc version=3
Mar 13 00:42:37.242780 kubelet[2785]: E0313 00:42:37.242736 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:42:37.242780 kubelet[2785]: W0313 00:42:37.242753 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:42:37.242780 kubelet[2785]: E0313 00:42:37.242772 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:42:37.244316 kubelet[2785]: E0313 00:42:37.244217 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:42:37.244316 kubelet[2785]: W0313 00:42:37.244281 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:42:37.244316 kubelet[2785]: E0313 00:42:37.244296 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:42:37.247015 kubelet[2785]: E0313 00:42:37.246925 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:42:37.247015 kubelet[2785]: W0313 00:42:37.246969 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:42:37.247015 kubelet[2785]: E0313 00:42:37.246983 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:42:37.247615 kubelet[2785]: E0313 00:42:37.247572 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:42:37.247615 kubelet[2785]: W0313 00:42:37.247585 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:42:37.247615 kubelet[2785]: E0313 00:42:37.247595 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:42:37.248280 kubelet[2785]: E0313 00:42:37.248185 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:42:37.248280 kubelet[2785]: W0313 00:42:37.248210 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:42:37.248280 kubelet[2785]: E0313 00:42:37.248219 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:42:37.248728 kubelet[2785]: E0313 00:42:37.248698 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:42:37.248807 kubelet[2785]: W0313 00:42:37.248727 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:42:37.248807 kubelet[2785]: E0313 00:42:37.248737 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:42:37.251166 kubelet[2785]: E0313 00:42:37.251093 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:42:37.251166 kubelet[2785]: W0313 00:42:37.251106 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:42:37.251166 kubelet[2785]: E0313 00:42:37.251115 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:42:37.251669 kubelet[2785]: E0313 00:42:37.251620 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:42:37.251669 kubelet[2785]: W0313 00:42:37.251650 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:42:37.251669 kubelet[2785]: E0313 00:42:37.251660 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:42:37.252123 kubelet[2785]: E0313 00:42:37.252104 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:42:37.252123 kubelet[2785]: W0313 00:42:37.252115 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:42:37.252123 kubelet[2785]: E0313 00:42:37.252124 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:42:37.253132 kubelet[2785]: E0313 00:42:37.253021 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:42:37.253132 kubelet[2785]: W0313 00:42:37.253126 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:42:37.253196 kubelet[2785]: E0313 00:42:37.253137 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:42:37.254115 systemd[1]: Started cri-containerd-88a83a63dd4f51406ca367b201817eaa2389a9222f52aed9fae01d069ef6c6ab.scope - libcontainer container 88a83a63dd4f51406ca367b201817eaa2389a9222f52aed9fae01d069ef6c6ab.
Mar 13 00:42:37.254947 kubelet[2785]: E0313 00:42:37.254873 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:42:37.254947 kubelet[2785]: W0313 00:42:37.254916 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:42:37.254947 kubelet[2785]: E0313 00:42:37.254928 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:42:37.256309 kubelet[2785]: E0313 00:42:37.256193 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:42:37.256630 kubelet[2785]: W0313 00:42:37.256424 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:42:37.256889 kubelet[2785]: E0313 00:42:37.256722 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:42:37.257616 kubelet[2785]: E0313 00:42:37.257492 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:42:37.257616 kubelet[2785]: W0313 00:42:37.257503 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:42:37.257616 kubelet[2785]: E0313 00:42:37.257518 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:42:37.259727 kubelet[2785]: E0313 00:42:37.259636 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:42:37.259727 kubelet[2785]: W0313 00:42:37.259671 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:42:37.259727 kubelet[2785]: E0313 00:42:37.259681 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:42:37.261613 kubelet[2785]: E0313 00:42:37.260126 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:42:37.261613 kubelet[2785]: W0313 00:42:37.260165 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:42:37.261613 kubelet[2785]: E0313 00:42:37.260175 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:42:37.261803 kubelet[2785]: E0313 00:42:37.261722 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:42:37.261803 kubelet[2785]: W0313 00:42:37.261735 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:42:37.261803 kubelet[2785]: E0313 00:42:37.261747 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:42:37.262628 kubelet[2785]: E0313 00:42:37.262578 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:42:37.262744 kubelet[2785]: W0313 00:42:37.262644 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:42:37.262744 kubelet[2785]: E0313 00:42:37.262728 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:42:37.263825 kubelet[2785]: E0313 00:42:37.263771 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:42:37.263825 kubelet[2785]: W0313 00:42:37.263812 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:42:37.263825 kubelet[2785]: E0313 00:42:37.263823 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:42:37.264430 kubelet[2785]: E0313 00:42:37.264380 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:42:37.264430 kubelet[2785]: W0313 00:42:37.264416 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:42:37.264430 kubelet[2785]: E0313 00:42:37.264426 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:42:37.265090 kubelet[2785]: E0313 00:42:37.265055 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:42:37.265090 kubelet[2785]: W0313 00:42:37.265084 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:42:37.265148 kubelet[2785]: E0313 00:42:37.265095 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:42:37.265583 kubelet[2785]: E0313 00:42:37.265490 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:42:37.265697 kubelet[2785]: W0313 00:42:37.265521 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:42:37.265795 kubelet[2785]: E0313 00:42:37.265765 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:42:37.267137 kubelet[2785]: E0313 00:42:37.267108 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:42:37.267137 kubelet[2785]: W0313 00:42:37.267121 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:42:37.267137 kubelet[2785]: E0313 00:42:37.267130 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:42:37.268042 kubelet[2785]: E0313 00:42:37.267964 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:42:37.268042 kubelet[2785]: W0313 00:42:37.268031 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:42:37.268042 kubelet[2785]: E0313 00:42:37.268041 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:42:37.268767 kubelet[2785]: E0313 00:42:37.268671 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:42:37.268767 kubelet[2785]: W0313 00:42:37.268753 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:42:37.268767 kubelet[2785]: E0313 00:42:37.268763 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:42:37.270449 kubelet[2785]: E0313 00:42:37.270362 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:42:37.270968 kubelet[2785]: W0313 00:42:37.270833 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:42:37.271766 kubelet[2785]: E0313 00:42:37.271696 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:42:37.286950 kubelet[2785]: E0313 00:42:37.286688 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:42:37.287055 kubelet[2785]: W0313 00:42:37.286929 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:42:37.287055 kubelet[2785]: E0313 00:42:37.287041 2785 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:42:37.294928 containerd[1565]: time="2026-03-13T00:42:37.294812459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7568948895-49zl6,Uid:09b711ff-25e6-48f8-ad25-c3dcd873ee02,Namespace:calico-system,Attempt:0,} returns sandbox id \"8492680ca977c2b96f6838d7f99217e887e289a69254b0d88fd8a4b2d7294390\""
Mar 13 00:42:37.295914 kubelet[2785]: E0313 00:42:37.295894 2785 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:42:37.299833 containerd[1565]: time="2026-03-13T00:42:37.299782398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\""
Mar 13 00:42:37.320409 containerd[1565]: time="2026-03-13T00:42:37.320362354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lpm9v,Uid:73db60f1-ba7e-4a8c-8e16-c60b91ba1afe,Namespace:calico-system,Attempt:0,} returns sandbox id \"88a83a63dd4f51406ca367b201817eaa2389a9222f52aed9fae01d069ef6c6ab\""
Mar 13 00:42:38.574412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3346769729.mount: Deactivated successfully.
Mar 13 00:42:38.672179 kubelet[2785]: E0313 00:42:38.672082 2785 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c48ht" podUID="53e73110-bf2b-4a57-8079-bc3d1303e5a7"
Mar 13 00:42:39.192176 containerd[1565]: time="2026-03-13T00:42:39.192089360Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:42:39.193306 containerd[1565]: time="2026-03-13T00:42:39.193233946Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596"
Mar 13 00:42:39.194505 containerd[1565]: time="2026-03-13T00:42:39.194464942Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:42:39.199235 containerd[1565]: time="2026-03-13T00:42:39.198605801Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:42:39.202392 containerd[1565]: time="2026-03-13T00:42:39.202307668Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 1.902465629s"
Mar 13 00:42:39.202392 containerd[1565]: time="2026-03-13T00:42:39.202373671Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\""
Mar 13 00:42:39.203458 containerd[1565]: time="2026-03-13T00:42:39.203371564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\""
Mar 13 00:42:39.216974 containerd[1565]: time="2026-03-13T00:42:39.216926306Z" level=info msg="CreateContainer within sandbox \"8492680ca977c2b96f6838d7f99217e887e289a69254b0d88fd8a4b2d7294390\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Mar 13 00:42:39.229467 containerd[1565]: time="2026-03-13T00:42:39.227469725Z" level=info msg="Container 150ba8314318aa08e72918ed8a3cbf681f3ce78c50d771d760f27e4049d0a709: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:42:39.236502 containerd[1565]: time="2026-03-13T00:42:39.236407733Z" level=info msg="CreateContainer within sandbox \"8492680ca977c2b96f6838d7f99217e887e289a69254b0d88fd8a4b2d7294390\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"150ba8314318aa08e72918ed8a3cbf681f3ce78c50d771d760f27e4049d0a709\""
Mar 13 00:42:39.237352 containerd[1565]: time="2026-03-13T00:42:39.237176177Z" level=info msg="StartContainer for \"150ba8314318aa08e72918ed8a3cbf681f3ce78c50d771d760f27e4049d0a709\""
Mar 13 00:42:39.238307 containerd[1565]: time="2026-03-13T00:42:39.238240844Z" level=info msg="connecting to shim 150ba8314318aa08e72918ed8a3cbf681f3ce78c50d771d760f27e4049d0a709" address="unix:///run/containerd/s/1de09cc3fb7c2fb180506afba94682ccbe1fce92a0ec4ecaec0f25ff6aae02d9" protocol=ttrpc version=3
Mar 13 00:42:39.283747 systemd[1]: Started cri-containerd-150ba8314318aa08e72918ed8a3cbf681f3ce78c50d771d760f27e4049d0a709.scope - libcontainer container 150ba8314318aa08e72918ed8a3cbf681f3ce78c50d771d760f27e4049d0a709.
Mar 13 00:42:39.399802 containerd[1565]: time="2026-03-13T00:42:39.399668884Z" level=info msg="StartContainer for \"150ba8314318aa08e72918ed8a3cbf681f3ce78c50d771d760f27e4049d0a709\" returns successfully"
Mar 13 00:42:40.089688 containerd[1565]: time="2026-03-13T00:42:40.089606503Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:42:40.090742 containerd[1565]: time="2026-03-13T00:42:40.090699884Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250"
Mar 13 00:42:40.092147 containerd[1565]: time="2026-03-13T00:42:40.092090397Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:42:40.094888 containerd[1565]: time="2026-03-13T00:42:40.094826372Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:42:40.095440 containerd[1565]: time="2026-03-13T00:42:40.095358839Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 891.932934ms"
Mar 13 00:42:40.095440 containerd[1565]: time="2026-03-13T00:42:40.095417409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\""
Mar 13 00:42:40.100824 containerd[1565]: time="2026-03-13T00:42:40.100773095Z" level=info msg="CreateContainer within sandbox \"88a83a63dd4f51406ca367b201817eaa2389a9222f52aed9fae01d069ef6c6ab\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Mar 13 00:42:40.113263 containerd[1565]: time="2026-03-13T00:42:40.113210874Z" level=info msg="Container 00a8e6852a0b0d4709d9edcd6fef66c10f5e8fecb3a3b3876caefd0eb38237e3: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:42:40.122454 containerd[1565]: time="2026-03-13T00:42:40.122398875Z" level=info msg="CreateContainer within sandbox \"88a83a63dd4f51406ca367b201817eaa2389a9222f52aed9fae01d069ef6c6ab\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"00a8e6852a0b0d4709d9edcd6fef66c10f5e8fecb3a3b3876caefd0eb38237e3\""
Mar 13 00:42:40.123189 containerd[1565]: time="2026-03-13T00:42:40.123108219Z" level=info msg="StartContainer for \"00a8e6852a0b0d4709d9edcd6fef66c10f5e8fecb3a3b3876caefd0eb38237e3\""
Mar 13 00:42:40.125035 containerd[1565]: time="2026-03-13T00:42:40.124992083Z" level=info msg="connecting to shim 00a8e6852a0b0d4709d9edcd6fef66c10f5e8fecb3a3b3876caefd0eb38237e3" address="unix:///run/containerd/s/39eb1442a29d370cbc8954c40d56c8f4beb829f774382fccc7687a4319121d55" protocol=ttrpc version=3
Mar 13 00:42:40.157904 systemd[1]: Started cri-containerd-00a8e6852a0b0d4709d9edcd6fef66c10f5e8fecb3a3b3876caefd0eb38237e3.scope - libcontainer container 00a8e6852a0b0d4709d9edcd6fef66c10f5e8fecb3a3b3876caefd0eb38237e3.
Mar 13 00:42:40.231937 kubelet[2785]: E0313 00:42:40.231889 2785 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:42:40.249265 kubelet[2785]: I0313 00:42:40.249160 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7568948895-49zl6" podStartSLOduration=2.343800449 podStartE2EDuration="4.249143724s" podCreationTimestamp="2026-03-13 00:42:36 +0000 UTC" firstStartedPulling="2026-03-13 00:42:37.297889819 +0000 UTC m=+28.758164810" lastFinishedPulling="2026-03-13 00:42:39.203233094 +0000 UTC m=+30.663508085" observedRunningTime="2026-03-13 00:42:40.248818518 +0000 UTC m=+31.709093509" watchObservedRunningTime="2026-03-13 00:42:40.249143724 +0000 UTC m=+31.709418715"
Mar 13 00:42:40.259979 containerd[1565]: time="2026-03-13T00:42:40.259878541Z" level=info msg="StartContainer for \"00a8e6852a0b0d4709d9edcd6fef66c10f5e8fecb3a3b3876caefd0eb38237e3\" returns successfully"
Mar 13 00:42:40.282386 systemd[1]: cri-containerd-00a8e6852a0b0d4709d9edcd6fef66c10f5e8fecb3a3b3876caefd0eb38237e3.scope: Deactivated successfully.
Mar 13 00:42:40.286304 containerd[1565]: time="2026-03-13T00:42:40.286141818Z" level=info msg="received container exit event container_id:\"00a8e6852a0b0d4709d9edcd6fef66c10f5e8fecb3a3b3876caefd0eb38237e3\" id:\"00a8e6852a0b0d4709d9edcd6fef66c10f5e8fecb3a3b3876caefd0eb38237e3\" pid:3476 exited_at:{seconds:1773362560 nanos:285153725}"
Mar 13 00:42:40.330621 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-00a8e6852a0b0d4709d9edcd6fef66c10f5e8fecb3a3b3876caefd0eb38237e3-rootfs.mount: Deactivated successfully.
Mar 13 00:42:40.673607 kubelet[2785]: E0313 00:42:40.671796 2785 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c48ht" podUID="53e73110-bf2b-4a57-8079-bc3d1303e5a7"
Mar 13 00:42:41.242980 kubelet[2785]: I0313 00:42:41.242819 2785 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 13 00:42:41.243695 kubelet[2785]: E0313 00:42:41.243160 2785 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:42:41.244236 containerd[1565]: time="2026-03-13T00:42:41.243912419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\""
Mar 13 00:42:42.674368 kubelet[2785]: E0313 00:42:42.674202 2785 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c48ht" podUID="53e73110-bf2b-4a57-8079-bc3d1303e5a7"
Mar 13 00:42:44.677789 kubelet[2785]: E0313 00:42:44.676482 2785 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c48ht" podUID="53e73110-bf2b-4a57-8079-bc3d1303e5a7"
Mar 13 00:42:46.672980 kubelet[2785]: E0313 00:42:46.672921 2785 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c48ht" podUID="53e73110-bf2b-4a57-8079-bc3d1303e5a7"
Mar 13 00:42:47.944923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1968932087.mount: Deactivated successfully.
Mar 13 00:42:48.184129 containerd[1565]: time="2026-03-13T00:42:48.184021180Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:42:48.185373 containerd[1565]: time="2026-03-13T00:42:48.185221760Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564"
Mar 13 00:42:48.186371 containerd[1565]: time="2026-03-13T00:42:48.186282630Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:42:48.189789 containerd[1565]: time="2026-03-13T00:42:48.189749943Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:42:48.190689 containerd[1565]: time="2026-03-13T00:42:48.190629246Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 6.946666893s"
Mar 13 00:42:48.190737 containerd[1565]: time="2026-03-13T00:42:48.190687484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\""
Mar 13 00:42:48.196670 containerd[1565]: time="2026-03-13T00:42:48.195588479Z" level=info msg="CreateContainer within sandbox \"88a83a63dd4f51406ca367b201817eaa2389a9222f52aed9fae01d069ef6c6ab\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Mar 13 00:42:48.223607 containerd[1565]: time="2026-03-13T00:42:48.223413718Z" level=info msg="Container b7f546412e22ddd4eec6ac6b15660439294aa96e9befde1f41c2ff2a879035c0: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:42:48.281132 containerd[1565]: time="2026-03-13T00:42:48.281013881Z" level=info msg="CreateContainer within sandbox \"88a83a63dd4f51406ca367b201817eaa2389a9222f52aed9fae01d069ef6c6ab\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"b7f546412e22ddd4eec6ac6b15660439294aa96e9befde1f41c2ff2a879035c0\""
Mar 13 00:42:48.281874 containerd[1565]: time="2026-03-13T00:42:48.281846282Z" level=info msg="StartContainer for \"b7f546412e22ddd4eec6ac6b15660439294aa96e9befde1f41c2ff2a879035c0\""
Mar 13 00:42:48.283623 containerd[1565]: time="2026-03-13T00:42:48.283512401Z" level=info msg="connecting to shim b7f546412e22ddd4eec6ac6b15660439294aa96e9befde1f41c2ff2a879035c0" address="unix:///run/containerd/s/39eb1442a29d370cbc8954c40d56c8f4beb829f774382fccc7687a4319121d55" protocol=ttrpc version=3
Mar 13 00:42:48.316784 systemd[1]: Started cri-containerd-b7f546412e22ddd4eec6ac6b15660439294aa96e9befde1f41c2ff2a879035c0.scope - libcontainer container b7f546412e22ddd4eec6ac6b15660439294aa96e9befde1f41c2ff2a879035c0.
Mar 13 00:42:48.453144 containerd[1565]: time="2026-03-13T00:42:48.452971462Z" level=info msg="StartContainer for \"b7f546412e22ddd4eec6ac6b15660439294aa96e9befde1f41c2ff2a879035c0\" returns successfully"
Mar 13 00:42:48.551294 systemd[1]: cri-containerd-b7f546412e22ddd4eec6ac6b15660439294aa96e9befde1f41c2ff2a879035c0.scope: Deactivated successfully.
Mar 13 00:42:48.553201 containerd[1565]: time="2026-03-13T00:42:48.553049646Z" level=info msg="received container exit event container_id:\"b7f546412e22ddd4eec6ac6b15660439294aa96e9befde1f41c2ff2a879035c0\" id:\"b7f546412e22ddd4eec6ac6b15660439294aa96e9befde1f41c2ff2a879035c0\" pid:3538 exited_at:{seconds:1773362568 nanos:552742052}" Mar 13 00:42:48.672280 kubelet[2785]: E0313 00:42:48.672196 2785 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c48ht" podUID="53e73110-bf2b-4a57-8079-bc3d1303e5a7" Mar 13 00:42:48.945390 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b7f546412e22ddd4eec6ac6b15660439294aa96e9befde1f41c2ff2a879035c0-rootfs.mount: Deactivated successfully. Mar 13 00:42:49.274946 containerd[1565]: time="2026-03-13T00:42:49.274762585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 13 00:42:50.671736 kubelet[2785]: E0313 00:42:50.671680 2785 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c48ht" podUID="53e73110-bf2b-4a57-8079-bc3d1303e5a7" Mar 13 00:42:51.212231 containerd[1565]: time="2026-03-13T00:42:51.212116657Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:42:51.213398 containerd[1565]: time="2026-03-13T00:42:51.213292402Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 13 00:42:51.214629 containerd[1565]: time="2026-03-13T00:42:51.214570950Z" level=info msg="ImageCreate event 
name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:42:51.217031 containerd[1565]: time="2026-03-13T00:42:51.216966085Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:42:51.217624 containerd[1565]: time="2026-03-13T00:42:51.217468859Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 1.942647222s" Mar 13 00:42:51.217676 containerd[1565]: time="2026-03-13T00:42:51.217629308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 13 00:42:51.223270 containerd[1565]: time="2026-03-13T00:42:51.223236548Z" level=info msg="CreateContainer within sandbox \"88a83a63dd4f51406ca367b201817eaa2389a9222f52aed9fae01d069ef6c6ab\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 13 00:42:51.244587 containerd[1565]: time="2026-03-13T00:42:51.244425161Z" level=info msg="Container 57ca96dcdec99bdcb68a68f98c6aff064d87992ea02aa0007fb0a8fba0bb0cbb: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:42:51.255251 containerd[1565]: time="2026-03-13T00:42:51.255155008Z" level=info msg="CreateContainer within sandbox \"88a83a63dd4f51406ca367b201817eaa2389a9222f52aed9fae01d069ef6c6ab\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"57ca96dcdec99bdcb68a68f98c6aff064d87992ea02aa0007fb0a8fba0bb0cbb\"" Mar 13 00:42:51.256181 containerd[1565]: time="2026-03-13T00:42:51.256062501Z" 
level=info msg="StartContainer for \"57ca96dcdec99bdcb68a68f98c6aff064d87992ea02aa0007fb0a8fba0bb0cbb\"" Mar 13 00:42:51.258324 containerd[1565]: time="2026-03-13T00:42:51.258257658Z" level=info msg="connecting to shim 57ca96dcdec99bdcb68a68f98c6aff064d87992ea02aa0007fb0a8fba0bb0cbb" address="unix:///run/containerd/s/39eb1442a29d370cbc8954c40d56c8f4beb829f774382fccc7687a4319121d55" protocol=ttrpc version=3 Mar 13 00:42:51.296994 systemd[1]: Started cri-containerd-57ca96dcdec99bdcb68a68f98c6aff064d87992ea02aa0007fb0a8fba0bb0cbb.scope - libcontainer container 57ca96dcdec99bdcb68a68f98c6aff064d87992ea02aa0007fb0a8fba0bb0cbb. Mar 13 00:42:51.575252 containerd[1565]: time="2026-03-13T00:42:51.569620504Z" level=info msg="StartContainer for \"57ca96dcdec99bdcb68a68f98c6aff064d87992ea02aa0007fb0a8fba0bb0cbb\" returns successfully" Mar 13 00:42:52.672268 kubelet[2785]: E0313 00:42:52.672121 2785 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c48ht" podUID="53e73110-bf2b-4a57-8079-bc3d1303e5a7" Mar 13 00:42:53.137502 systemd[1]: cri-containerd-57ca96dcdec99bdcb68a68f98c6aff064d87992ea02aa0007fb0a8fba0bb0cbb.scope: Deactivated successfully. Mar 13 00:42:53.140406 systemd[1]: cri-containerd-57ca96dcdec99bdcb68a68f98c6aff064d87992ea02aa0007fb0a8fba0bb0cbb.scope: Consumed 1.609s CPU time, 179.2M memory peak, 4.4M read from disk, 177M written to disk. 
Mar 13 00:42:53.141236 containerd[1565]: time="2026-03-13T00:42:53.141027211Z" level=info msg="received container exit event container_id:\"57ca96dcdec99bdcb68a68f98c6aff064d87992ea02aa0007fb0a8fba0bb0cbb\" id:\"57ca96dcdec99bdcb68a68f98c6aff064d87992ea02aa0007fb0a8fba0bb0cbb\" pid:3598 exited_at:{seconds:1773362573 nanos:140378641}" Mar 13 00:42:53.178259 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-57ca96dcdec99bdcb68a68f98c6aff064d87992ea02aa0007fb0a8fba0bb0cbb-rootfs.mount: Deactivated successfully. Mar 13 00:42:53.225497 kubelet[2785]: I0313 00:42:53.225402 2785 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Mar 13 00:42:53.373457 systemd[1]: Created slice kubepods-besteffort-pod9b5fd946_17a9_4a05_b878_6849fbb45881.slice - libcontainer container kubepods-besteffort-pod9b5fd946_17a9_4a05_b878_6849fbb45881.slice. Mar 13 00:42:53.385266 systemd[1]: Created slice kubepods-besteffort-pod86afacf3_dd5f_4699_9570_d8f6390eafa0.slice - libcontainer container kubepods-besteffort-pod86afacf3_dd5f_4699_9570_d8f6390eafa0.slice. 
Mar 13 00:42:53.403945 kubelet[2785]: I0313 00:42:53.403808 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvd5b\" (UniqueName: \"kubernetes.io/projected/9b5fd946-17a9-4a05-b878-6849fbb45881-kube-api-access-hvd5b\") pod \"whisker-6fb4cd9fcf-7pwg9\" (UID: \"9b5fd946-17a9-4a05-b878-6849fbb45881\") " pod="calico-system/whisker-6fb4cd9fcf-7pwg9" Mar 13 00:42:53.403945 kubelet[2785]: I0313 00:42:53.403844 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9b5fd946-17a9-4a05-b878-6849fbb45881-whisker-backend-key-pair\") pod \"whisker-6fb4cd9fcf-7pwg9\" (UID: \"9b5fd946-17a9-4a05-b878-6849fbb45881\") " pod="calico-system/whisker-6fb4cd9fcf-7pwg9" Mar 13 00:42:53.403945 kubelet[2785]: I0313 00:42:53.403864 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/9b5fd946-17a9-4a05-b878-6849fbb45881-nginx-config\") pod \"whisker-6fb4cd9fcf-7pwg9\" (UID: \"9b5fd946-17a9-4a05-b878-6849fbb45881\") " pod="calico-system/whisker-6fb4cd9fcf-7pwg9" Mar 13 00:42:53.403945 kubelet[2785]: I0313 00:42:53.403880 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b5fd946-17a9-4a05-b878-6849fbb45881-whisker-ca-bundle\") pod \"whisker-6fb4cd9fcf-7pwg9\" (UID: \"9b5fd946-17a9-4a05-b878-6849fbb45881\") " pod="calico-system/whisker-6fb4cd9fcf-7pwg9" Mar 13 00:42:53.410619 systemd[1]: Created slice kubepods-burstable-poddfc1f841_0529_484b_b59f_9cc1adfd0779.slice - libcontainer container kubepods-burstable-poddfc1f841_0529_484b_b59f_9cc1adfd0779.slice. 
Mar 13 00:42:53.419612 containerd[1565]: time="2026-03-13T00:42:53.419122045Z" level=info msg="CreateContainer within sandbox \"88a83a63dd4f51406ca367b201817eaa2389a9222f52aed9fae01d069ef6c6ab\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 13 00:42:53.423220 systemd[1]: Created slice kubepods-burstable-poda8a6ad84_f7f5_49f3_9436_19390a6f9006.slice - libcontainer container kubepods-burstable-poda8a6ad84_f7f5_49f3_9436_19390a6f9006.slice. Mar 13 00:42:53.443008 systemd[1]: Created slice kubepods-besteffort-pod63cfed16_ada0_435d_bd56_248f4b5a20e5.slice - libcontainer container kubepods-besteffort-pod63cfed16_ada0_435d_bd56_248f4b5a20e5.slice. Mar 13 00:42:53.453239 containerd[1565]: time="2026-03-13T00:42:53.453084923Z" level=info msg="Container f5167323c0200e2d2cc6e3cb98011c5fde9bc6d66fe40c5b724fcf5e51458728: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:42:53.465816 systemd[1]: Created slice kubepods-besteffort-podcdcb1061_459f_4cd8_8428_64325669e3a9.slice - libcontainer container kubepods-besteffort-podcdcb1061_459f_4cd8_8428_64325669e3a9.slice. Mar 13 00:42:53.474634 systemd[1]: Created slice kubepods-besteffort-podd36f9439_4cd4_4996_b6fa_de6c76e8b792.slice - libcontainer container kubepods-besteffort-podd36f9439_4cd4_4996_b6fa_de6c76e8b792.slice. 
Mar 13 00:42:53.480608 containerd[1565]: time="2026-03-13T00:42:53.480410334Z" level=info msg="CreateContainer within sandbox \"88a83a63dd4f51406ca367b201817eaa2389a9222f52aed9fae01d069ef6c6ab\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f5167323c0200e2d2cc6e3cb98011c5fde9bc6d66fe40c5b724fcf5e51458728\"" Mar 13 00:42:53.485158 containerd[1565]: time="2026-03-13T00:42:53.485066087Z" level=info msg="StartContainer for \"f5167323c0200e2d2cc6e3cb98011c5fde9bc6d66fe40c5b724fcf5e51458728\"" Mar 13 00:42:53.488566 containerd[1565]: time="2026-03-13T00:42:53.488416447Z" level=info msg="connecting to shim f5167323c0200e2d2cc6e3cb98011c5fde9bc6d66fe40c5b724fcf5e51458728" address="unix:///run/containerd/s/39eb1442a29d370cbc8954c40d56c8f4beb829f774382fccc7687a4319121d55" protocol=ttrpc version=3 Mar 13 00:42:53.505006 kubelet[2785]: I0313 00:42:53.504924 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhdc8\" (UniqueName: \"kubernetes.io/projected/a8a6ad84-f7f5-49f3-9436-19390a6f9006-kube-api-access-vhdc8\") pod \"coredns-66bc5c9577-24t98\" (UID: \"a8a6ad84-f7f5-49f3-9436-19390a6f9006\") " pod="kube-system/coredns-66bc5c9577-24t98" Mar 13 00:42:53.505983 kubelet[2785]: I0313 00:42:53.505887 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svbhq\" (UniqueName: \"kubernetes.io/projected/86afacf3-dd5f-4699-9570-d8f6390eafa0-kube-api-access-svbhq\") pod \"calico-kube-controllers-769b4596d5-fc5l8\" (UID: \"86afacf3-dd5f-4699-9570-d8f6390eafa0\") " pod="calico-system/calico-kube-controllers-769b4596d5-fc5l8" Mar 13 00:42:53.505983 kubelet[2785]: I0313 00:42:53.505947 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dfc1f841-0529-484b-b59f-9cc1adfd0779-config-volume\") pod \"coredns-66bc5c9577-4drb9\" (UID: 
\"dfc1f841-0529-484b-b59f-9cc1adfd0779\") " pod="kube-system/coredns-66bc5c9577-4drb9" Mar 13 00:42:53.506103 kubelet[2785]: I0313 00:42:53.505993 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/63cfed16-ada0-435d-bd56-248f4b5a20e5-calico-apiserver-certs\") pod \"calico-apiserver-67947fdc4c-zg5gm\" (UID: \"63cfed16-ada0-435d-bd56-248f4b5a20e5\") " pod="calico-system/calico-apiserver-67947fdc4c-zg5gm" Mar 13 00:42:53.506103 kubelet[2785]: I0313 00:42:53.506044 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d36f9439-4cd4-4996-b6fa-de6c76e8b792-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-h5w9q\" (UID: \"d36f9439-4cd4-4996-b6fa-de6c76e8b792\") " pod="calico-system/goldmane-cccfbd5cf-h5w9q" Mar 13 00:42:53.506103 kubelet[2785]: I0313 00:42:53.506086 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4g5h\" (UniqueName: \"kubernetes.io/projected/63cfed16-ada0-435d-bd56-248f4b5a20e5-kube-api-access-c4g5h\") pod \"calico-apiserver-67947fdc4c-zg5gm\" (UID: \"63cfed16-ada0-435d-bd56-248f4b5a20e5\") " pod="calico-system/calico-apiserver-67947fdc4c-zg5gm" Mar 13 00:42:53.506236 kubelet[2785]: I0313 00:42:53.506107 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d36f9439-4cd4-4996-b6fa-de6c76e8b792-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-h5w9q\" (UID: \"d36f9439-4cd4-4996-b6fa-de6c76e8b792\") " pod="calico-system/goldmane-cccfbd5cf-h5w9q" Mar 13 00:42:53.506236 kubelet[2785]: I0313 00:42:53.506169 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/cdcb1061-459f-4cd8-8428-64325669e3a9-calico-apiserver-certs\") pod \"calico-apiserver-67947fdc4c-9bd4j\" (UID: \"cdcb1061-459f-4cd8-8428-64325669e3a9\") " pod="calico-system/calico-apiserver-67947fdc4c-9bd4j" Mar 13 00:42:53.506236 kubelet[2785]: I0313 00:42:53.506192 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8a6ad84-f7f5-49f3-9436-19390a6f9006-config-volume\") pod \"coredns-66bc5c9577-24t98\" (UID: \"a8a6ad84-f7f5-49f3-9436-19390a6f9006\") " pod="kube-system/coredns-66bc5c9577-24t98" Mar 13 00:42:53.506236 kubelet[2785]: I0313 00:42:53.506217 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzvwx\" (UniqueName: \"kubernetes.io/projected/cdcb1061-459f-4cd8-8428-64325669e3a9-kube-api-access-tzvwx\") pod \"calico-apiserver-67947fdc4c-9bd4j\" (UID: \"cdcb1061-459f-4cd8-8428-64325669e3a9\") " pod="calico-system/calico-apiserver-67947fdc4c-9bd4j" Mar 13 00:42:53.506236 kubelet[2785]: I0313 00:42:53.506237 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d36f9439-4cd4-4996-b6fa-de6c76e8b792-config\") pod \"goldmane-cccfbd5cf-h5w9q\" (UID: \"d36f9439-4cd4-4996-b6fa-de6c76e8b792\") " pod="calico-system/goldmane-cccfbd5cf-h5w9q" Mar 13 00:42:53.506481 kubelet[2785]: I0313 00:42:53.506273 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86afacf3-dd5f-4699-9570-d8f6390eafa0-tigera-ca-bundle\") pod \"calico-kube-controllers-769b4596d5-fc5l8\" (UID: \"86afacf3-dd5f-4699-9570-d8f6390eafa0\") " pod="calico-system/calico-kube-controllers-769b4596d5-fc5l8" Mar 13 00:42:53.506481 kubelet[2785]: I0313 00:42:53.506296 2785 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlgbn\" (UniqueName: \"kubernetes.io/projected/dfc1f841-0529-484b-b59f-9cc1adfd0779-kube-api-access-tlgbn\") pod \"coredns-66bc5c9577-4drb9\" (UID: \"dfc1f841-0529-484b-b59f-9cc1adfd0779\") " pod="kube-system/coredns-66bc5c9577-4drb9" Mar 13 00:42:53.506481 kubelet[2785]: I0313 00:42:53.506321 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xr7v\" (UniqueName: \"kubernetes.io/projected/d36f9439-4cd4-4996-b6fa-de6c76e8b792-kube-api-access-9xr7v\") pod \"goldmane-cccfbd5cf-h5w9q\" (UID: \"d36f9439-4cd4-4996-b6fa-de6c76e8b792\") " pod="calico-system/goldmane-cccfbd5cf-h5w9q" Mar 13 00:42:53.537878 systemd[1]: Started cri-containerd-f5167323c0200e2d2cc6e3cb98011c5fde9bc6d66fe40c5b724fcf5e51458728.scope - libcontainer container f5167323c0200e2d2cc6e3cb98011c5fde9bc6d66fe40c5b724fcf5e51458728. Mar 13 00:42:53.666836 containerd[1565]: time="2026-03-13T00:42:53.666660147Z" level=info msg="StartContainer for \"f5167323c0200e2d2cc6e3cb98011c5fde9bc6d66fe40c5b724fcf5e51458728\" returns successfully" Mar 13 00:42:53.695579 containerd[1565]: time="2026-03-13T00:42:53.695239133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6fb4cd9fcf-7pwg9,Uid:9b5fd946-17a9-4a05-b878-6849fbb45881,Namespace:calico-system,Attempt:0,}" Mar 13 00:42:53.707711 containerd[1565]: time="2026-03-13T00:42:53.706763019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-769b4596d5-fc5l8,Uid:86afacf3-dd5f-4699-9570-d8f6390eafa0,Namespace:calico-system,Attempt:0,}" Mar 13 00:42:53.723199 kubelet[2785]: E0313 00:42:53.723121 2785 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:42:53.725807 containerd[1565]: time="2026-03-13T00:42:53.725767835Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4drb9,Uid:dfc1f841-0529-484b-b59f-9cc1adfd0779,Namespace:kube-system,Attempt:0,}" Mar 13 00:42:53.742670 kubelet[2785]: E0313 00:42:53.742453 2785 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:42:53.744428 containerd[1565]: time="2026-03-13T00:42:53.744258356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-24t98,Uid:a8a6ad84-f7f5-49f3-9436-19390a6f9006,Namespace:kube-system,Attempt:0,}" Mar 13 00:42:53.769914 containerd[1565]: time="2026-03-13T00:42:53.769871271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67947fdc4c-zg5gm,Uid:63cfed16-ada0-435d-bd56-248f4b5a20e5,Namespace:calico-system,Attempt:0,}" Mar 13 00:42:53.792786 containerd[1565]: time="2026-03-13T00:42:53.791958049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67947fdc4c-9bd4j,Uid:cdcb1061-459f-4cd8-8428-64325669e3a9,Namespace:calico-system,Attempt:0,}" Mar 13 00:42:53.792786 containerd[1565]: time="2026-03-13T00:42:53.792166635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-h5w9q,Uid:d36f9439-4cd4-4996-b6fa-de6c76e8b792,Namespace:calico-system,Attempt:0,}" Mar 13 00:42:54.075107 containerd[1565]: time="2026-03-13T00:42:54.074982173Z" level=error msg="Failed to destroy network for sandbox \"af03e3e12d129d60ea66c1fa79bf6920b2696868a8e0c18468511708b7bfa949\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:42:54.082272 containerd[1565]: time="2026-03-13T00:42:54.082231758Z" level=error msg="Failed to destroy network for sandbox \"c8be6229b28401b86ecb7a557fece1d473a6ac3cd8497ca395e94be2f80cb06d\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:42:54.088130 containerd[1565]: time="2026-03-13T00:42:54.088032833Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6fb4cd9fcf-7pwg9,Uid:9b5fd946-17a9-4a05-b878-6849fbb45881,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"af03e3e12d129d60ea66c1fa79bf6920b2696868a8e0c18468511708b7bfa949\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:42:54.093033 containerd[1565]: time="2026-03-13T00:42:54.093003161Z" level=error msg="Failed to destroy network for sandbox \"06a9df77f3d9c12bc25e1439c87ea16d01d44eac167cc175016bfedf7b8097db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:42:54.093393 containerd[1565]: time="2026-03-13T00:42:54.093323440Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-24t98,Uid:a8a6ad84-f7f5-49f3-9436-19390a6f9006,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8be6229b28401b86ecb7a557fece1d473a6ac3cd8497ca395e94be2f80cb06d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:42:54.095636 containerd[1565]: time="2026-03-13T00:42:54.095607939Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-769b4596d5-fc5l8,Uid:86afacf3-dd5f-4699-9570-d8f6390eafa0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"06a9df77f3d9c12bc25e1439c87ea16d01d44eac167cc175016bfedf7b8097db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:42:54.099952 containerd[1565]: time="2026-03-13T00:42:54.099925057Z" level=error msg="Failed to destroy network for sandbox \"8c78f1def98ecb03336de6ed8df00da71bf00ef3b06884baa1555409e5840c3a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:42:54.103180 containerd[1565]: time="2026-03-13T00:42:54.103106010Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4drb9,Uid:dfc1f841-0529-484b-b59f-9cc1adfd0779,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c78f1def98ecb03336de6ed8df00da71bf00ef3b06884baa1555409e5840c3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:42:54.111051 containerd[1565]: time="2026-03-13T00:42:54.110755384Z" level=error msg="Failed to destroy network for sandbox \"8ecd1e0b32913fcd74072075a98c95ab9b8e6343cb6c7cbd72016fb7b3aca8d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:42:54.116245 containerd[1565]: time="2026-03-13T00:42:54.116024932Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-h5w9q,Uid:d36f9439-4cd4-4996-b6fa-de6c76e8b792,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8ecd1e0b32913fcd74072075a98c95ab9b8e6343cb6c7cbd72016fb7b3aca8d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:42:54.116890 kubelet[2785]: E0313 00:42:54.116793 2785 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ecd1e0b32913fcd74072075a98c95ab9b8e6343cb6c7cbd72016fb7b3aca8d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:42:54.117381 kubelet[2785]: E0313 00:42:54.117033 2785 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c78f1def98ecb03336de6ed8df00da71bf00ef3b06884baa1555409e5840c3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:42:54.117640 kubelet[2785]: E0313 00:42:54.117617 2785 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c78f1def98ecb03336de6ed8df00da71bf00ef3b06884baa1555409e5840c3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-4drb9" Mar 13 00:42:54.117937 kubelet[2785]: E0313 00:42:54.117729 2785 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c78f1def98ecb03336de6ed8df00da71bf00ef3b06884baa1555409e5840c3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-4drb9" Mar 13 00:42:54.118034 kubelet[2785]: E0313 00:42:54.117065 2785 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06a9df77f3d9c12bc25e1439c87ea16d01d44eac167cc175016bfedf7b8097db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:42:54.119090 kubelet[2785]: E0313 00:42:54.118690 2785 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06a9df77f3d9c12bc25e1439c87ea16d01d44eac167cc175016bfedf7b8097db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-769b4596d5-fc5l8" Mar 13 00:42:54.119090 kubelet[2785]: E0313 00:42:54.118726 2785 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06a9df77f3d9c12bc25e1439c87ea16d01d44eac167cc175016bfedf7b8097db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-769b4596d5-fc5l8" Mar 13 00:42:54.119090 kubelet[2785]: E0313 00:42:54.118788 2785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-769b4596d5-fc5l8_calico-system(86afacf3-dd5f-4699-9570-d8f6390eafa0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-769b4596d5-fc5l8_calico-system(86afacf3-dd5f-4699-9570-d8f6390eafa0)\\\": rpc 
error: code = Unknown desc = failed to setup network for sandbox \\\"06a9df77f3d9c12bc25e1439c87ea16d01d44eac167cc175016bfedf7b8097db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-769b4596d5-fc5l8" podUID="86afacf3-dd5f-4699-9570-d8f6390eafa0" Mar 13 00:42:54.119401 kubelet[2785]: E0313 00:42:54.118635 2785 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ecd1e0b32913fcd74072075a98c95ab9b8e6343cb6c7cbd72016fb7b3aca8d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-h5w9q" Mar 13 00:42:54.119401 kubelet[2785]: E0313 00:42:54.118829 2785 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ecd1e0b32913fcd74072075a98c95ab9b8e6343cb6c7cbd72016fb7b3aca8d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-h5w9q" Mar 13 00:42:54.119401 kubelet[2785]: E0313 00:42:54.118865 2785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-h5w9q_calico-system(d36f9439-4cd4-4996-b6fa-de6c76e8b792)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-h5w9q_calico-system(d36f9439-4cd4-4996-b6fa-de6c76e8b792)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ecd1e0b32913fcd74072075a98c95ab9b8e6343cb6c7cbd72016fb7b3aca8d8\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-h5w9q" podUID="d36f9439-4cd4-4996-b6fa-de6c76e8b792" Mar 13 00:42:54.119670 kubelet[2785]: E0313 00:42:54.117050 2785 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8be6229b28401b86ecb7a557fece1d473a6ac3cd8497ca395e94be2f80cb06d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:42:54.119670 kubelet[2785]: E0313 00:42:54.118920 2785 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8be6229b28401b86ecb7a557fece1d473a6ac3cd8497ca395e94be2f80cb06d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-24t98" Mar 13 00:42:54.119670 kubelet[2785]: E0313 00:42:54.118937 2785 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8be6229b28401b86ecb7a557fece1d473a6ac3cd8497ca395e94be2f80cb06d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-24t98" Mar 13 00:42:54.119926 kubelet[2785]: E0313 00:42:54.118971 2785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-24t98_kube-system(a8a6ad84-f7f5-49f3-9436-19390a6f9006)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-66bc5c9577-24t98_kube-system(a8a6ad84-f7f5-49f3-9436-19390a6f9006)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c8be6229b28401b86ecb7a557fece1d473a6ac3cd8497ca395e94be2f80cb06d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-24t98" podUID="a8a6ad84-f7f5-49f3-9436-19390a6f9006" Mar 13 00:42:54.119926 kubelet[2785]: E0313 00:42:54.117206 2785 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af03e3e12d129d60ea66c1fa79bf6920b2696868a8e0c18468511708b7bfa949\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:42:54.119926 kubelet[2785]: E0313 00:42:54.118999 2785 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af03e3e12d129d60ea66c1fa79bf6920b2696868a8e0c18468511708b7bfa949\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6fb4cd9fcf-7pwg9" Mar 13 00:42:54.120103 kubelet[2785]: E0313 00:42:54.119013 2785 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af03e3e12d129d60ea66c1fa79bf6920b2696868a8e0c18468511708b7bfa949\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6fb4cd9fcf-7pwg9" Mar 13 00:42:54.120103 kubelet[2785]: E0313 00:42:54.119043 2785 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6fb4cd9fcf-7pwg9_calico-system(9b5fd946-17a9-4a05-b878-6849fbb45881)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6fb4cd9fcf-7pwg9_calico-system(9b5fd946-17a9-4a05-b878-6849fbb45881)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"af03e3e12d129d60ea66c1fa79bf6920b2696868a8e0c18468511708b7bfa949\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6fb4cd9fcf-7pwg9" podUID="9b5fd946-17a9-4a05-b878-6849fbb45881" Mar 13 00:42:54.120103 kubelet[2785]: E0313 00:42:54.118151 2785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-4drb9_kube-system(dfc1f841-0529-484b-b59f-9cc1adfd0779)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-4drb9_kube-system(dfc1f841-0529-484b-b59f-9cc1adfd0779)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c78f1def98ecb03336de6ed8df00da71bf00ef3b06884baa1555409e5840c3a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-4drb9" podUID="dfc1f841-0529-484b-b59f-9cc1adfd0779" Mar 13 00:42:54.171681 containerd[1565]: time="2026-03-13T00:42:54.171492490Z" level=error msg="Failed to destroy network for sandbox \"26f30f94675294ff469787f764a7a908e1605c203f670022c660a8217c007951\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:42:54.197709 containerd[1565]: 
time="2026-03-13T00:42:54.196703414Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67947fdc4c-zg5gm,Uid:63cfed16-ada0-435d-bd56-248f4b5a20e5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"26f30f94675294ff469787f764a7a908e1605c203f670022c660a8217c007951\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:42:54.199276 kubelet[2785]: E0313 00:42:54.198097 2785 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26f30f94675294ff469787f764a7a908e1605c203f670022c660a8217c007951\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:42:54.199276 kubelet[2785]: E0313 00:42:54.198414 2785 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26f30f94675294ff469787f764a7a908e1605c203f670022c660a8217c007951\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-67947fdc4c-zg5gm" Mar 13 00:42:54.199276 kubelet[2785]: E0313 00:42:54.198444 2785 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26f30f94675294ff469787f764a7a908e1605c203f670022c660a8217c007951\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-67947fdc4c-zg5gm" Mar 13 
00:42:54.200876 kubelet[2785]: E0313 00:42:54.198504 2785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67947fdc4c-zg5gm_calico-system(63cfed16-ada0-435d-bd56-248f4b5a20e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67947fdc4c-zg5gm_calico-system(63cfed16-ada0-435d-bd56-248f4b5a20e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"26f30f94675294ff469787f764a7a908e1605c203f670022c660a8217c007951\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-67947fdc4c-zg5gm" podUID="63cfed16-ada0-435d-bd56-248f4b5a20e5" Mar 13 00:42:54.235150 kubelet[2785]: I0313 00:42:54.234050 2785 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 00:42:54.240483 kubelet[2785]: E0313 00:42:54.239163 2785 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:42:54.372269 kubelet[2785]: E0313 00:42:54.370885 2785 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:42:54.385293 containerd[1565]: 2026-03-13 00:42:54.304 [INFO][3890] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8e6b4e0001cec887451bd0f6bd6e5d2ef9778c09838396ef91f527b4c8f830dc" Mar 13 00:42:54.385293 containerd[1565]: 2026-03-13 00:42:54.304 [INFO][3890] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="8e6b4e0001cec887451bd0f6bd6e5d2ef9778c09838396ef91f527b4c8f830dc" iface="eth0" netns="/var/run/netns/cni-02f32dfb-010b-938e-284e-aeef42788e16" Mar 13 00:42:54.385293 containerd[1565]: 2026-03-13 00:42:54.306 [INFO][3890] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8e6b4e0001cec887451bd0f6bd6e5d2ef9778c09838396ef91f527b4c8f830dc" iface="eth0" netns="/var/run/netns/cni-02f32dfb-010b-938e-284e-aeef42788e16" Mar 13 00:42:54.385293 containerd[1565]: 2026-03-13 00:42:54.306 [INFO][3890] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8e6b4e0001cec887451bd0f6bd6e5d2ef9778c09838396ef91f527b4c8f830dc" iface="eth0" netns="/var/run/netns/cni-02f32dfb-010b-938e-284e-aeef42788e16" Mar 13 00:42:54.385293 containerd[1565]: 2026-03-13 00:42:54.307 [INFO][3890] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8e6b4e0001cec887451bd0f6bd6e5d2ef9778c09838396ef91f527b4c8f830dc" Mar 13 00:42:54.385293 containerd[1565]: 2026-03-13 00:42:54.307 [INFO][3890] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8e6b4e0001cec887451bd0f6bd6e5d2ef9778c09838396ef91f527b4c8f830dc" Mar 13 00:42:54.385293 containerd[1565]: 2026-03-13 00:42:54.352 [INFO][3911] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8e6b4e0001cec887451bd0f6bd6e5d2ef9778c09838396ef91f527b4c8f830dc" HandleID="k8s-pod-network.8e6b4e0001cec887451bd0f6bd6e5d2ef9778c09838396ef91f527b4c8f830dc" Workload="localhost-k8s-calico--apiserver--67947fdc4c--9bd4j-eth0" Mar 13 00:42:54.385293 containerd[1565]: 2026-03-13 00:42:54.353 [INFO][3911] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:42:54.385293 containerd[1565]: 2026-03-13 00:42:54.353 [INFO][3911] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 00:42:54.385684 containerd[1565]: 2026-03-13 00:42:54.362 [WARNING][3911] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8e6b4e0001cec887451bd0f6bd6e5d2ef9778c09838396ef91f527b4c8f830dc" HandleID="k8s-pod-network.8e6b4e0001cec887451bd0f6bd6e5d2ef9778c09838396ef91f527b4c8f830dc" Workload="localhost-k8s-calico--apiserver--67947fdc4c--9bd4j-eth0" Mar 13 00:42:54.385684 containerd[1565]: 2026-03-13 00:42:54.362 [INFO][3911] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8e6b4e0001cec887451bd0f6bd6e5d2ef9778c09838396ef91f527b4c8f830dc" HandleID="k8s-pod-network.8e6b4e0001cec887451bd0f6bd6e5d2ef9778c09838396ef91f527b4c8f830dc" Workload="localhost-k8s-calico--apiserver--67947fdc4c--9bd4j-eth0" Mar 13 00:42:54.385684 containerd[1565]: 2026-03-13 00:42:54.365 [INFO][3911] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:42:54.385684 containerd[1565]: 2026-03-13 00:42:54.374 [INFO][3890] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8e6b4e0001cec887451bd0f6bd6e5d2ef9778c09838396ef91f527b4c8f830dc" Mar 13 00:42:54.389504 systemd[1]: run-netns-cni\x2d02f32dfb\x2d010b\x2d938e\x2d284e\x2daeef42788e16.mount: Deactivated successfully. 
Mar 13 00:42:54.393118 containerd[1565]: time="2026-03-13T00:42:54.392893274Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67947fdc4c-9bd4j,Uid:cdcb1061-459f-4cd8-8428-64325669e3a9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e6b4e0001cec887451bd0f6bd6e5d2ef9778c09838396ef91f527b4c8f830dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:42:54.393845 kubelet[2785]: E0313 00:42:54.393789 2785 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e6b4e0001cec887451bd0f6bd6e5d2ef9778c09838396ef91f527b4c8f830dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:42:54.393963 kubelet[2785]: E0313 00:42:54.393872 2785 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e6b4e0001cec887451bd0f6bd6e5d2ef9778c09838396ef91f527b4c8f830dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-67947fdc4c-9bd4j" Mar 13 00:42:54.393963 kubelet[2785]: E0313 00:42:54.393897 2785 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e6b4e0001cec887451bd0f6bd6e5d2ef9778c09838396ef91f527b4c8f830dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-apiserver-67947fdc4c-9bd4j" Mar 13 00:42:54.394029 kubelet[2785]: E0313 00:42:54.393962 2785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67947fdc4c-9bd4j_calico-system(cdcb1061-459f-4cd8-8428-64325669e3a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67947fdc4c-9bd4j_calico-system(cdcb1061-459f-4cd8-8428-64325669e3a9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e6b4e0001cec887451bd0f6bd6e5d2ef9778c09838396ef91f527b4c8f830dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-67947fdc4c-9bd4j" podUID="cdcb1061-459f-4cd8-8428-64325669e3a9" Mar 13 00:42:54.481678 kubelet[2785]: I0313 00:42:54.481090 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-lpm9v" podStartSLOduration=4.584927445 podStartE2EDuration="18.481065317s" podCreationTimestamp="2026-03-13 00:42:36 +0000 UTC" firstStartedPulling="2026-03-13 00:42:37.322516415 +0000 UTC m=+28.782791406" lastFinishedPulling="2026-03-13 00:42:51.218654287 +0000 UTC m=+42.678929278" observedRunningTime="2026-03-13 00:42:54.455180141 +0000 UTC m=+45.915455132" watchObservedRunningTime="2026-03-13 00:42:54.481065317 +0000 UTC m=+45.941340308" Mar 13 00:42:54.519761 kubelet[2785]: I0313 00:42:54.519627 2785 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/9b5fd946-17a9-4a05-b878-6849fbb45881-nginx-config\") pod \"9b5fd946-17a9-4a05-b878-6849fbb45881\" (UID: \"9b5fd946-17a9-4a05-b878-6849fbb45881\") " Mar 13 00:42:54.519761 kubelet[2785]: I0313 00:42:54.519697 2785 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume 
\"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9b5fd946-17a9-4a05-b878-6849fbb45881-whisker-backend-key-pair\") pod \"9b5fd946-17a9-4a05-b878-6849fbb45881\" (UID: \"9b5fd946-17a9-4a05-b878-6849fbb45881\") " Mar 13 00:42:54.519761 kubelet[2785]: I0313 00:42:54.519737 2785 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b5fd946-17a9-4a05-b878-6849fbb45881-whisker-ca-bundle\") pod \"9b5fd946-17a9-4a05-b878-6849fbb45881\" (UID: \"9b5fd946-17a9-4a05-b878-6849fbb45881\") " Mar 13 00:42:54.519761 kubelet[2785]: I0313 00:42:54.519773 2785 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvd5b\" (UniqueName: \"kubernetes.io/projected/9b5fd946-17a9-4a05-b878-6849fbb45881-kube-api-access-hvd5b\") pod \"9b5fd946-17a9-4a05-b878-6849fbb45881\" (UID: \"9b5fd946-17a9-4a05-b878-6849fbb45881\") " Mar 13 00:42:54.521869 kubelet[2785]: I0313 00:42:54.521783 2785 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b5fd946-17a9-4a05-b878-6849fbb45881-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "9b5fd946-17a9-4a05-b878-6849fbb45881" (UID: "9b5fd946-17a9-4a05-b878-6849fbb45881"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 13 00:42:54.522446 kubelet[2785]: I0313 00:42:54.522256 2785 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b5fd946-17a9-4a05-b878-6849fbb45881-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "9b5fd946-17a9-4a05-b878-6849fbb45881" (UID: "9b5fd946-17a9-4a05-b878-6849fbb45881"). InnerVolumeSpecName "nginx-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 13 00:42:54.529432 systemd[1]: var-lib-kubelet-pods-9b5fd946\x2d17a9\x2d4a05\x2db878\x2d6849fbb45881-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 13 00:42:54.532248 kubelet[2785]: I0313 00:42:54.531819 2785 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b5fd946-17a9-4a05-b878-6849fbb45881-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "9b5fd946-17a9-4a05-b878-6849fbb45881" (UID: "9b5fd946-17a9-4a05-b878-6849fbb45881"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 13 00:42:54.538187 systemd[1]: var-lib-kubelet-pods-9b5fd946\x2d17a9\x2d4a05\x2db878\x2d6849fbb45881-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhvd5b.mount: Deactivated successfully. Mar 13 00:42:54.544089 kubelet[2785]: I0313 00:42:54.543971 2785 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b5fd946-17a9-4a05-b878-6849fbb45881-kube-api-access-hvd5b" (OuterVolumeSpecName: "kube-api-access-hvd5b") pod "9b5fd946-17a9-4a05-b878-6849fbb45881" (UID: "9b5fd946-17a9-4a05-b878-6849fbb45881"). InnerVolumeSpecName "kube-api-access-hvd5b". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 13 00:42:54.620937 kubelet[2785]: I0313 00:42:54.620880 2785 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hvd5b\" (UniqueName: \"kubernetes.io/projected/9b5fd946-17a9-4a05-b878-6849fbb45881-kube-api-access-hvd5b\") on node \"localhost\" DevicePath \"\"" Mar 13 00:42:54.620937 kubelet[2785]: I0313 00:42:54.620926 2785 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/9b5fd946-17a9-4a05-b878-6849fbb45881-nginx-config\") on node \"localhost\" DevicePath \"\"" Mar 13 00:42:54.620937 kubelet[2785]: I0313 00:42:54.620936 2785 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9b5fd946-17a9-4a05-b878-6849fbb45881-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Mar 13 00:42:54.620937 kubelet[2785]: I0313 00:42:54.620943 2785 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b5fd946-17a9-4a05-b878-6849fbb45881-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 13 00:42:54.686618 systemd[1]: Created slice kubepods-besteffort-pod53e73110_bf2b_4a57_8079_bc3d1303e5a7.slice - libcontainer container kubepods-besteffort-pod53e73110_bf2b_4a57_8079_bc3d1303e5a7.slice. Mar 13 00:42:54.689100 systemd[1]: Removed slice kubepods-besteffort-pod9b5fd946_17a9_4a05_b878_6849fbb45881.slice - libcontainer container kubepods-besteffort-pod9b5fd946_17a9_4a05_b878_6849fbb45881.slice. 
Mar 13 00:42:54.703289 containerd[1565]: time="2026-03-13T00:42:54.703112765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c48ht,Uid:53e73110-bf2b-4a57-8079-bc3d1303e5a7,Namespace:calico-system,Attempt:0,}" Mar 13 00:42:54.890587 systemd-networkd[1462]: calib7d553daa3f: Link UP Mar 13 00:42:54.892753 systemd-networkd[1462]: calib7d553daa3f: Gained carrier Mar 13 00:42:54.916834 containerd[1565]: 2026-03-13 00:42:54.746 [ERROR][3959] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 13 00:42:54.916834 containerd[1565]: 2026-03-13 00:42:54.763 [INFO][3959] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--c48ht-eth0 csi-node-driver- calico-system 53e73110-bf2b-4a57-8079-bc3d1303e5a7 725 0 2026-03-13 00:42:36 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-c48ht eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib7d553daa3f [] [] }} ContainerID="9282642a011c46fcbf510c995e5b7afacaf829d274db47620ee6ba80ca56d22f" Namespace="calico-system" Pod="csi-node-driver-c48ht" WorkloadEndpoint="localhost-k8s-csi--node--driver--c48ht-" Mar 13 00:42:54.916834 containerd[1565]: 2026-03-13 00:42:54.763 [INFO][3959] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9282642a011c46fcbf510c995e5b7afacaf829d274db47620ee6ba80ca56d22f" Namespace="calico-system" Pod="csi-node-driver-c48ht" WorkloadEndpoint="localhost-k8s-csi--node--driver--c48ht-eth0" Mar 13 00:42:54.916834 containerd[1565]: 2026-03-13 
00:42:54.809 [INFO][3972] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9282642a011c46fcbf510c995e5b7afacaf829d274db47620ee6ba80ca56d22f" HandleID="k8s-pod-network.9282642a011c46fcbf510c995e5b7afacaf829d274db47620ee6ba80ca56d22f" Workload="localhost-k8s-csi--node--driver--c48ht-eth0" Mar 13 00:42:54.917221 containerd[1565]: 2026-03-13 00:42:54.817 [INFO][3972] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9282642a011c46fcbf510c995e5b7afacaf829d274db47620ee6ba80ca56d22f" HandleID="k8s-pod-network.9282642a011c46fcbf510c995e5b7afacaf829d274db47620ee6ba80ca56d22f" Workload="localhost-k8s-csi--node--driver--c48ht-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ef5b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-c48ht", "timestamp":"2026-03-13 00:42:54.808998761 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000192840)} Mar 13 00:42:54.917221 containerd[1565]: 2026-03-13 00:42:54.817 [INFO][3972] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:42:54.917221 containerd[1565]: 2026-03-13 00:42:54.817 [INFO][3972] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 00:42:54.917221 containerd[1565]: 2026-03-13 00:42:54.817 [INFO][3972] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 13 00:42:54.917221 containerd[1565]: 2026-03-13 00:42:54.822 [INFO][3972] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9282642a011c46fcbf510c995e5b7afacaf829d274db47620ee6ba80ca56d22f" host="localhost" Mar 13 00:42:54.917221 containerd[1565]: 2026-03-13 00:42:54.831 [INFO][3972] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 13 00:42:54.917221 containerd[1565]: 2026-03-13 00:42:54.840 [INFO][3972] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 13 00:42:54.917221 containerd[1565]: 2026-03-13 00:42:54.844 [INFO][3972] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 13 00:42:54.917221 containerd[1565]: 2026-03-13 00:42:54.848 [INFO][3972] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 13 00:42:54.917221 containerd[1565]: 2026-03-13 00:42:54.848 [INFO][3972] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9282642a011c46fcbf510c995e5b7afacaf829d274db47620ee6ba80ca56d22f" host="localhost" Mar 13 00:42:54.917698 containerd[1565]: 2026-03-13 00:42:54.851 [INFO][3972] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9282642a011c46fcbf510c995e5b7afacaf829d274db47620ee6ba80ca56d22f Mar 13 00:42:54.917698 containerd[1565]: 2026-03-13 00:42:54.856 [INFO][3972] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9282642a011c46fcbf510c995e5b7afacaf829d274db47620ee6ba80ca56d22f" host="localhost" Mar 13 00:42:54.917698 containerd[1565]: 2026-03-13 00:42:54.863 [INFO][3972] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.9282642a011c46fcbf510c995e5b7afacaf829d274db47620ee6ba80ca56d22f" host="localhost" Mar 13 00:42:54.917698 containerd[1565]: 2026-03-13 00:42:54.864 [INFO][3972] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.9282642a011c46fcbf510c995e5b7afacaf829d274db47620ee6ba80ca56d22f" host="localhost" Mar 13 00:42:54.917698 containerd[1565]: 2026-03-13 00:42:54.864 [INFO][3972] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:42:54.917698 containerd[1565]: 2026-03-13 00:42:54.864 [INFO][3972] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="9282642a011c46fcbf510c995e5b7afacaf829d274db47620ee6ba80ca56d22f" HandleID="k8s-pod-network.9282642a011c46fcbf510c995e5b7afacaf829d274db47620ee6ba80ca56d22f" Workload="localhost-k8s-csi--node--driver--c48ht-eth0" Mar 13 00:42:54.917806 containerd[1565]: 2026-03-13 00:42:54.867 [INFO][3959] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9282642a011c46fcbf510c995e5b7afacaf829d274db47620ee6ba80ca56d22f" Namespace="calico-system" Pod="csi-node-driver-c48ht" WorkloadEndpoint="localhost-k8s-csi--node--driver--c48ht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--c48ht-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"53e73110-bf2b-4a57-8079-bc3d1303e5a7", ResourceVersion:"725", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 42, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-c48ht", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib7d553daa3f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:42:54.917899 containerd[1565]: 2026-03-13 00:42:54.868 [INFO][3959] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="9282642a011c46fcbf510c995e5b7afacaf829d274db47620ee6ba80ca56d22f" Namespace="calico-system" Pod="csi-node-driver-c48ht" WorkloadEndpoint="localhost-k8s-csi--node--driver--c48ht-eth0" Mar 13 00:42:54.917899 containerd[1565]: 2026-03-13 00:42:54.868 [INFO][3959] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib7d553daa3f ContainerID="9282642a011c46fcbf510c995e5b7afacaf829d274db47620ee6ba80ca56d22f" Namespace="calico-system" Pod="csi-node-driver-c48ht" WorkloadEndpoint="localhost-k8s-csi--node--driver--c48ht-eth0" Mar 13 00:42:54.917899 containerd[1565]: 2026-03-13 00:42:54.893 [INFO][3959] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9282642a011c46fcbf510c995e5b7afacaf829d274db47620ee6ba80ca56d22f" Namespace="calico-system" Pod="csi-node-driver-c48ht" WorkloadEndpoint="localhost-k8s-csi--node--driver--c48ht-eth0" Mar 13 00:42:54.917962 containerd[1565]: 2026-03-13 00:42:54.894 [INFO][3959] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9282642a011c46fcbf510c995e5b7afacaf829d274db47620ee6ba80ca56d22f" 
Namespace="calico-system" Pod="csi-node-driver-c48ht" WorkloadEndpoint="localhost-k8s-csi--node--driver--c48ht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--c48ht-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"53e73110-bf2b-4a57-8079-bc3d1303e5a7", ResourceVersion:"725", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 42, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9282642a011c46fcbf510c995e5b7afacaf829d274db47620ee6ba80ca56d22f", Pod:"csi-node-driver-c48ht", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib7d553daa3f", MAC:"02:67:4c:bc:74:fa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:42:54.918045 containerd[1565]: 2026-03-13 00:42:54.911 [INFO][3959] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9282642a011c46fcbf510c995e5b7afacaf829d274db47620ee6ba80ca56d22f" Namespace="calico-system" Pod="csi-node-driver-c48ht" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--c48ht-eth0" Mar 13 00:42:54.946642 containerd[1565]: time="2026-03-13T00:42:54.946411050Z" level=info msg="connecting to shim 9282642a011c46fcbf510c995e5b7afacaf829d274db47620ee6ba80ca56d22f" address="unix:///run/containerd/s/e084bc1b8b6bd181e9da1c62b20b5d227b0c34e03868b03798fe7d37e6cfc033" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:42:54.976786 systemd[1]: Started cri-containerd-9282642a011c46fcbf510c995e5b7afacaf829d274db47620ee6ba80ca56d22f.scope - libcontainer container 9282642a011c46fcbf510c995e5b7afacaf829d274db47620ee6ba80ca56d22f. Mar 13 00:42:54.991912 systemd-resolved[1389]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 13 00:42:55.015500 containerd[1565]: time="2026-03-13T00:42:55.015392649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c48ht,Uid:53e73110-bf2b-4a57-8079-bc3d1303e5a7,Namespace:calico-system,Attempt:0,} returns sandbox id \"9282642a011c46fcbf510c995e5b7afacaf829d274db47620ee6ba80ca56d22f\"" Mar 13 00:42:55.018081 containerd[1565]: time="2026-03-13T00:42:55.017988830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 13 00:42:55.387208 containerd[1565]: time="2026-03-13T00:42:55.386100388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67947fdc4c-9bd4j,Uid:cdcb1061-459f-4cd8-8428-64325669e3a9,Namespace:calico-system,Attempt:0,}" Mar 13 00:42:55.499072 systemd[1]: Created slice kubepods-besteffort-pod7999b7a3_82a0_4ea1_a9ae_ca3330869bd7.slice - libcontainer container kubepods-besteffort-pod7999b7a3_82a0_4ea1_a9ae_ca3330869bd7.slice. 
Mar 13 00:42:55.632469 kubelet[2785]: I0313 00:42:55.631124 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/7999b7a3-82a0-4ea1-a9ae-ca3330869bd7-nginx-config\") pod \"whisker-6cc479f7dd-s5vxs\" (UID: \"7999b7a3-82a0-4ea1-a9ae-ca3330869bd7\") " pod="calico-system/whisker-6cc479f7dd-s5vxs" Mar 13 00:42:55.632469 kubelet[2785]: I0313 00:42:55.631251 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29v8p\" (UniqueName: \"kubernetes.io/projected/7999b7a3-82a0-4ea1-a9ae-ca3330869bd7-kube-api-access-29v8p\") pod \"whisker-6cc479f7dd-s5vxs\" (UID: \"7999b7a3-82a0-4ea1-a9ae-ca3330869bd7\") " pod="calico-system/whisker-6cc479f7dd-s5vxs" Mar 13 00:42:55.632469 kubelet[2785]: I0313 00:42:55.631284 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7999b7a3-82a0-4ea1-a9ae-ca3330869bd7-whisker-ca-bundle\") pod \"whisker-6cc479f7dd-s5vxs\" (UID: \"7999b7a3-82a0-4ea1-a9ae-ca3330869bd7\") " pod="calico-system/whisker-6cc479f7dd-s5vxs" Mar 13 00:42:55.632469 kubelet[2785]: I0313 00:42:55.631354 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7999b7a3-82a0-4ea1-a9ae-ca3330869bd7-whisker-backend-key-pair\") pod \"whisker-6cc479f7dd-s5vxs\" (UID: \"7999b7a3-82a0-4ea1-a9ae-ca3330869bd7\") " pod="calico-system/whisker-6cc479f7dd-s5vxs" Mar 13 00:42:55.782149 systemd-networkd[1462]: cali640bc46ab97: Link UP Mar 13 00:42:55.782966 systemd-networkd[1462]: cali640bc46ab97: Gained carrier Mar 13 00:42:55.815104 containerd[1565]: 2026-03-13 00:42:55.512 [ERROR][4041] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no 
such file or directory filename="/var/lib/calico/mtu" Mar 13 00:42:55.815104 containerd[1565]: 2026-03-13 00:42:55.543 [INFO][4041] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--67947fdc4c--9bd4j-eth0 calico-apiserver-67947fdc4c- calico-system cdcb1061-459f-4cd8-8428-64325669e3a9 900 0 2026-03-13 00:42:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67947fdc4c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-67947fdc4c-9bd4j eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali640bc46ab97 [] [] }} ContainerID="fd65e63777338db9336bffc579d99b92509d86c03961ebe0c28d3225df35870a" Namespace="calico-system" Pod="calico-apiserver-67947fdc4c-9bd4j" WorkloadEndpoint="localhost-k8s-calico--apiserver--67947fdc4c--9bd4j-" Mar 13 00:42:55.815104 containerd[1565]: 2026-03-13 00:42:55.544 [INFO][4041] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fd65e63777338db9336bffc579d99b92509d86c03961ebe0c28d3225df35870a" Namespace="calico-system" Pod="calico-apiserver-67947fdc4c-9bd4j" WorkloadEndpoint="localhost-k8s-calico--apiserver--67947fdc4c--9bd4j-eth0" Mar 13 00:42:55.815104 containerd[1565]: 2026-03-13 00:42:55.643 [INFO][4080] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fd65e63777338db9336bffc579d99b92509d86c03961ebe0c28d3225df35870a" HandleID="k8s-pod-network.fd65e63777338db9336bffc579d99b92509d86c03961ebe0c28d3225df35870a" Workload="localhost-k8s-calico--apiserver--67947fdc4c--9bd4j-eth0" Mar 13 00:42:55.815899 containerd[1565]: 2026-03-13 00:42:55.656 [INFO][4080] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="fd65e63777338db9336bffc579d99b92509d86c03961ebe0c28d3225df35870a" 
HandleID="k8s-pod-network.fd65e63777338db9336bffc579d99b92509d86c03961ebe0c28d3225df35870a" Workload="localhost-k8s-calico--apiserver--67947fdc4c--9bd4j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003371f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-67947fdc4c-9bd4j", "timestamp":"2026-03-13 00:42:55.643174611 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000687340)} Mar 13 00:42:55.815899 containerd[1565]: 2026-03-13 00:42:55.656 [INFO][4080] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:42:55.815899 containerd[1565]: 2026-03-13 00:42:55.656 [INFO][4080] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 00:42:55.815899 containerd[1565]: 2026-03-13 00:42:55.656 [INFO][4080] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 13 00:42:55.815899 containerd[1565]: 2026-03-13 00:42:55.661 [INFO][4080] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.fd65e63777338db9336bffc579d99b92509d86c03961ebe0c28d3225df35870a" host="localhost" Mar 13 00:42:55.815899 containerd[1565]: 2026-03-13 00:42:55.674 [INFO][4080] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 13 00:42:55.815899 containerd[1565]: 2026-03-13 00:42:55.683 [INFO][4080] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 13 00:42:55.815899 containerd[1565]: 2026-03-13 00:42:55.686 [INFO][4080] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 13 00:42:55.815899 containerd[1565]: 2026-03-13 00:42:55.690 [INFO][4080] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 13 
00:42:55.815899 containerd[1565]: 2026-03-13 00:42:55.690 [INFO][4080] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fd65e63777338db9336bffc579d99b92509d86c03961ebe0c28d3225df35870a" host="localhost" Mar 13 00:42:55.816520 containerd[1565]: 2026-03-13 00:42:55.694 [INFO][4080] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.fd65e63777338db9336bffc579d99b92509d86c03961ebe0c28d3225df35870a Mar 13 00:42:55.816520 containerd[1565]: 2026-03-13 00:42:55.714 [INFO][4080] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fd65e63777338db9336bffc579d99b92509d86c03961ebe0c28d3225df35870a" host="localhost" Mar 13 00:42:55.816520 containerd[1565]: 2026-03-13 00:42:55.729 [INFO][4080] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.fd65e63777338db9336bffc579d99b92509d86c03961ebe0c28d3225df35870a" host="localhost" Mar 13 00:42:55.816520 containerd[1565]: 2026-03-13 00:42:55.729 [INFO][4080] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.fd65e63777338db9336bffc579d99b92509d86c03961ebe0c28d3225df35870a" host="localhost" Mar 13 00:42:55.816520 containerd[1565]: 2026-03-13 00:42:55.730 [INFO][4080] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 13 00:42:55.816520 containerd[1565]: 2026-03-13 00:42:55.730 [INFO][4080] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="fd65e63777338db9336bffc579d99b92509d86c03961ebe0c28d3225df35870a" HandleID="k8s-pod-network.fd65e63777338db9336bffc579d99b92509d86c03961ebe0c28d3225df35870a" Workload="localhost-k8s-calico--apiserver--67947fdc4c--9bd4j-eth0" Mar 13 00:42:55.819640 containerd[1565]: 2026-03-13 00:42:55.746 [INFO][4041] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fd65e63777338db9336bffc579d99b92509d86c03961ebe0c28d3225df35870a" Namespace="calico-system" Pod="calico-apiserver-67947fdc4c-9bd4j" WorkloadEndpoint="localhost-k8s-calico--apiserver--67947fdc4c--9bd4j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67947fdc4c--9bd4j-eth0", GenerateName:"calico-apiserver-67947fdc4c-", Namespace:"calico-system", SelfLink:"", UID:"cdcb1061-459f-4cd8-8428-64325669e3a9", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 42, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67947fdc4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-67947fdc4c-9bd4j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali640bc46ab97", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:42:55.819913 containerd[1565]: 2026-03-13 00:42:55.750 [INFO][4041] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="fd65e63777338db9336bffc579d99b92509d86c03961ebe0c28d3225df35870a" Namespace="calico-system" Pod="calico-apiserver-67947fdc4c-9bd4j" WorkloadEndpoint="localhost-k8s-calico--apiserver--67947fdc4c--9bd4j-eth0" Mar 13 00:42:55.819913 containerd[1565]: 2026-03-13 00:42:55.757 [INFO][4041] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali640bc46ab97 ContainerID="fd65e63777338db9336bffc579d99b92509d86c03961ebe0c28d3225df35870a" Namespace="calico-system" Pod="calico-apiserver-67947fdc4c-9bd4j" WorkloadEndpoint="localhost-k8s-calico--apiserver--67947fdc4c--9bd4j-eth0" Mar 13 00:42:55.819913 containerd[1565]: 2026-03-13 00:42:55.782 [INFO][4041] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fd65e63777338db9336bffc579d99b92509d86c03961ebe0c28d3225df35870a" Namespace="calico-system" Pod="calico-apiserver-67947fdc4c-9bd4j" WorkloadEndpoint="localhost-k8s-calico--apiserver--67947fdc4c--9bd4j-eth0" Mar 13 00:42:55.819985 containerd[1565]: 2026-03-13 00:42:55.784 [INFO][4041] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fd65e63777338db9336bffc579d99b92509d86c03961ebe0c28d3225df35870a" Namespace="calico-system" Pod="calico-apiserver-67947fdc4c-9bd4j" WorkloadEndpoint="localhost-k8s-calico--apiserver--67947fdc4c--9bd4j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67947fdc4c--9bd4j-eth0", GenerateName:"calico-apiserver-67947fdc4c-", Namespace:"calico-system", 
SelfLink:"", UID:"cdcb1061-459f-4cd8-8428-64325669e3a9", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 42, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67947fdc4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fd65e63777338db9336bffc579d99b92509d86c03961ebe0c28d3225df35870a", Pod:"calico-apiserver-67947fdc4c-9bd4j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali640bc46ab97", MAC:"e6:e7:c5:20:2d:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:42:55.820081 containerd[1565]: 2026-03-13 00:42:55.806 [INFO][4041] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fd65e63777338db9336bffc579d99b92509d86c03961ebe0c28d3225df35870a" Namespace="calico-system" Pod="calico-apiserver-67947fdc4c-9bd4j" WorkloadEndpoint="localhost-k8s-calico--apiserver--67947fdc4c--9bd4j-eth0" Mar 13 00:42:55.838853 containerd[1565]: time="2026-03-13T00:42:55.838142103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cc479f7dd-s5vxs,Uid:7999b7a3-82a0-4ea1-a9ae-ca3330869bd7,Namespace:calico-system,Attempt:0,}" Mar 13 00:42:55.914483 containerd[1565]: time="2026-03-13T00:42:55.914430568Z" level=info msg="connecting to shim 
fd65e63777338db9336bffc579d99b92509d86c03961ebe0c28d3225df35870a" address="unix:///run/containerd/s/9b858a72fc7f8f38b495ff35389c11662eeebdc5dad381bcf6ab2481e5667ee2" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:42:55.982050 containerd[1565]: time="2026-03-13T00:42:55.982010945Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:42:55.983024 containerd[1565]: time="2026-03-13T00:42:55.982940072Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 13 00:42:55.984199 containerd[1565]: time="2026-03-13T00:42:55.984168759Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:42:55.987244 containerd[1565]: time="2026-03-13T00:42:55.985631099Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:42:55.987244 containerd[1565]: time="2026-03-13T00:42:55.985875727Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 967.385917ms" Mar 13 00:42:55.987244 containerd[1565]: time="2026-03-13T00:42:55.985900072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 13 00:42:55.993187 systemd[1]: Started cri-containerd-fd65e63777338db9336bffc579d99b92509d86c03961ebe0c28d3225df35870a.scope - libcontainer container 
fd65e63777338db9336bffc579d99b92509d86c03961ebe0c28d3225df35870a. Mar 13 00:42:55.996441 containerd[1565]: time="2026-03-13T00:42:55.996360096Z" level=info msg="CreateContainer within sandbox \"9282642a011c46fcbf510c995e5b7afacaf829d274db47620ee6ba80ca56d22f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 13 00:42:56.052346 containerd[1565]: time="2026-03-13T00:42:56.052149175Z" level=info msg="Container 205c4519dddd86e56adb87834683b2b504dcfcc450398907be70e32da29c803a: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:42:56.068021 systemd-resolved[1389]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 13 00:42:56.075829 containerd[1565]: time="2026-03-13T00:42:56.075584746Z" level=info msg="CreateContainer within sandbox \"9282642a011c46fcbf510c995e5b7afacaf829d274db47620ee6ba80ca56d22f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"205c4519dddd86e56adb87834683b2b504dcfcc450398907be70e32da29c803a\"" Mar 13 00:42:56.076204 containerd[1565]: time="2026-03-13T00:42:56.076185210Z" level=info msg="StartContainer for \"205c4519dddd86e56adb87834683b2b504dcfcc450398907be70e32da29c803a\"" Mar 13 00:42:56.081458 containerd[1565]: time="2026-03-13T00:42:56.080666938Z" level=info msg="connecting to shim 205c4519dddd86e56adb87834683b2b504dcfcc450398907be70e32da29c803a" address="unix:///run/containerd/s/e084bc1b8b6bd181e9da1c62b20b5d227b0c34e03868b03798fe7d37e6cfc033" protocol=ttrpc version=3 Mar 13 00:42:56.149996 systemd[1]: Started cri-containerd-205c4519dddd86e56adb87834683b2b504dcfcc450398907be70e32da29c803a.scope - libcontainer container 205c4519dddd86e56adb87834683b2b504dcfcc450398907be70e32da29c803a. 
Mar 13 00:42:56.219909 containerd[1565]: time="2026-03-13T00:42:56.219764922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67947fdc4c-9bd4j,Uid:cdcb1061-459f-4cd8-8428-64325669e3a9,Namespace:calico-system,Attempt:0,} returns sandbox id \"fd65e63777338db9336bffc579d99b92509d86c03961ebe0c28d3225df35870a\"" Mar 13 00:42:56.226988 containerd[1565]: time="2026-03-13T00:42:56.226823953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 13 00:42:56.268732 systemd-networkd[1462]: cali5fb5dd037e5: Link UP Mar 13 00:42:56.270676 systemd-networkd[1462]: cali5fb5dd037e5: Gained carrier Mar 13 00:42:56.298227 containerd[1565]: 2026-03-13 00:42:56.016 [ERROR][4195] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 13 00:42:56.298227 containerd[1565]: 2026-03-13 00:42:56.051 [INFO][4195] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6cc479f7dd--s5vxs-eth0 whisker-6cc479f7dd- calico-system 7999b7a3-82a0-4ea1-a9ae-ca3330869bd7 933 0 2026-03-13 00:42:55 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6cc479f7dd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6cc479f7dd-s5vxs eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5fb5dd037e5 [] [] }} ContainerID="c591ed5105540d450ab39df409b96d7c44c66c52f1091ebb428966e28af91043" Namespace="calico-system" Pod="whisker-6cc479f7dd-s5vxs" WorkloadEndpoint="localhost-k8s-whisker--6cc479f7dd--s5vxs-" Mar 13 00:42:56.298227 containerd[1565]: 2026-03-13 00:42:56.054 [INFO][4195] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c591ed5105540d450ab39df409b96d7c44c66c52f1091ebb428966e28af91043" 
Namespace="calico-system" Pod="whisker-6cc479f7dd-s5vxs" WorkloadEndpoint="localhost-k8s-whisker--6cc479f7dd--s5vxs-eth0" Mar 13 00:42:56.298227 containerd[1565]: 2026-03-13 00:42:56.156 [INFO][4266] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c591ed5105540d450ab39df409b96d7c44c66c52f1091ebb428966e28af91043" HandleID="k8s-pod-network.c591ed5105540d450ab39df409b96d7c44c66c52f1091ebb428966e28af91043" Workload="localhost-k8s-whisker--6cc479f7dd--s5vxs-eth0" Mar 13 00:42:56.298638 containerd[1565]: 2026-03-13 00:42:56.176 [INFO][4266] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c591ed5105540d450ab39df409b96d7c44c66c52f1091ebb428966e28af91043" HandleID="k8s-pod-network.c591ed5105540d450ab39df409b96d7c44c66c52f1091ebb428966e28af91043" Workload="localhost-k8s-whisker--6cc479f7dd--s5vxs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004eab0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6cc479f7dd-s5vxs", "timestamp":"2026-03-13 00:42:56.156738984 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003c9080)} Mar 13 00:42:56.298638 containerd[1565]: 2026-03-13 00:42:56.176 [INFO][4266] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:42:56.298638 containerd[1565]: 2026-03-13 00:42:56.176 [INFO][4266] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 00:42:56.298638 containerd[1565]: 2026-03-13 00:42:56.176 [INFO][4266] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 13 00:42:56.298638 containerd[1565]: 2026-03-13 00:42:56.189 [INFO][4266] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c591ed5105540d450ab39df409b96d7c44c66c52f1091ebb428966e28af91043" host="localhost" Mar 13 00:42:56.298638 containerd[1565]: 2026-03-13 00:42:56.202 [INFO][4266] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 13 00:42:56.298638 containerd[1565]: 2026-03-13 00:42:56.211 [INFO][4266] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 13 00:42:56.298638 containerd[1565]: 2026-03-13 00:42:56.215 [INFO][4266] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 13 00:42:56.298638 containerd[1565]: 2026-03-13 00:42:56.220 [INFO][4266] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 13 00:42:56.298638 containerd[1565]: 2026-03-13 00:42:56.221 [INFO][4266] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c591ed5105540d450ab39df409b96d7c44c66c52f1091ebb428966e28af91043" host="localhost" Mar 13 00:42:56.298880 containerd[1565]: 2026-03-13 00:42:56.232 [INFO][4266] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c591ed5105540d450ab39df409b96d7c44c66c52f1091ebb428966e28af91043 Mar 13 00:42:56.298880 containerd[1565]: 2026-03-13 00:42:56.241 [INFO][4266] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c591ed5105540d450ab39df409b96d7c44c66c52f1091ebb428966e28af91043" host="localhost" Mar 13 00:42:56.298880 containerd[1565]: 2026-03-13 00:42:56.250 [INFO][4266] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.c591ed5105540d450ab39df409b96d7c44c66c52f1091ebb428966e28af91043" host="localhost" Mar 13 00:42:56.298880 containerd[1565]: 2026-03-13 00:42:56.250 [INFO][4266] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.c591ed5105540d450ab39df409b96d7c44c66c52f1091ebb428966e28af91043" host="localhost" Mar 13 00:42:56.298880 containerd[1565]: 2026-03-13 00:42:56.250 [INFO][4266] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:42:56.298880 containerd[1565]: 2026-03-13 00:42:56.251 [INFO][4266] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="c591ed5105540d450ab39df409b96d7c44c66c52f1091ebb428966e28af91043" HandleID="k8s-pod-network.c591ed5105540d450ab39df409b96d7c44c66c52f1091ebb428966e28af91043" Workload="localhost-k8s-whisker--6cc479f7dd--s5vxs-eth0" Mar 13 00:42:56.298985 containerd[1565]: 2026-03-13 00:42:56.255 [INFO][4195] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c591ed5105540d450ab39df409b96d7c44c66c52f1091ebb428966e28af91043" Namespace="calico-system" Pod="whisker-6cc479f7dd-s5vxs" WorkloadEndpoint="localhost-k8s-whisker--6cc479f7dd--s5vxs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6cc479f7dd--s5vxs-eth0", GenerateName:"whisker-6cc479f7dd-", Namespace:"calico-system", SelfLink:"", UID:"7999b7a3-82a0-4ea1-a9ae-ca3330869bd7", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 42, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6cc479f7dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6cc479f7dd-s5vxs", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5fb5dd037e5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:42:56.298985 containerd[1565]: 2026-03-13 00:42:56.255 [INFO][4195] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="c591ed5105540d450ab39df409b96d7c44c66c52f1091ebb428966e28af91043" Namespace="calico-system" Pod="whisker-6cc479f7dd-s5vxs" WorkloadEndpoint="localhost-k8s-whisker--6cc479f7dd--s5vxs-eth0" Mar 13 00:42:56.299131 containerd[1565]: 2026-03-13 00:42:56.256 [INFO][4195] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5fb5dd037e5 ContainerID="c591ed5105540d450ab39df409b96d7c44c66c52f1091ebb428966e28af91043" Namespace="calico-system" Pod="whisker-6cc479f7dd-s5vxs" WorkloadEndpoint="localhost-k8s-whisker--6cc479f7dd--s5vxs-eth0" Mar 13 00:42:56.299131 containerd[1565]: 2026-03-13 00:42:56.274 [INFO][4195] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c591ed5105540d450ab39df409b96d7c44c66c52f1091ebb428966e28af91043" Namespace="calico-system" Pod="whisker-6cc479f7dd-s5vxs" WorkloadEndpoint="localhost-k8s-whisker--6cc479f7dd--s5vxs-eth0" Mar 13 00:42:56.299172 containerd[1565]: 2026-03-13 00:42:56.275 [INFO][4195] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c591ed5105540d450ab39df409b96d7c44c66c52f1091ebb428966e28af91043" Namespace="calico-system" Pod="whisker-6cc479f7dd-s5vxs" 
WorkloadEndpoint="localhost-k8s-whisker--6cc479f7dd--s5vxs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6cc479f7dd--s5vxs-eth0", GenerateName:"whisker-6cc479f7dd-", Namespace:"calico-system", SelfLink:"", UID:"7999b7a3-82a0-4ea1-a9ae-ca3330869bd7", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 42, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6cc479f7dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c591ed5105540d450ab39df409b96d7c44c66c52f1091ebb428966e28af91043", Pod:"whisker-6cc479f7dd-s5vxs", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5fb5dd037e5", MAC:"1e:2b:1d:fe:90:2a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:42:56.299270 containerd[1565]: 2026-03-13 00:42:56.293 [INFO][4195] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c591ed5105540d450ab39df409b96d7c44c66c52f1091ebb428966e28af91043" Namespace="calico-system" Pod="whisker-6cc479f7dd-s5vxs" WorkloadEndpoint="localhost-k8s-whisker--6cc479f7dd--s5vxs-eth0" Mar 13 00:42:56.315519 containerd[1565]: time="2026-03-13T00:42:56.315384650Z" level=info msg="StartContainer for 
\"205c4519dddd86e56adb87834683b2b504dcfcc450398907be70e32da29c803a\" returns successfully" Mar 13 00:42:56.371670 containerd[1565]: time="2026-03-13T00:42:56.370962573Z" level=info msg="connecting to shim c591ed5105540d450ab39df409b96d7c44c66c52f1091ebb428966e28af91043" address="unix:///run/containerd/s/195406065f8d471460a8999eee3f8236321c8572a7bbebb08e6bc8914ccc9969" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:42:56.454861 systemd-networkd[1462]: calib7d553daa3f: Gained IPv6LL Mar 13 00:42:56.464265 systemd[1]: Started cri-containerd-c591ed5105540d450ab39df409b96d7c44c66c52f1091ebb428966e28af91043.scope - libcontainer container c591ed5105540d450ab39df409b96d7c44c66c52f1091ebb428966e28af91043. Mar 13 00:42:56.529186 systemd-resolved[1389]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 13 00:42:56.674441 containerd[1565]: time="2026-03-13T00:42:56.674267188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cc479f7dd-s5vxs,Uid:7999b7a3-82a0-4ea1-a9ae-ca3330869bd7,Namespace:calico-system,Attempt:0,} returns sandbox id \"c591ed5105540d450ab39df409b96d7c44c66c52f1091ebb428966e28af91043\"" Mar 13 00:42:56.680495 kubelet[2785]: I0313 00:42:56.680394 2785 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b5fd946-17a9-4a05-b878-6849fbb45881" path="/var/lib/kubelet/pods/9b5fd946-17a9-4a05-b878-6849fbb45881/volumes" Mar 13 00:42:57.126684 systemd-networkd[1462]: vxlan.calico: Link UP Mar 13 00:42:57.126887 systemd-networkd[1462]: vxlan.calico: Gained carrier Mar 13 00:42:57.605837 systemd-networkd[1462]: cali5fb5dd037e5: Gained IPv6LL Mar 13 00:42:57.735416 systemd-networkd[1462]: cali640bc46ab97: Gained IPv6LL Mar 13 00:42:58.817622 containerd[1565]: time="2026-03-13T00:42:58.817358222Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:42:58.818795 containerd[1565]: 
time="2026-03-13T00:42:58.818755462Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 13 00:42:58.821083 containerd[1565]: time="2026-03-13T00:42:58.821025388Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:42:58.824613 containerd[1565]: time="2026-03-13T00:42:58.824498759Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:42:58.825218 containerd[1565]: time="2026-03-13T00:42:58.825114672Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 2.598208958s" Mar 13 00:42:58.825218 containerd[1565]: time="2026-03-13T00:42:58.825171196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 13 00:42:58.827775 containerd[1565]: time="2026-03-13T00:42:58.827735483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 13 00:42:58.832459 containerd[1565]: time="2026-03-13T00:42:58.832373211Z" level=info msg="CreateContainer within sandbox \"fd65e63777338db9336bffc579d99b92509d86c03961ebe0c28d3225df35870a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 13 00:42:58.852820 containerd[1565]: time="2026-03-13T00:42:58.852721536Z" level=info msg="Container 721f4eb387ca862bc3731aed0d27fb8834e25b21cb8370e174717223a890204c: 
CDI devices from CRI Config.CDIDevices: []" Mar 13 00:42:58.867993 containerd[1565]: time="2026-03-13T00:42:58.867850331Z" level=info msg="CreateContainer within sandbox \"fd65e63777338db9336bffc579d99b92509d86c03961ebe0c28d3225df35870a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"721f4eb387ca862bc3731aed0d27fb8834e25b21cb8370e174717223a890204c\"" Mar 13 00:42:58.869283 containerd[1565]: time="2026-03-13T00:42:58.869076509Z" level=info msg="StartContainer for \"721f4eb387ca862bc3731aed0d27fb8834e25b21cb8370e174717223a890204c\"" Mar 13 00:42:58.871049 containerd[1565]: time="2026-03-13T00:42:58.870926642Z" level=info msg="connecting to shim 721f4eb387ca862bc3731aed0d27fb8834e25b21cb8370e174717223a890204c" address="unix:///run/containerd/s/9b858a72fc7f8f38b495ff35389c11662eeebdc5dad381bcf6ab2481e5667ee2" protocol=ttrpc version=3 Mar 13 00:42:58.907824 systemd[1]: Started cri-containerd-721f4eb387ca862bc3731aed0d27fb8834e25b21cb8370e174717223a890204c.scope - libcontainer container 721f4eb387ca862bc3731aed0d27fb8834e25b21cb8370e174717223a890204c. 
Mar 13 00:42:58.950610 systemd-networkd[1462]: vxlan.calico: Gained IPv6LL Mar 13 00:42:59.032678 containerd[1565]: time="2026-03-13T00:42:59.032439035Z" level=info msg="StartContainer for \"721f4eb387ca862bc3731aed0d27fb8834e25b21cb8370e174717223a890204c\" returns successfully" Mar 13 00:42:59.481257 kubelet[2785]: I0313 00:42:59.479208 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-67947fdc4c-9bd4j" podStartSLOduration=20.878375808 podStartE2EDuration="23.479193454s" podCreationTimestamp="2026-03-13 00:42:36 +0000 UTC" firstStartedPulling="2026-03-13 00:42:56.225514246 +0000 UTC m=+47.685789237" lastFinishedPulling="2026-03-13 00:42:58.826331902 +0000 UTC m=+50.286606883" observedRunningTime="2026-03-13 00:42:59.478611161 +0000 UTC m=+50.938886152" watchObservedRunningTime="2026-03-13 00:42:59.479193454 +0000 UTC m=+50.939468445" Mar 13 00:42:59.732723 containerd[1565]: time="2026-03-13T00:42:59.731658255Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:42:59.739029 containerd[1565]: time="2026-03-13T00:42:59.732842525Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 13 00:42:59.739165 containerd[1565]: time="2026-03-13T00:42:59.736688724Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:42:59.745114 containerd[1565]: time="2026-03-13T00:42:59.744379459Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:42:59.746958 containerd[1565]: time="2026-03-13T00:42:59.746880447Z" level=info 
msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 919.117042ms" Mar 13 00:42:59.747013 containerd[1565]: time="2026-03-13T00:42:59.746963851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 13 00:42:59.749783 containerd[1565]: time="2026-03-13T00:42:59.749720980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 13 00:42:59.753886 containerd[1565]: time="2026-03-13T00:42:59.753817971Z" level=info msg="CreateContainer within sandbox \"9282642a011c46fcbf510c995e5b7afacaf829d274db47620ee6ba80ca56d22f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 13 00:42:59.768374 containerd[1565]: time="2026-03-13T00:42:59.768313741Z" level=info msg="Container 5ff6d7a292706fbb1ac301679060e08dfebf72dc158dd61557e63564cc3a790e: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:42:59.799878 containerd[1565]: time="2026-03-13T00:42:59.799756959Z" level=info msg="CreateContainer within sandbox \"9282642a011c46fcbf510c995e5b7afacaf829d274db47620ee6ba80ca56d22f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5ff6d7a292706fbb1ac301679060e08dfebf72dc158dd61557e63564cc3a790e\"" Mar 13 00:42:59.802631 containerd[1565]: time="2026-03-13T00:42:59.801350297Z" level=info msg="StartContainer for \"5ff6d7a292706fbb1ac301679060e08dfebf72dc158dd61557e63564cc3a790e\"" Mar 13 00:42:59.803983 containerd[1565]: time="2026-03-13T00:42:59.803928683Z" level=info msg="connecting to shim 
5ff6d7a292706fbb1ac301679060e08dfebf72dc158dd61557e63564cc3a790e" address="unix:///run/containerd/s/e084bc1b8b6bd181e9da1c62b20b5d227b0c34e03868b03798fe7d37e6cfc033" protocol=ttrpc version=3 Mar 13 00:42:59.839964 systemd[1]: Started cri-containerd-5ff6d7a292706fbb1ac301679060e08dfebf72dc158dd61557e63564cc3a790e.scope - libcontainer container 5ff6d7a292706fbb1ac301679060e08dfebf72dc158dd61557e63564cc3a790e. Mar 13 00:42:59.983473 containerd[1565]: time="2026-03-13T00:42:59.983030192Z" level=info msg="StartContainer for \"5ff6d7a292706fbb1ac301679060e08dfebf72dc158dd61557e63564cc3a790e\" returns successfully" Mar 13 00:43:00.345807 containerd[1565]: time="2026-03-13T00:43:00.345601306Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:43:00.346849 containerd[1565]: time="2026-03-13T00:43:00.346785596Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 13 00:43:00.348609 containerd[1565]: time="2026-03-13T00:43:00.348434948Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:43:00.351869 containerd[1565]: time="2026-03-13T00:43:00.351824299Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:43:00.353100 containerd[1565]: time="2026-03-13T00:43:00.352995055Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size 
\"7595926\" in 603.201641ms" Mar 13 00:43:00.353100 containerd[1565]: time="2026-03-13T00:43:00.353063340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 13 00:43:00.360672 containerd[1565]: time="2026-03-13T00:43:00.360645634Z" level=info msg="CreateContainer within sandbox \"c591ed5105540d450ab39df409b96d7c44c66c52f1091ebb428966e28af91043\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 13 00:43:00.368603 containerd[1565]: time="2026-03-13T00:43:00.368420372Z" level=info msg="Container 0dae3d0fcd51a59a126262558a80928bd2573b21bbe3e79f0ed1978289bd1f68: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:43:00.382867 containerd[1565]: time="2026-03-13T00:43:00.382756077Z" level=info msg="CreateContainer within sandbox \"c591ed5105540d450ab39df409b96d7c44c66c52f1091ebb428966e28af91043\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"0dae3d0fcd51a59a126262558a80928bd2573b21bbe3e79f0ed1978289bd1f68\"" Mar 13 00:43:00.384107 containerd[1565]: time="2026-03-13T00:43:00.383945587Z" level=info msg="StartContainer for \"0dae3d0fcd51a59a126262558a80928bd2573b21bbe3e79f0ed1978289bd1f68\"" Mar 13 00:43:00.386877 containerd[1565]: time="2026-03-13T00:43:00.386824488Z" level=info msg="connecting to shim 0dae3d0fcd51a59a126262558a80928bd2573b21bbe3e79f0ed1978289bd1f68" address="unix:///run/containerd/s/195406065f8d471460a8999eee3f8236321c8572a7bbebb08e6bc8914ccc9969" protocol=ttrpc version=3 Mar 13 00:43:00.422761 systemd[1]: Started cri-containerd-0dae3d0fcd51a59a126262558a80928bd2573b21bbe3e79f0ed1978289bd1f68.scope - libcontainer container 0dae3d0fcd51a59a126262558a80928bd2573b21bbe3e79f0ed1978289bd1f68. 
Mar 13 00:43:00.468367 kubelet[2785]: I0313 00:43:00.468291 2785 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 00:43:00.489727 kubelet[2785]: I0313 00:43:00.488999 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-c48ht" podStartSLOduration=19.757534499 podStartE2EDuration="24.488979873s" podCreationTimestamp="2026-03-13 00:42:36 +0000 UTC" firstStartedPulling="2026-03-13 00:42:55.017379307 +0000 UTC m=+46.477654288" lastFinishedPulling="2026-03-13 00:42:59.74882467 +0000 UTC m=+51.209099662" observedRunningTime="2026-03-13 00:43:00.485301627 +0000 UTC m=+51.945576618" watchObservedRunningTime="2026-03-13 00:43:00.488979873 +0000 UTC m=+51.949254864" Mar 13 00:43:00.545604 containerd[1565]: time="2026-03-13T00:43:00.545263356Z" level=info msg="StartContainer for \"0dae3d0fcd51a59a126262558a80928bd2573b21bbe3e79f0ed1978289bd1f68\" returns successfully" Mar 13 00:43:00.549299 containerd[1565]: time="2026-03-13T00:43:00.549186430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 13 00:43:00.949326 kubelet[2785]: I0313 00:43:00.949115 2785 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 13 00:43:00.978845 kubelet[2785]: I0313 00:43:00.978607 2785 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 13 00:43:01.525094 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount941026369.mount: Deactivated successfully. 
Mar 13 00:43:01.558994 containerd[1565]: time="2026-03-13T00:43:01.558895885Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:43:01.560260 containerd[1565]: time="2026-03-13T00:43:01.560093081Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 13 00:43:01.561424 containerd[1565]: time="2026-03-13T00:43:01.561342261Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:43:01.564092 containerd[1565]: time="2026-03-13T00:43:01.563880351Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:43:01.565053 containerd[1565]: time="2026-03-13T00:43:01.564964979Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.015679766s" Mar 13 00:43:01.565053 containerd[1565]: time="2026-03-13T00:43:01.565024999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 13 00:43:01.577663 containerd[1565]: time="2026-03-13T00:43:01.576800233Z" level=info msg="CreateContainer within sandbox \"c591ed5105540d450ab39df409b96d7c44c66c52f1091ebb428966e28af91043\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 13 00:43:01.597362 
containerd[1565]: time="2026-03-13T00:43:01.597294775Z" level=info msg="Container 3c96769a0c537bbd048e78b4b5eb38d7f726b4f4924826a0d23df93e07057245: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:43:01.609053 containerd[1565]: time="2026-03-13T00:43:01.608957586Z" level=info msg="CreateContainer within sandbox \"c591ed5105540d450ab39df409b96d7c44c66c52f1091ebb428966e28af91043\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"3c96769a0c537bbd048e78b4b5eb38d7f726b4f4924826a0d23df93e07057245\"" Mar 13 00:43:01.609964 containerd[1565]: time="2026-03-13T00:43:01.609921320Z" level=info msg="StartContainer for \"3c96769a0c537bbd048e78b4b5eb38d7f726b4f4924826a0d23df93e07057245\"" Mar 13 00:43:01.611764 containerd[1565]: time="2026-03-13T00:43:01.611657799Z" level=info msg="connecting to shim 3c96769a0c537bbd048e78b4b5eb38d7f726b4f4924826a0d23df93e07057245" address="unix:///run/containerd/s/195406065f8d471460a8999eee3f8236321c8572a7bbebb08e6bc8914ccc9969" protocol=ttrpc version=3 Mar 13 00:43:01.654756 systemd[1]: Started cri-containerd-3c96769a0c537bbd048e78b4b5eb38d7f726b4f4924826a0d23df93e07057245.scope - libcontainer container 3c96769a0c537bbd048e78b4b5eb38d7f726b4f4924826a0d23df93e07057245. 
Mar 13 00:43:01.730093 containerd[1565]: time="2026-03-13T00:43:01.730006475Z" level=info msg="StartContainer for \"3c96769a0c537bbd048e78b4b5eb38d7f726b4f4924826a0d23df93e07057245\" returns successfully" Mar 13 00:43:02.500674 kubelet[2785]: I0313 00:43:02.500593 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6cc479f7dd-s5vxs" podStartSLOduration=2.612039938 podStartE2EDuration="7.500577704s" podCreationTimestamp="2026-03-13 00:42:55 +0000 UTC" firstStartedPulling="2026-03-13 00:42:56.677801224 +0000 UTC m=+48.138076225" lastFinishedPulling="2026-03-13 00:43:01.566339 +0000 UTC m=+53.026613991" observedRunningTime="2026-03-13 00:43:02.499777453 +0000 UTC m=+53.960052454" watchObservedRunningTime="2026-03-13 00:43:02.500577704 +0000 UTC m=+53.960852695" Mar 13 00:43:05.676699 containerd[1565]: time="2026-03-13T00:43:05.676520806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-769b4596d5-fc5l8,Uid:86afacf3-dd5f-4699-9570-d8f6390eafa0,Namespace:calico-system,Attempt:0,}" Mar 13 00:43:05.678761 containerd[1565]: time="2026-03-13T00:43:05.678634539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-h5w9q,Uid:d36f9439-4cd4-4996-b6fa-de6c76e8b792,Namespace:calico-system,Attempt:0,}" Mar 13 00:43:05.855685 systemd-networkd[1462]: cali80ee780166a: Link UP Mar 13 00:43:05.855965 systemd-networkd[1462]: cali80ee780166a: Gained carrier Mar 13 00:43:05.882746 containerd[1565]: 2026-03-13 00:43:05.743 [INFO][4695] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--769b4596d5--fc5l8-eth0 calico-kube-controllers-769b4596d5- calico-system 86afacf3-dd5f-4699-9570-d8f6390eafa0 869 0 2026-03-13 00:42:36 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:769b4596d5 projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-769b4596d5-fc5l8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali80ee780166a [] [] }} ContainerID="a34e32bcd4c34cf34201a1862a4132e285bdefc8ea4c5bad10e5321ba9e749e4" Namespace="calico-system" Pod="calico-kube-controllers-769b4596d5-fc5l8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--769b4596d5--fc5l8-" Mar 13 00:43:05.882746 containerd[1565]: 2026-03-13 00:43:05.743 [INFO][4695] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a34e32bcd4c34cf34201a1862a4132e285bdefc8ea4c5bad10e5321ba9e749e4" Namespace="calico-system" Pod="calico-kube-controllers-769b4596d5-fc5l8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--769b4596d5--fc5l8-eth0" Mar 13 00:43:05.882746 containerd[1565]: 2026-03-13 00:43:05.786 [INFO][4726] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a34e32bcd4c34cf34201a1862a4132e285bdefc8ea4c5bad10e5321ba9e749e4" HandleID="k8s-pod-network.a34e32bcd4c34cf34201a1862a4132e285bdefc8ea4c5bad10e5321ba9e749e4" Workload="localhost-k8s-calico--kube--controllers--769b4596d5--fc5l8-eth0" Mar 13 00:43:05.883015 containerd[1565]: 2026-03-13 00:43:05.796 [INFO][4726] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a34e32bcd4c34cf34201a1862a4132e285bdefc8ea4c5bad10e5321ba9e749e4" HandleID="k8s-pod-network.a34e32bcd4c34cf34201a1862a4132e285bdefc8ea4c5bad10e5321ba9e749e4" Workload="localhost-k8s-calico--kube--controllers--769b4596d5--fc5l8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139ba0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-769b4596d5-fc5l8", "timestamp":"2026-03-13 00:43:05.786439117 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003e7ce0)} Mar 13 00:43:05.883015 containerd[1565]: 2026-03-13 00:43:05.796 [INFO][4726] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:43:05.883015 containerd[1565]: 2026-03-13 00:43:05.796 [INFO][4726] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 00:43:05.883015 containerd[1565]: 2026-03-13 00:43:05.796 [INFO][4726] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 13 00:43:05.883015 containerd[1565]: 2026-03-13 00:43:05.802 [INFO][4726] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a34e32bcd4c34cf34201a1862a4132e285bdefc8ea4c5bad10e5321ba9e749e4" host="localhost" Mar 13 00:43:05.883015 containerd[1565]: 2026-03-13 00:43:05.811 [INFO][4726] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 13 00:43:05.883015 containerd[1565]: 2026-03-13 00:43:05.820 [INFO][4726] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 13 00:43:05.883015 containerd[1565]: 2026-03-13 00:43:05.824 [INFO][4726] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 13 00:43:05.883015 containerd[1565]: 2026-03-13 00:43:05.827 [INFO][4726] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 13 00:43:05.883345 containerd[1565]: 2026-03-13 00:43:05.827 [INFO][4726] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a34e32bcd4c34cf34201a1862a4132e285bdefc8ea4c5bad10e5321ba9e749e4" host="localhost" Mar 13 00:43:05.883345 containerd[1565]: 2026-03-13 00:43:05.830 [INFO][4726] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a34e32bcd4c34cf34201a1862a4132e285bdefc8ea4c5bad10e5321ba9e749e4 Mar 13 
00:43:05.883345 containerd[1565]: 2026-03-13 00:43:05.839 [INFO][4726] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a34e32bcd4c34cf34201a1862a4132e285bdefc8ea4c5bad10e5321ba9e749e4" host="localhost" Mar 13 00:43:05.883345 containerd[1565]: 2026-03-13 00:43:05.848 [INFO][4726] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.a34e32bcd4c34cf34201a1862a4132e285bdefc8ea4c5bad10e5321ba9e749e4" host="localhost" Mar 13 00:43:05.883345 containerd[1565]: 2026-03-13 00:43:05.848 [INFO][4726] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.a34e32bcd4c34cf34201a1862a4132e285bdefc8ea4c5bad10e5321ba9e749e4" host="localhost" Mar 13 00:43:05.883345 containerd[1565]: 2026-03-13 00:43:05.848 [INFO][4726] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:43:05.883345 containerd[1565]: 2026-03-13 00:43:05.848 [INFO][4726] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="a34e32bcd4c34cf34201a1862a4132e285bdefc8ea4c5bad10e5321ba9e749e4" HandleID="k8s-pod-network.a34e32bcd4c34cf34201a1862a4132e285bdefc8ea4c5bad10e5321ba9e749e4" Workload="localhost-k8s-calico--kube--controllers--769b4596d5--fc5l8-eth0" Mar 13 00:43:05.883614 containerd[1565]: 2026-03-13 00:43:05.852 [INFO][4695] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a34e32bcd4c34cf34201a1862a4132e285bdefc8ea4c5bad10e5321ba9e749e4" Namespace="calico-system" Pod="calico-kube-controllers-769b4596d5-fc5l8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--769b4596d5--fc5l8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--769b4596d5--fc5l8-eth0", GenerateName:"calico-kube-controllers-769b4596d5-", Namespace:"calico-system", SelfLink:"", 
UID:"86afacf3-dd5f-4699-9570-d8f6390eafa0", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 42, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"769b4596d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-769b4596d5-fc5l8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali80ee780166a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:43:05.883701 containerd[1565]: 2026-03-13 00:43:05.852 [INFO][4695] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="a34e32bcd4c34cf34201a1862a4132e285bdefc8ea4c5bad10e5321ba9e749e4" Namespace="calico-system" Pod="calico-kube-controllers-769b4596d5-fc5l8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--769b4596d5--fc5l8-eth0" Mar 13 00:43:05.883701 containerd[1565]: 2026-03-13 00:43:05.852 [INFO][4695] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali80ee780166a ContainerID="a34e32bcd4c34cf34201a1862a4132e285bdefc8ea4c5bad10e5321ba9e749e4" Namespace="calico-system" Pod="calico-kube-controllers-769b4596d5-fc5l8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--769b4596d5--fc5l8-eth0" Mar 13 
00:43:05.883701 containerd[1565]: 2026-03-13 00:43:05.856 [INFO][4695] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a34e32bcd4c34cf34201a1862a4132e285bdefc8ea4c5bad10e5321ba9e749e4" Namespace="calico-system" Pod="calico-kube-controllers-769b4596d5-fc5l8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--769b4596d5--fc5l8-eth0" Mar 13 00:43:05.883774 containerd[1565]: 2026-03-13 00:43:05.858 [INFO][4695] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a34e32bcd4c34cf34201a1862a4132e285bdefc8ea4c5bad10e5321ba9e749e4" Namespace="calico-system" Pod="calico-kube-controllers-769b4596d5-fc5l8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--769b4596d5--fc5l8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--769b4596d5--fc5l8-eth0", GenerateName:"calico-kube-controllers-769b4596d5-", Namespace:"calico-system", SelfLink:"", UID:"86afacf3-dd5f-4699-9570-d8f6390eafa0", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 42, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"769b4596d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a34e32bcd4c34cf34201a1862a4132e285bdefc8ea4c5bad10e5321ba9e749e4", Pod:"calico-kube-controllers-769b4596d5-fc5l8", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali80ee780166a", MAC:"2a:07:87:13:1d:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:43:05.883849 containerd[1565]: 2026-03-13 00:43:05.874 [INFO][4695] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a34e32bcd4c34cf34201a1862a4132e285bdefc8ea4c5bad10e5321ba9e749e4" Namespace="calico-system" Pod="calico-kube-controllers-769b4596d5-fc5l8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--769b4596d5--fc5l8-eth0" Mar 13 00:43:05.939007 containerd[1565]: time="2026-03-13T00:43:05.937771724Z" level=info msg="connecting to shim a34e32bcd4c34cf34201a1862a4132e285bdefc8ea4c5bad10e5321ba9e749e4" address="unix:///run/containerd/s/c51369852d45b1009a990e1b25fc5c451d8d640c2434f1fdf58a97cfb1bfc252" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:43:05.989789 systemd[1]: Started cri-containerd-a34e32bcd4c34cf34201a1862a4132e285bdefc8ea4c5bad10e5321ba9e749e4.scope - libcontainer container a34e32bcd4c34cf34201a1862a4132e285bdefc8ea4c5bad10e5321ba9e749e4. 
Mar 13 00:43:05.991681 systemd-networkd[1462]: cali942ebea32e3: Link UP Mar 13 00:43:05.995310 systemd-networkd[1462]: cali942ebea32e3: Gained carrier Mar 13 00:43:06.018990 containerd[1565]: 2026-03-13 00:43:05.741 [INFO][4701] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--cccfbd5cf--h5w9q-eth0 goldmane-cccfbd5cf- calico-system d36f9439-4cd4-4996-b6fa-de6c76e8b792 868 0 2026-03-13 00:42:36 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-cccfbd5cf-h5w9q eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali942ebea32e3 [] [] }} ContainerID="200ad9054b4f4a14b8326f23d652c1743b13ac2a489502f64c13fc5052c069fc" Namespace="calico-system" Pod="goldmane-cccfbd5cf-h5w9q" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--h5w9q-" Mar 13 00:43:06.018990 containerd[1565]: 2026-03-13 00:43:05.741 [INFO][4701] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="200ad9054b4f4a14b8326f23d652c1743b13ac2a489502f64c13fc5052c069fc" Namespace="calico-system" Pod="goldmane-cccfbd5cf-h5w9q" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--h5w9q-eth0" Mar 13 00:43:06.018990 containerd[1565]: 2026-03-13 00:43:05.792 [INFO][4724] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="200ad9054b4f4a14b8326f23d652c1743b13ac2a489502f64c13fc5052c069fc" HandleID="k8s-pod-network.200ad9054b4f4a14b8326f23d652c1743b13ac2a489502f64c13fc5052c069fc" Workload="localhost-k8s-goldmane--cccfbd5cf--h5w9q-eth0" Mar 13 00:43:06.019476 containerd[1565]: 2026-03-13 00:43:05.804 [INFO][4724] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="200ad9054b4f4a14b8326f23d652c1743b13ac2a489502f64c13fc5052c069fc" 
HandleID="k8s-pod-network.200ad9054b4f4a14b8326f23d652c1743b13ac2a489502f64c13fc5052c069fc" Workload="localhost-k8s-goldmane--cccfbd5cf--h5w9q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003f15c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-cccfbd5cf-h5w9q", "timestamp":"2026-03-13 00:43:05.792708654 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000425760)} Mar 13 00:43:06.019476 containerd[1565]: 2026-03-13 00:43:05.804 [INFO][4724] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:43:06.019476 containerd[1565]: 2026-03-13 00:43:05.848 [INFO][4724] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 00:43:06.019476 containerd[1565]: 2026-03-13 00:43:05.848 [INFO][4724] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 13 00:43:06.019476 containerd[1565]: 2026-03-13 00:43:05.907 [INFO][4724] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.200ad9054b4f4a14b8326f23d652c1743b13ac2a489502f64c13fc5052c069fc" host="localhost" Mar 13 00:43:06.019476 containerd[1565]: 2026-03-13 00:43:05.915 [INFO][4724] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 13 00:43:06.019476 containerd[1565]: 2026-03-13 00:43:05.926 [INFO][4724] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 13 00:43:06.019476 containerd[1565]: 2026-03-13 00:43:05.931 [INFO][4724] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 13 00:43:06.019476 containerd[1565]: 2026-03-13 00:43:05.937 [INFO][4724] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 13 00:43:06.019476 
containerd[1565]: 2026-03-13 00:43:05.937 [INFO][4724] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.200ad9054b4f4a14b8326f23d652c1743b13ac2a489502f64c13fc5052c069fc" host="localhost" Mar 13 00:43:06.019817 containerd[1565]: 2026-03-13 00:43:05.940 [INFO][4724] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.200ad9054b4f4a14b8326f23d652c1743b13ac2a489502f64c13fc5052c069fc Mar 13 00:43:06.019817 containerd[1565]: 2026-03-13 00:43:05.954 [INFO][4724] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.200ad9054b4f4a14b8326f23d652c1743b13ac2a489502f64c13fc5052c069fc" host="localhost" Mar 13 00:43:06.019817 containerd[1565]: 2026-03-13 00:43:05.975 [INFO][4724] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.200ad9054b4f4a14b8326f23d652c1743b13ac2a489502f64c13fc5052c069fc" host="localhost" Mar 13 00:43:06.019817 containerd[1565]: 2026-03-13 00:43:05.975 [INFO][4724] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.200ad9054b4f4a14b8326f23d652c1743b13ac2a489502f64c13fc5052c069fc" host="localhost" Mar 13 00:43:06.019817 containerd[1565]: 2026-03-13 00:43:05.976 [INFO][4724] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 13 00:43:06.019817 containerd[1565]: 2026-03-13 00:43:05.976 [INFO][4724] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="200ad9054b4f4a14b8326f23d652c1743b13ac2a489502f64c13fc5052c069fc" HandleID="k8s-pod-network.200ad9054b4f4a14b8326f23d652c1743b13ac2a489502f64c13fc5052c069fc" Workload="localhost-k8s-goldmane--cccfbd5cf--h5w9q-eth0" Mar 13 00:43:06.019979 containerd[1565]: 2026-03-13 00:43:05.984 [INFO][4701] cni-plugin/k8s.go 418: Populated endpoint ContainerID="200ad9054b4f4a14b8326f23d652c1743b13ac2a489502f64c13fc5052c069fc" Namespace="calico-system" Pod="goldmane-cccfbd5cf-h5w9q" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--h5w9q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--h5w9q-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"d36f9439-4cd4-4996-b6fa-de6c76e8b792", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 42, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-cccfbd5cf-h5w9q", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali942ebea32e3", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:43:06.019979 containerd[1565]: 2026-03-13 00:43:05.984 [INFO][4701] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="200ad9054b4f4a14b8326f23d652c1743b13ac2a489502f64c13fc5052c069fc" Namespace="calico-system" Pod="goldmane-cccfbd5cf-h5w9q" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--h5w9q-eth0" Mar 13 00:43:06.020080 containerd[1565]: 2026-03-13 00:43:05.984 [INFO][4701] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali942ebea32e3 ContainerID="200ad9054b4f4a14b8326f23d652c1743b13ac2a489502f64c13fc5052c069fc" Namespace="calico-system" Pod="goldmane-cccfbd5cf-h5w9q" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--h5w9q-eth0" Mar 13 00:43:06.020080 containerd[1565]: 2026-03-13 00:43:05.997 [INFO][4701] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="200ad9054b4f4a14b8326f23d652c1743b13ac2a489502f64c13fc5052c069fc" Namespace="calico-system" Pod="goldmane-cccfbd5cf-h5w9q" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--h5w9q-eth0" Mar 13 00:43:06.020177 containerd[1565]: 2026-03-13 00:43:05.997 [INFO][4701] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="200ad9054b4f4a14b8326f23d652c1743b13ac2a489502f64c13fc5052c069fc" Namespace="calico-system" Pod="goldmane-cccfbd5cf-h5w9q" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--h5w9q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--h5w9q-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"d36f9439-4cd4-4996-b6fa-de6c76e8b792", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 42, 36, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"200ad9054b4f4a14b8326f23d652c1743b13ac2a489502f64c13fc5052c069fc", Pod:"goldmane-cccfbd5cf-h5w9q", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali942ebea32e3", MAC:"0e:99:38:23:d0:34", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:43:06.020265 containerd[1565]: 2026-03-13 00:43:06.011 [INFO][4701] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="200ad9054b4f4a14b8326f23d652c1743b13ac2a489502f64c13fc5052c069fc" Namespace="calico-system" Pod="goldmane-cccfbd5cf-h5w9q" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--h5w9q-eth0" Mar 13 00:43:06.040339 systemd-resolved[1389]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 13 00:43:06.071806 containerd[1565]: time="2026-03-13T00:43:06.071674462Z" level=info msg="connecting to shim 200ad9054b4f4a14b8326f23d652c1743b13ac2a489502f64c13fc5052c069fc" address="unix:///run/containerd/s/d641ebf2922e123c45c574b699a44674404753ebc62c3e33ec02c733504fccbc" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:43:06.104836 containerd[1565]: time="2026-03-13T00:43:06.104714362Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-769b4596d5-fc5l8,Uid:86afacf3-dd5f-4699-9570-d8f6390eafa0,Namespace:calico-system,Attempt:0,} returns sandbox id \"a34e32bcd4c34cf34201a1862a4132e285bdefc8ea4c5bad10e5321ba9e749e4\"" Mar 13 00:43:06.109243 containerd[1565]: time="2026-03-13T00:43:06.108390633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 13 00:43:06.120833 systemd[1]: Started cri-containerd-200ad9054b4f4a14b8326f23d652c1743b13ac2a489502f64c13fc5052c069fc.scope - libcontainer container 200ad9054b4f4a14b8326f23d652c1743b13ac2a489502f64c13fc5052c069fc. Mar 13 00:43:06.147043 systemd-resolved[1389]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 13 00:43:06.186956 containerd[1565]: time="2026-03-13T00:43:06.186850999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-h5w9q,Uid:d36f9439-4cd4-4996-b6fa-de6c76e8b792,Namespace:calico-system,Attempt:0,} returns sandbox id \"200ad9054b4f4a14b8326f23d652c1743b13ac2a489502f64c13fc5052c069fc\"" Mar 13 00:43:07.142310 systemd-networkd[1462]: cali942ebea32e3: Gained IPv6LL Mar 13 00:43:07.333968 systemd-networkd[1462]: cali80ee780166a: Gained IPv6LL Mar 13 00:43:07.554346 containerd[1565]: time="2026-03-13T00:43:07.554289728Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:43:07.555153 containerd[1565]: time="2026-03-13T00:43:07.555120040Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 13 00:43:07.556577 containerd[1565]: time="2026-03-13T00:43:07.556498574Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:43:07.559129 containerd[1565]: 
time="2026-03-13T00:43:07.559047735Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:43:07.559858 containerd[1565]: time="2026-03-13T00:43:07.559790525Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 1.451364957s" Mar 13 00:43:07.559858 containerd[1565]: time="2026-03-13T00:43:07.559840077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 13 00:43:07.560962 containerd[1565]: time="2026-03-13T00:43:07.560834687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 13 00:43:07.581751 containerd[1565]: time="2026-03-13T00:43:07.581665799Z" level=info msg="CreateContainer within sandbox \"a34e32bcd4c34cf34201a1862a4132e285bdefc8ea4c5bad10e5321ba9e749e4\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 13 00:43:07.591659 containerd[1565]: time="2026-03-13T00:43:07.591603093Z" level=info msg="Container dce5c7a6df8f1ce32034ac250433980ee10c18090d55db5692e0e754022f3425: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:43:07.600908 containerd[1565]: time="2026-03-13T00:43:07.600855615Z" level=info msg="CreateContainer within sandbox \"a34e32bcd4c34cf34201a1862a4132e285bdefc8ea4c5bad10e5321ba9e749e4\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"dce5c7a6df8f1ce32034ac250433980ee10c18090d55db5692e0e754022f3425\"" Mar 13 00:43:07.601902 
containerd[1565]: time="2026-03-13T00:43:07.601861667Z" level=info msg="StartContainer for \"dce5c7a6df8f1ce32034ac250433980ee10c18090d55db5692e0e754022f3425\"" Mar 13 00:43:07.603047 containerd[1565]: time="2026-03-13T00:43:07.602849511Z" level=info msg="connecting to shim dce5c7a6df8f1ce32034ac250433980ee10c18090d55db5692e0e754022f3425" address="unix:///run/containerd/s/c51369852d45b1009a990e1b25fc5c451d8d640c2434f1fdf58a97cfb1bfc252" protocol=ttrpc version=3 Mar 13 00:43:07.660829 systemd[1]: Started cri-containerd-dce5c7a6df8f1ce32034ac250433980ee10c18090d55db5692e0e754022f3425.scope - libcontainer container dce5c7a6df8f1ce32034ac250433980ee10c18090d55db5692e0e754022f3425. Mar 13 00:43:07.673902 kubelet[2785]: E0313 00:43:07.673877 2785 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:43:07.674670 containerd[1565]: time="2026-03-13T00:43:07.674619939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-24t98,Uid:a8a6ad84-f7f5-49f3-9436-19390a6f9006,Namespace:kube-system,Attempt:0,}" Mar 13 00:43:07.678279 containerd[1565]: time="2026-03-13T00:43:07.678171872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67947fdc4c-zg5gm,Uid:63cfed16-ada0-435d-bd56-248f4b5a20e5,Namespace:calico-system,Attempt:0,}" Mar 13 00:43:07.772606 containerd[1565]: time="2026-03-13T00:43:07.772462787Z" level=info msg="StartContainer for \"dce5c7a6df8f1ce32034ac250433980ee10c18090d55db5692e0e754022f3425\" returns successfully" Mar 13 00:43:07.892630 systemd-networkd[1462]: cali9cefc3c312b: Link UP Mar 13 00:43:07.897830 systemd-networkd[1462]: cali9cefc3c312b: Gained carrier Mar 13 00:43:07.924765 containerd[1565]: 2026-03-13 00:43:07.750 [INFO][4928] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-calico--apiserver--67947fdc4c--zg5gm-eth0 calico-apiserver-67947fdc4c- calico-system 63cfed16-ada0-435d-bd56-248f4b5a20e5 864 0 2026-03-13 00:42:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67947fdc4c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-67947fdc4c-zg5gm eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali9cefc3c312b [] [] }} ContainerID="fc8724212b428259804e4f9eece3beb54059853508a9e474659b6bbd6db7a84a" Namespace="calico-system" Pod="calico-apiserver-67947fdc4c-zg5gm" WorkloadEndpoint="localhost-k8s-calico--apiserver--67947fdc4c--zg5gm-" Mar 13 00:43:07.924765 containerd[1565]: 2026-03-13 00:43:07.751 [INFO][4928] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fc8724212b428259804e4f9eece3beb54059853508a9e474659b6bbd6db7a84a" Namespace="calico-system" Pod="calico-apiserver-67947fdc4c-zg5gm" WorkloadEndpoint="localhost-k8s-calico--apiserver--67947fdc4c--zg5gm-eth0" Mar 13 00:43:07.924765 containerd[1565]: 2026-03-13 00:43:07.804 [INFO][4958] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fc8724212b428259804e4f9eece3beb54059853508a9e474659b6bbd6db7a84a" HandleID="k8s-pod-network.fc8724212b428259804e4f9eece3beb54059853508a9e474659b6bbd6db7a84a" Workload="localhost-k8s-calico--apiserver--67947fdc4c--zg5gm-eth0" Mar 13 00:43:07.925055 containerd[1565]: 2026-03-13 00:43:07.820 [INFO][4958] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="fc8724212b428259804e4f9eece3beb54059853508a9e474659b6bbd6db7a84a" HandleID="k8s-pod-network.fc8724212b428259804e4f9eece3beb54059853508a9e474659b6bbd6db7a84a" Workload="localhost-k8s-calico--apiserver--67947fdc4c--zg5gm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f630), 
Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-67947fdc4c-zg5gm", "timestamp":"2026-03-13 00:43:07.804813918 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00026a160)} Mar 13 00:43:07.925055 containerd[1565]: 2026-03-13 00:43:07.820 [INFO][4958] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:43:07.925055 containerd[1565]: 2026-03-13 00:43:07.821 [INFO][4958] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 00:43:07.925055 containerd[1565]: 2026-03-13 00:43:07.822 [INFO][4958] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 13 00:43:07.925055 containerd[1565]: 2026-03-13 00:43:07.826 [INFO][4958] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.fc8724212b428259804e4f9eece3beb54059853508a9e474659b6bbd6db7a84a" host="localhost" Mar 13 00:43:07.925055 containerd[1565]: 2026-03-13 00:43:07.851 [INFO][4958] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 13 00:43:07.925055 containerd[1565]: 2026-03-13 00:43:07.859 [INFO][4958] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 13 00:43:07.925055 containerd[1565]: 2026-03-13 00:43:07.864 [INFO][4958] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 13 00:43:07.925055 containerd[1565]: 2026-03-13 00:43:07.867 [INFO][4958] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 13 00:43:07.925055 containerd[1565]: 2026-03-13 00:43:07.867 [INFO][4958] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.fc8724212b428259804e4f9eece3beb54059853508a9e474659b6bbd6db7a84a" host="localhost" Mar 13 00:43:07.925342 containerd[1565]: 2026-03-13 00:43:07.869 [INFO][4958] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.fc8724212b428259804e4f9eece3beb54059853508a9e474659b6bbd6db7a84a Mar 13 00:43:07.925342 containerd[1565]: 2026-03-13 00:43:07.875 [INFO][4958] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fc8724212b428259804e4f9eece3beb54059853508a9e474659b6bbd6db7a84a" host="localhost" Mar 13 00:43:07.925342 containerd[1565]: 2026-03-13 00:43:07.882 [INFO][4958] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.fc8724212b428259804e4f9eece3beb54059853508a9e474659b6bbd6db7a84a" host="localhost" Mar 13 00:43:07.925342 containerd[1565]: 2026-03-13 00:43:07.882 [INFO][4958] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.fc8724212b428259804e4f9eece3beb54059853508a9e474659b6bbd6db7a84a" host="localhost" Mar 13 00:43:07.925342 containerd[1565]: 2026-03-13 00:43:07.883 [INFO][4958] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 13 00:43:07.925342 containerd[1565]: 2026-03-13 00:43:07.883 [INFO][4958] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="fc8724212b428259804e4f9eece3beb54059853508a9e474659b6bbd6db7a84a" HandleID="k8s-pod-network.fc8724212b428259804e4f9eece3beb54059853508a9e474659b6bbd6db7a84a" Workload="localhost-k8s-calico--apiserver--67947fdc4c--zg5gm-eth0" Mar 13 00:43:07.925453 containerd[1565]: 2026-03-13 00:43:07.886 [INFO][4928] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fc8724212b428259804e4f9eece3beb54059853508a9e474659b6bbd6db7a84a" Namespace="calico-system" Pod="calico-apiserver-67947fdc4c-zg5gm" WorkloadEndpoint="localhost-k8s-calico--apiserver--67947fdc4c--zg5gm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67947fdc4c--zg5gm-eth0", GenerateName:"calico-apiserver-67947fdc4c-", Namespace:"calico-system", SelfLink:"", UID:"63cfed16-ada0-435d-bd56-248f4b5a20e5", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 42, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67947fdc4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-67947fdc4c-zg5gm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9cefc3c312b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:43:07.926681 containerd[1565]: 2026-03-13 00:43:07.886 [INFO][4928] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="fc8724212b428259804e4f9eece3beb54059853508a9e474659b6bbd6db7a84a" Namespace="calico-system" Pod="calico-apiserver-67947fdc4c-zg5gm" WorkloadEndpoint="localhost-k8s-calico--apiserver--67947fdc4c--zg5gm-eth0" Mar 13 00:43:07.926681 containerd[1565]: 2026-03-13 00:43:07.886 [INFO][4928] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9cefc3c312b ContainerID="fc8724212b428259804e4f9eece3beb54059853508a9e474659b6bbd6db7a84a" Namespace="calico-system" Pod="calico-apiserver-67947fdc4c-zg5gm" WorkloadEndpoint="localhost-k8s-calico--apiserver--67947fdc4c--zg5gm-eth0" Mar 13 00:43:07.926681 containerd[1565]: 2026-03-13 00:43:07.901 [INFO][4928] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fc8724212b428259804e4f9eece3beb54059853508a9e474659b6bbd6db7a84a" Namespace="calico-system" Pod="calico-apiserver-67947fdc4c-zg5gm" WorkloadEndpoint="localhost-k8s-calico--apiserver--67947fdc4c--zg5gm-eth0" Mar 13 00:43:07.926772 containerd[1565]: 2026-03-13 00:43:07.904 [INFO][4928] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fc8724212b428259804e4f9eece3beb54059853508a9e474659b6bbd6db7a84a" Namespace="calico-system" Pod="calico-apiserver-67947fdc4c-zg5gm" WorkloadEndpoint="localhost-k8s-calico--apiserver--67947fdc4c--zg5gm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67947fdc4c--zg5gm-eth0", GenerateName:"calico-apiserver-67947fdc4c-", Namespace:"calico-system", 
SelfLink:"", UID:"63cfed16-ada0-435d-bd56-248f4b5a20e5", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 42, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67947fdc4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fc8724212b428259804e4f9eece3beb54059853508a9e474659b6bbd6db7a84a", Pod:"calico-apiserver-67947fdc4c-zg5gm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9cefc3c312b", MAC:"6e:52:a8:52:75:35", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:43:07.926898 containerd[1565]: 2026-03-13 00:43:07.917 [INFO][4928] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fc8724212b428259804e4f9eece3beb54059853508a9e474659b6bbd6db7a84a" Namespace="calico-system" Pod="calico-apiserver-67947fdc4c-zg5gm" WorkloadEndpoint="localhost-k8s-calico--apiserver--67947fdc4c--zg5gm-eth0" Mar 13 00:43:08.002428 systemd-networkd[1462]: cali4d3bfc7d64a: Link UP Mar 13 00:43:08.004971 systemd-networkd[1462]: cali4d3bfc7d64a: Gained carrier Mar 13 00:43:08.028296 containerd[1565]: 2026-03-13 00:43:07.742 [INFO][4920] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-coredns--66bc5c9577--24t98-eth0 coredns-66bc5c9577- kube-system a8a6ad84-f7f5-49f3-9436-19390a6f9006 861 0 2026-03-13 00:42:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-24t98 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4d3bfc7d64a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="2ab0ba6cfaefe39fb39565061a7b732869f82b4a45ffa3370f98ffbc606e1dcc" Namespace="kube-system" Pod="coredns-66bc5c9577-24t98" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--24t98-" Mar 13 00:43:08.028296 containerd[1565]: 2026-03-13 00:43:07.743 [INFO][4920] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2ab0ba6cfaefe39fb39565061a7b732869f82b4a45ffa3370f98ffbc606e1dcc" Namespace="kube-system" Pod="coredns-66bc5c9577-24t98" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--24t98-eth0" Mar 13 00:43:08.028296 containerd[1565]: 2026-03-13 00:43:07.815 [INFO][4951] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2ab0ba6cfaefe39fb39565061a7b732869f82b4a45ffa3370f98ffbc606e1dcc" HandleID="k8s-pod-network.2ab0ba6cfaefe39fb39565061a7b732869f82b4a45ffa3370f98ffbc606e1dcc" Workload="localhost-k8s-coredns--66bc5c9577--24t98-eth0" Mar 13 00:43:08.029838 containerd[1565]: 2026-03-13 00:43:07.825 [INFO][4951] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="2ab0ba6cfaefe39fb39565061a7b732869f82b4a45ffa3370f98ffbc606e1dcc" HandleID="k8s-pod-network.2ab0ba6cfaefe39fb39565061a7b732869f82b4a45ffa3370f98ffbc606e1dcc" Workload="localhost-k8s-coredns--66bc5c9577--24t98-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000367860), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", 
"pod":"coredns-66bc5c9577-24t98", "timestamp":"2026-03-13 00:43:07.815282735 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001ee6e0)} Mar 13 00:43:08.029838 containerd[1565]: 2026-03-13 00:43:07.825 [INFO][4951] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:43:08.029838 containerd[1565]: 2026-03-13 00:43:07.883 [INFO][4951] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 00:43:08.029838 containerd[1565]: 2026-03-13 00:43:07.883 [INFO][4951] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 13 00:43:08.029838 containerd[1565]: 2026-03-13 00:43:07.929 [INFO][4951] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.2ab0ba6cfaefe39fb39565061a7b732869f82b4a45ffa3370f98ffbc606e1dcc" host="localhost" Mar 13 00:43:08.029838 containerd[1565]: 2026-03-13 00:43:07.952 [INFO][4951] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 13 00:43:08.029838 containerd[1565]: 2026-03-13 00:43:07.960 [INFO][4951] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 13 00:43:08.029838 containerd[1565]: 2026-03-13 00:43:07.962 [INFO][4951] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 13 00:43:08.029838 containerd[1565]: 2026-03-13 00:43:07.965 [INFO][4951] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 13 00:43:08.029838 containerd[1565]: 2026-03-13 00:43:07.965 [INFO][4951] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2ab0ba6cfaefe39fb39565061a7b732869f82b4a45ffa3370f98ffbc606e1dcc" host="localhost" Mar 13 00:43:08.030200 containerd[1565]: 2026-03-13 
00:43:07.968 [INFO][4951] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.2ab0ba6cfaefe39fb39565061a7b732869f82b4a45ffa3370f98ffbc606e1dcc Mar 13 00:43:08.030200 containerd[1565]: 2026-03-13 00:43:07.974 [INFO][4951] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2ab0ba6cfaefe39fb39565061a7b732869f82b4a45ffa3370f98ffbc606e1dcc" host="localhost" Mar 13 00:43:08.030200 containerd[1565]: 2026-03-13 00:43:07.981 [INFO][4951] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.2ab0ba6cfaefe39fb39565061a7b732869f82b4a45ffa3370f98ffbc606e1dcc" host="localhost" Mar 13 00:43:08.030200 containerd[1565]: 2026-03-13 00:43:07.981 [INFO][4951] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.2ab0ba6cfaefe39fb39565061a7b732869f82b4a45ffa3370f98ffbc606e1dcc" host="localhost" Mar 13 00:43:08.030200 containerd[1565]: 2026-03-13 00:43:07.982 [INFO][4951] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 13 00:43:08.030200 containerd[1565]: 2026-03-13 00:43:07.982 [INFO][4951] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="2ab0ba6cfaefe39fb39565061a7b732869f82b4a45ffa3370f98ffbc606e1dcc" HandleID="k8s-pod-network.2ab0ba6cfaefe39fb39565061a7b732869f82b4a45ffa3370f98ffbc606e1dcc" Workload="localhost-k8s-coredns--66bc5c9577--24t98-eth0" Mar 13 00:43:08.030361 containerd[1565]: 2026-03-13 00:43:07.989 [INFO][4920] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2ab0ba6cfaefe39fb39565061a7b732869f82b4a45ffa3370f98ffbc606e1dcc" Namespace="kube-system" Pod="coredns-66bc5c9577-24t98" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--24t98-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--24t98-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a8a6ad84-f7f5-49f3-9436-19390a6f9006", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 42, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-24t98", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4d3bfc7d64a", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:43:08.030361 containerd[1565]: 2026-03-13 00:43:07.989 [INFO][4920] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="2ab0ba6cfaefe39fb39565061a7b732869f82b4a45ffa3370f98ffbc606e1dcc" Namespace="kube-system" Pod="coredns-66bc5c9577-24t98" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--24t98-eth0" Mar 13 00:43:08.030361 containerd[1565]: 2026-03-13 00:43:07.989 [INFO][4920] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4d3bfc7d64a ContainerID="2ab0ba6cfaefe39fb39565061a7b732869f82b4a45ffa3370f98ffbc606e1dcc" Namespace="kube-system" Pod="coredns-66bc5c9577-24t98" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--24t98-eth0" Mar 13 00:43:08.030361 containerd[1565]: 2026-03-13 00:43:08.010 [INFO][4920] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2ab0ba6cfaefe39fb39565061a7b732869f82b4a45ffa3370f98ffbc606e1dcc" Namespace="kube-system" Pod="coredns-66bc5c9577-24t98" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--24t98-eth0" Mar 13 00:43:08.030361 containerd[1565]: 2026-03-13 00:43:08.010 [INFO][4920] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="2ab0ba6cfaefe39fb39565061a7b732869f82b4a45ffa3370f98ffbc606e1dcc" Namespace="kube-system" Pod="coredns-66bc5c9577-24t98" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--24t98-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--24t98-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a8a6ad84-f7f5-49f3-9436-19390a6f9006", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 42, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2ab0ba6cfaefe39fb39565061a7b732869f82b4a45ffa3370f98ffbc606e1dcc", Pod:"coredns-66bc5c9577-24t98", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4d3bfc7d64a", MAC:"52:54:7d:f7:16:9c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:43:08.030361 containerd[1565]: 2026-03-13 00:43:08.021 [INFO][4920] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2ab0ba6cfaefe39fb39565061a7b732869f82b4a45ffa3370f98ffbc606e1dcc" Namespace="kube-system" Pod="coredns-66bc5c9577-24t98" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--24t98-eth0" Mar 13 00:43:08.041943 containerd[1565]: time="2026-03-13T00:43:08.041858561Z" level=info msg="connecting to shim fc8724212b428259804e4f9eece3beb54059853508a9e474659b6bbd6db7a84a" address="unix:///run/containerd/s/df19c0970dd8af94a79dcead9c805ed098cc3f3cfb6b6ed6e0083e034cffec51" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:43:08.091701 systemd[1]: Started cri-containerd-fc8724212b428259804e4f9eece3beb54059853508a9e474659b6bbd6db7a84a.scope - libcontainer container fc8724212b428259804e4f9eece3beb54059853508a9e474659b6bbd6db7a84a. Mar 13 00:43:08.095892 containerd[1565]: time="2026-03-13T00:43:08.095706636Z" level=info msg="connecting to shim 2ab0ba6cfaefe39fb39565061a7b732869f82b4a45ffa3370f98ffbc606e1dcc" address="unix:///run/containerd/s/4926e32537d5e360bf70722b7420f02fc23bdcf374f53cddd5fc566011570dbb" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:43:08.117928 systemd-resolved[1389]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 13 00:43:08.131713 systemd[1]: Started cri-containerd-2ab0ba6cfaefe39fb39565061a7b732869f82b4a45ffa3370f98ffbc606e1dcc.scope - libcontainer container 2ab0ba6cfaefe39fb39565061a7b732869f82b4a45ffa3370f98ffbc606e1dcc. 
Mar 13 00:43:08.168163 systemd-resolved[1389]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 13 00:43:08.213306 containerd[1565]: time="2026-03-13T00:43:08.213190146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67947fdc4c-zg5gm,Uid:63cfed16-ada0-435d-bd56-248f4b5a20e5,Namespace:calico-system,Attempt:0,} returns sandbox id \"fc8724212b428259804e4f9eece3beb54059853508a9e474659b6bbd6db7a84a\"" Mar 13 00:43:08.216375 containerd[1565]: time="2026-03-13T00:43:08.216303007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-24t98,Uid:a8a6ad84-f7f5-49f3-9436-19390a6f9006,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ab0ba6cfaefe39fb39565061a7b732869f82b4a45ffa3370f98ffbc606e1dcc\"" Mar 13 00:43:08.219214 kubelet[2785]: E0313 00:43:08.219072 2785 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:43:08.222211 containerd[1565]: time="2026-03-13T00:43:08.222132535Z" level=info msg="CreateContainer within sandbox \"fc8724212b428259804e4f9eece3beb54059853508a9e474659b6bbd6db7a84a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 13 00:43:08.224843 containerd[1565]: time="2026-03-13T00:43:08.224762533Z" level=info msg="CreateContainer within sandbox \"2ab0ba6cfaefe39fb39565061a7b732869f82b4a45ffa3370f98ffbc606e1dcc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 13 00:43:08.235203 containerd[1565]: time="2026-03-13T00:43:08.235053990Z" level=info msg="Container b5dfcff9943024fdbdee0637ffca737ffa0678db11e0f18c9c4ccc088aa0bc9c: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:43:08.249717 containerd[1565]: time="2026-03-13T00:43:08.249509017Z" level=info msg="CreateContainer within sandbox \"fc8724212b428259804e4f9eece3beb54059853508a9e474659b6bbd6db7a84a\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b5dfcff9943024fdbdee0637ffca737ffa0678db11e0f18c9c4ccc088aa0bc9c\"" Mar 13 00:43:08.250518 containerd[1565]: time="2026-03-13T00:43:08.250455624Z" level=info msg="StartContainer for \"b5dfcff9943024fdbdee0637ffca737ffa0678db11e0f18c9c4ccc088aa0bc9c\"" Mar 13 00:43:08.251779 containerd[1565]: time="2026-03-13T00:43:08.251737772Z" level=info msg="connecting to shim b5dfcff9943024fdbdee0637ffca737ffa0678db11e0f18c9c4ccc088aa0bc9c" address="unix:///run/containerd/s/df19c0970dd8af94a79dcead9c805ed098cc3f3cfb6b6ed6e0083e034cffec51" protocol=ttrpc version=3 Mar 13 00:43:08.262326 containerd[1565]: time="2026-03-13T00:43:08.262219467Z" level=info msg="Container 4b76754127c5d1f27c81632d424d7a53728b5427739924d1965ac9e1735d139a: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:43:08.272205 containerd[1565]: time="2026-03-13T00:43:08.272174986Z" level=info msg="CreateContainer within sandbox \"2ab0ba6cfaefe39fb39565061a7b732869f82b4a45ffa3370f98ffbc606e1dcc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4b76754127c5d1f27c81632d424d7a53728b5427739924d1965ac9e1735d139a\"" Mar 13 00:43:08.277014 containerd[1565]: time="2026-03-13T00:43:08.276390038Z" level=info msg="StartContainer for \"4b76754127c5d1f27c81632d424d7a53728b5427739924d1965ac9e1735d139a\"" Mar 13 00:43:08.278945 containerd[1565]: time="2026-03-13T00:43:08.278919708Z" level=info msg="connecting to shim 4b76754127c5d1f27c81632d424d7a53728b5427739924d1965ac9e1735d139a" address="unix:///run/containerd/s/4926e32537d5e360bf70722b7420f02fc23bdcf374f53cddd5fc566011570dbb" protocol=ttrpc version=3 Mar 13 00:43:08.282762 systemd[1]: Started cri-containerd-b5dfcff9943024fdbdee0637ffca737ffa0678db11e0f18c9c4ccc088aa0bc9c.scope - libcontainer container b5dfcff9943024fdbdee0637ffca737ffa0678db11e0f18c9c4ccc088aa0bc9c. 
Mar 13 00:43:08.307759 systemd[1]: Started cri-containerd-4b76754127c5d1f27c81632d424d7a53728b5427739924d1965ac9e1735d139a.scope - libcontainer container 4b76754127c5d1f27c81632d424d7a53728b5427739924d1965ac9e1735d139a. Mar 13 00:43:08.362438 containerd[1565]: time="2026-03-13T00:43:08.362338699Z" level=info msg="StartContainer for \"4b76754127c5d1f27c81632d424d7a53728b5427739924d1965ac9e1735d139a\" returns successfully" Mar 13 00:43:08.378977 containerd[1565]: time="2026-03-13T00:43:08.378801627Z" level=info msg="StartContainer for \"b5dfcff9943024fdbdee0637ffca737ffa0678db11e0f18c9c4ccc088aa0bc9c\" returns successfully" Mar 13 00:43:08.516970 kubelet[2785]: E0313 00:43:08.515499 2785 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:43:08.567372 kubelet[2785]: I0313 00:43:08.566897 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-24t98" podStartSLOduration=53.566878556 podStartE2EDuration="53.566878556s" podCreationTimestamp="2026-03-13 00:42:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:43:08.545016685 +0000 UTC m=+60.005291676" watchObservedRunningTime="2026-03-13 00:43:08.566878556 +0000 UTC m=+60.027153547" Mar 13 00:43:08.593249 kubelet[2785]: I0313 00:43:08.593167 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-67947fdc4c-zg5gm" podStartSLOduration=32.59150565 podStartE2EDuration="32.59150565s" podCreationTimestamp="2026-03-13 00:42:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:43:08.568616463 +0000 UTC m=+60.028891454" watchObservedRunningTime="2026-03-13 00:43:08.59150565 +0000 UTC 
m=+60.051780641" Mar 13 00:43:08.656484 kubelet[2785]: I0313 00:43:08.654463 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-769b4596d5-fc5l8" podStartSLOduration=31.200949619 podStartE2EDuration="32.6544489s" podCreationTimestamp="2026-03-13 00:42:36 +0000 UTC" firstStartedPulling="2026-03-13 00:43:06.107186781 +0000 UTC m=+57.567461773" lastFinishedPulling="2026-03-13 00:43:07.560686063 +0000 UTC m=+59.020961054" observedRunningTime="2026-03-13 00:43:08.594595345 +0000 UTC m=+60.054870336" watchObservedRunningTime="2026-03-13 00:43:08.6544489 +0000 UTC m=+60.114723892" Mar 13 00:43:09.125984 systemd-networkd[1462]: cali4d3bfc7d64a: Gained IPv6LL Mar 13 00:43:09.528699 kubelet[2785]: I0313 00:43:09.528657 2785 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 00:43:09.529732 kubelet[2785]: E0313 00:43:09.529512 2785 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:43:09.691160 kubelet[2785]: E0313 00:43:09.691028 2785 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:43:09.693046 containerd[1565]: time="2026-03-13T00:43:09.692870143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4drb9,Uid:dfc1f841-0529-484b-b59f-9cc1adfd0779,Namespace:kube-system,Attempt:0,}" Mar 13 00:43:09.831462 systemd-networkd[1462]: cali9cefc3c312b: Gained IPv6LL Mar 13 00:43:09.843914 systemd-networkd[1462]: calib191d04475d: Link UP Mar 13 00:43:09.844862 systemd-networkd[1462]: calib191d04475d: Gained carrier Mar 13 00:43:09.862265 containerd[1565]: 2026-03-13 00:43:09.751 [INFO][5227] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-coredns--66bc5c9577--4drb9-eth0 coredns-66bc5c9577- kube-system dfc1f841-0529-484b-b59f-9cc1adfd0779 863 0 2026-03-13 00:42:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-4drb9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib191d04475d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="2897acfbccc4b38638f06c4f3f0e8f949eec743e3b9fe67ca45c634cd5351653" Namespace="kube-system" Pod="coredns-66bc5c9577-4drb9" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4drb9-" Mar 13 00:43:09.862265 containerd[1565]: 2026-03-13 00:43:09.752 [INFO][5227] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2897acfbccc4b38638f06c4f3f0e8f949eec743e3b9fe67ca45c634cd5351653" Namespace="kube-system" Pod="coredns-66bc5c9577-4drb9" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4drb9-eth0" Mar 13 00:43:09.862265 containerd[1565]: 2026-03-13 00:43:09.790 [INFO][5242] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2897acfbccc4b38638f06c4f3f0e8f949eec743e3b9fe67ca45c634cd5351653" HandleID="k8s-pod-network.2897acfbccc4b38638f06c4f3f0e8f949eec743e3b9fe67ca45c634cd5351653" Workload="localhost-k8s-coredns--66bc5c9577--4drb9-eth0" Mar 13 00:43:09.862265 containerd[1565]: 2026-03-13 00:43:09.797 [INFO][5242] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="2897acfbccc4b38638f06c4f3f0e8f949eec743e3b9fe67ca45c634cd5351653" HandleID="k8s-pod-network.2897acfbccc4b38638f06c4f3f0e8f949eec743e3b9fe67ca45c634cd5351653" Workload="localhost-k8s-coredns--66bc5c9577--4drb9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a4300), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", 
"pod":"coredns-66bc5c9577-4drb9", "timestamp":"2026-03-13 00:43:09.790014724 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00042c420)} Mar 13 00:43:09.862265 containerd[1565]: 2026-03-13 00:43:09.797 [INFO][5242] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:43:09.862265 containerd[1565]: 2026-03-13 00:43:09.797 [INFO][5242] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 00:43:09.862265 containerd[1565]: 2026-03-13 00:43:09.797 [INFO][5242] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 13 00:43:09.862265 containerd[1565]: 2026-03-13 00:43:09.800 [INFO][5242] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.2897acfbccc4b38638f06c4f3f0e8f949eec743e3b9fe67ca45c634cd5351653" host="localhost" Mar 13 00:43:09.862265 containerd[1565]: 2026-03-13 00:43:09.807 [INFO][5242] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 13 00:43:09.862265 containerd[1565]: 2026-03-13 00:43:09.813 [INFO][5242] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 13 00:43:09.862265 containerd[1565]: 2026-03-13 00:43:09.815 [INFO][5242] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 13 00:43:09.862265 containerd[1565]: 2026-03-13 00:43:09.818 [INFO][5242] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 13 00:43:09.862265 containerd[1565]: 2026-03-13 00:43:09.818 [INFO][5242] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2897acfbccc4b38638f06c4f3f0e8f949eec743e3b9fe67ca45c634cd5351653" host="localhost" Mar 13 00:43:09.862265 containerd[1565]: 2026-03-13 
00:43:09.820 [INFO][5242] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.2897acfbccc4b38638f06c4f3f0e8f949eec743e3b9fe67ca45c634cd5351653 Mar 13 00:43:09.862265 containerd[1565]: 2026-03-13 00:43:09.827 [INFO][5242] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2897acfbccc4b38638f06c4f3f0e8f949eec743e3b9fe67ca45c634cd5351653" host="localhost" Mar 13 00:43:09.862265 containerd[1565]: 2026-03-13 00:43:09.834 [INFO][5242] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.2897acfbccc4b38638f06c4f3f0e8f949eec743e3b9fe67ca45c634cd5351653" host="localhost" Mar 13 00:43:09.862265 containerd[1565]: 2026-03-13 00:43:09.835 [INFO][5242] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.2897acfbccc4b38638f06c4f3f0e8f949eec743e3b9fe67ca45c634cd5351653" host="localhost" Mar 13 00:43:09.862265 containerd[1565]: 2026-03-13 00:43:09.835 [INFO][5242] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 13 00:43:09.862265 containerd[1565]: 2026-03-13 00:43:09.835 [INFO][5242] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="2897acfbccc4b38638f06c4f3f0e8f949eec743e3b9fe67ca45c634cd5351653" HandleID="k8s-pod-network.2897acfbccc4b38638f06c4f3f0e8f949eec743e3b9fe67ca45c634cd5351653" Workload="localhost-k8s-coredns--66bc5c9577--4drb9-eth0" Mar 13 00:43:09.862957 containerd[1565]: 2026-03-13 00:43:09.839 [INFO][5227] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2897acfbccc4b38638f06c4f3f0e8f949eec743e3b9fe67ca45c634cd5351653" Namespace="kube-system" Pod="coredns-66bc5c9577-4drb9" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4drb9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--4drb9-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"dfc1f841-0529-484b-b59f-9cc1adfd0779", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 42, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-4drb9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib191d04475d", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:43:09.862957 containerd[1565]: 2026-03-13 00:43:09.839 [INFO][5227] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="2897acfbccc4b38638f06c4f3f0e8f949eec743e3b9fe67ca45c634cd5351653" Namespace="kube-system" Pod="coredns-66bc5c9577-4drb9" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4drb9-eth0" Mar 13 00:43:09.862957 containerd[1565]: 2026-03-13 00:43:09.839 [INFO][5227] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib191d04475d ContainerID="2897acfbccc4b38638f06c4f3f0e8f949eec743e3b9fe67ca45c634cd5351653" Namespace="kube-system" Pod="coredns-66bc5c9577-4drb9" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4drb9-eth0" Mar 13 00:43:09.862957 containerd[1565]: 2026-03-13 00:43:09.845 [INFO][5227] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2897acfbccc4b38638f06c4f3f0e8f949eec743e3b9fe67ca45c634cd5351653" Namespace="kube-system" Pod="coredns-66bc5c9577-4drb9" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4drb9-eth0" Mar 13 00:43:09.862957 containerd[1565]: 2026-03-13 00:43:09.846 [INFO][5227] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="2897acfbccc4b38638f06c4f3f0e8f949eec743e3b9fe67ca45c634cd5351653" Namespace="kube-system" Pod="coredns-66bc5c9577-4drb9" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4drb9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--4drb9-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"dfc1f841-0529-484b-b59f-9cc1adfd0779", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 42, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2897acfbccc4b38638f06c4f3f0e8f949eec743e3b9fe67ca45c634cd5351653", Pod:"coredns-66bc5c9577-4drb9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib191d04475d", MAC:"76:da:64:2d:35:d3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:43:09.862957 containerd[1565]: 2026-03-13 00:43:09.856 [INFO][5227] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2897acfbccc4b38638f06c4f3f0e8f949eec743e3b9fe67ca45c634cd5351653" Namespace="kube-system" Pod="coredns-66bc5c9577-4drb9" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4drb9-eth0" Mar 13 00:43:09.898172 containerd[1565]: time="2026-03-13T00:43:09.898029899Z" level=info msg="connecting to shim 2897acfbccc4b38638f06c4f3f0e8f949eec743e3b9fe67ca45c634cd5351653" address="unix:///run/containerd/s/16cbdfb7ccffa7c57033b34b4b99f034d77cc9941d34c40ac48d9c14d7cf9d57" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:43:09.926704 systemd[1]: Started cri-containerd-2897acfbccc4b38638f06c4f3f0e8f949eec743e3b9fe67ca45c634cd5351653.scope - libcontainer container 2897acfbccc4b38638f06c4f3f0e8f949eec743e3b9fe67ca45c634cd5351653. 
Mar 13 00:43:09.944426 systemd-resolved[1389]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 13 00:43:09.988438 containerd[1565]: time="2026-03-13T00:43:09.988365767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4drb9,Uid:dfc1f841-0529-484b-b59f-9cc1adfd0779,Namespace:kube-system,Attempt:0,} returns sandbox id \"2897acfbccc4b38638f06c4f3f0e8f949eec743e3b9fe67ca45c634cd5351653\"" Mar 13 00:43:09.990033 kubelet[2785]: E0313 00:43:09.989928 2785 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:43:09.997169 containerd[1565]: time="2026-03-13T00:43:09.996830689Z" level=info msg="CreateContainer within sandbox \"2897acfbccc4b38638f06c4f3f0e8f949eec743e3b9fe67ca45c634cd5351653\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 13 00:43:10.014460 containerd[1565]: time="2026-03-13T00:43:10.012953363Z" level=info msg="Container 52b5f03723d4952393d2322c36a93765b7cfedc87f101322dae8ef2506600624: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:43:10.022374 containerd[1565]: time="2026-03-13T00:43:10.022186481Z" level=info msg="CreateContainer within sandbox \"2897acfbccc4b38638f06c4f3f0e8f949eec743e3b9fe67ca45c634cd5351653\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"52b5f03723d4952393d2322c36a93765b7cfedc87f101322dae8ef2506600624\"" Mar 13 00:43:10.022991 containerd[1565]: time="2026-03-13T00:43:10.022970068Z" level=info msg="StartContainer for \"52b5f03723d4952393d2322c36a93765b7cfedc87f101322dae8ef2506600624\"" Mar 13 00:43:10.023981 containerd[1565]: time="2026-03-13T00:43:10.023825879Z" level=info msg="connecting to shim 52b5f03723d4952393d2322c36a93765b7cfedc87f101322dae8ef2506600624" address="unix:///run/containerd/s/16cbdfb7ccffa7c57033b34b4b99f034d77cc9941d34c40ac48d9c14d7cf9d57" protocol=ttrpc version=3 
Mar 13 00:43:10.054967 systemd[1]: Started cri-containerd-52b5f03723d4952393d2322c36a93765b7cfedc87f101322dae8ef2506600624.scope - libcontainer container 52b5f03723d4952393d2322c36a93765b7cfedc87f101322dae8ef2506600624. Mar 13 00:43:10.103177 containerd[1565]: time="2026-03-13T00:43:10.102907212Z" level=info msg="StartContainer for \"52b5f03723d4952393d2322c36a93765b7cfedc87f101322dae8ef2506600624\" returns successfully" Mar 13 00:43:10.535706 kubelet[2785]: E0313 00:43:10.535310 2785 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:43:10.536609 kubelet[2785]: E0313 00:43:10.536487 2785 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:43:10.551477 kubelet[2785]: I0313 00:43:10.551326 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-4drb9" podStartSLOduration=56.551311892 podStartE2EDuration="56.551311892s" podCreationTimestamp="2026-03-13 00:42:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:43:10.551150665 +0000 UTC m=+62.011425656" watchObservedRunningTime="2026-03-13 00:43:10.551311892 +0000 UTC m=+62.011586883" Mar 13 00:43:11.110193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2162995132.mount: Deactivated successfully. 
Mar 13 00:43:11.237845 systemd-networkd[1462]: calib191d04475d: Gained IPv6LL Mar 13 00:43:11.536999 kubelet[2785]: E0313 00:43:11.536814 2785 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:43:11.565366 containerd[1565]: time="2026-03-13T00:43:11.565295989Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:43:11.566470 containerd[1565]: time="2026-03-13T00:43:11.566192174Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 13 00:43:11.567695 containerd[1565]: time="2026-03-13T00:43:11.567638367Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:43:11.570440 containerd[1565]: time="2026-03-13T00:43:11.570343935Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:43:11.571330 containerd[1565]: time="2026-03-13T00:43:11.571287920Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 4.01042503s" Mar 13 00:43:11.571378 containerd[1565]: time="2026-03-13T00:43:11.571333804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 13 00:43:11.576903 
containerd[1565]: time="2026-03-13T00:43:11.576862088Z" level=info msg="CreateContainer within sandbox \"200ad9054b4f4a14b8326f23d652c1743b13ac2a489502f64c13fc5052c069fc\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 13 00:43:11.583881 containerd[1565]: time="2026-03-13T00:43:11.583831323Z" level=info msg="Container 12ee638b8d26b5a9d8f6e3b78a7294f4777647ef05a9a7849f2c127e90c5745c: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:43:11.593095 containerd[1565]: time="2026-03-13T00:43:11.593007781Z" level=info msg="CreateContainer within sandbox \"200ad9054b4f4a14b8326f23d652c1743b13ac2a489502f64c13fc5052c069fc\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"12ee638b8d26b5a9d8f6e3b78a7294f4777647ef05a9a7849f2c127e90c5745c\"" Mar 13 00:43:11.593566 containerd[1565]: time="2026-03-13T00:43:11.593478622Z" level=info msg="StartContainer for \"12ee638b8d26b5a9d8f6e3b78a7294f4777647ef05a9a7849f2c127e90c5745c\"" Mar 13 00:43:11.594712 containerd[1565]: time="2026-03-13T00:43:11.594635539Z" level=info msg="connecting to shim 12ee638b8d26b5a9d8f6e3b78a7294f4777647ef05a9a7849f2c127e90c5745c" address="unix:///run/containerd/s/d641ebf2922e123c45c574b699a44674404753ebc62c3e33ec02c733504fccbc" protocol=ttrpc version=3 Mar 13 00:43:11.625706 systemd[1]: Started cri-containerd-12ee638b8d26b5a9d8f6e3b78a7294f4777647ef05a9a7849f2c127e90c5745c.scope - libcontainer container 12ee638b8d26b5a9d8f6e3b78a7294f4777647ef05a9a7849f2c127e90c5745c. 
Mar 13 00:43:11.688589 containerd[1565]: time="2026-03-13T00:43:11.688451032Z" level=info msg="StartContainer for \"12ee638b8d26b5a9d8f6e3b78a7294f4777647ef05a9a7849f2c127e90c5745c\" returns successfully" Mar 13 00:43:11.978981 kubelet[2785]: I0313 00:43:11.978890 2785 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 00:43:12.544805 kubelet[2785]: E0313 00:43:12.544724 2785 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:43:12.563875 kubelet[2785]: I0313 00:43:12.563159 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-h5w9q" podStartSLOduration=31.179811133 podStartE2EDuration="36.563133455s" podCreationTimestamp="2026-03-13 00:42:36 +0000 UTC" firstStartedPulling="2026-03-13 00:43:06.188860112 +0000 UTC m=+57.649135104" lastFinishedPulling="2026-03-13 00:43:11.572182434 +0000 UTC m=+63.032457426" observedRunningTime="2026-03-13 00:43:12.560399471 +0000 UTC m=+64.020674453" watchObservedRunningTime="2026-03-13 00:43:12.563133455 +0000 UTC m=+64.023408446" Mar 13 00:43:15.584986 systemd[1]: Started sshd@9-10.0.0.68:22-10.0.0.1:60030.service - OpenSSH per-connection server daemon (10.0.0.1:60030). Mar 13 00:43:15.716277 sshd[5474]: Accepted publickey for core from 10.0.0.1 port 60030 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:43:15.718845 sshd-session[5474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:43:15.729305 systemd-logind[1547]: New session 10 of user core. Mar 13 00:43:15.744227 systemd[1]: Started session-10.scope - Session 10 of User core. 
Mar 13 00:43:16.003047 sshd[5484]: Connection closed by 10.0.0.1 port 60030 Mar 13 00:43:16.003424 sshd-session[5474]: pam_unix(sshd:session): session closed for user core Mar 13 00:43:16.009372 systemd[1]: sshd@9-10.0.0.68:22-10.0.0.1:60030.service: Deactivated successfully. Mar 13 00:43:16.011461 systemd[1]: session-10.scope: Deactivated successfully. Mar 13 00:43:16.012845 systemd-logind[1547]: Session 10 logged out. Waiting for processes to exit. Mar 13 00:43:16.014458 systemd-logind[1547]: Removed session 10. Mar 13 00:43:19.672562 kubelet[2785]: E0313 00:43:19.672485 2785 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:43:21.019464 systemd[1]: Started sshd@10-10.0.0.68:22-10.0.0.1:42036.service - OpenSSH per-connection server daemon (10.0.0.1:42036). Mar 13 00:43:21.092843 sshd[5520]: Accepted publickey for core from 10.0.0.1 port 42036 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:43:21.094682 sshd-session[5520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:43:21.101079 systemd-logind[1547]: New session 11 of user core. Mar 13 00:43:21.111799 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 13 00:43:21.241912 sshd[5523]: Connection closed by 10.0.0.1 port 42036 Mar 13 00:43:21.242351 sshd-session[5520]: pam_unix(sshd:session): session closed for user core Mar 13 00:43:21.246485 systemd[1]: sshd@10-10.0.0.68:22-10.0.0.1:42036.service: Deactivated successfully. Mar 13 00:43:21.248875 systemd[1]: session-11.scope: Deactivated successfully. Mar 13 00:43:21.249872 systemd-logind[1547]: Session 11 logged out. Waiting for processes to exit. Mar 13 00:43:21.251683 systemd-logind[1547]: Removed session 11. Mar 13 00:43:26.256192 systemd[1]: Started sshd@11-10.0.0.68:22-10.0.0.1:42050.service - OpenSSH per-connection server daemon (10.0.0.1:42050). 
Mar 13 00:43:26.339507 sshd[5538]: Accepted publickey for core from 10.0.0.1 port 42050 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:43:26.342446 sshd-session[5538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:43:26.349784 systemd-logind[1547]: New session 12 of user core. Mar 13 00:43:26.357784 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 13 00:43:26.506173 sshd[5541]: Connection closed by 10.0.0.1 port 42050 Mar 13 00:43:26.507474 sshd-session[5538]: pam_unix(sshd:session): session closed for user core Mar 13 00:43:26.517224 systemd[1]: sshd@11-10.0.0.68:22-10.0.0.1:42050.service: Deactivated successfully. Mar 13 00:43:26.521400 systemd[1]: session-12.scope: Deactivated successfully. Mar 13 00:43:26.523202 systemd-logind[1547]: Session 12 logged out. Waiting for processes to exit. Mar 13 00:43:26.525207 systemd-logind[1547]: Removed session 12. Mar 13 00:43:30.674819 kubelet[2785]: E0313 00:43:30.674743 2785 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:43:31.530184 systemd[1]: Started sshd@12-10.0.0.68:22-10.0.0.1:49330.service - OpenSSH per-connection server daemon (10.0.0.1:49330). Mar 13 00:43:31.606389 sshd[5634]: Accepted publickey for core from 10.0.0.1 port 49330 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:43:31.608122 sshd-session[5634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:43:31.614152 systemd-logind[1547]: New session 13 of user core. Mar 13 00:43:31.620698 systemd[1]: Started session-13.scope - Session 13 of User core. 
Mar 13 00:43:31.736640 sshd[5637]: Connection closed by 10.0.0.1 port 49330 Mar 13 00:43:31.738475 sshd-session[5634]: pam_unix(sshd:session): session closed for user core Mar 13 00:43:31.746410 systemd[1]: sshd@12-10.0.0.68:22-10.0.0.1:49330.service: Deactivated successfully. Mar 13 00:43:31.748512 systemd[1]: session-13.scope: Deactivated successfully. Mar 13 00:43:31.749584 systemd-logind[1547]: Session 13 logged out. Waiting for processes to exit. Mar 13 00:43:31.752175 systemd[1]: Started sshd@13-10.0.0.68:22-10.0.0.1:49334.service - OpenSSH per-connection server daemon (10.0.0.1:49334). Mar 13 00:43:31.753653 systemd-logind[1547]: Removed session 13. Mar 13 00:43:31.806282 sshd[5652]: Accepted publickey for core from 10.0.0.1 port 49334 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:43:31.807630 sshd-session[5652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:43:31.813488 systemd-logind[1547]: New session 14 of user core. Mar 13 00:43:31.820709 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 13 00:43:31.950336 sshd[5655]: Connection closed by 10.0.0.1 port 49334 Mar 13 00:43:31.950819 sshd-session[5652]: pam_unix(sshd:session): session closed for user core Mar 13 00:43:31.968385 systemd[1]: sshd@13-10.0.0.68:22-10.0.0.1:49334.service: Deactivated successfully. Mar 13 00:43:31.972841 systemd[1]: session-14.scope: Deactivated successfully. Mar 13 00:43:31.977646 systemd-logind[1547]: Session 14 logged out. Waiting for processes to exit. Mar 13 00:43:31.981094 systemd[1]: Started sshd@14-10.0.0.68:22-10.0.0.1:49342.service - OpenSSH per-connection server daemon (10.0.0.1:49342). Mar 13 00:43:31.983048 systemd-logind[1547]: Removed session 14. 
Mar 13 00:43:32.040821 sshd[5666]: Accepted publickey for core from 10.0.0.1 port 49342 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo
Mar 13 00:43:32.042426 sshd-session[5666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:43:32.047968 systemd-logind[1547]: New session 15 of user core.
Mar 13 00:43:32.057735 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 13 00:43:32.146231 sshd[5669]: Connection closed by 10.0.0.1 port 49342
Mar 13 00:43:32.148093 sshd-session[5666]: pam_unix(sshd:session): session closed for user core
Mar 13 00:43:32.152330 systemd[1]: sshd@14-10.0.0.68:22-10.0.0.1:49342.service: Deactivated successfully.
Mar 13 00:43:32.154458 systemd[1]: session-15.scope: Deactivated successfully.
Mar 13 00:43:32.155890 systemd-logind[1547]: Session 15 logged out. Waiting for processes to exit.
Mar 13 00:43:32.158060 systemd-logind[1547]: Removed session 15.
Mar 13 00:43:37.165935 systemd[1]: Started sshd@15-10.0.0.68:22-10.0.0.1:49344.service - OpenSSH per-connection server daemon (10.0.0.1:49344).
Mar 13 00:43:37.246597 sshd[5682]: Accepted publickey for core from 10.0.0.1 port 49344 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo
Mar 13 00:43:37.248807 sshd-session[5682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:43:37.256275 systemd-logind[1547]: New session 16 of user core.
Mar 13 00:43:37.270695 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 13 00:43:37.411069 sshd[5689]: Connection closed by 10.0.0.1 port 49344
Mar 13 00:43:37.411420 sshd-session[5682]: pam_unix(sshd:session): session closed for user core
Mar 13 00:43:37.418106 systemd[1]: sshd@15-10.0.0.68:22-10.0.0.1:49344.service: Deactivated successfully.
Mar 13 00:43:37.420170 systemd[1]: session-16.scope: Deactivated successfully.
Mar 13 00:43:37.421353 systemd-logind[1547]: Session 16 logged out. Waiting for processes to exit.
Mar 13 00:43:37.423242 systemd-logind[1547]: Removed session 16.
Mar 13 00:43:41.672754 kubelet[2785]: E0313 00:43:41.672660 2785 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:43:42.428738 systemd[1]: Started sshd@16-10.0.0.68:22-10.0.0.1:51734.service - OpenSSH per-connection server daemon (10.0.0.1:51734).
Mar 13 00:43:42.526356 sshd[5737]: Accepted publickey for core from 10.0.0.1 port 51734 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo
Mar 13 00:43:42.528026 sshd-session[5737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:43:42.533381 systemd-logind[1547]: New session 17 of user core.
Mar 13 00:43:42.546849 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 13 00:43:42.686829 sshd[5740]: Connection closed by 10.0.0.1 port 51734
Mar 13 00:43:42.687457 sshd-session[5737]: pam_unix(sshd:session): session closed for user core
Mar 13 00:43:42.701433 systemd[1]: sshd@16-10.0.0.68:22-10.0.0.1:51734.service: Deactivated successfully.
Mar 13 00:43:42.703963 systemd[1]: session-17.scope: Deactivated successfully.
Mar 13 00:43:42.705134 systemd-logind[1547]: Session 17 logged out. Waiting for processes to exit.
Mar 13 00:43:42.709488 systemd[1]: Started sshd@17-10.0.0.68:22-10.0.0.1:51750.service - OpenSSH per-connection server daemon (10.0.0.1:51750).
Mar 13 00:43:42.710292 systemd-logind[1547]: Removed session 17.
Mar 13 00:43:42.770735 sshd[5753]: Accepted publickey for core from 10.0.0.1 port 51750 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo
Mar 13 00:43:42.772230 sshd-session[5753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:43:42.779099 systemd-logind[1547]: New session 18 of user core.
Mar 13 00:43:42.794746 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 13 00:43:43.122966 sshd[5756]: Connection closed by 10.0.0.1 port 51750
Mar 13 00:43:43.124366 sshd-session[5753]: pam_unix(sshd:session): session closed for user core
Mar 13 00:43:43.135008 systemd[1]: sshd@17-10.0.0.68:22-10.0.0.1:51750.service: Deactivated successfully.
Mar 13 00:43:43.137813 systemd[1]: session-18.scope: Deactivated successfully.
Mar 13 00:43:43.139287 systemd-logind[1547]: Session 18 logged out. Waiting for processes to exit.
Mar 13 00:43:43.143277 systemd[1]: Started sshd@18-10.0.0.68:22-10.0.0.1:51756.service - OpenSSH per-connection server daemon (10.0.0.1:51756).
Mar 13 00:43:43.145078 systemd-logind[1547]: Removed session 18.
Mar 13 00:43:43.245032 sshd[5767]: Accepted publickey for core from 10.0.0.1 port 51756 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo
Mar 13 00:43:43.247015 sshd-session[5767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:43:43.254189 systemd-logind[1547]: New session 19 of user core.
Mar 13 00:43:43.273829 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 13 00:43:43.678637 kubelet[2785]: E0313 00:43:43.678232 2785 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:43:43.892396 sshd[5770]: Connection closed by 10.0.0.1 port 51756
Mar 13 00:43:43.893107 sshd-session[5767]: pam_unix(sshd:session): session closed for user core
Mar 13 00:43:43.904728 systemd[1]: Started sshd@19-10.0.0.68:22-10.0.0.1:51758.service - OpenSSH per-connection server daemon (10.0.0.1:51758).
Mar 13 00:43:43.905297 systemd[1]: sshd@18-10.0.0.68:22-10.0.0.1:51756.service: Deactivated successfully.
Mar 13 00:43:43.907787 systemd[1]: session-19.scope: Deactivated successfully.
Mar 13 00:43:43.911239 systemd-logind[1547]: Session 19 logged out. Waiting for processes to exit.
Mar 13 00:43:43.914418 systemd-logind[1547]: Removed session 19.
Mar 13 00:43:43.972363 sshd[5816]: Accepted publickey for core from 10.0.0.1 port 51758 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo
Mar 13 00:43:43.973826 sshd-session[5816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:43:43.980178 systemd-logind[1547]: New session 20 of user core.
Mar 13 00:43:43.989689 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 13 00:43:44.382638 sshd[5822]: Connection closed by 10.0.0.1 port 51758
Mar 13 00:43:44.383680 sshd-session[5816]: pam_unix(sshd:session): session closed for user core
Mar 13 00:43:44.392983 systemd[1]: sshd@19-10.0.0.68:22-10.0.0.1:51758.service: Deactivated successfully.
Mar 13 00:43:44.397281 systemd[1]: session-20.scope: Deactivated successfully.
Mar 13 00:43:44.400945 systemd-logind[1547]: Session 20 logged out. Waiting for processes to exit.
Mar 13 00:43:44.407752 systemd-logind[1547]: Removed session 20.
Mar 13 00:43:44.410636 systemd[1]: Started sshd@20-10.0.0.68:22-10.0.0.1:51760.service - OpenSSH per-connection server daemon (10.0.0.1:51760).
Mar 13 00:43:44.478688 sshd[5834]: Accepted publickey for core from 10.0.0.1 port 51760 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo
Mar 13 00:43:44.480371 sshd-session[5834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:43:44.486495 systemd-logind[1547]: New session 21 of user core.
Mar 13 00:43:44.496001 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 13 00:43:44.593367 sshd[5837]: Connection closed by 10.0.0.1 port 51760
Mar 13 00:43:44.593773 sshd-session[5834]: pam_unix(sshd:session): session closed for user core
Mar 13 00:43:44.599427 systemd[1]: sshd@20-10.0.0.68:22-10.0.0.1:51760.service: Deactivated successfully.
Mar 13 00:43:44.603132 systemd[1]: session-21.scope: Deactivated successfully.
Mar 13 00:43:44.604456 systemd-logind[1547]: Session 21 logged out. Waiting for processes to exit.
Mar 13 00:43:44.607204 systemd-logind[1547]: Removed session 21.
Mar 13 00:43:49.611335 systemd[1]: Started sshd@21-10.0.0.68:22-10.0.0.1:43188.service - OpenSSH per-connection server daemon (10.0.0.1:43188).
Mar 13 00:43:49.673674 sshd[5854]: Accepted publickey for core from 10.0.0.1 port 43188 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo
Mar 13 00:43:49.675953 sshd-session[5854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:43:49.683002 systemd-logind[1547]: New session 22 of user core.
Mar 13 00:43:49.690856 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 13 00:43:49.788791 sshd[5857]: Connection closed by 10.0.0.1 port 43188
Mar 13 00:43:49.789240 sshd-session[5854]: pam_unix(sshd:session): session closed for user core
Mar 13 00:43:49.794015 systemd[1]: sshd@21-10.0.0.68:22-10.0.0.1:43188.service: Deactivated successfully.
Mar 13 00:43:49.796434 systemd[1]: session-22.scope: Deactivated successfully.
Mar 13 00:43:49.797839 systemd-logind[1547]: Session 22 logged out. Waiting for processes to exit.
Mar 13 00:43:49.800049 systemd-logind[1547]: Removed session 22.
Mar 13 00:43:53.223568 kubelet[2785]: I0313 00:43:53.223497 2785 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 13 00:43:54.807468 systemd[1]: Started sshd@22-10.0.0.68:22-10.0.0.1:43194.service - OpenSSH per-connection server daemon (10.0.0.1:43194).
Mar 13 00:43:54.881300 sshd[5880]: Accepted publickey for core from 10.0.0.1 port 43194 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo
Mar 13 00:43:54.882872 sshd-session[5880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:43:54.889325 systemd-logind[1547]: New session 23 of user core.
Mar 13 00:43:54.903719 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 13 00:43:55.006212 sshd[5883]: Connection closed by 10.0.0.1 port 43194
Mar 13 00:43:55.006663 sshd-session[5880]: pam_unix(sshd:session): session closed for user core
Mar 13 00:43:55.012782 systemd[1]: sshd@22-10.0.0.68:22-10.0.0.1:43194.service: Deactivated successfully.
Mar 13 00:43:55.015620 systemd[1]: session-23.scope: Deactivated successfully.
Mar 13 00:43:55.017011 systemd-logind[1547]: Session 23 logged out. Waiting for processes to exit.
Mar 13 00:43:55.019469 systemd-logind[1547]: Removed session 23.
Mar 13 00:44:00.021118 systemd[1]: Started sshd@23-10.0.0.68:22-10.0.0.1:44502.service - OpenSSH per-connection server daemon (10.0.0.1:44502).
Mar 13 00:44:00.153715 sshd[5922]: Accepted publickey for core from 10.0.0.1 port 44502 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo
Mar 13 00:44:00.155825 sshd-session[5922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:44:00.161359 systemd-logind[1547]: New session 24 of user core.
Mar 13 00:44:00.169785 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 13 00:44:00.363627 sshd[5925]: Connection closed by 10.0.0.1 port 44502
Mar 13 00:44:00.363790 sshd-session[5922]: pam_unix(sshd:session): session closed for user core
Mar 13 00:44:00.367766 systemd[1]: sshd@23-10.0.0.68:22-10.0.0.1:44502.service: Deactivated successfully.
Mar 13 00:44:00.370188 systemd[1]: session-24.scope: Deactivated successfully.
Mar 13 00:44:00.372263 systemd-logind[1547]: Session 24 logged out. Waiting for processes to exit.
Mar 13 00:44:00.373883 systemd-logind[1547]: Removed session 24.